* [PATCH v2 00/29] Add KVM LoongArch support
@ 2023-02-20  6:57 Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
                   ` (28 more replies)
  0 siblings, 29 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

This series adds KVM LoongArch support. The Loongson 3A5000 supports
hardware-assisted virtualization. With CPU virtualization, guest mode has
its own hardware-supported user mode and kernel mode. With memory
virtualization, there is a two-level hardware MMU table for guest mode and
host mode. There is also a separate hardware CPU timer with a constant
frequency in guest mode, so a VM can migrate between hosts running at
different frequencies. Currently, we are able to boot LoongArch Linux guests.

A few key aspects of KVM LoongArch added by this series are:
1. Enable the KVM hardware functions when the kvm module is loaded.
2. Implement the VM and vcpu related ioctl interfaces such as vcpu create,
   vcpu run, etc. The GET_ONE_REG/SET_ONE_REG ioctl commands are used to
   get and set the general-purpose registers one by one (see the sketch
   after this list).
3. Hardware accesses to the MMU, timer and CSRs are emulated in the kernel.
4. Devices accessed through MMIO and IOCSR, such as the APIC, IPI and PCI
   devices, are emulated in user space.
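
A minimal userspace sketch of reading one general-purpose register with
KVM_GET_ONE_REG (KVM_REG_LOONGARCH_GP and KVM_REG_SIZE_U64 come from the
uapi headers this series adds to or uses from linux/kvm.h; placing the GPR
index in the low bits of the id is our illustrative assumption):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative only: n is the GPR index, assumed to occupy the id's low bits */
static int get_guest_gpr(int vcpu_fd, unsigned int n, uint64_t *val)
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_LOONGARCH_GP | KVM_REG_SIZE_U64 | n,
		.addr = (uintptr_t)val,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}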

The running environment of the LoongArch virt machine:
1. Cross-compilation tools to build the kernel and UEFI:
   $ wget https://github.com/loongson/build-tools/releases/download/2022.09.06/loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz
   tar -vxf loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz  -C /opt
   export PATH=/opt/cross-tools/bin:$PATH
   export LD_LIBRARY_PATH=/opt/cross-tools/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=/opt/cross-tools/loongarch64-unknown-linux-gnu/lib/:$LD_LIBRARY_PATH
2. This series is based on the linux source code:
   https://github.com/loongson/linux-loongarch-kvm
   Build command:
   git checkout kvm-loongarch
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- loongson3_defconfig
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu-
3. QEMU hypervisor with LoongArch support:
   https://github.com/loongson/qemu
   Build command:
   git checkout kvm-loongarch
   ./configure --target-list="loongarch64-softmmu"  --enable-kvm
   make
4. UEFI BIOS of the LoongArch virt machine:
   Reference: https://github.com/tianocore/edk2-platforms/tree/master/Platform/Loongson/LoongArchQemuPkg#readme
5. You can also use the binary files we have already built:
   https://github.com/yangxiaojuan-loongson/qemu-binary

The command to boot the LoongArch virt machine:
   $ qemu-system-loongarch64 -machine virt -m 4G -cpu la464 \
   -smp 1 -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd ramdisk \
   -serial stdio   -monitor telnet:localhost:4495,server,nowait \
   -append "root=/dev/ram rdinit=/sbin/init console=ttyS0,115200" \
   --nographic

Changes since v1:
1. Separate the original patch-01 and patch-03 into smaller patches; the
patches mainly contain kvm module init, module exit, vcpu create, vcpu run,
etc.
2. Remove the original KVM_{GET,SET}_CSRS ioctls from the kvm uapi header;
the common KVM_{GET,SET}_ONE_REG interface is used to access registers instead.
3. Use BIT(x) to replace the "1 << n_bits" statements.
                                                                                  
Tianrui Zhao (29):
  LoongArch: KVM: Add kvm related header files
  LoongArch: KVM: Implement kvm module related interface
  LoongArch: KVM: Implement kvm hardware enable, disable interface
  LoongArch: KVM: Implement VM related functions
  LoongArch: KVM: Add vcpu related header files
  LoongArch: KVM: Implement vcpu create and destroy interface
  LoongArch: KVM: Implement vcpu run interface
  LoongArch: KVM: Implement vcpu handle exit interface
  LoongArch: KVM: Implement vcpu get, vcpu set registers
  LoongArch: KVM: Implement vcpu ENABLE_CAP, CHECK_EXTENSION ioctl
    interface
  LoongArch: KVM: Implement fpu related operations for vcpu
  LoongArch: KVM: Implement vcpu interrupt operations
  LoongArch: KVM: Implement misc vcpu related interfaces
  LoongArch: KVM: Implement vcpu load and vcpu put operations
  LoongArch: KVM: Implement vcpu status description
  LoongArch: KVM: Implement update VM id function
  LoongArch: KVM: Implement virtual machine tlb operations
  LoongArch: KVM: Implement vcpu timer operations
  LoongArch: KVM: Implement kvm mmu operations
  LoongArch: KVM: Implement handle csr exception
  LoongArch: KVM: Implement handle iocsr exception
  LoongArch: KVM: Implement handle idle exception
  LoongArch: KVM: Implement handle gspr exception
  LoongArch: KVM: Implement handle mmio exception
  LoongArch: KVM: Implement handle fpu exception
  LoongArch: KVM: Implement kvm exception vector
  LoongArch: KVM: Implement vcpu world switch
  LoongArch: KVM: Implement probe virtualization when loongarch cpu init
  LoongArch: KVM: Enable kvm config and add the makefile

 arch/loongarch/Kbuild                      |    1 +
 arch/loongarch/Kconfig                     |    2 +
 arch/loongarch/configs/loongson3_defconfig |    2 +
 arch/loongarch/include/asm/cpu-features.h  |   22 +
 arch/loongarch/include/asm/cpu-info.h      |   13 +
 arch/loongarch/include/asm/inst.h          |   16 +
 arch/loongarch/include/asm/kvm_csr.h       |   89 ++
 arch/loongarch/include/asm/kvm_host.h      |  257 +++++
 arch/loongarch/include/asm/kvm_types.h     |   11 +
 arch/loongarch/include/asm/kvm_vcpu.h      |  112 ++
 arch/loongarch/include/asm/loongarch.h     |  195 +++-
 arch/loongarch/include/uapi/asm/kvm.h      |  107 ++
 arch/loongarch/kernel/asm-offsets.c        |   32 +
 arch/loongarch/kernel/cpu-probe.c          |   53 +
 arch/loongarch/kvm/Kconfig                 |   38 +
 arch/loongarch/kvm/Makefile                |   21 +
 arch/loongarch/kvm/exit.c                  |  702 ++++++++++++
 arch/loongarch/kvm/interrupt.c             |  126 +++
 arch/loongarch/kvm/main.c                  |  152 +++
 arch/loongarch/kvm/mmu.c                   |  821 ++++++++++++++
 arch/loongarch/kvm/switch.S                |  327 ++++++
 arch/loongarch/kvm/timer.c                 |  266 +++++
 arch/loongarch/kvm/tlb.c                   |   31 +
 arch/loongarch/kvm/trace.h                 |  137 +++
 arch/loongarch/kvm/vcpu.c                  | 1118 ++++++++++++++++++++
 arch/loongarch/kvm/vm.c                    |   85 ++
 arch/loongarch/kvm/vmid.c                  |   64 ++
 include/uapi/linux/kvm.h                   |   12 +
 28 files changed, 4806 insertions(+), 6 deletions(-)
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/include/asm/kvm_host.h
 create mode 100644 arch/loongarch/include/asm/kvm_types.h
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
 create mode 100644 arch/loongarch/kvm/Kconfig
 create mode 100644 arch/loongarch/kvm/Makefile
 create mode 100644 arch/loongarch/kvm/exit.c
 create mode 100644 arch/loongarch/kvm/interrupt.c
 create mode 100644 arch/loongarch/kvm/main.c
 create mode 100644 arch/loongarch/kvm/mmu.c
 create mode 100644 arch/loongarch/kvm/switch.S
 create mode 100644 arch/loongarch/kvm/timer.c
 create mode 100644 arch/loongarch/kvm/tlb.c
 create mode 100644 arch/loongarch/kvm/trace.h
 create mode 100644 arch/loongarch/kvm/vcpu.c
 create mode 100644 arch/loongarch/kvm/vm.c
 create mode 100644 arch/loongarch/kvm/vmid.c

-- 
2.31.1



* [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 18:22   ` Paolo Bonzini
                     ` (2 more replies)
  2023-02-20  6:57 ` [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
                   ` (27 subsequent siblings)
  28 siblings, 3 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Add the LoongArch KVM related header files, including kvm.h,
kvm_host.h and kvm_types.h. All of these describe the LoongArch
virtualization features and kvm interfaces.
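
As a quick illustration of the register-id helpers added to kvm_host.h
below, a hypothetical round trip for the ECFG CSR (index 0x4); the values
simply follow the macros in this patch:

	u64 id = KVM_IOC_CSRID(LOONGARCH_CSR_ECFG);
	/* id == KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * 0x4 + 0) */

	u32 idx = KVM_GET_IOC_CSRIDX(id);
	/* idx == ((8 * 0x4) & KVM_CSR_IDX_MASK) >> 3 == 0x4, i.e. ECFG again */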

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/cpu-features.h |  22 ++
 arch/loongarch/include/asm/kvm_host.h     | 257 ++++++++++++++++++++++
 arch/loongarch/include/asm/kvm_types.h    |  11 +
 arch/loongarch/include/uapi/asm/kvm.h     | 107 +++++++++
 include/uapi/linux/kvm.h                  |  12 +
 5 files changed, 409 insertions(+)
 create mode 100644 arch/loongarch/include/asm/kvm_host.h
 create mode 100644 arch/loongarch/include/asm/kvm_types.h
 create mode 100644 arch/loongarch/include/uapi/asm/kvm.h

diff --git a/arch/loongarch/include/asm/cpu-features.h b/arch/loongarch/include/asm/cpu-features.h
index b07974218..345b7674a 100644
--- a/arch/loongarch/include/asm/cpu-features.h
+++ b/arch/loongarch/include/asm/cpu-features.h
@@ -64,5 +64,27 @@
 #define cpu_has_guestid		cpu_opt(LOONGARCH_CPU_GUESTID)
 #define cpu_has_hypervisor	cpu_opt(LOONGARCH_CPU_HYPERVISOR)
 
+#define cpu_has_matc_guest	(cpu_data[0].guest_cfg & BIT(0))
+#define cpu_has_matc_root	(cpu_data[0].guest_cfg & BIT(1))
+#define cpu_has_matc_nest	(cpu_data[0].guest_cfg & BIT(2))
+#define cpu_has_sitp		(cpu_data[0].guest_cfg & BIT(6))
+#define cpu_has_titp		(cpu_data[0].guest_cfg & BIT(8))
+#define cpu_has_toep		(cpu_data[0].guest_cfg & BIT(10))
+#define cpu_has_topp		(cpu_data[0].guest_cfg & BIT(12))
+#define cpu_has_torup		(cpu_data[0].guest_cfg & BIT(14))
+#define cpu_has_gcip_all	(cpu_data[0].guest_cfg & BIT(16))
+#define cpu_has_gcip_hit	(cpu_data[0].guest_cfg & BIT(17))
+#define cpu_has_gcip_secure	(cpu_data[0].guest_cfg & BIT(18))
+
+/*
+ * Guest capabilities
+ */
+#define cpu_guest_has_conf1	(cpu_data[0].guest.conf & BIT(1))
+#define cpu_guest_has_conf2	(cpu_data[0].guest.conf & BIT(2))
+#define cpu_guest_has_conf3	(cpu_data[0].guest.conf & BIT(3))
+#define cpu_guest_has_fpu	(cpu_data[0].guest.options & LOONGARCH_CPU_FPU)
+#define cpu_guest_has_perf	(cpu_data[0].guest.options & LOONGARCH_CPU_PMP)
+#define cpu_guest_has_watch	(cpu_data[0].guest.options & LOONGARCH_CPU_WATCH)
+#define cpu_guest_has_lsx	(cpu_data[0].guest.ases & LOONGARCH_ASE_LSX)
 
 #endif /* __ASM_CPU_FEATURES_H */
diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
new file mode 100644
index 000000000..fa464e476
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_host.h
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_HOST_H__
+#define __ASM_LOONGARCH_KVM_HOST_H__
+
+#include <linux/cpumask.h>
+#include <linux/mutex.h>
+#include <linux/hrtimer.h>
+#include <linux/interrupt.h>
+#include <linux/types.h>
+#include <linux/kvm.h>
+#include <linux/kvm_types.h>
+#include <linux/threads.h>
+#include <linux/spinlock.h>
+
+#include <asm/inst.h>
+#include <asm/loongarch.h>
+
+/* Loongarch KVM register ids */
+#define LOONGARCH_CSR_32(_R, _S)	\
+	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
+
+#define LOONGARCH_CSR_64(_R, _S)	\
+	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
+
+#define KVM_IOC_CSRID(id)		LOONGARCH_CSR_64(id, 0)
+#define KVM_GET_IOC_CSRIDX(id)		(((id) & KVM_CSR_IDX_MASK) >> 3)
+
+#define KVM_MAX_VCPUS			256
+/* memory slots that are not exposed to userspace */
+#define KVM_PRIVATE_MEM_SLOTS		0
+
+#define KVM_HALT_POLL_NS_DEFAULT	500000
+
+struct kvm_vm_stat {
+	struct kvm_vm_stat_generic generic;
+};
+
+struct kvm_vcpu_stat {
+	struct kvm_vcpu_stat_generic generic;
+	u64 idle_exits;
+	u64 signal_exits;
+	u64 int_exits;
+	u64 cpucfg_exits;
+};
+
+struct kvm_arch_memory_slot {
+};
+
+struct kvm_context {
+	unsigned long vpid_mask;
+	unsigned long vpid_cache;
+	void *kvm_eentry;
+	void *kvm_enter_guest;
+	unsigned long page_order;
+	struct kvm_vcpu *last_vcpu;
+};
+
+struct kvm_arch {
+	/* Guest physical mm */
+	struct mm_struct gpa_mm;
+	/* Mask of CPUs needing GPA ASID flush */
+	cpumask_t asid_flush_mask;
+
+	unsigned char online_vcpus;
+	unsigned char is_migrate;
+	s64 time_offset;
+	struct kvm_context __percpu *vmcs;
+};
+
+
+#define LOONGARCH_CSRS		0x100
+#define CSR_UCWIN_BASE		0x100
+#define CSR_UCWIN_SIZE		0x10
+#define CSR_DMWIN_BASE		0x180
+#define CSR_DMWIN_SIZE		0x4
+#define CSR_PERF_BASE		0x200
+#define CSR_PERF_SIZE		0x8
+#define CSR_DEBUG_BASE		0x500
+#define CSR_DEBUG_SIZE		0x3
+#define CSR_ALL_SIZE		0x800
+
+struct loongarch_csrs {
+	unsigned long csrs[CSR_ALL_SIZE];
+};
+
+/* Resume Flags */
+#define RESUME_FLAG_DR		(1<<0)	/* Reload guest nonvolatile state? */
+#define RESUME_FLAG_HOST	(1<<1)	/* Resume host? */
+
+#define RESUME_GUEST		0
+#define RESUME_GUEST_DR		RESUME_FLAG_DR
+#define RESUME_HOST		RESUME_FLAG_HOST
+
+enum emulation_result {
+	EMULATE_DONE,		/* no further processing */
+	EMULATE_DO_MMIO,	/* kvm_run filled with MMIO request */
+	EMULATE_FAIL,		/* can't emulate this instruction */
+	EMULATE_WAIT,		/* WAIT instruction */
+	EMULATE_EXCEPT,		/* A guest exception has been generated */
+	EMULATE_DO_IOCSR,	/* handle IOCSR request */
+};
+
+#define KVM_NR_MEM_OBJS		4
+#define KVM_LARCH_FPU		(0x1 << 0)
+
+struct kvm_vcpu_arch {
+	unsigned long guest_eentry;
+	unsigned long host_eentry;
+	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+	int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+	/* Host registers preserved across guest mode execution */
+	unsigned long host_stack;
+	unsigned long host_gp;
+	unsigned long host_pgd;
+	unsigned long host_pgdhi;
+	unsigned long host_entryhi;
+
+	/* Host CSR registers used when handling exits from guest */
+	unsigned long badv;
+	unsigned long host_estat;
+	unsigned long badi;
+	unsigned long host_ecfg;
+	unsigned long host_percpu;
+
+	/* GPRS */
+	unsigned long gprs[32];
+	unsigned long pc;
+
+	/* FPU State */
+	struct loongarch_fpu fpu FPU_ALIGN;
+	/* Which auxiliary state is loaded (KVM_LARCH_*) */
+	unsigned int aux_inuse;
+
+	/* CSR State */
+	struct loongarch_csrs *csr;
+
+	/* GPR used as IO source/target */
+	u32 io_gpr;
+
+	struct hrtimer swtimer;
+	/* Count timer control KVM register */
+	u32 count_ctl;
+
+	/* Bitmask of exceptions that are pending */
+	unsigned long irq_pending;
+	/* Bitmask of pending exceptions to be cleared */
+	unsigned long irq_clear;
+
+	/* Cache some mmu pages needed inside spinlock regions */
+	struct kvm_mmu_memory_cache mmu_page_cache;
+
+	/* vcpu's vpid is different on each host cpu in an smp system */
+	u64 vpid[NR_CPUS];
+
+	/* Period of stable timer tick in ns */
+	u64 timer_period;
+	/* Frequency of stable timer in Hz */
+	u64 timer_mhz;
+	/* Stable bias from the raw time */
+	u64 timer_bias;
+	/* Dynamic nanosecond bias (multiple of timer_period) to avoid overflow */
+	s64 timer_dyn_bias;
+	/* Save ktime */
+	ktime_t stable_ktime_saved;
+
+	u64 core_ext_ioisr[4];
+
+	/* Last CPU the VCPU state was loaded on */
+	int last_sched_cpu;
+	/* Last CPU the VCPU actually executed guest code on */
+	int last_exec_cpu;
+
+	u8 fpu_enabled;
+	struct kvm_guest_debug_arch guest_debug;
+};
+
+static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
+{
+	return csr->csrs[reg];
+}
+
+static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg,
+		unsigned long val)
+{
+	csr->csrs[reg] = val;
+}
+
+/* Helpers */
+static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch)
+{
+	return cpu_has_fpu && arch->fpu_enabled;
+}
+
+void _kvm_init_fault(void);
+
+/* Debug: dump vcpu state */
+int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
+
+/* MMU handling */
+int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
+void kvm_flush_tlb_all(void);
+void _kvm_destroy_mm(struct kvm *kvm);
+pgd_t *kvm_pgd_alloc(void);
+
+#define KVM_ARCH_WANT_MMU_NOTIFIER
+int kvm_unmap_hva_range(struct kvm *kvm,
+			unsigned long start, unsigned long end, bool blockable);
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+
+static inline void update_pc(struct kvm_vcpu_arch *arch)
+{
+	arch->pc += 4;
+}
+
+/**
+ * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
+ * @arch:	vcpu architecture state.
+ *
+ * Returns:	Whether the TLBL exception was likely due to an instruction
+ *		fetch fault rather than a data load fault.
+ */
+static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
+{
+	if (arch->pc == arch->badv)
+		return true;
+
+	return false;
+}
+
+/* Misc */
+static inline void kvm_arch_hardware_unsetup(void) {}
+static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_free_memslot(struct kvm *kvm,
+				   struct kvm_memory_slot *slot) {}
+void _kvm_check_vmid(struct kvm_vcpu *vcpu, int cpu);
+enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
+int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
+void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+					const struct kvm_memory_slot *memslot);
+void kvm_init_vmcs(struct kvm *kvm);
+void kvm_vector_entry(void);
+int  kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern const unsigned long kvm_vector_size;
+extern const unsigned long kvm_enter_guest_size;
+#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h
new file mode 100644
index 000000000..060647b5f
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_types.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef _ASM_LOONGARCH_KVM_TYPES_H
+#define _ASM_LOONGARCH_KVM_TYPES_H
+
+#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE	4
+
+#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
new file mode 100644
index 000000000..4192a5120
--- /dev/null
+++ b/arch/loongarch/include/uapi/asm/kvm.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __UAPI_ASM_LOONGARCH_KVM_H
+#define __UAPI_ASM_LOONGARCH_KVM_H
+
+#include <linux/types.h>
+
+/*
+ * KVM Loongarch specific structures and definitions.
+ *
+ * Some parts derived from the x86 version of this file.
+ */
+
+#define __KVM_HAVE_READONLY_MEM
+
+#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+
+/*
+ * for KVM_GET_REGS and KVM_SET_REGS
+ */
+struct kvm_regs {
+	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
+	__u64 gpr[32];
+	__u64 pc;
+};
+
+/*
+ * for KVM_GET_FPU and KVM_SET_FPU
+ */
+struct kvm_fpu {
+	__u32 fcsr;
+	__u32 none;
+	__u64 fcc;    /* 8x8 */
+	struct kvm_fpureg {
+		__u64 val64[4];	/* supports up to 256 bits */
+	} fpr[32];
+};
+
+/*
+ * For LOONGARCH, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
+ * registers.  The id field is broken down as follows:
+ *
+ *  bits[63..52] - As per linux/kvm.h
+ *  bits[51..32] - Must be zero.
+ *  bits[31..16] - Register set.
+ *
+ * Register set = 0: GP registers from kvm_regs (see definitions below).
+ *
+ * Register set = 1: CSR registers.
+ *
+ * Register set = 2: KVM specific registers (see definitions below).
+ *
+ * Register set = 3: FPU / SIMD registers (see definitions below).
+ *
+ * Other register sets may be added in the future.  Each set would
+ * have its own identifier in bits[31..16].
+ */
+
+#define KVM_REG_LOONGARCH_GP		(KVM_REG_LOONGARCH | 0x00000ULL)
+#define KVM_REG_LOONGARCH_CSR		(KVM_REG_LOONGARCH | 0x10000ULL)
+#define KVM_REG_LOONGARCH_KVM		(KVM_REG_LOONGARCH | 0x20000ULL)
+#define KVM_REG_LOONGARCH_FPU		(KVM_REG_LOONGARCH | 0x30000ULL)
+#define KVM_REG_LOONGARCH_MASK		(KVM_REG_LOONGARCH | 0x30000ULL)
+#define KVM_CSR_IDX_MASK		(0x10000 - 1)
+
+/*
+ * KVM_REG_LOONGARCH_KVM - KVM specific control registers.
+ */
+
+#define KVM_REG_LOONGARCH_COUNTER	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3)
+#define KVM_REG_LOONGARCH_VCPU_RESET	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4)
+
+struct kvm_debug_exit_arch {
+};
+
+/* for KVM_SET_GUEST_DEBUG */
+struct kvm_guest_debug_arch {
+};
+
+/* definition of registers in kvm_run */
+struct kvm_sync_regs {
+};
+
+/* dummy definition */
+struct kvm_sregs {
+};
+
+struct kvm_iocsr_entry {
+	__u32 addr;
+	__u32 pad;
+	__u64 data;
+};
+
+struct kvm_loongarch_interrupt {
+	/* in */
+	__u32 cpu;
+	__u32 irq;
+};
+
+#define KVM_NR_IRQCHIPS		1
+#define KVM_IRQCHIP_NUM_PINS	64
+#define KVM_MAX_CORES		256
+
+#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 55155e262..fa9d0e18f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -264,6 +264,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_RISCV_SBI        35
 #define KVM_EXIT_RISCV_CSR        36
 #define KVM_EXIT_NOTIFY           37
+#define KVM_EXIT_LOONGARCH_IOCSR  38
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -336,6 +337,13 @@ struct kvm_run {
 			__u32 len;
 			__u8  is_write;
 		} mmio;
+		/* KVM_EXIT_LOONGARCH_IOCSR */
+		struct {
+			__u64 phys_addr;
+			__u8  data[8];
+			__u32 len;
+			__u8  is_write;
+		} iocsr_io;
 		/* KVM_EXIT_HYPERCALL */
 		struct {
 			__u64 nr;
@@ -1175,6 +1183,9 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224
 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
+#define KVM_CAP_LOONGARCH_FPU 226
+#define KVM_CAP_LOONGARCH_LSX 227
+#define KVM_CAP_LOONGARCH_VZ 228
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1345,6 +1356,7 @@ struct kvm_dirty_tlb {
 #define KVM_REG_ARM64		0x6000000000000000ULL
 #define KVM_REG_MIPS		0x7000000000000000ULL
 #define KVM_REG_RISCV		0x8000000000000000ULL
+#define KVM_REG_LOONGARCH	0x9000000000000000ULL
 
 #define KVM_REG_SIZE_SHIFT	52
 #define KVM_REG_SIZE_MASK	0x00f0000000000000ULL
-- 
2.31.1



* [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 17:46   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 03/29] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
                   ` (26 subsequent siblings)
  28 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the LoongArch kvm module init and module exit interfaces,
using the kvm context to save the vpid info and the vcpu world-switch
entry pointers.
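
For context, kvm_arch_init() below derives the vpid mask from the GID
field width advertised in CSR.GSTAT; a worked example (a GIDBIT value of
8 is an illustrative assumption):

	vpid_mask = read_csr_gstat();
	vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT;
	if (vpid_mask)
		vpid_mask = GENMASK(vpid_mask - 1, 0);
	/* e.g. GIDBIT == 8 yields GENMASK(7, 0) == 0xff, an 8-bit guest-ID
	 * space shared by all vcpus on that physical cpu */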

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/main.c | 81 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)
 create mode 100644 arch/loongarch/kvm/main.c

diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
new file mode 100644
index 000000000..d7969d02a
--- /dev/null
+++ b/arch/loongarch/kvm/main.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/kvm_host.h>
+#include <asm/kvm_host.h>
+#include <asm/cacheflush.h>
+
+static struct kvm_context __percpu *vmcs;
+
+int kvm_arch_init(void *opaque)
+{
+	struct kvm_context *context;
+	unsigned long vpid_mask;
+	int cpu, order;
+	void *addr;
+
+	vmcs = alloc_percpu(struct kvm_context);
+	if (!vmcs) {
+		pr_err("kvm: failed to allocate percpu kvm_context\n");
+		return -ENOMEM;
+	}
+
+	order = get_order(kvm_vector_size + kvm_enter_guest_size);
+	addr = (void *)__get_free_pages(GFP_KERNEL, order);
+	if (!addr) {
+		free_percpu(vmcs);
+		return -ENOMEM;
+	}
+
+	memcpy(addr, kvm_vector_entry, kvm_vector_size);
+	memcpy(addr + kvm_vector_size, kvm_enter_guest, kvm_enter_guest_size);
+	flush_icache_range((unsigned long)addr, (unsigned long)addr +
+				kvm_vector_size + kvm_enter_guest_size);
+
+	vpid_mask = read_csr_gstat();
+	vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT;
+	if (vpid_mask)
+		vpid_mask = GENMASK(vpid_mask - 1, 0);
+
+	for_each_possible_cpu(cpu) {
+		context = per_cpu_ptr(vmcs, cpu);
+		context->vpid_mask = vpid_mask;
+		context->vpid_cache = context->vpid_mask + 1;
+		context->last_vcpu = NULL;
+		context->kvm_eentry = addr;
+		context->kvm_enter_guest = addr + kvm_vector_size;
+		context->page_order = order;
+	}
+
+	_kvm_init_fault();
+
+	return 0;
+}
+
+void kvm_arch_exit(void)
+{
+	struct kvm_context *context = per_cpu_ptr(vmcs, 0);
+
+	free_pages((unsigned long)context->kvm_eentry, context->page_order);
+	free_percpu(vmcs);
+}
+
+static int kvm_loongarch_init(void)
+{
+	if (!cpu_has_lvz)
+		return 0;
+
+	return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+}
+
+static void kvm_loongarch_exit(void)
+{
+	kvm_exit();
+}
+
+module_init(kvm_loongarch_init);
+module_exit(kvm_loongarch_exit);
-- 
2.31.1



* [PATCH v2 03/29] LoongArch: KVM: Implement kvm hardware enable, disable interface
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 04/29] LoongArch: KVM: Implement VM related functions Tianrui Zhao
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the kvm hardware enable and disable interfaces, setting
the guest configuration registers to enable virtualization features
when the interfaces are called.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/main.c | 71 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
index d7969d02a..6afd97823 100644
--- a/arch/loongarch/kvm/main.c
+++ b/arch/loongarch/kvm/main.c
@@ -11,6 +11,77 @@
 
 static struct kvm_context __percpu *vmcs;
 
+void kvm_init_vmcs(struct kvm *kvm)
+{
+	kvm->arch.vmcs = vmcs;
+}
+
+long kvm_arch_dev_ioctl(struct file *filp,
+			unsigned int ioctl, unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_check_processor_compat(void *opaque)
+{
+	return 0;
+}
+
+int kvm_arch_hardware_setup(void *opaque)
+{
+	return 0;
+}
+
+int kvm_arch_hardware_enable(void)
+{
+	unsigned long gcfg = 0;
+
+	/* First init gtlbc, gcfg, gstat, gintc. All guests use the same config */
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/*
+	 * Enable virtualization features granting guest direct control of
+	 * certain features:
+	 * GCI=2:       Trap on init or unimplemented cache instructions.
+	 * TORU=0:      Trap on Root Unimplemented.
+	 * CACTRL=1:    Root controls cache.
+	 * TOP=0:       Trap on Privilege.
+	 * TOE=0:       Trap on Exception.
+	 * TIT=0:       Trap on Timer.
+	 */
+	if (cpu_has_gcip_all)
+		gcfg |= CSR_GCFG_GCI_SECURE;
+	if (cpu_has_matc_root)
+		gcfg |= CSR_GCFG_MATC_ROOT;
+
+	gcfg |= CSR_GCFG_TIT;
+	write_csr_gcfg(gcfg);
+
+	kvm_flush_tlb_all();
+
+	/* Enable using TGID  */
+	set_csr_gtlbc(CSR_GTLBC_USETGID);
+	kvm_debug("gtlbc:%llx gintc:%llx gstat:%llx gcfg:%llx",
+			read_csr_gtlbc(), read_csr_gintc(),
+			read_csr_gstat(), read_csr_gcfg());
+
+	return 0;
+}
+
+void kvm_arch_hardware_disable(void)
+{
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/* Flush any remaining guest TLB entries */
+	kvm_flush_tlb_all();
+}
+
 int kvm_arch_init(void *opaque)
 {
 	struct kvm_context *context;
-- 
2.31.1



* [PATCH v2 04/29] LoongArch: KVM: Implement VM related functions
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (2 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 03/29] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the LoongArch VM operations: the VM init and destroy
interfaces, allocating a memory page to hold the VM pgd at VM init
time; the VM check-extension interface, reporting e.g. vcpu count,
memory slot and fpu capabilities; and the VM status description.
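
From userspace these capabilities are queried with the stock
KVM_CHECK_EXTENSION ioctl; a minimal sketch (vm_fd is assumed to come
from a prior KVM_CREATE_VM call):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns the recommended vcpu count, as reported by
 * kvm_vm_ioctl_check_extension() below */
static int recommended_nr_vcpus(int vm_fd)
{
	return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
}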

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vm.c | 85 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 arch/loongarch/kvm/vm.c

diff --git a/arch/loongarch/kvm/vm.c b/arch/loongarch/kvm/vm.c
new file mode 100644
index 000000000..6efa6689b
--- /dev/null
+++ b/arch/loongarch/kvm/vm.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_host.h>
+
+#define KVM_LOONGARCH_VERSION 1
+
+const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
+	KVM_GENERIC_VM_STATS(),
+};
+
+const struct kvm_stats_header kvm_vm_stats_header = {
+	.name_size = KVM_STATS_NAME_SIZE,
+	.num_desc = ARRAY_SIZE(kvm_vm_stats_desc),
+	.id_offset =  sizeof(struct kvm_stats_header),
+	.desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE,
+	.data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE +
+					sizeof(kvm_vm_stats_desc),
+};
+
+int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+{
+	/* Allocate page table to map GPA -> RPA */
+	kvm->arch.gpa_mm.pgd = kvm_pgd_alloc();
+	if (!kvm->arch.gpa_mm.pgd)
+		return -ENOMEM;
+
+	kvm_init_vmcs(kvm);
+	return 0;
+}
+
+void kvm_arch_destroy_vm(struct kvm *kvm)
+{
+	kvm_destroy_vcpus(kvm);
+	_kvm_destroy_mm(kvm);
+}
+
+int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+{
+	int r;
+
+	switch (ext) {
+	case KVM_CAP_ONE_REG:
+	case KVM_CAP_ENABLE_CAP:
+	case KVM_CAP_READONLY_MEM:
+	case KVM_CAP_SYNC_MMU:
+	case KVM_CAP_IMMEDIATE_EXIT:
+	case KVM_CAP_IOEVENTFD:
+		r = 1;
+		break;
+	case KVM_CAP_NR_VCPUS:
+		r = num_online_cpus();
+		break;
+	case KVM_CAP_MAX_VCPUS:
+		r = KVM_MAX_VCPUS;
+		break;
+	case KVM_CAP_MAX_VCPU_ID:
+		r = KVM_MAX_VCPU_IDS;
+		break;
+	case KVM_CAP_NR_MEMSLOTS:
+		r = KVM_USER_MEM_SLOTS;
+		break;
+	case KVM_CAP_LOONGARCH_FPU:
+		/* We don't handle systems with inconsistent cpu_has_fpu */
+		r = !!cpu_has_fpu;
+		break;
+	case KVM_CAP_LOONGARCH_VZ:
+		/* Report the KVM LoongArch version */
+		r = KVM_LOONGARCH_VERSION;
+		break;
+	default:
+		r = 0;
+		break;
+	}
+
+	return r;
+}
+
+long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
-- 
2.31.1



* [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (3 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 04/29] LoongArch: KVM: Implement VM related functions Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 18:57   ` WANG Xuerui
  2023-02-21  4:44   ` Xi Ruoyao
  2023-02-20  6:57 ` [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
                   ` (23 subsequent siblings)
  28 siblings, 2 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Add the LoongArch vcpu related header files, including the vcpu csr
information, irq number definitions, and some vcpu interfaces.
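
As a usage sketch of the interrupt helpers declared in kvm_vcpu.h below
(_kvm_queue_irq(), _kvm_dequeue_irq() and LARCH_INT_TIMER are all defined
in this patch; the timer-expiry context around them is assumed):

	/* On stable-timer expiry, mark the timer interrupt pending... */
	_kvm_queue_irq(vcpu, LARCH_INT_TIMER);

	/* ...and clear it again once the interrupt has been serviced */
	_kvm_dequeue_irq(vcpu, LARCH_INT_TIMER);

The pending bits in vcpu->arch.irq_pending are presumably folded into the
guest's interrupt state by _kvm_deliver_intr() before guest entry.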

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/cpu-info.h  |  13 ++
 arch/loongarch/include/asm/kvm_vcpu.h  | 112 ++++++++++++++
 arch/loongarch/include/asm/loongarch.h | 195 ++++++++++++++++++++++++-
 arch/loongarch/kvm/trace.h             | 137 +++++++++++++++++
 4 files changed, 451 insertions(+), 6 deletions(-)
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/kvm/trace.h

diff --git a/arch/loongarch/include/asm/cpu-info.h b/arch/loongarch/include/asm/cpu-info.h
index cd73a6f57..1b426a2ca 100644
--- a/arch/loongarch/include/asm/cpu-info.h
+++ b/arch/loongarch/include/asm/cpu-info.h
@@ -32,6 +32,15 @@ struct cache_desc {
 #define CACHE_LEVEL_MAX		3
 #define CACHE_LEAVES_MAX	6
 
+struct guest_info {
+	unsigned long		ases;
+	unsigned long		ases_dyn;
+	unsigned long		options;
+	unsigned long		options_dyn;
+	unsigned char		conf;
+	unsigned int		kscratch_mask;
+};
+
 struct cpuinfo_loongarch {
 	u64			asid_cache;
 	unsigned long		asid_mask;
@@ -60,6 +69,10 @@ struct cpuinfo_loongarch {
 	unsigned int		watch_dreg_count;   /* Number data breakpoints */
 	unsigned int		watch_ireg_count;   /* Number instruction breakpoints */
 	unsigned int		watch_reg_use_cnt; /* min(NUM_WATCH_REGS, watch_dreg_count + watch_ireg_count), Usable by ptrace */
+
+	/* VZ & Guest features */
+	struct guest_info	guest;
+	unsigned long		guest_cfg;
 } __aligned(SMP_CACHE_BYTES);
 
 extern struct cpuinfo_loongarch cpu_data[];
diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
new file mode 100644
index 000000000..66ec9bc52
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_vcpu.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
+#define __ASM_LOONGARCH_KVM_VCPU_H__
+
+#include <linux/kvm_host.h>
+#include <asm/loongarch.h>
+#include <asm/kvm_host.h>
+
+#define LARCH_INT_SIP0				0
+#define LARCH_INT_SIP1				1
+#define LARCH_INT_IP0				2
+#define LARCH_INT_IP1				3
+#define LARCH_INT_IP2				4
+#define LARCH_INT_IP3				5
+#define LARCH_INT_IP4				6
+#define LARCH_INT_IP5				7
+#define LARCH_INT_IP6				8
+#define LARCH_INT_IP7				9
+#define LARCH_INT_PMU				10
+#define LARCH_INT_TIMER				11
+#define LARCH_INT_IPI				12
+#define LOONGARCH_EXC_MAX			(LARCH_INT_IPI + 1)
+#define LOONGARCH_EXC_IPNUM			(LOONGARCH_EXC_MAX)
+
+/* Controlled by guest CSR 0x5 (ESTAT) */
+#define CPU_SIP0				(_ULCAST_(1))
+#define CPU_SIP1				(_ULCAST_(1) << 1)
+#define CPU_PMU					(_ULCAST_(1) << 10)
+#define CPU_TIMER				(_ULCAST_(1) << 11)
+#define CPU_IPI					(_ULCAST_(1) << 12)
+
+/* Controlled by guest CSR 0x52 exception VIP,
+ * aligned to ESTAT bits 5~12
+ */
+#define CPU_IP0					(_ULCAST_(1))
+#define CPU_IP1					(_ULCAST_(1) << 1)
+#define CPU_IP2					(_ULCAST_(1) << 2)
+#define CPU_IP3					(_ULCAST_(1) << 3)
+#define CPU_IP4					(_ULCAST_(1) << 4)
+#define CPU_IP5					(_ULCAST_(1) << 5)
+#define CPU_IP6					(_ULCAST_(1) << 6)
+#define CPU_IP7					(_ULCAST_(1) << 7)
+
+#define MNSEC_PER_SEC				(NSEC_PER_SEC >> 20)
+
+/* KVM_IRQ_LINE irq field index values */
+#define KVM_LOONGSON_IRQ_TYPE_SHIFT		24
+#define KVM_LOONGSON_IRQ_TYPE_MASK		0xff
+#define KVM_LOONGSON_IRQ_VCPU_SHIFT		16
+#define KVM_LOONGSON_IRQ_VCPU_MASK		0xff
+#define KVM_LOONGSON_IRQ_NUM_SHIFT		0
+#define KVM_LOONGSON_IRQ_NUM_MASK		0xffff
+
+/* irq_type field */
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IP		0
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IO		1
+#define KVM_LOONGSON_IRQ_TYPE_HT		2
+#define KVM_LOONGSON_IRQ_TYPE_MSI		3
+#define KVM_LOONGSON_IRQ_TYPE_IOAPIC		4
+#define KVM_LOONGSON_IRQ_TYPE_ROUTE		5
+
+/* out-of-kernel GIC cpu interrupt injection irq_number field */
+#define KVM_LOONGSON_IRQ_CPU_IRQ		0
+#define KVM_LOONGSON_IRQ_CPU_FIQ		1
+#define KVM_LOONGSON_CPU_IP_NUM			8
+
+typedef union loongarch_instruction  larch_inst;
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
+int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
+int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
+int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
+int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
+int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
+
+void kvm_own_fpu(struct kvm_vcpu *vcpu);
+void kvm_lose_fpu(struct kvm_vcpu *vcpu);
+void kvm_save_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fcsr(struct loongarch_fpu *fpu);
+
+void kvm_acquire_timer(struct kvm_vcpu *vcpu);
+void kvm_reset_timer(struct kvm_vcpu *vcpu);
+enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
+void kvm_restore_timer(struct kvm_vcpu *vcpu);
+void kvm_save_timer(struct kvm_vcpu *vcpu);
+
+/*
+ * Loongarch KVM guest interrupt handling.
+ */
+static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	set_bit(irq, &vcpu->arch.irq_pending);
+	clear_bit(irq, &vcpu->arch.irq_clear);
+}
+
+static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	clear_bit(irq, &vcpu->arch.irq_pending);
+	set_bit(irq, &vcpu->arch.irq_clear);
+}
+
+#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
index 7f8d57a61..7b74605dd 100644
--- a/arch/loongarch/include/asm/loongarch.h
+++ b/arch/loongarch/include/asm/loongarch.h
@@ -236,6 +236,44 @@ static __always_inline u64 csr_xchg64(u64 val, u64 mask, u32 reg)
 	return __csrxchg_d(val, mask, reg);
 }
 
+/* GCSR */
+static inline u64 gcsr_read(u32 reg)
+{
+	u64 val = 0;
+
+	asm volatile (
+		"parse_r __reg, %[val]\n\t"
+		".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
+		: [val] "+r" (val)
+		: [reg] "i" (reg)
+		: "memory");
+
+	return val;
+}
+
+static inline void gcsr_write(u64 val, u32 reg)
+{
+	asm volatile (
+		"parse_r __reg, %[val]\n\t"
+		".word 0x5 << 24 | %[reg] << 10 | 1 << 5 | __reg\n\t"
+		: [val] "+r" (val)
+		: [reg] "i" (reg)
+		: "memory");
+}
+
+static inline u64 gcsr_xchg(u64 val, u64 mask, u32 reg)
+{
+	asm volatile (
+		"parse_r __rd, %[val]\n\t"
+		"parse_r __rj, %[mask]\n\t"
+		".word 0x5 << 24 | %[reg] << 10 | __rj << 5 | __rd\n\t"
+		: [val] "+r" (val)
+		: [mask] "r" (mask), [reg] "i" (reg)
+		: "memory");
+
+	return val;
+}
+
 /* IOCSR */
 static __always_inline u32 iocsr_read32(u32 reg)
 {
@@ -309,6 +347,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define LOONGARCH_CSR_ECFG		0x4	/* Exception config */
 #define  CSR_ECFG_VS_SHIFT		16
 #define  CSR_ECFG_VS_WIDTH		3
+#define  CSR_ECFG_VS_SHIFT_END		(CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
 #define  CSR_ECFG_VS			(_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
 #define  CSR_ECFG_IM_SHIFT		0
 #define  CSR_ECFG_IM_WIDTH		13
@@ -397,13 +436,14 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define  CSR_TLBLO1_V			(_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
 
 #define LOONGARCH_CSR_GTLBC		0x15	/* Guest TLB control */
-#define  CSR_GTLBC_RID_SHIFT		16
-#define  CSR_GTLBC_RID_WIDTH		8
-#define  CSR_GTLBC_RID			(_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
+#define  CSR_GTLBC_TGID_SHIFT		16
+#define  CSR_GTLBC_TGID_WIDTH		8
+#define  CSR_GTLBC_TGID_SHIFT_END	(CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
+#define  CSR_GTLBC_TGID			(_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
 #define  CSR_GTLBC_TOTI_SHIFT		13
 #define  CSR_GTLBC_TOTI			(_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
-#define  CSR_GTLBC_USERID_SHIFT		12
-#define  CSR_GTLBC_USERID		(_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
+#define  CSR_GTLBC_USETGID_SHIFT	12
+#define  CSR_GTLBC_USETGID		(_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
 #define  CSR_GTLBC_GMTLBSZ_SHIFT	0
 #define  CSR_GTLBC_GMTLBSZ_WIDTH	6
 #define  CSR_GTLBC_GMTLBSZ		(_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
@@ -555,6 +595,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define LOONGARCH_CSR_GSTAT		0x50	/* Guest status */
 #define  CSR_GSTAT_GID_SHIFT		16
 #define  CSR_GSTAT_GID_WIDTH		8
+#define  CSR_GSTAT_GID_SHIFT_END	(CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
 #define  CSR_GSTAT_GID			(_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
 #define  CSR_GSTAT_GIDBIT_SHIFT		4
 #define  CSR_GSTAT_GIDBIT_WIDTH		6
@@ -605,6 +646,12 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define  CSR_GCFG_MATC_GUEST		(_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
 #define  CSR_GCFG_MATC_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
 #define  CSR_GCFG_MATC_NEST		(_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
+#define  CSR_GCFG_MATP_SHITF		0
+#define  CSR_GCFG_MATP_WIDTH		4
+#define  CSR_GCFG_MATP_MASK		(_ULCAST_(0x3) << CSR_GCFG_MATP_SHITF)
+#define  CSR_GCFG_MATP_GUEST		(_ULCAST_(0x0) << CSR_GCFG_MATP_SHITF)
+#define  CSR_GCFG_MATP_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATP_SHITF)
+#define  CSR_GCFG_MATP_NEST		(_ULCAST_(0x2) << CSR_GCFG_MATP_SHITF)
 
 #define LOONGARCH_CSR_GINTC		0x52	/* Guest interrupt control */
 #define  CSR_GINTC_HC_SHIFT		16
@@ -1273,6 +1320,131 @@ static inline void write_csr_tlbrefill_pagesize(unsigned int size)
 #define write_csr_perfctrl3(val)	csr_write64(val, LOONGARCH_CSR_PERFCTRL3)
 #define write_csr_perfcntr3(val)	csr_write64(val, LOONGARCH_CSR_PERFCNTR3)
 
+/* Guest related CSRS */
+#define read_csr_gtlbc()		csr_read64(LOONGARCH_CSR_GTLBC)
+#define write_csr_gtlbc(val)		csr_write64(val, LOONGARCH_CSR_GTLBC)
+#define read_csr_trgp()			csr_read64(LOONGARCH_CSR_TRGP)
+#define read_csr_gcfg()			csr_read64(LOONGARCH_CSR_GCFG)
+#define write_csr_gcfg(val)		csr_write64(val, LOONGARCH_CSR_GCFG)
+#define read_csr_gstat()		csr_read64(LOONGARCH_CSR_GSTAT)
+#define write_csr_gstat(val)		csr_write64(val, LOONGARCH_CSR_GSTAT)
+#define read_csr_gintc()		csr_read64(LOONGARCH_CSR_GINTC)
+#define write_csr_gintc(val)		csr_write64(val, LOONGARCH_CSR_GINTC)
+#define read_csr_gcntc()		csr_read64(LOONGARCH_CSR_GCNTC)
+#define write_csr_gcntc(val)		csr_write64(val, LOONGARCH_CSR_GCNTC)
+
+/* Guest CSRS read and write */
+#define read_gcsr_crmd()		gcsr_read(LOONGARCH_CSR_CRMD)
+#define write_gcsr_crmd(val)		gcsr_write(val, LOONGARCH_CSR_CRMD)
+#define read_gcsr_prmd()		gcsr_read(LOONGARCH_CSR_PRMD)
+#define write_gcsr_prmd(val)		gcsr_write(val, LOONGARCH_CSR_PRMD)
+#define read_gcsr_euen()		gcsr_read(LOONGARCH_CSR_EUEN)
+#define write_gcsr_euen(val)		gcsr_write(val, LOONGARCH_CSR_EUEN)
+#define read_gcsr_misc()		gcsr_read(LOONGARCH_CSR_MISC)
+#define write_gcsr_misc(val)		gcsr_write(val, LOONGARCH_CSR_MISC)
+#define read_gcsr_ecfg()		gcsr_read(LOONGARCH_CSR_ECFG)
+#define write_gcsr_ecfg(val)		gcsr_write(val, LOONGARCH_CSR_ECFG)
+#define read_gcsr_estat()		gcsr_read(LOONGARCH_CSR_ESTAT)
+#define write_gcsr_estat(val)		gcsr_write(val, LOONGARCH_CSR_ESTAT)
+#define read_gcsr_era()			gcsr_read(LOONGARCH_CSR_ERA)
+#define write_gcsr_era(val)		gcsr_write(val, LOONGARCH_CSR_ERA)
+#define read_gcsr_badv()		gcsr_read(LOONGARCH_CSR_BADV)
+#define write_gcsr_badv(val)		gcsr_write(val, LOONGARCH_CSR_BADV)
+#define read_gcsr_badi()		gcsr_read(LOONGARCH_CSR_BADI)
+#define write_gcsr_badi(val)		gcsr_write(val, LOONGARCH_CSR_BADI)
+#define read_gcsr_eentry()		gcsr_read(LOONGARCH_CSR_EENTRY)
+#define write_gcsr_eentry(val)		gcsr_write(val, LOONGARCH_CSR_EENTRY)
+
+#define read_gcsr_tlbidx()		gcsr_read(LOONGARCH_CSR_TLBIDX)
+#define write_gcsr_tlbidx(val)		gcsr_write(val, LOONGARCH_CSR_TLBIDX)
+#define read_gcsr_tlbhi()		gcsr_read(LOONGARCH_CSR_TLBEHI)
+#define write_gcsr_tlbhi(val)		gcsr_write(val, LOONGARCH_CSR_TLBEHI)
+#define read_gcsr_tlblo0()		gcsr_read(LOONGARCH_CSR_TLBELO0)
+#define write_gcsr_tlblo0(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO0)
+#define read_gcsr_tlblo1()		gcsr_read(LOONGARCH_CSR_TLBELO1)
+#define write_gcsr_tlblo1(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO1)
+
+#define read_gcsr_asid()		gcsr_read(LOONGARCH_CSR_ASID)
+#define write_gcsr_asid(val)		gcsr_write(val, LOONGARCH_CSR_ASID)
+#define read_gcsr_pgdl()		gcsr_read(LOONGARCH_CSR_PGDL)
+#define write_gcsr_pgdl(val)		gcsr_write(val, LOONGARCH_CSR_PGDL)
+#define read_gcsr_pgdh()		gcsr_read(LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgdh(val)		gcsr_write(val, LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgd(val)		gcsr_write(val, LOONGARCH_CSR_PGD)
+#define read_gcsr_pgd()			gcsr_read(LOONGARCH_CSR_PGD)
+#define read_gcsr_pwctl0()		gcsr_read(LOONGARCH_CSR_PWCTL0)
+#define write_gcsr_pwctl0(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL0)
+#define read_gcsr_pwctl1()		gcsr_read(LOONGARCH_CSR_PWCTL1)
+#define write_gcsr_pwctl1(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL1)
+#define read_gcsr_stlbpgsize()		gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
+#define write_gcsr_stlbpgsize(val)	gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
+#define read_gcsr_rvacfg()		gcsr_read(LOONGARCH_CSR_RVACFG)
+#define write_gcsr_rvacfg(val)		gcsr_write(val, LOONGARCH_CSR_RVACFG)
+
+#define read_gcsr_cpuid()		gcsr_read(LOONGARCH_CSR_CPUID)
+#define write_gcsr_cpuid(val)		gcsr_write(val, LOONGARCH_CSR_CPUID)
+#define read_gcsr_prcfg1()		gcsr_read(LOONGARCH_CSR_PRCFG1)
+#define write_gcsr_prcfg1(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG1)
+#define read_gcsr_prcfg2()		gcsr_read(LOONGARCH_CSR_PRCFG2)
+#define write_gcsr_prcfg2(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG2)
+#define read_gcsr_prcfg3()		gcsr_read(LOONGARCH_CSR_PRCFG3)
+#define write_gcsr_prcfg3(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG3)
+
+#define read_gcsr_kscratch0()		gcsr_read(LOONGARCH_CSR_KS0)
+#define write_gcsr_kscratch0(val)	gcsr_write(val, LOONGARCH_CSR_KS0)
+#define read_gcsr_kscratch1()		gcsr_read(LOONGARCH_CSR_KS1)
+#define write_gcsr_kscratch1(val)	gcsr_write(val, LOONGARCH_CSR_KS1)
+#define read_gcsr_kscratch2()		gcsr_read(LOONGARCH_CSR_KS2)
+#define write_gcsr_kscratch2(val)	gcsr_write(val, LOONGARCH_CSR_KS2)
+#define read_gcsr_kscratch3()		gcsr_read(LOONGARCH_CSR_KS3)
+#define write_gcsr_kscratch3(val)	gcsr_write(val, LOONGARCH_CSR_KS3)
+#define read_gcsr_kscratch4()		gcsr_read(LOONGARCH_CSR_KS4)
+#define write_gcsr_kscratch4(val)	gcsr_write(val, LOONGARCH_CSR_KS4)
+#define read_gcsr_kscratch5()		gcsr_read(LOONGARCH_CSR_KS5)
+#define write_gcsr_kscratch5(val)	gcsr_write(val, LOONGARCH_CSR_KS5)
+#define read_gcsr_kscratch6()		gcsr_read(LOONGARCH_CSR_KS6)
+#define write_gcsr_kscratch6(val)	gcsr_write(val, LOONGARCH_CSR_KS6)
+#define read_gcsr_kscratch7()		gcsr_read(LOONGARCH_CSR_KS7)
+#define write_gcsr_kscratch7(val)	gcsr_write(val, LOONGARCH_CSR_KS7)
+
+#define read_gcsr_timerid()		gcsr_read(LOONGARCH_CSR_TMID)
+#define write_gcsr_timerid(val)		gcsr_write(val, LOONGARCH_CSR_TMID)
+#define read_gcsr_timercfg()		gcsr_read(LOONGARCH_CSR_TCFG)
+#define write_gcsr_timercfg(val)	gcsr_write(val, LOONGARCH_CSR_TCFG)
+#define read_gcsr_timertick()		gcsr_read(LOONGARCH_CSR_TVAL)
+#define write_gcsr_timertick(val)	gcsr_write(val, LOONGARCH_CSR_TVAL)
+#define read_gcsr_timeroffset()		gcsr_read(LOONGARCH_CSR_CNTC)
+#define write_gcsr_timeroffset(val)	gcsr_write(val, LOONGARCH_CSR_CNTC)
+
+#define read_gcsr_llbctl()		gcsr_read(LOONGARCH_CSR_LLBCTL)
+#define write_gcsr_llbctl(val)		gcsr_write(val, LOONGARCH_CSR_LLBCTL)
+
+#define read_gcsr_tlbrentry()		gcsr_read(LOONGARCH_CSR_TLBRENTRY)
+#define write_gcsr_tlbrentry(val)	gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
+#define read_gcsr_tlbrbadv()		gcsr_read(LOONGARCH_CSR_TLBRBADV)
+#define write_gcsr_tlbrbadv(val)	gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
+#define read_gcsr_tlbrera()		gcsr_read(LOONGARCH_CSR_TLBRERA)
+#define write_gcsr_tlbrera(val)		gcsr_write(val, LOONGARCH_CSR_TLBRERA)
+#define read_gcsr_tlbrsave()		gcsr_read(LOONGARCH_CSR_TLBRSAVE)
+#define write_gcsr_tlbrsave(val)	gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
+#define read_gcsr_tlbrelo0()		gcsr_read(LOONGARCH_CSR_TLBRELO0)
+#define write_gcsr_tlbrelo0(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
+#define read_gcsr_tlbrelo1()		gcsr_read(LOONGARCH_CSR_TLBRELO1)
+#define write_gcsr_tlbrelo1(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
+#define read_gcsr_tlbrehi()		gcsr_read(LOONGARCH_CSR_TLBREHI)
+#define write_gcsr_tlbrehi(val)		gcsr_write(val, LOONGARCH_CSR_TLBREHI)
+#define read_gcsr_tlbrprmd()		gcsr_read(LOONGARCH_CSR_TLBRPRMD)
+#define write_gcsr_tlbrprmd(val)	gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
+
+#define read_gcsr_directwin0()		gcsr_read(LOONGARCH_CSR_DMWIN0)
+#define write_gcsr_directwin0(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN0)
+#define read_gcsr_directwin1()		gcsr_read(LOONGARCH_CSR_DMWIN1)
+#define write_gcsr_directwin1(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN1)
+#define read_gcsr_directwin2()		gcsr_read(LOONGARCH_CSR_DMWIN2)
+#define write_gcsr_directwin2(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN2)
+#define read_gcsr_directwin3()		gcsr_read(LOONGARCH_CSR_DMWIN3)
+#define write_gcsr_directwin3(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN3)
+
 /*
  * Manipulate bits in a register.
  */
@@ -1315,15 +1487,26 @@ change_##name(unsigned long change, unsigned long val)		\
 }
 
 #define __BUILD_CSR_OP(name)	__BUILD_CSR_COMMON(csr_##name)
+#define __BUILD_GCSR_OP(name)	__BUILD_CSR_COMMON(gcsr_##name)
 
 __BUILD_CSR_OP(euen)
 __BUILD_CSR_OP(ecfg)
 __BUILD_CSR_OP(tlbidx)
+__BUILD_CSR_OP(gcfg)
+__BUILD_CSR_OP(gstat)
+__BUILD_CSR_OP(gtlbc)
+__BUILD_CSR_OP(gintc)
+__BUILD_GCSR_OP(llbctl)
+__BUILD_GCSR_OP(tlbidx)
 
 #define set_csr_estat(val)	\
 	csr_xchg32(val, val, LOONGARCH_CSR_ESTAT)
 #define clear_csr_estat(val)	\
 	csr_xchg32(~(val), val, LOONGARCH_CSR_ESTAT)
+#define set_gcsr_estat(val)	\
+	gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
+#define clear_gcsr_estat(val)	\
+	gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
 
 #endif /* __ASSEMBLY__ */
 
@@ -1408,7 +1591,7 @@ __BUILD_CSR_OP(tlbidx)
 #define EXCCODE_WATCH		19	/* Watch address reference */
 #define EXCCODE_BTDIS		20	/* Binary Trans. Disabled */
 #define EXCCODE_BTE		21	/* Binary Trans. Exception */
-#define EXCCODE_PSI		22	/* Guest Privileged Error */
+#define EXCCODE_GSPR		22	/* Guest Privileged Error */
 #define EXCCODE_HYP		23	/* Hypercall */
 #define EXCCODE_GCM		24	/* Guest CSR modified */
 	#define EXCSUBCODE_GCSC		0	/* Software caused */
diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
new file mode 100644
index 000000000..1813410e2
--- /dev/null
+++ b/arch/loongarch/kvm/trace.h
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_KVM_H
+
+#include <linux/tracepoint.h>
+#include <asm/kvm_csr.h>
+
+#undef	TRACE_SYSTEM
+#define TRACE_SYSTEM		kvm
+#define TRACE_INCLUDE_PATH	.
+#define TRACE_INCLUDE_FILE	trace
+
+/*
+ * Tracepoints for VM enters
+ */
+DECLARE_EVENT_CLASS(kvm_transition,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu),
+	TP_STRUCT__entry(
+		__field(unsigned long, pc)
+	),
+
+	TP_fast_assign(
+		__entry->pc = vcpu->arch.pc;
+	),
+
+	TP_printk("PC: 0x%08lx",
+		  __entry->pc)
+);
+
+DEFINE_EVENT(kvm_transition, kvm_enter,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_reenter,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_out,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+/* Further exit reasons */
+#define KVM_TRACE_EXIT_IDLE		64
+#define KVM_TRACE_EXIT_CACHE		65
+#define KVM_TRACE_EXIT_SIGNAL		66
+
+/* Tracepoints for VM exits */
+#define kvm_trace_symbol_exit_types					\
+	({ KVM_TRACE_EXIT_IDLE,		"IDLE" },			\
+	{ KVM_TRACE_EXIT_CACHE,		"CACHE" },			\
+	{ KVM_TRACE_EXIT_SIGNAL,	"Signal" })
+
+TRACE_EVENT(kvm_exit,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	    TP_ARGS(vcpu, reason),
+	    TP_STRUCT__entry(
+			__field(unsigned long, pc)
+			__field(unsigned int, reason)
+	    ),
+
+	    TP_fast_assign(
+			__entry->pc = vcpu->arch.pc;
+			__entry->reason = reason;
+	    ),
+
+	    TP_printk("[%s]PC: 0x%08lx",
+		      __print_symbolic(__entry->reason,
+				       kvm_trace_symbol_exit_types),
+		      __entry->pc)
+);
+
+#define KVM_TRACE_AUX_RESTORE		0
+#define KVM_TRACE_AUX_SAVE		1
+#define KVM_TRACE_AUX_ENABLE		2
+#define KVM_TRACE_AUX_DISABLE		3
+#define KVM_TRACE_AUX_DISCARD		4
+
+#define KVM_TRACE_AUX_FPU		1
+
+#define kvm_trace_symbol_aux_op				\
+	({ KVM_TRACE_AUX_RESTORE, "restore" },		\
+	{ KVM_TRACE_AUX_SAVE,    "save" },		\
+	{ KVM_TRACE_AUX_ENABLE,  "enable" },		\
+	{ KVM_TRACE_AUX_DISABLE, "disable" },		\
+	{ KVM_TRACE_AUX_DISCARD, "discard" })
+
+#define kvm_trace_symbol_aux_state			\
+	{ KVM_TRACE_AUX_FPU,     "FPU" },		\
+
+TRACE_EVENT(kvm_aux,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
+		     unsigned int state),
+	    TP_ARGS(vcpu, op, state),
+	    TP_STRUCT__entry(
+			__field(unsigned long, pc)
+			__field(u8, op)
+			__field(u8, state)
+	    ),
+
+	    TP_fast_assign(
+			__entry->pc = vcpu->arch.pc;
+			__entry->op = op;
+			__entry->state = state;
+	    ),
+
+	    TP_printk("%s %s PC: 0x%08lx",
+		      __print_symbolic(__entry->op,
+				       kvm_trace_symbol_aux_op),
+		      __print_symbolic(__entry->state,
+				       kvm_trace_symbol_aux_state),
+		      __entry->pc)
+);
+
+TRACE_EVENT(kvm_vpid_change,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
+	    TP_ARGS(vcpu, vpid),
+	    TP_STRUCT__entry(
+			__field(unsigned long, vpid)
+	    ),
+
+	    TP_fast_assign(
+			__entry->vpid = vpid;
+	    ),
+
+	    TP_printk("vpid: 0x%08lx",
+		      __entry->vpid)
+);
+
+#endif /* _TRACE_KVM_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
-- 
2.31.1



* [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (4 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 17:53   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
                   ` (22 subsequent siblings)
  28 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the vcpu create and destroy interfaces, saving some info
into the vcpu arch structure, such as the vcpu exception entry and
the enter-guest pointer. Init the vcpu timer and set the address
translation mode at vcpu creation.
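
This path is reached through the standard KVM fd chain; a minimal sketch
using only the stock KVM API (error handling omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_first_vcpu(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd  = ioctl(kvm_fd, KVM_CREATE_VM, 0);	/* kvm_arch_init_vm() */

	/* Reaches kvm_arch_vcpu_create() below, for vcpu id 0 */
	return ioctl(vm_fd, KVM_CREATE_VCPU, 0);
}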

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 94 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100644 arch/loongarch/kvm/vcpu.c

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
new file mode 100644
index 000000000..4d355bcff
--- /dev/null
+++ b/arch/loongarch/kvm/vcpu.c
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/fpu.h>
+#include <asm/loongarch.h>
+#include <asm/setup.h>
+#include <asm/time.h>
+#include <asm/kvm_host.h>
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
+{
+	return 0;
+}
+
+int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+{
+	int i;
+	unsigned long timer_hz;
+	struct loongarch_csrs *csr;
+	struct kvm_context *kvm_context = per_cpu_ptr(vcpu->kvm->arch.vmcs, 0);
+
+	for_each_possible_cpu(i)
+		vcpu->arch.vpid[i] = 0;
+
+	hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	vcpu->arch.swtimer.function = kvm_swtimer_wakeup;
+	vcpu->arch.fpu_enabled = true;
+	vcpu->kvm->arch.online_vcpus = vcpu->vcpu_id + 1;
+
+	vcpu->arch.guest_eentry = (unsigned long)kvm_context->kvm_eentry;
+	vcpu->arch.vcpu_run = kvm_context->kvm_enter_guest;
+	vcpu->arch.handle_exit = _kvm_handle_exit;
+	vcpu->arch.csr = kzalloc(sizeof(struct loongarch_csrs), GFP_KERNEL);
+	if (!vcpu->arch.csr)
+		return -ENOMEM;
+
+	/*
+	 * All kvm exceptions share one exception entry, and the host <-> guest
+	 * switch also switches the ecfg.VS field, so keep the host ecfg.VS info here
+	 */
+	vcpu->arch.host_ecfg = (read_csr_ecfg() & CSR_ECFG_VS);
+
+	/* Init */
+	vcpu->arch.last_sched_cpu = -1;
+	vcpu->arch.last_exec_cpu = -1;
+
+	/*
+	 * Initialize guest register state to valid architectural reset state.
+	 */
+	timer_hz = calc_const_freq();
+	kvm_init_timer(vcpu, timer_hz);
+
+	/* Set guest CRMD to direct address translation mode at init */
+	csr = vcpu->arch.csr;
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_CRMD, CSR_CRMD_DA);
+
+	/* Set cpuid */
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TMID, vcpu->vcpu_id);
+
+	/* start with no pending virtual guest interrupts */
+	csr->csrs[LOONGARCH_CSR_GINTC] = 0;
+
+	return 0;
+}
+
+void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+{
+}
+
+void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	int cpu;
+	struct kvm_context *context;
+
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kfree(vcpu->arch.csr);
+
+	/*
+	 * If the VCPU is freed and reused as another VCPU, we don't want the
+	 * matching pointer wrongly hanging around in last_vcpu.
+	 */
+	for_each_possible_cpu(cpu) {
+		context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+		if (context->last_vcpu == vcpu)
+			context->last_vcpu = NULL;
+	}
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
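
For context, kvm_arch_vcpu_create() above is reached through the generic
KVM device model. A minimal user-space sketch, using only the generic KVM
API (error handling elided):

   #include <fcntl.h>
   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   int main(void)
   {
           int kvm  = open("/dev/kvm", O_RDWR);
           int vm   = ioctl(kvm, KVM_CREATE_VM, 0);  /* kvm_arch_init_vm() */
           int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0); /* kvm_arch_vcpu_create() */

           return vcpu < 0 ? 1 : 0;
   }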

* [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (5 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 18:44   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
                   ` (21 subsequent siblings)
  28 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the vcpu run interface: complete pending mmio and iocsr
reads, deliver pending interrupts, and lose the fpu before the vcpu
enters the guest.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 81 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 4d355bcff..571ac8b9d 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -18,6 +18,26 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 	return 0;
 }
 
+/* Returns 1 if the guest TLB may be clobbered */
+static int _kvm_check_requests(struct kvm_vcpu *vcpu, int cpu)
+{
+	int ret = 0;
+	int i;
+
+	if (!kvm_request_pending(vcpu))
+		return 0;
+
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
+		/* Drop all vpids for this VCPU */
+		for_each_possible_cpu(i)
+			vcpu->arch.vpid[i] = 0;
+		/* This will clobber guest TLB contents too */
+		ret = 1;
+	}
+
+	return ret;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	int i;
@@ -92,3 +112,64 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 			context->last_vcpu = NULL;
 	}
 }
+
+int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+{
+	int r = -EINTR;
+	int cpu;
+	struct kvm_run *run = vcpu->run;
+
+	vcpu_load(vcpu);
+
+	kvm_sigset_activate(vcpu);
+
+	if (vcpu->mmio_needed) {
+		if (!vcpu->mmio_is_write)
+			_kvm_complete_mmio_read(vcpu, run);
+		vcpu->mmio_needed = 0;
+	}
+
+	if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) {
+		if (!run->iocsr_io.is_write)
+			_kvm_complete_iocsr_read(vcpu, run);
+	}
+
+	/* clear exit_reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	if (run->immediate_exit)
+		goto out;
+
+	lose_fpu(1);
+
+	local_irq_disable();
+	guest_enter_irqoff();
+	trace_kvm_enter(vcpu);
+
+	/*
+	 * Make sure the read of VCPU requests in vcpu_run() callback is not
+	 * reordered ahead of the write to vcpu->mode, or we could miss a TLB
+	 * flush request while the requester sees the VCPU as outside of guest
+	 * mode and not needing an IPI.
+	 */
+	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+
+	cpu = smp_processor_id();
+	kvm_acquire_timer(vcpu);
+	/* Check if we have any exceptions/interrupts pending */
+	_kvm_deliver_intr(vcpu);
+
+	_kvm_check_requests(vcpu, cpu);
+	_kvm_check_vmid(vcpu, cpu);
+	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+	r = vcpu->arch.vcpu_run(run, vcpu);
+
+	trace_kvm_out(vcpu);
+	guest_exit_irqoff();
+	local_irq_enable();
+
+out:
+	kvm_sigset_deactivate(vcpu);
+
+	vcpu_put(vcpu);
+	return r;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
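
A hedged sketch of the user-space half of this run loop, using only the
generic KVM API; the KVM_EXIT_LOONGARCH_IOCSR completion path added above
is series-specific, so only the generic exits are shown:

   #include <errno.h>
   #include <sys/ioctl.h>
   #include <sys/mman.h>
   #include <linux/kvm.h>

   static void run_vcpu(int kvm_fd, int vcpu_fd)
   {
           long sz = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
           struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, vcpu_fd, 0);

           for (;;) {
                   if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno != EINTR)
                           return;
                   switch (run->exit_reason) {
                   case KVM_EXIT_MMIO:
                           /*
                            * Emulate the access here; on the next KVM_RUN
                            * the kernel completes a read via
                            * _kvm_complete_mmio_read() as shown above.
                            */
                           break;
                   case KVM_EXIT_INTR:
                           break;  /* host signal pending; just retry */
                   default:
                           return;
                   }
           }
   }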

* [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (6 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 17:46   ` Paolo Bonzini
  2023-02-20 18:45   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 09/29] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
                   ` (20 subsequent siblings)
  28 siblings, 2 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the vcpu handle exit interface: get the exit code from the
ESTAT register and dispatch it through the kvm exception vector.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 86 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 571ac8b9d..e08a4faa0 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -38,6 +38,92 @@ static int _kvm_check_requests(struct kvm_vcpu *vcpu, int cpu)
 	return ret;
 }
 
+/*
+ * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
+ */
+static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	unsigned long exst = vcpu->arch.host_estat;
+	u32 intr = exst & 0x1fff; /* ignore NMI */
+	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
+	int ret = RESUME_GUEST, cpu;
+
+	vcpu->mode = OUTSIDE_GUEST_MODE;
+
+	/* Set a default exit reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	run->ready_for_interrupt_injection = 1;
+
+	/* Re-enable host interrupts before handling the exit */
+	local_irq_enable();
+
+	kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
+			__func__, exst, opc, run, vcpu);
+	trace_kvm_exit(vcpu, exccode);
+	if (exccode) {
+		ret = _kvm_handle_fault(vcpu, exccode);
+	} else {
+		WARN(!intr, "suspicious vm exiting");
+		++vcpu->stat.int_exits;
+
+		ret = RESUME_GUEST;
+	}
+
+	cond_resched();
+
+	local_irq_disable();
+
+	if (ret == RESUME_GUEST)
+		kvm_acquire_timer(vcpu);
+
+	if (!(ret & RESUME_HOST)) {
+		_kvm_deliver_intr(vcpu);
+		/* Only check for signals if not already exiting to userspace */
+		if (signal_pending(current)) {
+			run->exit_reason = KVM_EXIT_INTR;
+			ret = (-EINTR << 2) | RESUME_HOST;
+			++vcpu->stat.signal_exits;
+			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
+		}
+	}
+
+	if (ret == RESUME_GUEST) {
+		trace_kvm_reenter(vcpu);
+
+		/*
+		 * Make sure the read of VCPU requests in vcpu_reenter()
+		 * callback is not reordered ahead of the write to vcpu->mode,
+		 * or we could miss a TLB flush request while the requester sees
+		 * the VCPU as outside of guest mode and not needing an IPI.
+		 */
+		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+
+		cpu = smp_processor_id();
+		_kvm_check_requests(vcpu, cpu);
+		_kvm_check_vmid(vcpu, cpu);
+		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+
+		/*
+		 * If FPU are enabled (i.e. the guest's FPU context
+		 * is live), restore FCSR0.
+		 */
+		if (_kvm_guest_has_fpu(&vcpu->arch) &&
+			read_csr_euen() & (CSR_EUEN_FPEN)) {
+			kvm_restore_fcsr(&vcpu->arch.fpu);
+		}
+	}
+
+	return ret;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	int i;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
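
On the return value encoding mentioned in the comment above: the low two
bits carry resume flags and the remaining bits carry an error code. The
values below are assumptions for illustration (the real definitions live
in the series' headers), chosen to be consistent with the
(-EINTR << 2) | RESUME_HOST use in this patch:

   #define RESUME_GUEST            0               /* re-enter the guest */
   #define RESUME_FLAG_NV          (1 << 0)        /* assumed flag bit */
   #define RESUME_FLAG_HOST        (1 << 1)        /* return to host user space */
   #define RESUME_HOST             RESUME_FLAG_HOST

   static inline int resume_errcode(int ret)
   {
           /* e.g. (-EINTR << 2) | RESUME_HOST  ->  -EINTR */
           return ret >> 2;
   }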

* [PATCH v2 09/29] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (7 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 10/29] LoongArch: KVM: Implement vcpu ENABLE_CAP, CHECK_EXTENSION ioctl interface Tianrui Zhao
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch vcpu get registers and set registers
operations; they are called when user space uses the ioctl interface
to get or set guest registers.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 375 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 375 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index e08a4faa0..4b7642bce 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,381 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	GET_HW_GCSR(id, LOONGARCH_CSR_CRMD, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRMD, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_EUEN, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_MISC, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ECFG, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ESTAT, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ERA, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_BADV, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_BADI, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_EENTRY, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBIDX, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBEHI, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBELO0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBELO1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_ASID, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PGDL, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PGDH, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PWCTL0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PWCTL1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_STLBPGSIZE, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_RVACFG, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_CPUID, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRCFG1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRCFG2, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_PRCFG3, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS2, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS3, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS4, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS5, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS6, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_KS7, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TMID, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TCFG, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TVAL, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_CNTC, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_LLBCTL, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRENTRY, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRBADV, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRERA, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRSAVE, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRELO0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRELO1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBREHI, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_TLBRPRMD, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN0, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN1, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN2, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_DMWIN3, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_MWPS, v);
+	GET_HW_GCSR(id, LOONGARCH_CSR_FWPS, v);
+
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL1, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL2, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRCTL, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO1, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO2, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRENTRY, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRERA, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRSAVE, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_CTAG, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_DEBUG, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_DERA, v);
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_DESAVE, v);
+
+	GET_SW_GCSR(csr, id, LOONGARCH_CSR_TINTCLR, v);
+
+	if (force && (id < CSR_ALL_SIZE)) {
+		*v = kvm_read_sw_gcsr(csr, id);
+		return 0;
+	}
+
+	return -1;
+}
+
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int ret;
+
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_CRMD, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PRMD, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_EUEN, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_MISC, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_ECFG, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_ERA, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_BADV, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_BADI, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_EENTRY, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBIDX, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBEHI, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBELO0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBELO1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_ASID, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PGDL, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PGDH, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PWCTL0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_PWCTL1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_STLBPGSIZE, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_RVACFG, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_CPUID, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS2, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS3, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS4, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS5, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS6, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_KS7, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TMID, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TCFG, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TVAL, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_CNTC, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_LLBCTL, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRENTRY, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRBADV, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRERA, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRSAVE, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRELO0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRELO1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBREHI, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_TLBRPRMD, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN0, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN1, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN2, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_DMWIN3, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_MWPS, v);
+	SET_HW_GCSR(csr, id, LOONGARCH_CSR_FWPS, v);
+
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL1, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_IMPCTL2, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRCTL, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO1, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRINFO2, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRENTRY, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRERA, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_MERRSAVE, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_CTAG, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_DEBUG, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_DERA, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_DESAVE, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PRCFG1, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PRCFG2, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PRCFG3, v);
+
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_PGD, v);
+	SET_SW_GCSR(csr, id, LOONGARCH_CSR_TINTCLR, v);
+
+	ret = -1;
+	switch (id) {
+	case LOONGARCH_CSR_ESTAT:
+		write_gcsr_estat(*v);
+		/* estat IP0~IP7 inject through guestexcept */
+		write_csr_gintc(((*v) >> 2)  & 0xff);
+		ret = 0;
+		break;
+	default:
+		if (force && (id < CSR_ALL_SIZE)) {
+			kvm_set_sw_gcsr(csr, id, *v);
+			ret = 0;
+		}
+		break;
+	}
+
+	return ret;
+}
+
+static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
+		const struct kvm_one_reg *reg, s64 *v)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int reg_idx, ret;
+
+	if ((reg->id & KVM_IOC_CSRID(0)) == KVM_IOC_CSRID(0)) {
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		ret = _kvm_getcsr(vcpu, reg_idx, v, 0);
+		if (ret == 0)
+			return ret;
+	}
+
+	switch (reg->id) {
+	case KVM_REG_LOONGARCH_COUNTER:
+		*v = drdtime() + vcpu->kvm->arch.time_offset;
+		break;
+	default:
+		if ((reg->id & KVM_REG_LOONGARCH_MASK) != KVM_REG_LOONGARCH_CSR)
+			return -EINVAL;
+
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		if (reg_idx < CSR_ALL_SIZE)
+			*v = kvm_read_sw_gcsr(csr, reg_idx);
+		else
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	int ret;
+	s64 v;
+
+	ret = _kvm_get_one_reg(vcpu, reg, &v);
+	if (ret)
+		return ret;
+
+	ret = -EINVAL;
+	if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) {
+		u64 __user *uaddr = (u64 __user *)(long)reg->addr;
+
+		ret = put_user(v, uaddr);
+	} else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) {
+		u32 __user *uaddr = (u32 __user *)(long)reg->addr;
+		u32 v32 = (u32)v;
+
+		ret = put_user(v32, uaddr);
+	}
+
+	return ret;
+}
+
+static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
+		const struct kvm_one_reg *reg,
+		s64 v)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int ret = 0;
+	unsigned long flags;
+	u64 val;
+	int reg_idx;
+
+	val = v;
+	if ((reg->id & KVM_IOC_CSRID(0)) == KVM_IOC_CSRID(0)) {
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		ret = _kvm_setcsr(vcpu, reg_idx, &val, 0);
+		if (ret == 0)
+			return ret;
+	}
+
+	switch (reg->id) {
+	case KVM_REG_LOONGARCH_COUNTER:
+		local_irq_save(flags);
+		/*
+		 * The guest timer offset is board-relative, not per-vcpu,
+		 * so on an SMP system it is only set once, by vcpu 0
+		 */
+		if (vcpu->vcpu_id == 0)
+			vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
+		write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
+		local_irq_restore(flags);
+		break;
+	case KVM_REG_LOONGARCH_VCPU_RESET:
+		kvm_reset_timer(vcpu);
+		memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
+		memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+		break;
+	default:
+		if ((reg->id & KVM_REG_LOONGARCH_MASK) != KVM_REG_LOONGARCH_CSR)
+			return -EINVAL;
+
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		if (reg_idx < CSR_ALL_SIZE)
+			kvm_write_sw_gcsr(csr, reg_idx, v);
+		else
+			return -EINVAL;
+	}
+	return ret;
+}
+
+static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	s64 v;
+	int ret;
+
+	ret = -EINVAL;
+	if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) {
+		u64 __user *uaddr;
+
+		uaddr = (u64 __user *)(long)reg->addr;
+		ret = get_user(v, uaddr);
+	} else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) {
+		u32 __user *uaddr;
+		s32 v32;
+
+		uaddr = (u32 __user *)(long)reg->addr;
+		ret = get_user(v32, uaddr);
+		v = (s64)v32;
+	}
+
+	if (ret)
+		return -EFAULT;
+
+	return _kvm_set_one_reg(vcpu, reg, v);
+}
+
+int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
+				  struct kvm_sregs *sregs)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+				  struct kvm_sregs *sregs)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	int i;
+
+	vcpu_load(vcpu);
+
+	for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		regs->gpr[i] = vcpu->arch.gprs[i];
+
+	regs->pc = vcpu->arch.pc;
+
+	vcpu_put(vcpu);
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	int i;
+
+	vcpu_load(vcpu);
+
+	for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		vcpu->arch.gprs[i] = regs->gpr[i];
+	vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
+	vcpu->arch.pc = regs->pc;
+
+	vcpu_put(vcpu);
+	return 0;
+}
+
+long kvm_arch_vcpu_ioctl(struct file *filp,
+			 unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+	long r;
+
+	vcpu_load(vcpu);
+
+	switch (ioctl) {
+	case KVM_SET_ONE_REG:
+	case KVM_GET_ONE_REG: {
+		struct kvm_one_reg reg;
+
+		r = -EFAULT;
+		if (copy_from_user(&reg, argp, sizeof(reg)))
+			break;
+		if (ioctl == KVM_SET_ONE_REG)
+			r = _kvm_set_reg(vcpu, &reg);
+		else
+			r = _kvm_get_reg(vcpu, &reg);
+		break;
+	}
+	default:
+		r = -ENOIOCTLCMD;
+		break;
+	}
+
+	vcpu_put(vcpu);
+	return r;
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
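
From user space these handlers are driven through the generic
KVM_{GET,SET}_ONE_REG ABI. A hedged sketch (the register id constant
comes from this series' uapi header; error handling elided):

   #include <stdint.h>
   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   static int get_guest_counter(int vcpu_fd, uint64_t *val)
   {
           struct kvm_one_reg reg = {
                   .id   = KVM_REG_LOONGARCH_COUNTER,      /* from this series */
                   .addr = (uint64_t)(uintptr_t)val,       /* put_user() target */
           };

           return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
   }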

* [PATCH v2 10/29] LoongArch: KVM: Implement vcpu ENABLE_CAP, CHECK_EXTENSION ioctl interface
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (8 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 09/29] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 11/29] LoongArch: KVM: Implement fpu related operations for vcpu Tianrui Zhao
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch vcpu KVM_ENABLE_CAP and KVM_CHECK_EXTENSION
ioctl interfaces.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 4b7642bce..2ad9d126e 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -356,6 +356,29 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	int r = 0;
+
+	if (!kvm_vm_ioctl_check_extension(vcpu->kvm, cap->cap))
+		return -EINVAL;
+	if (cap->flags)
+		return -EINVAL;
+	if (cap->args[0])
+		return -EINVAL;
+
+	switch (cap->cap) {
+	case KVM_CAP_LOONGARCH_FPU:
+		break;
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -379,6 +402,29 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			r = _kvm_get_reg(vcpu, &reg);
 		break;
 	}
+	case KVM_ENABLE_CAP: {
+		struct kvm_enable_cap cap;
+
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			break;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
+	case KVM_CHECK_EXTENSION: {
+		unsigned int ext;
+
+		if (copy_from_user(&ext, argp, sizeof(ext)))
+			return -EFAULT;
+		switch (ext) {
+		case KVM_CAP_LOONGARCH_FPU:
+			r = !!cpu_has_fpu;
+			break;
+		default:
+			r = 0;
+			break;
+		}
+		break;
+	}
 	default:
 		r = -ENOIOCTLCMD;
 		break;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
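
Note that the generic KVM_CHECK_EXTENSION ABI passes the capability
number directly as the ioctl argument, whereas the vcpu-level handler
above copies it from a user pointer; a caller matching this patch would
therefore look like the sketch below (KVM_CAP_LOONGARCH_FPU is defined
by this series):

   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   static int vcpu_has_fpu(int vcpu_fd)
   {
           unsigned int ext = KVM_CAP_LOONGARCH_FPU;

           /* returns 1 when the host has an fpu, 0 otherwise */
           return ioctl(vcpu_fd, KVM_CHECK_EXTENSION, &ext);
   }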

* [PATCH v2 11/29] LoongArch: KVM: Implement fpu related operations for vcpu
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (9 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 10/29] LoongArch: KVM: Implement vcpu ENABLE_CAP, CHECK_EXTENSION ioctl interface Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 12/29] LoongArch: KVM: Implement vcpu interrupt operations Tianrui Zhao
                   ` (17 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch fpu related interfaces for the vcpu, such as
get fpu, set fpu, own fpu and lose fpu.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 70 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 2ad9d126e..5c7216607 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -434,6 +434,76 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	return r;
 }
 
+int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+	int i = 0;
+
+	/* vcpu_load and vcpu_put are not needed here */
+	fpu->fcsr = vcpu->arch.fpu.fcsr;
+	fpu->fcc = vcpu->arch.fpu.fcc;
+	for (i = 0; i < NUM_FPU_REGS; i++)
+		memcpy(&fpu->fpr[i], &vcpu->arch.fpu.fpr[i], FPU_REG_WIDTH / 64);
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+	int i = 0;
+
+	/* vcpu_load and vcpu_put are not needed here */
+	vcpu->arch.fpu.fcsr = fpu->fcsr;
+	vcpu->arch.fpu.fcc = fpu->fcc;
+	for (i = 0; i < NUM_FPU_REGS; i++)
+		memcpy(&vcpu->arch.fpu.fpr[i], &fpu->fpr[i], FPU_REG_WIDTH / 64);
+
+	return 0;
+}
+
+/* Enable FPU for guest and restore context */
+void kvm_own_fpu(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+
+	/* Enable FPU for guest */
+	set_csr_euen(CSR_EUEN_FPEN);
+
+	/* If guest FPU state not active, restore it now */
+	if (!(vcpu->arch.aux_inuse & KVM_LARCH_FPU)) {
+		kvm_restore_fpu(&vcpu->arch.fpu);
+		vcpu->arch.aux_inuse |= KVM_LARCH_FPU;
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_RESTORE, KVM_TRACE_AUX_FPU);
+	} else {
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_ENABLE, KVM_TRACE_AUX_FPU);
+	}
+
+	preempt_enable();
+}
+
+/* Save and disable FPU */
+void kvm_lose_fpu(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+
+	if (vcpu->arch.aux_inuse & KVM_LARCH_FPU) {
+		kvm_save_fpu(&vcpu->arch.fpu);
+		vcpu->arch.aux_inuse &= ~KVM_LARCH_FPU;
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU);
+
+		/* Disable FPU */
+		clear_csr_euen(CSR_EUEN_FPEN);
+	}
+
+	preempt_enable();
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
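
These hooks back the generic KVM_GET_FPU/KVM_SET_FPU ioctls. A hedged
user-space sketch; the fcsr/fcc/fpr layout of struct kvm_fpu is the
LoongArch one defined by this series:

   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   static int copy_guest_fpu(int src_vcpu_fd, int dst_vcpu_fd)
   {
           struct kvm_fpu fpu;

           if (ioctl(src_vcpu_fd, KVM_GET_FPU, &fpu) < 0)
                   return -1;
           return ioctl(dst_vcpu_fd, KVM_SET_FPU, &fpu);
   }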

* [PATCH v2 12/29] LoongArch: KVM: Implement vcpu interrupt operations
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (10 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 11/29] LoongArch: KVM: Implement fpu related operations for vcpu Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
                   ` (16 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement vcpu interrupt operations such as vcpu set irq and
vcpu clear irq, using set_gcsr_estat to inject the irq which is
parsed from the irq pending bitmap.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/interrupt.c | 126 +++++++++++++++++++++++++++++++++
 arch/loongarch/kvm/vcpu.c      |  45 ++++++++++++
 2 files changed, 171 insertions(+)
 create mode 100644 arch/loongarch/kvm/interrupt.c

diff --git a/arch/loongarch/kvm/interrupt.c b/arch/loongarch/kvm/interrupt.c
new file mode 100644
index 000000000..02267a71d
--- /dev/null
+++ b/arch/loongarch/kvm/interrupt.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <asm/kvm_vcpu.h>
+
+static unsigned int int_to_coreint[LOONGARCH_EXC_MAX] = {
+	[LARCH_INT_TIMER]	= CPU_TIMER,
+	[LARCH_INT_IPI]		= CPU_IPI,
+	[LARCH_INT_SIP0]	= CPU_SIP0,
+	[LARCH_INT_SIP1]	= CPU_SIP1,
+	[LARCH_INT_IP0]		= CPU_IP0,
+	[LARCH_INT_IP1]		= CPU_IP1,
+	[LARCH_INT_IP2]		= CPU_IP2,
+	[LARCH_INT_IP3]		= CPU_IP3,
+	[LARCH_INT_IP4]		= CPU_IP4,
+	[LARCH_INT_IP5]		= CPU_IP5,
+	[LARCH_INT_IP6]		= CPU_IP6,
+	[LARCH_INT_IP7]		= CPU_IP7,
+};
+
+static int _kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	unsigned int irq = 0;
+
+	clear_bit(priority, &vcpu->arch.irq_pending);
+	if (priority < LOONGARCH_EXC_MAX)
+		irq = int_to_coreint[priority];
+
+	switch (priority) {
+	case LARCH_INT_TIMER:
+	case LARCH_INT_IPI:
+	case LARCH_INT_SIP0:
+	case LARCH_INT_SIP1:
+		set_gcsr_estat(irq);
+		break;
+
+	case LARCH_INT_IP0:
+	case LARCH_INT_IP1:
+	case LARCH_INT_IP2:
+	case LARCH_INT_IP3:
+	case LARCH_INT_IP4:
+	case LARCH_INT_IP5:
+	case LARCH_INT_IP6:
+	case LARCH_INT_IP7:
+		set_csr_gintc(irq);
+		break;
+
+	default:
+		break;
+	}
+
+	return 1;
+}
+
+static int _kvm_irq_clear(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	unsigned int irq = 0;
+
+	clear_bit(priority, &vcpu->arch.irq_clear);
+	if (priority < LOONGARCH_EXC_MAX)
+		irq = int_to_coreint[priority];
+
+	switch (priority) {
+	case LARCH_INT_TIMER:
+	case LARCH_INT_IPI:
+	case LARCH_INT_SIP0:
+	case LARCH_INT_SIP1:
+		clear_gcsr_estat(irq);
+		break;
+
+	case LARCH_INT_IP0:
+	case LARCH_INT_IP1:
+	case LARCH_INT_IP2:
+	case LARCH_INT_IP3:
+	case LARCH_INT_IP4:
+	case LARCH_INT_IP5:
+	case LARCH_INT_IP6:
+	case LARCH_INT_IP7:
+		clear_csr_gintc(irq);
+		break;
+
+	default:
+		break;
+	}
+
+	return 1;
+}
+
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu)
+{
+	unsigned long *pending = &vcpu->arch.irq_pending;
+	unsigned long *pending_clr = &vcpu->arch.irq_clear;
+	unsigned int priority;
+
+	if (!(*pending) && !(*pending_clr))
+		return;
+
+	if (*pending_clr) {
+		priority = __ffs(*pending_clr);
+		while (priority <= LOONGARCH_EXC_IPNUM) {
+			_kvm_irq_clear(vcpu, priority);
+			priority = find_next_bit(pending_clr,
+					BITS_PER_BYTE * sizeof(*pending_clr),
+					priority + 1);
+		}
+	}
+
+	if (*pending) {
+		priority = __ffs(*pending);
+		while (priority <= LOONGARCH_EXC_IPNUM) {
+			_kvm_irq_deliver(vcpu, priority);
+			priority = find_next_bit(pending,
+					BITS_PER_BYTE * sizeof(*pending),
+					priority + 1);
+		}
+	}
+}
+
+int _kvm_pending_timer(struct kvm_vcpu *vcpu)
+{
+	return test_bit(LARCH_INT_TIMER, &vcpu->arch.irq_pending);
+}
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 5c7216607..a60ac6576 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -504,6 +504,51 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu)
 	preempt_enable();
 }
 
+int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
+			     struct kvm_loongarch_interrupt *irq)
+{
+	int intr = (int)irq->irq;
+	struct kvm_vcpu *dvcpu = NULL;
+
+	if (irq->cpu == -1)
+		dvcpu = vcpu;
+	else
+		dvcpu = kvm_get_vcpu(vcpu->kvm, irq->cpu);
+
+	if (intr > 0)
+		_kvm_queue_irq(dvcpu, intr);
+	else if (intr < 0)
+		_kvm_dequeue_irq(dvcpu, -intr);
+	else {
+		kvm_err("%s: invalid interrupt ioctl (%d:%d)\n", __func__,
+				irq->cpu, irq->irq);
+		return -EINVAL;
+	}
+
+	kvm_vcpu_kick(dvcpu);
+	return 0;
+}
+
+long kvm_arch_vcpu_async_ioctl(struct file *filp,
+			       unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+
+	if (ioctl == KVM_INTERRUPT) {
+		struct kvm_loongarch_interrupt irq;
+
+		if (copy_from_user(&irq, argp, sizeof(irq)))
+			return -EFAULT;
+		kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__,
+			  irq.irq);
+
+		return kvm_vcpu_ioctl_interrupt(vcpu, &irq);
+	}
+
+	return -ENOIOCTLCMD;
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
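
A user-space sketch of driving the KVM_INTERRUPT path handled above.
struct kvm_loongarch_interrupt (fields .cpu and .irq) is from this
series' uapi header; per the code, a positive .irq queues the interrupt,
a negative one dequeues it, and .cpu == -1 targets the vcpu the ioctl is
issued on:

   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   static int inject_irq(int vcpu_fd, int irq)
   {
           struct kvm_loongarch_interrupt intr = {
                   .cpu = -1,      /* deliver to this vcpu */
                   .irq = irq,     /* e.g. LARCH_INT_IPI; negate to clear */
           };

           return ioctl(vcpu_fd, KVM_INTERRUPT, &intr);
   }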

* [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (11 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 12/29] LoongArch: KVM: Implement vcpu interrupt operations Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 18:50   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 14/29] LoongArch: KVM: Implement vcpu load and vcpu put operations Tianrui Zhao
                   ` (15 subsequent siblings)
  28 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement some misc vcpu related interfaces, such as vcpu runnable,
vcpu should kick, vcpu dump regs, etc.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 112 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index a60ac6576..cf33ae2ba 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,118 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
+{
+	return !!(vcpu->arch.irq_pending);
+}
+
+int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
+{
+	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
+}
+
+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+{
+	return VM_FAULT_SIGBUS;
+}
+
+int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
+				  struct kvm_translation *tr)
+{
+	return 0;
+}
+
+int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
+{
+	return _kvm_pending_timer(vcpu) ||
+		kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT) &
+			(1 << (EXCCODE_TIMER - EXCCODE_INT_START));
+}
+
+int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	if (!vcpu)
+		return -1;
+
+	kvm_debug("VCPU Register Dump:\n");
+	kvm_debug("\tpc = 0x%08lx\n", vcpu->arch.pc);
+	kvm_debug("\texceptions: %08lx\n", vcpu->arch.irq_pending);
+
+	for (i = 0; i < 32; i += 4) {
+		kvm_debug("\tgpr%02d: %08lx %08lx %08lx %08lx\n", i,
+		       vcpu->arch.gprs[i],
+		       vcpu->arch.gprs[i + 1],
+		       vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]);
+	}
+
+	kvm_debug("\tCRMOD: 0x%08llx, exst: 0x%08llx\n",
+		  kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD),
+		  kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT));
+
+	kvm_debug("\tERA: 0x%08llx\n", kvm_read_hw_gcsr(LOONGARCH_CSR_ERA));
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+				    struct kvm_mp_state *mp_state)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
+				    struct kvm_mp_state *mp_state)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
+					struct kvm_guest_debug *dbg)
+{
+	return -EINVAL;
+}
+
+static int lvcpu_stat_get(void *address, u64 *val)
+{
+	*val = *(u64 *)address;
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(lvcpu_stat_fops, lvcpu_stat_get, NULL, "%llu\n");
+
+static int vcpu_pid_get(void *arg, u64 *val)
+{
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
+
+	if (vcpu)
+		*val = pid_vnr(vcpu->pid);
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(vcpu_pid_fops, vcpu_pid_get, NULL, "%llu\n");
+
+/**
+ * kvm_migrate_count() - Migrate timer.
+ * @vcpu:       Virtual CPU.
+ *
+ * Migrate hrtimer to the current CPU by cancelling and restarting it
+ * if it was running prior to being cancelled.
+ *
+ * Must be called when the VCPU is migrated to a different CPU to ensure that
+ * timer expiry during guest execution interrupts the guest and causes the
+ * interrupt to be delivered in a timely manner.
+ */
+static void kvm_migrate_count(struct kvm_vcpu *vcpu)
+{
+	if (hrtimer_cancel(&vcpu->arch.swtimer))
+		hrtimer_restart(&vcpu->arch.swtimer);
+}
+
 int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force)
 {
 	struct loongarch_csrs *csr = vcpu->arch.csr;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
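
The two DEFINE_SIMPLE_ATTRIBUTE fops above are debugfs backends; the
patch does not show the wiring, but a plausible (assumed) use for the
per-vcpu pid attribute would be:

   #include <linux/debugfs.h>

   /* assumed helper, not from this patch: expose the vcpu pid read-only */
   static void vcpu_debugfs_init(struct kvm_vcpu *vcpu, struct dentry *dir)
   {
           debugfs_create_file("pid", 0444, dir, vcpu, &vcpu_pid_fops);
   }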

* [PATCH v2 14/29] LoongArch: KVM: Implement vcpu load and vcpu put operations
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (12 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 15/29] LoongArch: KVM: Implement vcpu status description Tianrui Zhao
                   ` (14 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch vcpu load and vcpu put operations, including
loading csr values into hardware and saving csr values back into the
vcpu structure.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 192 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 192 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index cf33ae2ba..f649f47c4 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -847,6 +847,198 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	}
 }
 
+static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	bool migrated, all;
+
+	/*
+	 * Have we migrated to a different CPU?
+	 * If so, any old guest TLB state may be stale.
+	 */
+	migrated = (vcpu->arch.last_sched_cpu != cpu);
+
+	/*
+	 * Was this the last VCPU to run on this CPU?
+	 * If not, any old guest state from this VCPU will have been clobbered.
+	 */
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	all = migrated || (context->last_vcpu != vcpu);
+	context->last_vcpu = vcpu;
+
+	/*
+	 * Restore timer state regardless
+	 */
+	kvm_restore_timer(vcpu);
+
+	/* Control the guest page memory access type (MAT) from root mode */
+	change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
+	/* Don't bother restoring registers multiple times unless necessary */
+	if (!all)
+		return 0;
+
+	write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
+	/*
+	 * Restore guest CSR registers
+	 */
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EUEN);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_MISC);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ECFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ERA);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADV);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EENTRY);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ASID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDL);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDH);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_RVACFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CPUID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS2);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS3);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS4);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS5);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS6);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS7);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TMID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CNTC);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL);
+
+	/* restore Root.Guestexcept from unused Guest guestexcept register */
+	write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]);
+
+	/*
+	 * We should clear linked load bit to break interrupted atomics. This
+	 * prevents a SC on the next VCPU from succeeding by matching a LL on
+	 * the previous VCPU.
+	 */
+	if (vcpu->kvm->created_vcpus > 1)
+		set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
+
+	return 0;
+}
+
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	vcpu->cpu = cpu;
+	if (vcpu->arch.last_sched_cpu != cpu) {
+		kvm_debug("[%d->%d]KVM VCPU[%d] switch\n",
+				vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id);
+		/*
+		 * Migrate the timer interrupt to the current CPU so that it
+		 * always interrupts the guest and synchronously triggers a
+		 * guest timer interrupt.
+		 */
+		kvm_migrate_count(vcpu);
+	}
+
+	/* restore guest state to registers */
+	_kvm_vcpu_load(vcpu, cpu);
+	local_irq_restore(flags);
+}
+
+static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	kvm_lose_fpu(vcpu);
+
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CRMD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRMD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EUEN);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_MISC);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ECFG);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ERA);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADV);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADI);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EENTRY);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ASID);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDL);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDH);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_RVACFG);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CPUID);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG2);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG3);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS2);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS3);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS4);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS5);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS6);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS7);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TMID);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CNTC);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2);
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3);
+
+	kvm_save_timer(vcpu);
+
+	/* save Root.Guestexcept in unused Guest guestexcept register */
+	csr->csrs[LOONGARCH_CSR_GINTC] = read_csr_gintc();
+	return 0;
+}
+
+void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags;
+	int cpu;
+
+	local_irq_save(flags);
+	cpu = smp_processor_id();
+	vcpu->arch.last_sched_cpu = cpu;
+	vcpu->cpu = -1;
+
+	/* save guest state in registers */
+	_kvm_vcpu_put(vcpu, cpu);
+	local_irq_restore(flags);
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
 	int r = -EINTR;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 15/29] LoongArch: KVM: Implement vcpu status description
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (13 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 14/29] LoongArch: KVM: Implement vcpu load and vcpu put operations Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 16/29] LoongArch: KVM: Implement update VM id function Tianrui Zhao
                   ` (13 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch vcpu statistics descriptions, such as the
idle exits counter, signal exits counter and cpucfg exits counter.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index f649f47c4..cfd35b4d6 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,23 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
+	KVM_GENERIC_VCPU_STATS(),
+	STATS_DESC_COUNTER(VCPU, idle_exits),
+	STATS_DESC_COUNTER(VCPU, signal_exits),
+	STATS_DESC_COUNTER(VCPU, int_exits),
+	STATS_DESC_COUNTER(VCPU, cpucfg_exits),
+};
+
+const struct kvm_stats_header kvm_vcpu_stats_header = {
+	.name_size = KVM_STATS_NAME_SIZE,
+	.num_desc = ARRAY_SIZE(kvm_vcpu_stats_desc),
+	.id_offset = sizeof(struct kvm_stats_header),
+	.desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE,
+	.data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE +
+		       sizeof(kvm_vcpu_stats_desc),
+};
+
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
 	return !!(vcpu->arch.irq_pending);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
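
These descriptors feed KVM's generic binary stats interface, so user
space can read the counters through KVM_GET_STATS_FD, e.g.:

   #include <unistd.h>
   #include <sys/ioctl.h>
   #include <linux/kvm.h>

   static int open_vcpu_stats(int vcpu_fd, struct kvm_stats_header *hdr)
   {
           int fd = ioctl(vcpu_fd, KVM_GET_STATS_FD, NULL);

           if (fd < 0)
                   return -1;
           /* the header sits at offset 0; descriptors and data follow it */
           if (pread(fd, hdr, sizeof(*hdr), 0) != sizeof(*hdr))
                   return -1;
           return fd;
   }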

* [PATCH v2 16/29] LoongArch: KVM: Implement update VM id function
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (14 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 15/29] LoongArch: KVM: Implement vcpu status description Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 17/29] LoongArch: KVM: Implement virtual machine tlb operations Tianrui Zhao
                   ` (12 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement kvm check vmid and update vmid operations; the vmid must be
checked before the vcpu enters the guest.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vmid.c | 64 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 arch/loongarch/kvm/vmid.c

diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c
new file mode 100644
index 000000000..82729968e
--- /dev/null
+++ b/arch/loongarch/kvm/vmid.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include "trace.h"
+
+static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	unsigned long vpid;
+
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	vpid = context->vpid_cache + 1;
+	if (!(vpid & context->vpid_mask)) {
+		/* finish round of 64 bit loop */
+		if (unlikely(!vpid))
+			vpid = context->vpid_mask + 1;
+
+		/* vpid 0 reserved for root */
+		++vpid;
+
+		/* start new vpid cycle */
+		kvm_flush_tlb_all();
+	}
+
+	context->vpid_cache = vpid;
+	vcpu->arch.vpid[cpu] = vpid;
+}
+
+void _kvm_check_vmid(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	bool migrated;
+	unsigned long ver, old, vpid;
+
+	/*
+	 * Are we entering guest context on a different CPU to last time?
+	 * If so, the VCPU's guest TLB state on this CPU may be stale.
+	 */
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	migrated = (vcpu->arch.last_exec_cpu != cpu);
+	vcpu->arch.last_exec_cpu = cpu;
+
+	/*
+	 * Check if our vpid is of an older version
+	 *
+	 * We also discard the stored vpid if we've executed on
+	 * another CPU, as the guest mappings may have changed without
+	 * hypervisor knowledge.
+	 */
+	ver = vcpu->arch.vpid[cpu] & ~context->vpid_mask;
+	old = context->vpid_cache & ~context->vpid_mask;
+	if (migrated || (ver != old)) {
+		_kvm_update_vpid(vcpu, cpu);
+		trace_kvm_vpid_change(vcpu, vcpu->arch.vpid[cpu]);
+	}
+
+	/* Restore GSTAT(0x50).vpid */
+	vpid = (vcpu->arch.vpid[cpu] & context->vpid_mask)
+		<< CSR_GSTAT_GID_SHIFT;
+	change_csr_gstat(context->vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
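
To make the version/vpid split concrete: the bits inside vpid_mask are
the hardware vpid and the bits above it act as a generation counter, so
a high-bits mismatch means the cached vpid predates the last TLB flush.
A standalone sketch of the allocator logic (the mask width is an
assumption; hardware provides the real one):

   #define VPID_MASK 0xffUL        /* assumed 8-bit hardware vpid field */

   /* mirrors _kvm_update_vpid() above */
   static unsigned long next_vpid(unsigned long *vpid_cache)
   {
           unsigned long vpid = *vpid_cache + 1;

           if (!(vpid & VPID_MASK)) {      /* low bits wrapped to zero */
                   if (!vpid)              /* full 64-bit wrap */
                           vpid = VPID_MASK + 1;
                   ++vpid;                 /* skip vpid 0, reserved for root */
                   /* the real code calls kvm_flush_tlb_all() here */
           }

           return *vpid_cache = vpid;
   }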

* [PATCH v2 17/29] LoongArch: KVM: Implement virtual machine tlb operations
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (15 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 16/29] LoongArch: KVM: Implement update VM id function Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 18/29] LoongArch: KVM: Implement vcpu timer operations Tianrui Zhao
                   ` (11 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch virtual machine tlb operations, such as
flushing tlb entries for a specific gpa and flushing all of the
virtual machine's tlb entries.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/tlb.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 arch/loongarch/kvm/tlb.c

diff --git a/arch/loongarch/kvm/tlb.c b/arch/loongarch/kvm/tlb.c
new file mode 100644
index 000000000..66e116cf2
--- /dev/null
+++ b/arch/loongarch/kvm/tlb.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/tlb.h>
+
+int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa)
+{
+	preempt_disable();
+	gpa &= (PAGE_MASK << 1);
+	invtlb(INVTLB_GID_ADDR, read_csr_gstat() & CSR_GSTAT_GID, gpa);
+	preempt_enable();
+	return 0;
+}
+
+/**
+ * kvm_flush_tlb_all() - Flush all root TLB entries for
+ * guests.
+ *
+ * Invalidate all entries including GVA-->GPA and GPA-->HPA mappings.
+ */
+void kvm_flush_tlb_all(void)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	invtlb_all(INVTLB_ALLGID, 0, 0);
+	local_irq_restore(flags);
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
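
The gpa &= (PAGE_MASK << 1) in kvm_flush_tlb_gpa() aligns the address to
an even/odd page pair, since a LoongArch TLB entry maps two consecutive
pages (TLBELO0/TLBELO1). A standalone illustration, assuming the default
16KB page size:

   #define PAGE_SHIFT      14                              /* assumed 16KB pages */
   #define PAGE_MASK       (~((1UL << PAGE_SHIFT) - 1))

   /* align a guest physical address to the page pair an entry covers */
   static unsigned long tlb_inv_addr(unsigned long gpa)
   {
           return gpa & (PAGE_MASK << 1);  /* 32KB pair-aligned */
   }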

* [PATCH v2 18/29] LoongArch: KVM: Implement vcpu timer operations
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (16 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 17/29] LoongArch: KVM: Implement virtual machine tlb operations Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 19/29] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
                   ` (10 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch vcpu timer operations: init kvm timer,
acquire kvm timer, save kvm timer and restore kvm timer. When the
vcpu exits, a kvm soft timer is used to emulate the hardware timer.
If a timeout happens, the vcpu timer interrupt is set and it is
handled at the next vcpu entry.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/timer.c | 266 +++++++++++++++++++++++++++++++++++++
 1 file changed, 266 insertions(+)
 create mode 100644 arch/loongarch/kvm/timer.c

diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c
new file mode 100644
index 000000000..2c7677248
--- /dev/null
+++ b/arch/loongarch/kvm/timer.c
@@ -0,0 +1,266 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_csr.h>
+#include <asm/kvm_vcpu.h>
+
+/* low level hrtimer wake routine */
+enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer)
+{
+	struct kvm_vcpu *vcpu;
+
+	vcpu = container_of(timer, struct kvm_vcpu, arch.swtimer);
+	_kvm_queue_irq(vcpu, LARCH_INT_TIMER);
+	rcuwait_wake_up(&vcpu->wait);
+	return kvm_count_timeout(vcpu);
+}
+
+/*
+ * ktime_to_tick() - Scale ktime_t to a 64-bit stable timer.
+ *
+ * Caches the dynamic nanosecond bias in vcpu->arch.timer_dyn_bias.
+ */
+static unsigned long ktime_to_tick(struct kvm_vcpu *vcpu, ktime_t now)
+{
+	s64 now_ns, periods;
+	unsigned long delta;
+
+	now_ns = ktime_to_ns(now);
+	delta = now_ns + vcpu->arch.timer_dyn_bias;
+
+	if (delta >= vcpu->arch.timer_period) {
+		/* If delta is out of safe range the bias needs adjusting */
+		periods = div64_s64(now_ns, vcpu->arch.timer_period);
+		vcpu->arch.timer_dyn_bias = -periods * vcpu->arch.timer_period;
+		/* Recalculate delta with new bias */
+		delta = now_ns + vcpu->arch.timer_dyn_bias;
+	}
+
+	/*
+	 * We've ensured that:
+	 *   delta < timer_period
+	 */
+	return div_u64(delta * vcpu->arch.timer_mhz, MNSEC_PER_SEC);
+}
+
+/**
+ * kvm_resume_hrtimer() - Resume hrtimer, updating expiry.
+ * @vcpu:	Virtual CPU.
+ * @now:	ktime at point of resume.
+ * @val:	stable timer at point of resume.
+ *
+ * Resumes the timer and updates the timer expiry based on @now and @val.
+ */
+static void kvm_resume_hrtimer(struct kvm_vcpu *vcpu, ktime_t now,
+				unsigned long val)
+{
+	unsigned long delta;
+	ktime_t expire;
+
+	/*
+	 * Convert the remaining stable timer ticks into an absolute
+	 * expiry time relative to @now
+	 */
+	delta = div_u64(val * MNSEC_PER_SEC, vcpu->arch.timer_mhz);
+	expire = ktime_add_ns(now, delta);
+
+	/* Update hrtimer to use new timeout */
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
+}
+
+/**
+ * kvm_init_timer() - Initialise stable timer.
+ * @vcpu:	Virtual CPU.
+ * @timer_hz:	Frequency of timer.
+ *
+ * Initialise the timer to the specified frequency and zero its value.
+ */
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long timer_hz)
+{
+	ktime_t now;
+	unsigned long ticks;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	ticks = (unsigned long)MNSEC_PER_SEC * CSR_TCFG_VAL;
+	vcpu->arch.timer_mhz = timer_hz >> 20;
+	vcpu->arch.timer_period = div_u64(ticks, vcpu->arch.timer_mhz);
+	vcpu->arch.timer_dyn_bias = 0;
+
+	/* Starting at 0 */
+	ticks = 0;
+	now = ktime_get();
+	vcpu->arch.timer_bias = ticks - ktime_to_tick(vcpu, now);
+	vcpu->arch.timer_bias &= CSR_TCFG_VAL;
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TVAL, ticks);
+}
+
+/**
+ * kvm_count_timeout() - Push timer forward on timeout.
+ * @vcpu:	Virtual CPU.
+ *
+ * Handle an hrtimer event by pushing the hrtimer forward one period.
+ *
+ * Returns:	The hrtimer_restart value to return to the hrtimer subsystem.
+ */
+enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu)
+{
+	unsigned long cfg;
+
+	/* Add the Count period to the current expiry time */
+	cfg = kvm_read_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG);
+	if (cfg & CSR_TCFG_PERIOD) {
+		/* Convert the reload value from stable timer ticks to ns */
+		hrtimer_add_expires_ns(&vcpu->arch.swtimer,
+				div_u64((cfg & CSR_TCFG_VAL) * MNSEC_PER_SEC,
+					vcpu->arch.timer_mhz));
+		return HRTIMER_RESTART;
+	}
+
+	return HRTIMER_NORESTART;
+}
+
+/*
+ * kvm_restore_timer() - Restore timer state.
+ * @vcpu:       Virtual CPU.
+ *
+ * Restore soft timer state from saved context.
+ */
+void kvm_restore_timer(struct kvm_vcpu *vcpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	ktime_t saved_ktime, now;
+	unsigned long val, new, delta;
+	int expired = 0;
+	unsigned long cfg;
+
+	/*
+	 * Set guest stable timer cfg csr
+	 */
+	cfg = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+	if (!(cfg & CSR_TCFG_EN)) {
+		kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
+		kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
+		return;
+	}
+
+	now = ktime_get();
+	saved_ktime = vcpu->arch.stable_ktime_saved;
+	val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TVAL);
+
+	/* Check whether the soft timer expired while the vcpu was scheduled out */
+	delta = ktime_to_tick(vcpu, ktime_sub(now, saved_ktime));
+	if (delta >= val) {
+		expired = 1;
+		if (cfg & CSR_TCFG_PERIOD)
+			new = (delta - val) % (cfg & CSR_TCFG_VAL);
+		else
+			new = 1;
+	} else {
+		new = val - delta;
+	}
+
+	new &= CSR_TCFG_VAL;
+	write_gcsr_timercfg(cfg);
+	write_gcsr_timertick(new);
+	if (expired)
+		_kvm_queue_irq(vcpu, LARCH_INT_TIMER);
+}
+
+/*
+ * kvm_acquire_timer() - Switch to hard timer state.
+ * @vcpu:       Virtual CPU.
+ *
+ * Restore hard timer state on top of existing soft timer state if possible.
+ *
+ * Since hard timer won't remain active over preemption, preemption should be
+ * disabled by the caller.
+ */
+void kvm_acquire_timer(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags, guestcfg;
+
+	guestcfg = read_csr_gcfg();
+	if (!(guestcfg & CSR_GCFG_TIT))
+		return;
+
+	/* enable guest access to hard timer */
+	write_csr_gcfg(guestcfg & ~CSR_GCFG_TIT);
+
+	/*
+	 * Freeze the soft-timer and sync the guest stable timer with it. We do
+	 * this with interrupts disabled to avoid latency.
+	 */
+	local_irq_save(flags);
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	local_irq_restore(flags);
+}
+
+/*
+ * _kvm_save_timer() - Switch to software emulation of guest timer.
+ * @vcpu:       Virtual CPU.
+ *
+ * Save guest timer state and switch to software emulation of guest
+ * timer. The hard timer must already be in use, so preemption should be
+ * disabled.
+ */
+static ktime_t _kvm_save_timer(struct kvm_vcpu *vcpu, unsigned long *val)
+{
+	unsigned long end_time;
+	ktime_t before_time;
+
+	before_time = ktime_get();
+
+	/*
+	 * Record a final stable timer which we will transfer to the soft-timer.
+	 */
+	end_time = read_gcsr_timertick();
+	*val = end_time;
+
+	kvm_resume_hrtimer(vcpu, before_time, end_time);
+	return before_time;
+}
+
+/*
+ * kvm_save_timer() - Save guest timer state.
+ * @vcpu:       Virtual CPU.
+ *
+ * Save guest timer state and switch to soft guest timer if hard timer was in
+ * use.
+ */
+void kvm_save_timer(struct kvm_vcpu *vcpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long guestcfg, val;
+	ktime_t save_ktime;
+
+	preempt_disable();
+	guestcfg = read_csr_gcfg();
+	if (!(guestcfg & CSR_GCFG_TIT)) {
+		/* disable guest use of hard timer */
+		write_csr_gcfg(guestcfg | CSR_GCFG_TIT);
+
+		/* save hard timer state */
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
+		if (kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG) & CSR_TCFG_EN) {
+			save_ktime = _kvm_save_timer(vcpu, &val);
+			kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TVAL, val);
+			vcpu->arch.stable_ktime_saved = save_ktime;
+			if (val == CSR_TCFG_VAL)
+				_kvm_queue_irq(vcpu, LARCH_INT_TIMER);
+		} else {
+			kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
+		}
+	}
+
+	/* save timer-related state to VCPU context */
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+	preempt_enable();
+}
+
+void kvm_reset_timer(struct kvm_vcpu *vcpu)
+{
+	write_gcsr_timercfg(0);
+	kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0);
+	hrtimer_cancel(&vcpu->arch.swtimer);
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 19/29] LoongArch: KVM: Implement kvm mmu operations
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (17 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 18/29] LoongArch: KVM: Implement vcpu timer operations Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 20/29] LoongArch: KVM: Implement handle csr exception Tianrui Zhao
                   ` (9 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch kvm mmu, which is used to translate gpa to hpa
when the guest exits because of an address translation exception. This
patch implements allocating the gpa page table, looking up a gpa in it,
and flushing guest gpa mappings from the table.
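
Conceptually, each level of the walk extracts one index field from the
GPA; the patch uses the kernel's pgd/p4d/pud/pmd/pte helpers for this.
A standalone sketch of the indexing, assuming 4 KiB pages and 9 bits
per level (illustrative only):

   #include <stdint.h>

   #define EX_PAGE_SHIFT 12   /* assumed: 4 KiB pages */
   #define EX_LEVEL_BITS 9    /* assumed: 512 entries per table */

   /* level 0 indexes the PTE table, level 3 the PGD */
   static inline unsigned int ex_gpa_index(uint64_t gpa, int level)
   {
           return (gpa >> (EX_PAGE_SHIFT + level * EX_LEVEL_BITS)) &
                  ((1u << EX_LEVEL_BITS) - 1);
   }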

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/mmu.c | 821 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 821 insertions(+)
 create mode 100644 arch/loongarch/kvm/mmu.c

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
new file mode 100644
index 000000000..049824f8e
--- /dev/null
+++ b/arch/loongarch/kvm/mmu.c
@@ -0,0 +1,821 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/highmem.h>
+#include <linux/hugetlb.h>
+#include <linux/page-flags.h>
+#include <linux/kvm_host.h>
+#include <linux/uaccess.h>
+#include <asm/kvm_host.h>
+#include <asm/mmu_context.h>
+#include <asm/pgalloc.h>
+#include <asm/tlb.h>
+
+/*
+ * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels
+ * for which pages need to be cached.
+ */
+#if defined(__PAGETABLE_PMD_FOLDED)
+#define KVM_MMU_CACHE_MIN_PAGES 1
+#else
+#define KVM_MMU_CACHE_MIN_PAGES 2
+#endif
+
+/**
+ * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory.
+ *
+ * Allocate a blank KVM GPA page directory (PGD) for representing guest physical
+ * to host physical page mappings.
+ *
+ * Returns:	Pointer to new KVM GPA page directory.
+ *		NULL on allocation failure.
+ */
+pgd_t *kvm_pgd_alloc(void)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 0);
+	if (pgd)
+		pgd_init((void *)pgd);
+
+	return pgd;
+}
+
+/**
+ * kvm_walk_pgd() - Walk page table with optional allocation.
+ * @pgd:	Page directory pointer.
+ * @addr:	Address to index page table using.
+ * @cache:	MMU page cache to allocate new page tables from, or NULL.
+ *
+ * Walk the page tables pointed to by @pgd to find the PTE corresponding to the
+ * address @addr. If page tables don't exist for @addr, they will be created
+ * from the MMU cache if @cache is not NULL.
+ *
+ * Returns:	Pointer to pte_t corresponding to @addr.
+ *		NULL if a page table doesn't exist for @addr and !@cache.
+ *		NULL if a page table allocation failed.
+ */
+static pte_t *kvm_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
+				unsigned long addr)
+{
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd += pgd_index(addr);
+	if (pgd_none(*pgd)) {
+		/* Not used yet */
+		BUG();
+		return NULL;
+	}
+	p4d = p4d_offset(pgd, addr);
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud)) {
+		pmd_t *new_pmd;
+
+		if (!cache)
+			return NULL;
+		new_pmd = kvm_mmu_memory_cache_alloc(cache);
+		pmd_init((void *)new_pmd);
+		pud_populate(NULL, pud, new_pmd);
+	}
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd)) {
+		pte_t *new_pte;
+
+		if (!cache)
+			return NULL;
+		new_pte = kvm_mmu_memory_cache_alloc(cache);
+		clear_page(new_pte);
+		pmd_populate_kernel(NULL, pmd, new_pte);
+	}
+	return pte_offset_kernel(pmd, addr);
+}
+
+/* Caller must hold kvm->mm_lock */
+static pte_t *kvm_pte_for_gpa(struct kvm *kvm,
+				struct kvm_mmu_memory_cache *cache,
+				unsigned long addr)
+{
+	return kvm_walk_pgd(kvm->arch.gpa_mm.pgd, cache, addr);
+}
+
+/*
+ * kvm_flush_gpa_{pte,pmd,pud,pgd,pt}.
+ * Flush a range of guest physical address space from the VM's GPA page tables.
+ */
+
+static bool kvm_flush_gpa_pte(pte_t *pte, unsigned long start_gpa,
+				   unsigned long end_gpa, unsigned long *data)
+{
+	int i_min = pte_index(start_gpa);
+	int i_max = pte_index(end_gpa);
+	bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PTE - 1);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i) {
+		if (!pte_present(pte[i]))
+			continue;
+
+		set_pte(pte + i, __pte(0));
+		if (data)
+			*data += 1;
+	}
+	return safe_to_remove;
+}
+
+static bool kvm_flush_gpa_pmd(pmd_t *pmd, unsigned long start_gpa,
+				   unsigned long end_gpa, unsigned long *data)
+{
+	pte_t *pte;
+	unsigned long end = ~0ul;
+	int i_min = pmd_index(start_gpa);
+	int i_max = pmd_index(end_gpa);
+	bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PMD - 1);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start_gpa = 0) {
+		if (!pmd_present(pmd[i]))
+			continue;
+
+		pte = pte_offset_kernel(pmd + i, 0);
+		if (i == i_max)
+			end = end_gpa;
+
+		if (kvm_flush_gpa_pte(pte, start_gpa, end, data)) {
+			pmd_clear(pmd + i);
+			pte_free_kernel(NULL, pte);
+		} else {
+			safe_to_remove = false;
+		}
+	}
+	return safe_to_remove;
+}
+
+static bool kvm_flush_gpa_pud(pud_t *pud, unsigned long start_gpa,
+				   unsigned long end_gpa, unsigned long *data)
+{
+	pmd_t *pmd;
+	unsigned long end = ~0ul;
+	int i_min = pud_index(start_gpa);
+	int i_max = pud_index(end_gpa);
+	bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PUD - 1);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start_gpa = 0) {
+		if (!pud_present(pud[i]))
+			continue;
+
+		pmd = pmd_offset(pud + i, 0);
+		if (i == i_max)
+			end = end_gpa;
+
+		if (kvm_flush_gpa_pmd(pmd, start_gpa, end, data)) {
+			pud_clear(pud + i);
+			pmd_free(NULL, pmd);
+		} else {
+			safe_to_remove = false;
+		}
+	}
+	return safe_to_remove;
+}
+
+static bool kvm_flush_gpa_pgd(pgd_t *pgd, unsigned long start_gpa,
+				unsigned long end_gpa, unsigned long *data)
+{
+	p4d_t *p4d;
+	pud_t *pud;
+	unsigned long end = ~0ul;
+	int i_min = pgd_index(start_gpa);
+	int i_max = pgd_index(end_gpa);
+	bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PGD - 1);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start_gpa = 0) {
+		if (!pgd_present(pgd[i]))
+			continue;
+
+		p4d = p4d_offset(pgd, 0);
+		pud = pud_offset(p4d + i, 0);
+		if (i == i_max)
+			end = end_gpa;
+
+		if (kvm_flush_gpa_pud(pud, start_gpa, end, data)) {
+			pgd_clear(pgd + i);
+			pud_free(NULL, pud);
+		} else {
+			safe_to_remove = false;
+		}
+	}
+	return safe_to_remove;
+}
+
+/**
+ * kvm_flush_gpa_range() - Flush a range of guest physical addresses.
+ * @kvm:	KVM pointer.
+ * @start_gfn:	Guest frame number of first page in GPA range to flush.
+ * @end_gfn:	Guest frame number of last page in GPA range to flush.
+ *
+ * Flushes a range of GPA mappings from the GPA page tables.
+ *
+ * The caller must hold the @kvm->mmu_lock spinlock.
+ *
+ * Returns:	Whether it's safe to remove the top level page directory because
+ *		all lower levels have been removed.
+ */
+static bool kvm_flush_gpa_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn, void *data)
+{
+	return kvm_flush_gpa_pgd(kvm->arch.gpa_mm.pgd,
+				start_gfn << PAGE_SHIFT,
+				end_gfn << PAGE_SHIFT, (unsigned long *)data);
+}
+
+/*
+ * kvm_mkclean_gpa_pt.
+ * Mark a range of guest physical address space clean (writes fault) in the VM's
+ * GPA page table to allow dirty page tracking.
+ */
+
+static int kvm_mkclean_pte(pte_t *pte, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	int i_min = pte_index(start);
+	int i_max = pte_index(end);
+	int i;
+	pte_t val;
+
+	for (i = i_min; i <= i_max; ++i) {
+		val = pte[i];
+		if (pte_present(val) && pte_dirty(val)) {
+			set_pte(pte + i, pte_mkclean(val));
+			ret = 1;
+		}
+	}
+	return ret;
+}
+
+static int kvm_mkclean_pmd(pmd_t *pmd, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	pte_t *pte;
+	unsigned long cur_end = ~0ul;
+	int i_min = pmd_index(start);
+	int i_max = pmd_index(end);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start = 0) {
+		if (!pmd_present(pmd[i]))
+			continue;
+
+		pte = pte_offset_kernel(pmd + i, 0);
+		if (i == i_max)
+			cur_end = end;
+
+		ret |= kvm_mkclean_pte(pte, start, cur_end);
+	}
+
+	return ret;
+}
+
+static int kvm_mkclean_pud(pud_t *pud, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd;
+	unsigned long cur_end = ~0ul;
+	int i_min = pud_index(start);
+	int i_max = pud_index(end);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start = 0) {
+		if (!pud_present(pud[i]))
+			continue;
+
+		pmd = pmd_offset(pud + i, 0);
+		if (i == i_max)
+			cur_end = end;
+
+		ret |= kvm_mkclean_pmd(pmd, start, cur_end);
+	}
+	return ret;
+}
+
+static int kvm_mkclean_pgd(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	p4d_t *p4d;
+	pud_t *pud;
+	unsigned long cur_end = ~0ul;
+	int i_min = pgd_index(start);
+	int i_max = pgd_index(end);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start = 0) {
+		if (!pgd_present(pgd[i]))
+			continue;
+
+		p4d = p4d_offset(pgd, 0);
+		pud = pud_offset(p4d + i, 0);
+		if (i == i_max)
+			cur_end = end;
+
+		ret |= kvm_mkclean_pud(pud, start, cur_end);
+	}
+	return ret;
+}
+
+/**
+ * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean.
+ * @kvm:	KVM pointer.
+ * @start_gfn:	Guest frame number of first page in GPA range to flush.
+ * @end_gfn:	Guest frame number of last page in GPA range to flush.
+ *
+ * Make a range of GPA mappings clean so that guest writes will fault and
+ * trigger dirty page logging.
+ *
+ * The caller must hold the @kvm->mmu_lock spinlock.
+ *
+ * Returns:	Whether any GPA mappings were modified, which would require
+ *		derived mappings (GVA page tables & TLB entries) to be
+ *		invalidated.
+ */
+static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
+{
+	return kvm_mkclean_pgd(kvm->arch.gpa_mm.pgd, start_gfn << PAGE_SHIFT,
+				end_gfn << PAGE_SHIFT);
+}
+
+/**
+ * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks the bits set in @mask and write-protects the associated PTEs. The
+ * caller must hold @kvm->mmu_lock.
+ */
+void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	gfn_t base_gfn = slot->base_gfn + gfn_offset;
+	gfn_t start = base_gfn +  __ffs(mask);
+	gfn_t end = base_gfn + __fls(mask);
+
+	kvm_mkclean_gpa_pt(kvm, start, end);
+}
+
+void kvm_arch_commit_memory_region(struct kvm *kvm,
+				   struct kvm_memory_slot *old,
+				   const struct kvm_memory_slot *new,
+				   enum kvm_mr_change change)
+{
+	int needs_flush;
+
+	/*
+	 * If dirty page logging is enabled, write protect all pages in the slot
+	 * ready for dirty logging.
+	 *
+	 * There is no need to do this in any of the following cases:
+	 * CREATE:	No dirty mappings will already exist.
+	 * MOVE/DELETE:	The old mappings will already have been cleaned up by
+	 *		kvm_arch_flush_shadow_memslot()
+	 */
+	if (change == KVM_MR_FLAGS_ONLY &&
+	    (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
+	     new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
+		spin_lock(&kvm->mmu_lock);
+		/* Write protect GPA page table entries */
+		needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn,
+					new->base_gfn + new->npages - 1);
+		if (needs_flush)
+			kvm_flush_remote_tlbs(kvm);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}
+
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+	/* Flush whole GPA */
+	kvm_flush_gpa_range(kvm, 0, ~0UL, NULL);
+	/* Flush vpid for each VCPU individually */
+	kvm_flush_remote_tlbs(kvm);
+}
+
+void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+		struct kvm_memory_slot *slot)
+{
+	unsigned long npages;
+
+	/*
+	 * The slot has been made invalid (ready for moving or deletion), so we
+	 * need to ensure that it can no longer be accessed by any guest VCPUs.
+	 */
+
+	npages = 0;
+	spin_lock(&kvm->mmu_lock);
+	/* Flush slot from GPA */
+	kvm_flush_gpa_range(kvm, slot->base_gfn,
+			slot->base_gfn + slot->npages - 1, &npages);
+	/* Let implementation do the rest */
+	if (npages)
+		kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
+}
+
+void _kvm_destroy_mm(struct kvm *kvm)
+{
+	/* It should always be safe to remove after flushing the whole range */
+	WARN_ON(!kvm_flush_gpa_range(kvm, 0, ~0UL, NULL));
+	pgd_free(NULL, kvm->arch.gpa_mm.pgd);
+	kvm->arch.gpa_mm.pgd = NULL;
+}
+
+/*
+ * Mark a range of guest physical address space old (all accesses fault) in the
+ * VM's GPA page table to allow detection of commonly used pages.
+ */
+
+static int kvm_mkold_pte(pte_t *pte, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	int i_min = pte_index(start);
+	int i_max = pte_index(end);
+	int i;
+	pte_t old, new;
+
+	for (i = i_min; i <= i_max; ++i) {
+		if (!pte_present(pte[i]))
+			continue;
+
+		old = pte[i];
+		new = pte_mkold(old);
+		if (pte_val(new) == pte_val(old))
+			continue;
+		set_pte(pte + i, new);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int kvm_mkold_pmd(pmd_t *pmd, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	pte_t *pte;
+	unsigned long cur_end = ~0ul;
+	int i_min = pmd_index(start);
+	int i_max = pmd_index(end);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start = 0) {
+		if (!pmd_present(pmd[i]))
+			continue;
+
+		pte = pte_offset_kernel(pmd + i, 0);
+		if (i == i_max)
+			cur_end = end;
+
+		ret |= kvm_mkold_pte(pte, start, cur_end);
+	}
+
+	return ret;
+}
+
+static int kvm_mkold_pud(pud_t *pud, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	pmd_t *pmd;
+	unsigned long cur_end = ~0ul;
+	int i_min = pud_index(start);
+	int i_max = pud_index(end);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start = 0) {
+		if (!pud_present(pud[i]))
+			continue;
+
+		pmd = pmd_offset(pud + i, 0);
+		if (i == i_max)
+			cur_end = end;
+
+		ret |= kvm_mkold_pmd(pmd, start, cur_end);
+	}
+
+	return ret;
+}
+
+static int kvm_mkold_pgd(pgd_t *pgd, unsigned long start, unsigned long end)
+{
+	int ret = 0;
+	p4d_t *p4d;
+	pud_t *pud;
+	unsigned long cur_end = ~0ul;
+	int i_min = pgd_index(start);
+	int i_max = pgd_index(end);
+	int i;
+
+	for (i = i_min; i <= i_max; ++i, start = 0) {
+		if (!pgd_present(pgd[i]))
+			continue;
+
+		p4d = p4d_offset(pgd, 0);
+		pud = pud_offset(p4d + i, 0);
+		if (i == i_max)
+			cur_end = end;
+
+		ret |= kvm_mkold_pud(pud, start, cur_end);
+	}
+
+	return ret;
+}
+
+bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	unsigned long npages = 0;
+
+	kvm_flush_gpa_range(kvm, range->start, range->end, &npages);
+	return npages > 0;
+}
+
+bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	gpa_t gpa = range->start << PAGE_SHIFT;
+	pte_t hva_pte = range->pte;
+	pte_t *ptep = kvm_pte_for_gpa(kvm, NULL, gpa);
+	pte_t old_pte;
+
+	if (!ptep)
+		return false;
+
+	/* Mapping may need adjusting depending on memslot flags */
+	old_pte = *ptep;
+	if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
+		hva_pte = pte_mkclean(hva_pte);
+	else if (range->slot->flags & KVM_MEM_READONLY)
+		hva_pte = pte_wrprotect(hva_pte);
+
+	set_pte(ptep, hva_pte);
+
+	/* Replacing an absent or old page doesn't need flushes */
+	if (!pte_present(old_pte) || !pte_young(old_pte))
+		return false;
+
+	/* Pages swapped, aged, moved, or cleaned require flushes */
+	return !pte_present(hva_pte) ||
+	       !pte_young(hva_pte) ||
+	       pte_pfn(old_pte) != pte_pfn(hva_pte) ||
+	       (pte_dirty(old_pte) && !pte_dirty(hva_pte));
+}
+
+bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	return kvm_mkold_pgd(kvm->arch.gpa_mm.pgd, range->start << PAGE_SHIFT,
+				range->end << PAGE_SHIFT);
+}
+
+bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	gpa_t gpa = range->start << PAGE_SHIFT;
+	pte_t *ptep = kvm_pte_for_gpa(kvm, NULL, gpa);
+
+	if (ptep && pte_present(*ptep) && pte_young(*ptep))
+		return true;
+
+	return false;
+}
+
+/**
+ * kvm_map_page_fast() - Fast path GPA fault handler.
+ * @vcpu:		VCPU pointer.
+ * @gpa:		Guest physical address of fault.
+ * @write:	Whether the fault was due to a write.
+ *
+ * Perform fast path GPA fault handling, doing all that can be done without
+ * calling into KVM. This handles marking old pages young (for idle page
+ * tracking), and dirtying of clean pages (for dirty page logging).
+ *
+ * Returns:	0 on success, in which case we can update derived mappings and
+ *		resume guest execution.
+ *		-EFAULT on failure due to absent GPA mapping or write to
+ *		read-only page, in which case KVM must be consulted.
+ */
+static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
+				   bool write)
+{
+	struct kvm *kvm = vcpu->kvm;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	pte_t *ptep;
+	kvm_pfn_t pfn = 0;
+	bool pfn_valid = false;
+	int ret = 0;
+
+	spin_lock(&kvm->mmu_lock);
+
+	/* Fast path - just check GPA page table for an existing entry */
+	ptep = kvm_pte_for_gpa(kvm, NULL, gpa);
+	if (!ptep || !pte_present(*ptep)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	/* Track access to pages marked old */
+	if (!pte_young(*ptep)) {
+		set_pte(ptep, pte_mkyoung(*ptep));
+		pfn = pte_pfn(*ptep);
+		pfn_valid = true;
+		/* call kvm_set_pfn_accessed() after unlock */
+	}
+	if (write && !pte_dirty(*ptep)) {
+		if (!pte_write(*ptep)) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		/* Track dirtying of writeable pages */
+		set_pte(ptep, pte_mkdirty(*ptep));
+		pfn = pte_pfn(*ptep);
+		mark_page_dirty(kvm, gfn);
+		kvm_set_pfn_dirty(pfn);
+	}
+
+out:
+	spin_unlock(&kvm->mmu_lock);
+	if (pfn_valid)
+		kvm_set_pfn_accessed(pfn);
+	return ret;
+}
+
+/**
+ * kvm_map_page() - Map a guest physical page.
+ * @vcpu:		VCPU pointer.
+ * @gpa:		Guest physical address of fault.
+ * @write:	Whether the fault was due to a write.
+ *
+ * Handle GPA faults by creating a new GPA mapping (or updating an existing
+ * one).
+ *
+ * This takes care of marking pages young or dirty (idle/dirty page tracking),
+ * asking KVM for the corresponding PFN, and creating a mapping in the GPA page
+ * tables. Derived mappings (GVA page tables and TLBs) must be handled by the
+ * caller.
+ *
+ * Returns:	0 on success
+ *		-EFAULT if there is no memory region at @gpa or a write was
+ *		attempted to a read-only memory region. This is usually handled
+ *		as an MMIO access.
+ */
+static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
+{
+	bool writeable;
+	int srcu_idx, err = 0, retry_no = 0;
+	unsigned long hva;
+	unsigned long mmu_seq;
+	unsigned long prot_bits;
+	pte_t *ptep, new_pte;
+	kvm_pfn_t pfn;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	struct vm_area_struct *vma;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memory_slot *memslot;
+	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+
+	/* Try the fast path to handle old / clean pages */
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	err = kvm_map_page_fast(vcpu, gpa, write);
+	if (!err)
+		goto out;
+
+	memslot = gfn_to_memslot(kvm, gfn);
+	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable);
+	if (kvm_is_error_hva(hva) || (write && !writeable))
+		goto out;
+
+	/* Let's check if we will get back a huge page backed by hugetlbfs */
+	mmap_read_lock(current->mm);
+	vma = find_vma_intersection(current->mm, hva, hva + 1);
+	if (unlikely(!vma)) {
+		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+		mmap_read_unlock(current->mm);
+		err = -EFAULT;
+		goto out;
+	}
+	mmap_read_unlock(current->mm);
+
+	/* We need a minimum of cached pages ready for page table creation */
+	err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES);
+	if (err)
+		goto out;
+
+retry:
+	/*
+	 * Used to check for invalidations in progress, of the pfn that is
+	 * returned by pfn_to_pfn_prot below.
+	 */
+	mmu_seq = kvm->mmu_invalidate_seq;
+	/*
+	 * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in
+	 * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
+	 * risk the page we get a reference to getting unmapped before we have a
+	 * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
+	 *
+	 * This smp_rmb() pairs with the effective smp_wmb() of the combination
+	 * of the pte_unmap_unlock() after the PTE is zapped, and the
+	 * spin_lock() in kvm_mmu_invalidate_<page|range_end>() before
+	 * mmu_invalidate_seq is incremented.
+	 */
+	smp_rmb();
+
+	/* Slow path - ask KVM core whether we can access this GPA */
+	pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable);
+	if (is_error_noslot_pfn(pfn)) {
+		err = -EFAULT;
+		goto out;
+	}
+
+	spin_lock(&kvm->mmu_lock);
+	/* Check if an invalidation has taken place since we got pfn */
+	if (mmu_invalidate_retry(kvm, mmu_seq)) {
+		/*
+		 * This can happen when mappings are changed asynchronously, but
+		 * also synchronously if a COW is triggered by
+		 * gfn_to_pfn_prot().
+		 */
+		spin_unlock(&kvm->mmu_lock);
+		kvm_set_pfn_accessed(pfn);
+		kvm_release_pfn_clean(pfn);
+		if (retry_no > 100) {
+			retry_no = 0;
+			schedule();
+		}
+		retry_no++;
+		goto retry;
+	}
+
+	/*
+	 * For emulated devices, such as a virtio device, the actual cache
+	 * attribute is determined by the physical machine.
+	 * For a passed-through physical device, it should be uncachable.
+	 */
+	prot_bits = _PAGE_PRESENT | __READABLE;
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		prot_bits |= _CACHE_SUC;
+	else
+		prot_bits |= _CACHE_CC;
+
+	if (writeable) {
+		prot_bits |= _PAGE_WRITE;
+		if (write) {
+			prot_bits |= __WRITEABLE;
+			mark_page_dirty(kvm, gfn);
+			kvm_set_pfn_dirty(pfn);
+		}
+	}
+
+	/* Ensure page tables are allocated */
+	ptep = kvm_pte_for_gpa(kvm, memcache, gpa);
+	new_pte = pfn_pte(pfn, __pgprot(prot_bits));
+	set_pte(ptep, new_pte);
+
+	err = 0;
+	spin_unlock(&kvm->mmu_lock);
+	kvm_release_pfn_clean(pfn);
+	kvm_set_pfn_accessed(pfn);
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return err;
+}
+
+int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
+{
+	int ret;
+
+	ret = kvm_map_page(vcpu, gpa, write);
+	if (ret)
+		return ret;
+
+	/* Invalidate this entry in the TLB */
+	return kvm_flush_tlb_gpa(vcpu, gpa);
+}
+
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+}
+
+int kvm_arch_prepare_memory_region(struct kvm *kvm,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
+				   enum kvm_mr_change change)
+{
+	return 0;
+}
+
+void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+					const struct kvm_memory_slot *memslot)
+{
+	kvm_flush_remote_tlbs(kvm);
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 20/29] LoongArch: KVM: Implement handle csr exception
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (18 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 19/29] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 21/29] LoongArch: KVM: Implement handle iocsr exception Tianrui Zhao
                   ` (8 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement kvm handling of loongarch vcpu exits caused by reading and
writing a csr, using the loongarch_csrs structure to emulate the
registers.
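
The csrxchg emulation boils down to a masked read-modify-write on the
software csr value; a minimal standalone sketch of that semantic
(illustrative only):

   #include <stdint.h>

   /* new = (old & ~mask) | (val & mask), as _kvm_emu_xchg_csr does */
   static inline uint64_t ex_csr_xchg(uint64_t old, uint64_t mask, uint64_t val)
   {
           return (old & ~mask) | (val & mask);
   }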

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/kvm_csr.h |  89 +++++++++++++++++++++++
 arch/loongarch/kvm/exit.c            | 101 +++++++++++++++++++++++++++
 2 files changed, 190 insertions(+)
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/kvm/exit.c

diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
new file mode 100644
index 000000000..44fcd724c
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_csr.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_CSR_H__
+#define __ASM_LOONGARCH_KVM_CSR_H__
+#include <asm/loongarch.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_vcpu.h>
+#include <linux/uaccess.h>
+#include <linux/kvm_host.h>
+
+#define kvm_read_hw_gcsr(id)		gcsr_read(id)
+#define kvm_write_hw_gcsr(csr, id, val)	gcsr_write(val, id)
+
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force);
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v, int force);
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+static inline void kvm_save_hw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+	csr->csrs[gid] = gcsr_read(gid);
+}
+
+static inline void kvm_restore_hw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+	gcsr_write(csr->csrs[gid], gid);
+}
+
+static inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+	return csr->csrs[gid];
+}
+
+static inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val)
+{
+	csr->csrs[gid] = val;
+}
+
+static inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val)
+{
+	csr->csrs[gid] |= val;
+}
+
+static inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long mask,
+	unsigned long val)
+{
+	unsigned long _mask = mask;
+
+	csr->csrs[gid] &= ~_mask;
+	csr->csrs[gid] |= val & _mask;
+}
+
+
+#define GET_HW_GCSR(id, csrid, v)				\
+	do {							\
+		if (csrid == id) {				\
+			*v = (long)kvm_read_hw_gcsr(csrid);	\
+			return 0;				\
+		}						\
+	} while (0)
+
+#define GET_SW_GCSR(csr, id, csrid, v)				\
+	do {							\
+		if (csrid == id) {				\
+			*v = kvm_read_sw_gcsr(csr, id);		\
+			return 0;				\
+		}						\
+	} while (0)
+
+#define SET_HW_GCSR(csr, id, csrid, v)				\
+	do {							\
+		if (csrid == id) {				\
+			kvm_write_hw_gcsr(csr, csrid, *v);	\
+			return 0;				\
+		}						\
+	} while (0)
+
+#define SET_SW_GCSR(csr, id, csrid, v)				\
+	do {							\
+		if (csrid == id) {				\
+			kvm_write_sw_gcsr(csr, csrid, *v);	\
+			return 0;				\
+		}						\
+	} while (0)
+
+#endif	/* __ASM_LOONGARCH_KVM_CSR_H__ */
diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
new file mode 100644
index 000000000..dc37827d9
--- /dev/null
+++ b/arch/loongarch/kvm/exit.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/preempt.h>
+#include <linux/vmalloc.h>
+#include <asm/fpu.h>
+#include <asm/inst.h>
+#include <asm/time.h>
+#include <asm/tlb.h>
+#include <asm/loongarch.h>
+#include <asm/numa.h>
+#include <asm/kvm_vcpu.h>
+#include <asm/kvm_csr.h>
+#include <linux/kvm_host.h>
+#include <asm/mmzone.h>
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+static unsigned long _kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long val = 0;
+
+	if (csrid < 4096)
+		val = kvm_read_sw_gcsr(csr, csrid);
+	else
+		pr_warn_once("Unsupported csrread 0x%x with pc %lx\n",
+			csrid, vcpu->arch.pc);
+	return val;
+}
+
+static void _kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (csrid < 4096)
+		kvm_write_sw_gcsr(csr, csrid, val);
+	else
+		pr_warn_once("Unsupported csrwrite 0x%x with pc %lx\n",
+				csrid, vcpu->arch.pc);
+}
+
+static void _kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long csr_mask, unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (csrid < 4096) {
+		unsigned long orig;
+
+		orig = kvm_read_sw_gcsr(csr, csrid);
+		orig &= ~csr_mask;
+		orig |= val & csr_mask;
+		kvm_write_sw_gcsr(csr, csrid, orig);
+	} else {
+		pr_warn_once("Unsupported csrxchg 0x%x with pc %lx\n",
+				csrid, vcpu->arch.pc);
+	}
+}
+
+static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int rd, rj, csrid;
+	unsigned long csr_mask;
+	unsigned long val = 0;
+
+	/*
+	 * Decode the CSR instruction from the rj field:
+	 * rj = 0 means csrrd
+	 * rj = 1 means csrwr
+	 * rj != 0,1 means csrxchg, with rj holding the value mask
+	 */
+	rd = inst.reg2csr_format.rd;
+	rj = inst.reg2csr_format.rj;
+	csrid = inst.reg2csr_format.csr;
+
+	/* Process CSR ops */
+	if (rj == 0) {
+		/* process csrrd */
+		val = _kvm_emu_read_csr(vcpu, csrid);
+		vcpu->arch.gprs[rd] = val;
+	} else if (rj == 1) {
+		/* process csrwr */
+		val = vcpu->arch.gprs[rd];
+		_kvm_emu_write_csr(vcpu, csrid, val);
+	} else {
+		/* process csrxchg */
+		val = vcpu->arch.gprs[rd];
+		csr_mask = vcpu->arch.gprs[rj];
+		_kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val);
+	}
+
+	return EMULATE_DONE;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 21/29] LoongArch: KVM: Implement handle iocsr exception
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (19 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 20/29] LoongArch: KVM: Implement handle csr exception Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
                   ` (7 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement kvm handling of the vcpu iocsr exception, setting the iocsr
info into vcpu->run and returning to user space to handle it.
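
On the user-space side this mirrors the ordinary MMIO protocol: the VMM
services the access and, for reads, the data buffer is copied back into
the guest register on the next KVM_RUN. A hypothetical VMM-side sketch,
where vmm_iocsr_read()/vmm_iocsr_write() are stand-ins for the
emulator's device model rather than real APIs:

   static void handle_iocsr_exit(struct kvm_run *run)
   {
           if (run->iocsr_io.is_write)
                   vmm_iocsr_write(run->iocsr_io.phys_addr,
                                   run->iocsr_io.data, run->iocsr_io.len);
           else
                   vmm_iocsr_read(run->iocsr_io.phys_addr,
                                  run->iocsr_io.data, run->iocsr_io.len);
   }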

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/inst.h | 16 ++++++
 arch/loongarch/kvm/exit.c         | 92 +++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
index 7eedd83fd..8ed137814 100644
--- a/arch/loongarch/include/asm/inst.h
+++ b/arch/loongarch/include/asm/inst.h
@@ -50,6 +50,14 @@ enum reg2_op {
 	revbd_op	= 0x0f,
 	revh2w_op	= 0x10,
 	revhd_op	= 0x11,
+	iocsrrdb_op	= 0x19200,
+	iocsrrdh_op	= 0x19201,
+	iocsrrdw_op	= 0x19202,
+	iocsrrdd_op	= 0x19203,
+	iocsrwrb_op	= 0x19204,
+	iocsrwrh_op	= 0x19205,
+	iocsrwrw_op	= 0x19206,
+	iocsrwrd_op	= 0x19207,
 };
 
 enum reg2i5_op {
@@ -261,6 +269,13 @@ struct reg3sa2_format {
 	unsigned int opcode : 15;
 };
 
+struct reg2csr_format {
+	unsigned int rd : 5;
+	unsigned int rj : 5;
+	unsigned int csr : 14;
+	unsigned int opcode : 8;
+};
+
 union loongarch_instruction {
 	unsigned int word;
 	struct reg0i26_format	reg0i26_format;
@@ -275,6 +290,7 @@ union loongarch_instruction {
 	struct reg2bstrd_format	reg2bstrd_format;
 	struct reg3_format	reg3_format;
 	struct reg3sa2_format	reg3sa2_format;
+	struct reg2csr_format	reg2csr_format;
 };
 
 #define LOONGARCH_INSN_SIZE	sizeof(union loongarch_instruction)
diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index dc37827d9..f02e2b940 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -99,3 +99,95 @@ static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
 
 	return EMULATE_DONE;
 }
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	u32 rd, rj, opcode;
+	u32 addr;
+	unsigned long val;
+	int ret;
+
+	/* Each IOCSR access width/direction has its own opcode */
+	rd = inst.reg2_format.rd;
+	rj = inst.reg2_format.rj;
+	opcode = inst.reg2_format.opcode;
+	addr = vcpu->arch.gprs[rj];
+	ret = EMULATE_DO_IOCSR;
+	run->iocsr_io.phys_addr = addr;
+	run->iocsr_io.is_write = 0;
+
+	/* LoongArch is little-endian */
+	switch (opcode) {
+	case iocsrrdb_op:
+		run->iocsr_io.len = 1;
+		break;
+	case iocsrrdh_op:
+		run->iocsr_io.len = 2;
+		break;
+	case iocsrrdw_op:
+		run->iocsr_io.len = 4;
+		break;
+	case iocsrrdd_op:
+		run->iocsr_io.len = 8;
+		break;
+	case iocsrwrb_op:
+		run->iocsr_io.len = 1;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrh_op:
+		run->iocsr_io.len = 2;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrw_op:
+		run->iocsr_io.len = 4;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrd_op:
+		run->iocsr_io.len = 8;
+		run->iocsr_io.is_write = 1;
+		break;
+	default:
+		ret = EMULATE_FAIL;
+		break;
+	}
+
+	if (ret == EMULATE_DO_IOCSR) {
+		if (run->iocsr_io.is_write) {
+			val = vcpu->arch.gprs[rd];
+			memcpy(run->iocsr_io.data, &val, run->iocsr_io.len);
+		}
+		vcpu->arch.io_gpr = rd;
+	}
+
+	return ret;
+}
+
+int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	enum emulation_result er = EMULATE_DONE;
+
+	switch (run->iocsr_io.len) {
+	case 8:
+		*gpr = *(s64 *)run->iocsr_io.data;
+		break;
+	case 4:
+		*gpr = *(int *)run->iocsr_io.data;
+		break;
+	case 2:
+		*gpr = *(short *)run->iocsr_io.data;
+		break;
+	case 1:
+		*gpr = *(char *) run->iocsr_io.data;
+		break;
+	default:
+		kvm_err("Bad IOCSR length: %d, addr is 0x%lx",
+				run->iocsr_io.len, vcpu->arch.badv);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (20 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 21/29] LoongArch: KVM: Implement handle iocsr exception Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20 18:40   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 23/29] LoongArch: KVM: Implement handle gspr exception Tianrui Zhao
                   ` (6 subsequent siblings)
  28 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement kvm handling of the loongarch vcpu idle exception, using
kvm_vcpu_block to emulate it.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index f02e2b940..a6beb83a0 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -191,3 +191,15 @@ int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	return er;
 }
+
+int _kvm_emu_idle(struct kvm_vcpu *vcpu)
+{
+	++vcpu->stat.idle_exits;
+	trace_kvm_exit(vcpu, KVM_TRACE_EXIT_IDLE);
+	if (!vcpu->arch.irq_pending) {
+		kvm_save_timer(vcpu);
+		kvm_vcpu_block(vcpu);
+	}
+
+	return EMULATE_DONE;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 23/29] LoongArch: KVM: Implement handle gspr exception
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (21 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 24/29] LoongArch: KVM: Implement handle mmio exception Tianrui Zhao
                   ` (5 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the kvm handle gspr exception interface, including emulation
of reads and writes of the cpucfg, csr and iocsr resources.
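
The decoder below switches on progressively narrower opcode fields of
the raw instruction word; the exact shifts it uses, written out as
standalone helpers (illustrative only):

   #include <stdint.h>

   /* bits 31:24 - 0x0 cpucfg, 0x4 csr, 0x6 iocsr/cache/idle */
   static inline unsigned int ex_op8(uint32_t word)
   {
           return (word >> 24) & 0xff;
   }

   /* bits 31:22 - 0x18 cache, 0x19 iocsr/idle */
   static inline unsigned int ex_op10(uint32_t word)
   {
           return (word >> 22) & 0x3ff;
   }

   /* bits 31:15 - 0xc90 iocsr, 0xc91 idle */
   static inline unsigned int ex_op17(uint32_t word)
   {
           return (word >> 15) & 0x1ffff;
   }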

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 114 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index a6beb83a0..75c61272b 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -203,3 +203,117 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu)
 
 	return EMULATE_DONE;
 }
+
+static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	struct kvm_run *run = vcpu->run;
+	larch_inst inst;
+	unsigned long curr_pc;
+	int rd, rj;
+	unsigned int index;
+
+	/*
+	 *  Fetch the instruction.
+	 */
+	inst.word = vcpu->arch.badi;
+	curr_pc = vcpu->arch.pc;
+	update_pc(&vcpu->arch);
+
+	er = EMULATE_FAIL;
+	switch (((inst.word >> 24) & 0xff)) {
+	case 0x0:
+		/* cpucfg GSPR */
+		if (inst.reg2_format.opcode == 0x1B) {
+			rd = inst.reg2_format.rd;
+			rj = inst.reg2_format.rj;
+			++vcpu->stat.cpucfg_exits;
+			index = vcpu->arch.gprs[rj];
+
+			vcpu->arch.gprs[rd] = read_cpucfg(index);
+			/* Nested KVM is not supported */
+			if (index == 2)
+				vcpu->arch.gprs[rd] &= ~CPUCFG2_LVZP;
+			if (index == 6)
+				vcpu->arch.gprs[rd] &= ~CPUCFG6_PMP;
+			er = EMULATE_DONE;
+		}
+		break;
+	case 0x4:
+		/* csr GSPR */
+		er = _kvm_handle_csr(vcpu, inst);
+		break;
+	case 0x6:
+		/* iocsr,cache,idle GSPR */
+		switch (((inst.word >> 22) & 0x3ff)) {
+		case 0x18:
+			/* cache GSPR */
+			er = EMULATE_DONE;
+			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
+			break;
+		case 0x19:
+			/* iocsr/idle GSPR */
+			switch (((inst.word >> 15) & 0x1ffff)) {
+			case 0xc90:
+				/* iocsr GSPR */
+				er = _kvm_emu_iocsr(inst, run, vcpu);
+				break;
+			case 0xc91:
+				/* idle GSPR */
+				er = _kvm_emu_idle(vcpu);
+				break;
+			default:
+				er = EMULATE_FAIL;
+				break;
+			}
+			break;
+		default:
+			er = EMULATE_FAIL;
+			break;
+		}
+		break;
+	default:
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	/* Rollback PC only if emulation was unsuccessful */
+	if (er == EMULATE_FAIL) {
+		kvm_err("[%#lx]%s: unsupported gspr instruction 0x%08x\n",
+			curr_pc, __func__, inst.word);
+
+		kvm_arch_vcpu_dump_regs(vcpu);
+		vcpu->arch.pc = curr_pc;
+	}
+	return er;
+}
+
+/*
+ * Executing the cpucfg instruction will trigger a GSPR exception.
+ * Accesses to the unimplemented csrs 0x15, 0x16, 0x50~0x53, 0x80, 0x81,
+ * 0x90~0x95, 0x98, 0xc0~0xff, 0x100~0x109 and 0x500~0x502, as well as
+ * the cache, idle and iocsr ops, trigger it the same way.
+ */
+static int _kvm_handle_gspr(struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	int ret = RESUME_GUEST;
+
+	er = _kvm_trap_handle_gspr(vcpu);
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		vcpu->run->exit_reason = KVM_EXIT_MMIO;
+		ret = RESUME_HOST;
+	} else if (er == EMULATE_DO_IOCSR) {
+		vcpu->run->exit_reason = KVM_EXIT_LOONGARCH_IOCSR;
+		ret = RESUME_HOST;
+	} else {
+		kvm_err("%s internal error\n", __func__);
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+	return ret;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 24/29] LoongArch: KVM: Implement handle mmio exception
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (22 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 23/29] LoongArch: KVM: Implement handle gspr exception Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 25/29] LoongArch: KVM: Implement handle fpu exception Tianrui Zhao
                   ` (4 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement handling of the mmio exception, setting the mmio info into
vcpu->run and returning to user space to handle it.
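
User space completes these exits through the standard KVM_EXIT_MMIO
protocol; a hypothetical VMM-side sketch, with device_read()/
device_write() standing in for the machine's device model (read data
lands in the guest register via _kvm_complete_mmio_read on the next
KVM_RUN):

   static void handle_mmio_exit(struct kvm_run *run)
   {
           if (run->mmio.is_write)
                   device_write(run->mmio.phys_addr,
                                run->mmio.data, run->mmio.len);
           else
                   device_read(run->mmio.phys_addr,
                               run->mmio.data, run->mmio.len);
   }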

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 308 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 308 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 75c61272b..30e64ba72 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -204,6 +204,265 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu)
 	return EMULATE_DONE;
 }
 
+int _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	struct kvm_run *run = vcpu->run;
+	unsigned int rd, op8, opcode;
+	unsigned long rd_val = 0;
+	void *data = run->mmio.data;
+	unsigned long curr_pc;
+	int ret;
+
+	/*
+	 * Update PC and hold onto current PC in case there is
+	 * an error and we want to rollback the PC
+	 */
+	curr_pc = vcpu->arch.pc;
+	update_pc(&vcpu->arch);
+
+	op8 = (inst.word >> 24) & 0xff;
+	run->mmio.phys_addr = vcpu->arch.badv;
+	ret = EMULATE_DO_MMIO;
+	if (op8 < 0x28) {
+		/* stptrw/d process */
+		rd = inst.reg2i14_format.rd;
+		opcode = inst.reg2i14_format.opcode;
+
+		switch (opcode) {
+		case stptrd_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = vcpu->arch.gprs[rd];
+			break;
+		case stptrw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = vcpu->arch.gprs[rd];
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 < 0x30) {
+		/* st.b/h/w/d  process */
+		rd = inst.reg2i12_format.rd;
+		opcode = inst.reg2i12_format.opcode;
+		rd_val = vcpu->arch.gprs[rd];
+
+		switch (opcode) {
+		case std_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = rd_val;
+			break;
+		case stw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = rd_val;
+			break;
+		case sth_op:
+			run->mmio.len = 2;
+			*(unsigned short *)data = rd_val;
+			break;
+		case stb_op:
+			run->mmio.len = 1;
+			*(unsigned char *)data = rd_val;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 == 0x38) {
+		/* stxb/h/w/d process */
+		rd = inst.reg3_format.rd;
+		opcode = inst.reg3_format.opcode;
+
+		switch (opcode) {
+		case stxb_op:
+			run->mmio.len = 1;
+			*(unsigned char *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxh_op:
+			run->mmio.len = 2;
+			*(unsigned short *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxd_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = vcpu->arch.gprs[rd];
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else {
+		ret = EMULATE_FAIL;
+	}
+
+	if (ret == EMULATE_DO_MMIO) {
+		run->mmio.is_write = 1;
+		vcpu->mmio_needed = 1;
+		vcpu->mmio_is_write = 1;
+	} else {
+		/* Roll back the PC since emulation was unsuccessful */
+		vcpu->arch.pc = curr_pc;
+		kvm_err("Write not supported inst=0x%08x @%lx BadVaddr:%#lx\n",
+			inst.word, vcpu->arch.pc, vcpu->arch.badv);
+		kvm_arch_vcpu_dump_regs(vcpu);
+	}
+
+	return ret;
+}
+
+int _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int op8, opcode, rd;
+	struct kvm_run *run = vcpu->run;
+	int ret;
+
+	run->mmio.phys_addr = vcpu->arch.badv;
+	vcpu->mmio_needed = 2;	/* signed */
+	op8 = (inst.word >> 24) & 0xff;
+	ret = EMULATE_DO_MMIO;
+
+	if (op8 < 0x28) {
+		/* ldptr.w/d process */
+		rd = inst.reg2i14_format.rd;
+		opcode = inst.reg2i14_format.opcode;
+
+		switch (opcode) {
+		case ldptrd_op:
+			run->mmio.len = 8;
+			break;
+		case ldptrw_op:
+			run->mmio.len = 4;
+			break;
+		default:
+			break;
+		}
+	} else if (op8 < 0x2f) {
+		/* ld.b/h/w/d, ld.bu/hu/wu process */
+		rd = inst.reg2i12_format.rd;
+		opcode = inst.reg2i12_format.opcode;
+
+		switch (opcode) {
+		case ldd_op:
+			run->mmio.len = 8;
+			break;
+		case ldwu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 4;
+			break;
+		case ldw_op:
+			run->mmio.len = 4;
+			break;
+		case ldhu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 2;
+			break;
+		case ldh_op:
+			run->mmio.len = 2;
+			break;
+		case ldbu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 1;
+			break;
+		case ldb_op:
+			run->mmio.len = 1;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 == 0x38) {
+		/* ldxb/h/w/d, ldxb/h/wu, ldgtb/h/w/d, ldleb/h/w/d process */
+		rd = inst.reg3_format.rd;
+		opcode = inst.reg3_format.opcode;
+
+		switch (opcode) {
+		case ldxb_op:
+			run->mmio.len = 1;
+			break;
+		case ldxbu_op:
+			run->mmio.len = 1;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxh_op:
+			run->mmio.len = 2;
+			break;
+		case ldxhu_op:
+			run->mmio.len = 2;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxw_op:
+			run->mmio.len = 4;
+			break;
+		case ldxwu_op:
+			run->mmio.len = 4;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxd_op:
+			run->mmio.len = 8;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else {
+		ret = EMULATE_FAIL;
+	}
+
+	if (ret == EMULATE_DO_MMIO) {
+		/* Set for _kvm_complete_mmio_read use */
+		vcpu->arch.io_gpr = rd;
+		run->mmio.is_write = 0;
+		vcpu->mmio_is_write = 0;
+	} else {
+		kvm_err("Load not supported inst=0x%08x @%lx BadVaddr:%#lx\n",
+			inst.word, vcpu->arch.pc, vcpu->arch.badv);
+		kvm_arch_vcpu_dump_regs(vcpu);
+		vcpu->mmio_needed = 0;
+	}
+	return ret;
+}
+
+int _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	enum emulation_result er = EMULATE_DONE;
+
+	/* update with new PC */
+	update_pc(&vcpu->arch);
+	switch (run->mmio.len) {
+	case 8:
+		*gpr = *(s64 *)run->mmio.data;
+		break;
+	case 4:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(int *)run->mmio.data;
+		else
+			*gpr = *(unsigned int *)run->mmio.data;
+		break;
+	case 2:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(short *)run->mmio.data;
+		else
+			*gpr = *(unsigned short *)run->mmio.data;
+		break;
+	case 1:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(char *) run->mmio.data;
+		else
+			*gpr = *(unsigned char *) run->mmio.data;
+		break;
+	default:
+		kvm_err("Bad MMIO length: %d, addr is 0x%lx",
+				run->mmio.len, vcpu->arch.badv);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}
+
 static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -317,3 +576,52 @@ static int _kvm_handle_gspr(struct kvm_vcpu *vcpu)
 	}
 	return ret;
 }
+
+static int _kvm_handle_mmu_fault(struct kvm_vcpu *vcpu, bool write)
+{
+	struct kvm_run *run = vcpu->run;
+	unsigned long badv = vcpu->arch.badv;
+	larch_inst inst;
+	enum emulation_result er = EMULATE_DONE;
+	int ret;
+
+	ret = kvm_handle_mm_fault(vcpu, badv, write);
+	if (ret) {
+		/* Treat as MMIO */
+		inst.word = vcpu->arch.badi;
+		if (write) {
+			er = _kvm_emu_mmio_write(vcpu, inst);
+		} else {
+			/* A code fetch fault doesn't count as an MMIO */
+			if (kvm_is_ifetch_fault(&vcpu->arch)) {
+				kvm_err("%s ifetch error addr:%lx\n", __func__, badv);
+				run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+				return RESUME_HOST;
+			}
+
+			er = _kvm_emu_mmio_read(vcpu, inst);
+		}
+	}
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		run->exit_reason = KVM_EXIT_MMIO;
+		ret = RESUME_HOST;
+	} else {
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+
+	return ret;
+}
+
+static int _kvm_handle_write_fault(struct kvm_vcpu *vcpu)
+{
+	return _kvm_handle_mmu_fault(vcpu, true);
+}
+
+static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu)
+{
+	return _kvm_handle_mmu_fault(vcpu, false);
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 25/29] LoongArch: KVM: Implement handle fpu exception
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (23 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 24/29] LoongArch: KVM: Implement handle mmio exception Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 26/29] LoongArch: KVM: Implement kvm exception vector Tianrui Zhao
                   ` (3 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement handling of the fpu exception, using kvm_own_fpu to enable
the fpu for the guest.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 30e64ba72..89c90faa1 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -625,3 +625,30 @@ static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu)
 {
 	return _kvm_handle_mmu_fault(vcpu, false);
 }
+
+/**
+ * _kvm_handle_fpu_disabled() - Guest used fpu, but it is disabled in the host
+ * @vcpu:	Virtual CPU context.
+ *
+ * Handle when the guest attempts to use fpu which hasn't been allowed
+ * by the root context.
+ */
+static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+
+	/*
+	 * If the guest FPU is not present, the FPU operation should have
+	 * been treated as a reserved instruction!
+	 * If the FPU is already in use, we shouldn't get here at all.
+	 */
+	if (WARN_ON(!_kvm_guest_has_fpu(&vcpu->arch) ||
+				vcpu->arch.aux_inuse & KVM_LARCH_FPU)) {
+		kvm_err("%s internal error\n", __func__);
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		return RESUME_HOST;
+	}
+
+	kvm_own_fpu(vcpu);
+	return RESUME_GUEST;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 26/29] LoongArch: KVM: Implement kvm exception vector
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (24 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 25/29] LoongArch: KVM: Implement handle fpu exception Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the kvm exception vector, using the _kvm_fault_tables array
to hold the handler function pointers that are used when a vcpu handles
an exit.
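
A sketch of how the exit path is expected to feed the table, reusing
the exccode extraction that _kvm_fault_ni performs below (illustrative
only; the real caller lives in the vcpu run loop):

   static int ex_dispatch_exit(struct kvm_vcpu *vcpu)
   {
           unsigned int exccode = (vcpu->arch.host_estat & CSR_ESTAT_EXC)
                                   >> CSR_ESTAT_EXC_SHIFT;

           /* indexes _kvm_fault_tables[]; empty slots hold _kvm_fault_ni */
           return _kvm_handle_fault(vcpu, exccode);
   }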

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 48 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 89c90faa1..6fd3219bb 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -652,3 +652,51 @@ static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
 	kvm_own_fpu(vcpu);
 	return RESUME_GUEST;
 }
+
+/*
+ * LoongArch KVM fallback handler for unimplemented guest exit reasons
+ */
+static int _kvm_fault_ni(struct kvm_vcpu *vcpu)
+{
+	unsigned long estat, badv;
+	unsigned int exccode, inst;
+
+	/*
+	 *  Fetch the instruction.
+	 */
+	badv = vcpu->arch.badv;
+	estat = vcpu->arch.host_estat;
+	exccode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	inst = vcpu->arch.badi;
+	kvm_err("Exccode: %d PC=%#lx inst=0x%08x BadVaddr=%#lx estat=%#llx\n",
+			exccode, vcpu->arch.pc, inst, badv, read_gcsr_estat());
+	kvm_arch_vcpu_dump_regs(vcpu);
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+
+	return RESUME_HOST;
+}
+
+static exit_handle_fn _kvm_fault_tables[EXCCODE_INT_START] = {
+	[EXCCODE_TLBL]		= _kvm_handle_read_fault,
+	[EXCCODE_TLBI]		= _kvm_handle_read_fault,
+	[EXCCODE_TLBNR]		= _kvm_handle_read_fault,
+	[EXCCODE_TLBNX]		= _kvm_handle_read_fault,
+	[EXCCODE_TLBS]		= _kvm_handle_write_fault,
+	[EXCCODE_TLBM]		= _kvm_handle_write_fault,
+	[EXCCODE_FPDIS]		= _kvm_handle_fpu_disabled,
+	[EXCCODE_GSPR]		= _kvm_handle_gspr,
+};
+
+int _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault)
+{
+	return _kvm_fault_tables[fault](vcpu);
+}
+
+void _kvm_init_fault(void)
+{
+	int i;
+
+	for (i = 0; i < EXCCODE_INT_START; i++)
+		if (!_kvm_fault_tables[i])
+			_kvm_fault_tables[i] = _kvm_fault_ni;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (25 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 26/29] LoongArch: KVM: Implement kvm exception vector Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-21  7:45   ` Paolo Bonzini
  2023-02-21  8:18   ` Paolo Bonzini
  2023-02-20  6:57 ` [PATCH v2 28/29] LoongArch: KVM: Implement probe virtualization when loongarch cpu init Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
  28 siblings, 2 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement the loongarch vcpu world switch, covering both vcpu entry to
the guest and vcpu exit from the guest; both operations need to save
and restore the host and guest registers.

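The assembly below is easier to follow with a C-level sketch of its
control flow in mind. The helper names in this sketch are illustrative;
only kvm_enter_guest, handle_exit and RESUME_HOST come from the patch:

   /* Sketch of what kvm_enter_guest()/kvm_vector_entry implement in asm */
   int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu)
   {
           int ret;

           save_host_context(vcpu);            /* host gprs, sp, tp, $r21 */
           for (;;) {
                   load_guest_context(vcpu);   /* guest pgd, eentry, era, gprs */
                   ertn();                     /* run guest until an exception */
                   /* kvm_vector_entry: save guest state, restore host state */
                   ret = vcpu->arch.handle_exit(run, vcpu);
                   if (ret & RESUME_HOST)
                           break;              /* unwind back to the caller */
           }
           restore_host_gprs(vcpu);
           return ret >> 2;                    /* matches the final srai.w */
   }
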
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kernel/asm-offsets.c |  32 +++
 arch/loongarch/kvm/switch.S         | 327 ++++++++++++++++++++++++++++
 2 files changed, 359 insertions(+)
 create mode 100644 arch/loongarch/kvm/switch.S

diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index 4bdb203fc..655741c03 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -9,6 +9,7 @@
 #include <linux/mm.h>
 #include <linux/kbuild.h>
 #include <linux/suspend.h>
+#include <linux/kvm_host.h>
 #include <asm/cpu-info.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
@@ -272,3 +273,34 @@ void output_pbe_defines(void)
 	BLANK();
 }
 #endif
+
+void output_kvm_defines(void)
+{
+	COMMENT(" KVM/LOONGARCH Specific offsets. ");
+
+	OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr);
+	OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc);
+	BLANK();
+
+	OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch);
+	OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
+	OFFSET(KVM_VCPU_RUN, kvm_vcpu, run);
+	BLANK();
+
+	OFFSET(KVM_ARCH_HSTACK, kvm_vcpu_arch, host_stack);
+	OFFSET(KVM_ARCH_HGP, kvm_vcpu_arch, host_gp);
+	OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
+	OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
+	OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
+	OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc);
+	OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs);
+	OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat);
+	OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv);
+	OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi);
+	OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg);
+	OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
+	OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu);
+
+	OFFSET(KVM_GPGD, kvm, arch.gpa_mm.pgd);
+	BLANK();
+}
diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
new file mode 100644
index 000000000..c0b8062ac
--- /dev/null
+++ b/arch/loongarch/kvm/switch.S
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/linkage.h>
+#include <asm/stackframe.h>
+#include <asm/asm.h>
+#include <asm/asmmacro.h>
+#include <asm/regdef.h>
+#include <asm/loongarch.h>
+#include <asm/export.h>
+
+#define RESUME_HOST	(1 << 1)
+
+#define PT_GPR_OFFSET(x)	(PT_R0 + 8*x)
+#define CONFIG_GUEST_CRMD	((1 << CSR_CRMD_DACM_SHIFT) | \
+				 (1 << CSR_CRMD_DACF_SHIFT) | \
+				 CSR_CRMD_PG | PLV_KERN)
+	.text
+
+.macro kvm_save_host_gpr base
+	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
+	st.d	$r\n, \base, PT_GPR_OFFSET(\n)
+	.endr
+.endm
+
+.macro kvm_restore_host_gpr base
+	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
+	ld.d	$r\n, \base, PT_GPR_OFFSET(\n)
+	.endr
+.endm
+
+/*
+ * prepare switch to guest
+ * @param:
+ *  KVM_ARCH: kvm_vcpu_arch, don't touch it until 'ertn'
+ *  GPRNUM: KVM_ARCH gpr number
+ *  tmp, tmp1: temp register
+ */
+.macro kvm_switch_to_guest KVM_ARCH GPRNUM tmp tmp1
+	/* set host excfg.VS=0, all exceptions share one exception entry */
+	csrrd	\tmp, LOONGARCH_CSR_ECFG
+	bstrins.w	\tmp, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT
+	csrwr	\tmp, LOONGARCH_CSR_ECFG
+
+	/* Load up the new EENTRY */
+	ld.d	\tmp, \KVM_ARCH, KVM_ARCH_GEENTRY
+	csrwr	\tmp, LOONGARCH_CSR_EENTRY
+
+	/* Set Guest ERA */
+	ld.d	\tmp, \KVM_ARCH, KVM_ARCH_GPC
+	csrwr	\tmp, LOONGARCH_CSR_ERA
+
+	/* Save host PGDL */
+	csrrd	\tmp, LOONGARCH_CSR_PGDL
+	st.d	\tmp, \KVM_ARCH, KVM_ARCH_HPGD
+
+	/* Switch to kvm */
+	ld.d	\tmp1, \KVM_ARCH, KVM_VCPU_KVM - KVM_VCPU_ARCH
+
+	/* Load guest PGDL */
+	lu12i.w \tmp, KVM_GPGD
+	srli.w \tmp, \tmp, 12
+	ldx.d  \tmp, \tmp1, \tmp
+	csrwr	\tmp, LOONGARCH_CSR_PGDL
+
+	/* Mix GID and RID */
+	csrrd	\tmp1, LOONGARCH_CSR_GSTAT
+	bstrpick.w	\tmp1, \tmp1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT
+	csrrd	\tmp, LOONGARCH_CSR_GTLBC
+	bstrins.w	\tmp, \tmp1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
+	csrwr	\tmp, LOONGARCH_CSR_GTLBC
+
+	/*
+	 * Switch to guest:
+	 *  GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0
+	 *  ertn
+	 */
+
+	/* Prepare to enable interrupts before entering guest */
+	ori	\tmp, zero, CSR_PRMD_PIE
+	csrxchg	\tmp, \tmp, LOONGARCH_CSR_PRMD
+
+	/* Set PVM bit to setup ertn to guest context */
+	ori	\tmp, zero, CSR_GSTAT_PVM
+	csrxchg	\tmp, \tmp, LOONGARCH_CSR_GSTAT
+
+	/* Load Guest gprs */
+	ld.d    $r1,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 1)
+	ld.d    $r2,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 2)
+	ld.d    $r3,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 3)
+	ld.d    $r4,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 4)
+	ld.d    $r5,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 5)
+	ld.d    $r7,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 7)
+	ld.d    $r8,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 8)
+	ld.d    $r9,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 9)
+	ld.d    $r10,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 10)
+	ld.d    $r11,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 11)
+	ld.d    $r12,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 12)
+	ld.d    $r13,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 13)
+	ld.d    $r14,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 14)
+	ld.d    $r15,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 15)
+	ld.d    $r16,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 16)
+	ld.d    $r17,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 17)
+	ld.d    $r18,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 18)
+	ld.d    $r19,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 19)
+	ld.d    $r20,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 20)
+	ld.d    $r21,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 21)
+	ld.d    $r22,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 22)
+	ld.d    $r23,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 23)
+	ld.d    $r24,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 24)
+	ld.d    $r25,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 25)
+	ld.d    $r26,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 26)
+	ld.d    $r27,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 27)
+	ld.d    $r28,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 28)
+	ld.d    $r29,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 29)
+	ld.d    $r30,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 30)
+	ld.d    $r31,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 31)
+	/* Load KVM_ARCH register */
+	ld.d	\KVM_ARCH, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * \GPRNUM)
+
+	ertn
+.endm
+
+/* Load kvm_vcpu into a2; guest a2 is stashed in a scratch csr, a1 stays free */
+	.section	.text
+	.cfi_sections	.debug_frame
+SYM_CODE_START(kvm_vector_entry)
+	csrwr	a2,   KVM_TEMP_KS
+	csrrd	a2,   KVM_VCPU_KS
+	addi.d	a2,   a2, KVM_VCPU_ARCH
+
+	/* After saving the gprs, any gpr is free to use */
+        st.d    $r1,  a2, (KVM_ARCH_GGPR + 8 * 1)
+        st.d    $r2,  a2, (KVM_ARCH_GGPR + 8 * 2)
+        st.d    $r3,  a2, (KVM_ARCH_GGPR + 8 * 3)
+        st.d    $r4,  a2, (KVM_ARCH_GGPR + 8 * 4)
+        st.d    $r5,  a2, (KVM_ARCH_GGPR + 8 * 5)
+        st.d    $r7,  a2, (KVM_ARCH_GGPR + 8 * 7)
+        st.d    $r8,  a2, (KVM_ARCH_GGPR + 8 * 8)
+        st.d    $r9,  a2, (KVM_ARCH_GGPR + 8 * 9)
+        st.d    $r10, a2, (KVM_ARCH_GGPR + 8 * 10)
+        st.d    $r11, a2, (KVM_ARCH_GGPR + 8 * 11)
+        st.d    $r12, a2, (KVM_ARCH_GGPR + 8 * 12)
+        st.d    $r13, a2, (KVM_ARCH_GGPR + 8 * 13)
+        st.d    $r14, a2, (KVM_ARCH_GGPR + 8 * 14)
+        st.d    $r15, a2, (KVM_ARCH_GGPR + 8 * 15)
+        st.d    $r16, a2, (KVM_ARCH_GGPR + 8 * 16)
+        st.d    $r17, a2, (KVM_ARCH_GGPR + 8 * 17)
+        st.d    $r18, a2, (KVM_ARCH_GGPR + 8 * 18)
+        st.d    $r19, a2, (KVM_ARCH_GGPR + 8 * 19)
+        st.d    $r20, a2, (KVM_ARCH_GGPR + 8 * 20)
+        st.d    $r21, a2, (KVM_ARCH_GGPR + 8 * 21)
+        st.d    $r22, a2, (KVM_ARCH_GGPR + 8 * 22)
+        st.d    $r23, a2, (KVM_ARCH_GGPR + 8 * 23)
+        st.d    $r24, a2, (KVM_ARCH_GGPR + 8 * 24)
+        st.d    $r25, a2, (KVM_ARCH_GGPR + 8 * 25)
+        st.d    $r26, a2, (KVM_ARCH_GGPR + 8 * 26)
+        st.d    $r27, a2, (KVM_ARCH_GGPR + 8 * 27)
+        st.d    $r28, a2, (KVM_ARCH_GGPR + 8 * 28)
+        st.d    $r29, a2, (KVM_ARCH_GGPR + 8 * 29)
+        st.d    $r30, a2, (KVM_ARCH_GGPR + 8 * 30)
+        st.d    $r31, a2, (KVM_ARCH_GGPR + 8 * 31)
+	/* Save guest a2 */
+	csrrd	t0,   KVM_TEMP_KS
+	st.d	t0,   a2, (KVM_ARCH_GGPR + 8 * REG_A2)
+
+	/* a2: kvm_vcpu_arch, a1 is free to use */
+	csrrd	s1,   KVM_VCPU_KS
+	ld.d	s0,   s1, KVM_VCPU_RUN
+
+	csrrd	t0,   LOONGARCH_CSR_ESTAT
+	st.d	t0,   a2, KVM_ARCH_HESTAT
+	csrrd	t0,   LOONGARCH_CSR_ERA
+	st.d	t0,   a2, KVM_ARCH_GPC
+	csrrd	t0,   LOONGARCH_CSR_BADV
+	st.d	t0,   a2, KVM_ARCH_HBADV
+	csrrd	t0,   LOONGARCH_CSR_BADI
+	st.d	t0,   a2, KVM_ARCH_HBADI
+
+	/* Restore host excfg.VS */
+	csrrd	t0, LOONGARCH_CSR_ECFG
+	ld.d	t1, a2, KVM_ARCH_HECFG
+	or	t0, t0, t1
+	csrwr	t0, LOONGARCH_CSR_ECFG
+
+	/* Restore host eentry */
+	ld.d	t0, a2, KVM_ARCH_HEENTRY
+	csrwr	t0, LOONGARCH_CSR_EENTRY
+
+#if defined(CONFIG_CPU_HAS_FPU)
+	/* Save FPU context */
+	csrrd       t0, LOONGARCH_CSR_EUEN
+	ori         t1, zero, CSR_EUEN_FPEN
+	and         t2, t0, t1
+	beqz        t2, 1f
+	movfcsr2gr  t3, fcsr0
+	st.d        t3,	a2, VCPU_FCSR0
+
+	movcf2gr    t3, $fcc0
+	or          t2, t3, zero
+	movcf2gr    t3, $fcc1
+	bstrins.d   t2, t3, 0xf, 0x8
+	movcf2gr    t3, $fcc2
+	bstrins.d   t2, t3, 0x17, 0x10
+	movcf2gr    t3, $fcc3
+	bstrins.d   t2, t3, 0x1f, 0x18
+	movcf2gr    t3, $fcc4
+	bstrins.d   t2, t3, 0x27, 0x20
+	movcf2gr    t3, $fcc5
+	bstrins.d   t2, t3, 0x2f, 0x28
+	movcf2gr    t3, $fcc6
+	bstrins.d   t2, t3, 0x37, 0x30
+	movcf2gr    t3, $fcc7
+	bstrins.d   t2, t3, 0x3f, 0x38
+	st.d        t2, a2, VCPU_FCC
+	movgr2fcsr  fcsr0, zero
+1:
+#endif
+	ld.d    t0, a2, KVM_ARCH_HPGD
+	csrwr   t0, LOONGARCH_CSR_PGDL
+
+	/* Disable PVM bit to keep ertn from returning to guest context */
+	ori	t0, zero, CSR_GSTAT_PVM
+	csrxchg	zero, t0, LOONGARCH_CSR_GSTAT
+	/* Clear GTLBC.TGID field */
+	csrrd	t0, LOONGARCH_CSR_GTLBC
+	bstrins.w	t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
+	csrwr	t0, LOONGARCH_CSR_GTLBC
+	/* Enable Address Map mode */
+	ori	t0, zero, CONFIG_GUEST_CRMD
+	csrwr	t0, LOONGARCH_CSR_CRMD
+	ld.d	tp, a2, KVM_ARCH_HGP
+	ld.d	sp, a2, KVM_ARCH_HSTACK
+	/* restore per cpu register */
+	ld.d	$r21, a2, KVM_ARCH_HPERCPU
+	addi.d	sp, sp, -PT_SIZE
+
+	/* Prepare handle exception */
+	or	a0, s0, zero
+	or	a1, s1, zero
+	ld.d	t8, a2, KVM_ARCH_HANDLE_EXIT
+	jirl	ra,t8, 0
+
+	ori	t0, zero, CSR_CRMD_IE
+	csrxchg	zero, t0, LOONGARCH_CSR_CRMD
+	or	a2, s1, zero
+	addi.d	a2, a2, KVM_VCPU_ARCH
+
+	andi	t0, a0, RESUME_HOST
+	bnez	t0, ret_to_host
+
+	/*
+	 * Return to guest:
+	 * save the per cpu register again, as we may have switched cpus
+         */
+	st.d	$r21, a2, KVM_ARCH_HPERCPU
+
+	/* Save kvm_vcpu to kscratch */
+	csrwr	s1, KVM_VCPU_KS
+	kvm_switch_to_guest a2 REG_A2 t0 t1
+
+ret_to_host:
+	ld.d    a2, a2, KVM_ARCH_HSTACK
+	addi.d  a2, a2, -PT_SIZE
+	srai.w  a3, a0, 2
+	or      a0, a3, zero
+	kvm_restore_host_gpr    a2
+	jirl    zero, ra, 0
+SYM_CODE_END(kvm_vector_entry)
+kvm_vector_entry_end:
+
+/*
+ * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu)
+ *
+ * @register_param:
+ *  a0: kvm_run* run
+ *  a1: kvm_vcpu* vcpu
+ */
+SYM_FUNC_START(kvm_enter_guest)
+	/* allocate space in stack bottom */
+	addi.d	a2, sp, -PT_SIZE
+	/* save host gprs */
+	kvm_save_host_gpr a2
+
+	/* save host crmd,prmd csr to stack */
+	csrrd	a3, LOONGARCH_CSR_CRMD
+	st.d	a3, a2, PT_CRMD
+	csrrd	a3, LOONGARCH_CSR_PRMD
+	st.d	a3, a2, PT_PRMD
+
+	addi.d	a2, a1, KVM_VCPU_ARCH
+	st.d	sp, a2, KVM_ARCH_HSTACK
+	st.d	tp, a2, KVM_ARCH_HGP
+	/* Save per cpu register */
+	st.d	$r21, a2, KVM_ARCH_HPERCPU
+
+	/* Save kvm_vcpu to kscratch */
+	csrwr	a1, KVM_VCPU_KS
+	kvm_switch_to_guest	a2 REG_A2 t0 t1
+SYM_FUNC_END(kvm_enter_guest)
+kvm_enter_guest_end:
+
+	.section ".rodata"
+SYM_DATA(kvm_vector_size,
+		.quad kvm_vector_entry_end - kvm_vector_entry)
+SYM_DATA(kvm_enter_guest_size,
+		.quad kvm_enter_guest_end - kvm_enter_guest)
+
+
+SYM_FUNC_START(kvm_save_fpu)
+	fpu_save_double a0 t1
+	jirl    zero, ra, 0
+SYM_FUNC_END(kvm_save_fpu)
+
+SYM_FUNC_START(kvm_restore_fpu)
+	fpu_restore_double a0 t1
+	jirl    zero, ra, 0
+SYM_FUNC_END(kvm_restore_fpu)
+
+SYM_FUNC_START(kvm_restore_fcsr)
+	fpu_restore_csr a0 t1
+	fpu_restore_cc  a0 t1 t2
+
+	jirl    zero, ra, 0
+SYM_FUNC_END(kvm_restore_fcsr)
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 28/29] LoongArch: KVM: Implement probe virtualization when loongarch cpu init
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (26 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  6:57 ` [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
  28 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Implement virtualization probing during loongarch cpu init, including
guest gid info, guest fpu info, etc.

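For quick reference, the guest_cfg word assembled in this patch packs
the GCFG capability bits at the following positions (all taken directly
from the diff below):

   /* Bit layout of cpuinfo_loongarch.guest_cfg as set in cpu_probe_lvz() */
   /* bit 0:  CSR_GCFG_MATP_GUEST    bit 1:  CSR_GCFG_MATP_ROOT */
   /* bit 2:  CSR_GCFG_MATP_NEST     bit 6:  CSR_GCFG_SITP      */
   /* bit 8:  CSR_GCFG_TITP          bit 10: CSR_GCFG_TOEP      */
   /* bit 12: CSR_GCFG_TOPP          bit 14: CSR_GCFG_TORUP     */
   /* bit 16: CSR_GCFG_GCIP_ALL      bit 17: CSR_GCFG_GCIP_HIT  */
   /* bit 18: CSR_GCFG_GCIP_SECURE                              */
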
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kernel/cpu-probe.c | 53 +++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/arch/loongarch/kernel/cpu-probe.c b/arch/loongarch/kernel/cpu-probe.c
index 3a3fce2d7..9c3483d9a 100644
--- a/arch/loongarch/kernel/cpu-probe.c
+++ b/arch/loongarch/kernel/cpu-probe.c
@@ -176,6 +176,57 @@ static void cpu_probe_common(struct cpuinfo_loongarch *c)
 	}
 }
 
+static inline void cpu_probe_guestinfo(struct cpuinfo_loongarch *c)
+{
+	unsigned long guestinfo;
+
+	guestinfo = read_csr_gstat();
+	if (guestinfo & CSR_GSTAT_GIDBIT) {
+		c->options |= LOONGARCH_CPU_GUESTID;
+		write_csr_gstat(0);
+	}
+}
+
+static inline void cpu_probe_lvz(struct cpuinfo_loongarch *c)
+{
+	unsigned long gcfg, gprcfg1;
+
+	cpu_probe_guestinfo(c);
+
+	c->guest.options |= LOONGARCH_CPU_FPU;
+	c->guest.options_dyn |= LOONGARCH_CPU_FPU;
+	c->guest.options_dyn |= LOONGARCH_CPU_PMP;
+
+	c->guest.ases |= LOONGARCH_CPU_LSX;
+	c->guest.ases_dyn |= LOONGARCH_CPU_LSX;
+	gprcfg1 = read_gcsr_prcfg1();
+	c->guest.kscratch_mask = GENMASK((gprcfg1 & CSR_CONF1_KSNUM) - 1, 0);
+
+	gcfg = read_csr_gcfg();
+	if (gcfg & CSR_GCFG_MATP_GUEST)
+		c->guest_cfg |= BIT(0);
+	if (gcfg & CSR_GCFG_MATP_ROOT)
+		c->guest_cfg |= BIT(1);
+	if (gcfg & CSR_GCFG_MATP_NEST)
+		c->guest_cfg |= BIT(2);
+	if (gcfg & CSR_GCFG_SITP)
+		c->guest_cfg |= BIT(6);
+	if (gcfg & CSR_GCFG_TITP)
+		c->guest_cfg |= BIT(8);
+	if (gcfg & CSR_GCFG_TOEP)
+		c->guest_cfg |= BIT(10);
+	if (gcfg & CSR_GCFG_TOPP)
+		c->guest_cfg |= BIT(12);
+	if (gcfg & CSR_GCFG_TORUP)
+		c->guest_cfg |= BIT(14);
+	if (gcfg & CSR_GCFG_GCIP_ALL)
+		c->guest_cfg |= BIT(16);
+	if (gcfg & CSR_GCFG_GCIP_HIT)
+		c->guest_cfg |= BIT(17);
+	if (gcfg & CSR_GCFG_GCIP_SECURE)
+		c->guest_cfg |= BIT(18);
+}
+
 #define MAX_NAME_LEN	32
 #define VENDOR_OFFSET	0
 #define CPUNAME_OFFSET	9
@@ -289,6 +340,8 @@ void cpu_probe(void)
 	if (cpu == 0)
 		__ua_limit = ~((1ull << cpu_vabits) - 1);
 #endif
+	if (cpu_has_lvz)
+		cpu_probe_lvz(c);
 
 	cpu_report();
 }
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile
  2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
                   ` (27 preceding siblings ...)
  2023-02-20  6:57 ` [PATCH v2 28/29] LoongArch: KVM: Implement probe virtualization when loongarch cpu init Tianrui Zhao
@ 2023-02-20  6:57 ` Tianrui Zhao
  2023-02-20  9:47   ` kernel test robot
  2023-02-20 11:09   ` kernel test robot
  28 siblings, 2 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-20  6:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

Enable the loongarch kvm config and add the makefile to support
building the kvm module.

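With the Kconfig and Makefile in place, the kvm objects can also be
built standalone; for instance (the cross toolchain prefix here is an
assumption, adjust it to the local cross compiler):

   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- \
        loongson3_defconfig
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- \
        arch/loongarch/kvm/
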
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/Kbuild                      |  1 +
 arch/loongarch/Kconfig                     |  2 ++
 arch/loongarch/configs/loongson3_defconfig |  2 ++
 arch/loongarch/kvm/Kconfig                 | 38 ++++++++++++++++++++++
 arch/loongarch/kvm/Makefile                | 21 ++++++++++++
 5 files changed, 64 insertions(+)
 create mode 100644 arch/loongarch/kvm/Kconfig
 create mode 100644 arch/loongarch/kvm/Makefile

diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
index b01f5cdb2..40be8a169 100644
--- a/arch/loongarch/Kbuild
+++ b/arch/loongarch/Kbuild
@@ -2,6 +2,7 @@ obj-y += kernel/
 obj-y += mm/
 obj-y += net/
 obj-y += vdso/
+obj-y += kvm/
 
 # for cleaning
 subdir- += boot
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 9cc8b84f7..424ad9392 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -142,6 +142,7 @@ config LOONGARCH
 	select USE_PERCPU_NUMA_NODE_ID
 	select USER_STACKTRACE_SUPPORT
 	select ZONE_DMA32
+	select HAVE_KVM
 
 config 32BIT
 	bool
@@ -541,3 +542,4 @@ source "drivers/acpi/Kconfig"
 endmenu
 
 source "drivers/firmware/Kconfig"
+source "arch/loongarch/kvm/Kconfig"
diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
index eb84cae64..9a6e31b43 100644
--- a/arch/loongarch/configs/loongson3_defconfig
+++ b/arch/loongarch/configs/loongson3_defconfig
@@ -62,6 +62,8 @@ CONFIG_EFI_ZBOOT=y
 CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
 CONFIG_EFI_CAPSULE_LOADER=m
 CONFIG_EFI_TEST=m
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=m
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
 CONFIG_MODULE_UNLOAD=y
diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
new file mode 100644
index 000000000..8a999b4c0
--- /dev/null
+++ b/arch/loongarch/kvm/Kconfig
@@ -0,0 +1,38 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# KVM configuration
+#
+
+source "virt/kvm/Kconfig"
+
+menuconfig VIRTUALIZATION
+	bool "Virtualization"
+	help
+	  Say Y here to get to see options for using your Linux host to run
+	  other operating systems inside virtual machines (guests).
+	  This option alone does not add any kernel code.
+
+	  If you say N, all options in this submenu will be skipped and
+	  disabled.
+
+if VIRTUALIZATION
+
+config KVM
+	tristate "Kernel-based Virtual Machine (KVM) support"
+	depends on HAVE_KVM
+	select MMU_NOTIFIER
+	select ANON_INODES
+	select PREEMPT_NOTIFIERS
+	select KVM_MMIO
+	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
+	select HAVE_KVM_VCPU_ASYNC_IOCTL
+	select HAVE_KVM_EVENTFD
+	select SRCU
+	help
+	  Support hosting virtualized guest machines using hardware
+	  virtualization extensions. You will need a fairly recent
+	  processor equipped with virtualization extensions.
+
+	  If unsure, say N.
+
+endif # VIRTUALIZATION
diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
new file mode 100644
index 000000000..42e9dcc18
--- /dev/null
+++ b/arch/loongarch/kvm/Makefile
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for LOONGARCH KVM support
+#
+
+ccflags-y += -I $(srctree)/$(src)
+
+include $(srctree)/virt/kvm/Makefile.kvm
+
+obj-$(CONFIG_KVM) += kvm.o
+
+kvm-y += main.o
+kvm-y += vm.o
+kvm-y += vmid.o
+kvm-y += tlb.o
+kvm-y += mmu.o
+kvm-y += vcpu.o
+kvm-y += exit.o
+kvm-y += interrupt.o
+kvm-y += timer.o
+kvm-y += switch.o
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile
  2023-02-20  6:57 ` [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
@ 2023-02-20  9:47   ` kernel test robot
  2023-02-20 11:09   ` kernel test robot
  1 sibling, 0 replies; 70+ messages in thread
From: kernel test robot @ 2023-02-20  9:47 UTC (permalink / raw)
  To: Tianrui Zhao, Paolo Bonzini
  Cc: oe-kbuild-all, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, linux-kernel, kvm, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo

Hi Tianrui,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v6.2]
[cannot apply to kvm/queue kvm/linux-next next-20230220]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Tianrui-Zhao/LoongArch-KVM-Add-kvm-related-header-files/20230220-151305
patch link:    https://lore.kernel.org/r/20230220065735.1282809-30-zhaotianrui%40loongson.cn
patch subject: [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile
config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20230220/202302201710.ERtpPSuD-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/55ee4e26440ad32966cf3ee796b8a519c77ac66b
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Tianrui-Zhao/LoongArch-KVM-Add-kvm-related-header-files/20230220-151305
        git checkout 55ee4e26440ad32966cf3ee796b8a519c77ac66b
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash arch/loongarch/kvm/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302201710.ERtpPSuD-lkp@intel.com/

All warnings (new ones prefixed by >>):

   arch/loongarch/kvm/vcpu.c: In function 'kvm_own_fpu':
>> arch/loongarch/kvm/vcpu.c:595:23: warning: variable 'sr' set but not used [-Wunused-but-set-variable]
     595 |         unsigned long sr;
         |                       ^~
   arch/loongarch/kvm/vcpu.c: At top level:
>> arch/loongarch/kvm/vcpu.c:636:5: warning: no previous prototype for 'kvm_vcpu_ioctl_interrupt' [-Wmissing-prototypes]
     636 | int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
         |     ^~~~~~~~~~~~~~~~~~~~~~~~
   In file included from include/linux/huge_mm.h:8,
                    from include/linux/mm.h:746,
                    from include/linux/kvm_host.h:16,
                    from arch/loongarch/kvm/vcpu.c:6:
   arch/loongarch/kvm/vcpu.c:126:25: warning: 'vcpu_pid_fops' defined but not used [-Wunused-const-variable=]
     126 | DEFINE_SIMPLE_ATTRIBUTE(vcpu_pid_fops, vcpu_pid_get, NULL, "%llu\n");
         |                         ^~~~~~~~~~~~~
   include/linux/fs.h:3496:37: note: in definition of macro 'DEFINE_SIMPLE_ATTRIBUTE_XSIGNED'
    3496 | static const struct file_operations __fops = {                          \
         |                                     ^~~~~~
   arch/loongarch/kvm/vcpu.c:126:1: note: in expansion of macro 'DEFINE_SIMPLE_ATTRIBUTE'
     126 | DEFINE_SIMPLE_ATTRIBUTE(vcpu_pid_fops, vcpu_pid_get, NULL, "%llu\n");
         | ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/vcpu.c:116:25: warning: 'lvcpu_stat_fops' defined but not used [-Wunused-const-variable=]
     116 | DEFINE_SIMPLE_ATTRIBUTE(lvcpu_stat_fops, lvcpu_stat_get, NULL, "%llu\n");
         |                         ^~~~~~~~~~~~~~~
   include/linux/fs.h:3496:37: note: in definition of macro 'DEFINE_SIMPLE_ATTRIBUTE_XSIGNED'
    3496 | static const struct file_operations __fops = {                          \
         |                                     ^~~~~~
   arch/loongarch/kvm/vcpu.c:116:1: note: in expansion of macro 'DEFINE_SIMPLE_ATTRIBUTE'
     116 | DEFINE_SIMPLE_ATTRIBUTE(lvcpu_stat_fops, lvcpu_stat_get, NULL, "%llu\n");
         | ^~~~~~~~~~~~~~~~~~~~~~~


vim +/sr +595 arch/loongarch/kvm/vcpu.c

81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  591  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  592  /* Enable FPU for guest and restore context */
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  593  void kvm_own_fpu(struct kvm_vcpu *vcpu)
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  594  {
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20 @595  	unsigned long sr;
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  596  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  597  	preempt_disable();
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  598  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  599  	sr = kvm_read_hw_gcsr(LOONGARCH_CSR_EUEN);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  600  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  601  	/*
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  602  	 * Enable FPU for guest
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  603  	 * We set FR and FRE according to guest context
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  604  	 */
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  605  	set_csr_euen(CSR_EUEN_FPEN);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  606  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  607  	/* If guest FPU state not active, restore it now */
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  608  	if (!(vcpu->arch.aux_inuse & KVM_LARCH_FPU)) {
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  609  		kvm_restore_fpu(&vcpu->arch.fpu);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  610  		vcpu->arch.aux_inuse |= KVM_LARCH_FPU;
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  611  		trace_kvm_aux(vcpu, KVM_TRACE_AUX_RESTORE, KVM_TRACE_AUX_FPU);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  612  	} else {
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  613  		trace_kvm_aux(vcpu, KVM_TRACE_AUX_ENABLE, KVM_TRACE_AUX_FPU);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  614  	}
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  615  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  616  	preempt_enable();
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  617  }
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  618  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  619  /* Save and disable FPU */
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  620  void kvm_lose_fpu(struct kvm_vcpu *vcpu)
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  621  {
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  622  	preempt_disable();
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  623  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  624  	if (vcpu->arch.aux_inuse & KVM_LARCH_FPU) {
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  625  		kvm_save_fpu(&vcpu->arch.fpu);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  626  		vcpu->arch.aux_inuse &= ~KVM_LARCH_FPU;
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  627  		trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  628  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  629  		/* Disable FPU */
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  630  		clear_csr_euen(CSR_EUEN_FPEN);
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  631  	}
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  632  
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  633  	preempt_enable();
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  634  }
81d0b9f4fa1f11 Tianrui Zhao 2023-02-20  635  
a4dadfc6695b38 Tianrui Zhao 2023-02-20 @636  int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
a4dadfc6695b38 Tianrui Zhao 2023-02-20  637  			     struct kvm_loongarch_interrupt *irq)
a4dadfc6695b38 Tianrui Zhao 2023-02-20  638  {
a4dadfc6695b38 Tianrui Zhao 2023-02-20  639  	int intr = (int)irq->irq;
a4dadfc6695b38 Tianrui Zhao 2023-02-20  640  	struct kvm_vcpu *dvcpu = NULL;
a4dadfc6695b38 Tianrui Zhao 2023-02-20  641  
a4dadfc6695b38 Tianrui Zhao 2023-02-20  642  	if (irq->cpu == -1)
a4dadfc6695b38 Tianrui Zhao 2023-02-20  643  		dvcpu = vcpu;
a4dadfc6695b38 Tianrui Zhao 2023-02-20  644  	else
a4dadfc6695b38 Tianrui Zhao 2023-02-20  645  		dvcpu = kvm_get_vcpu(vcpu->kvm, irq->cpu);
a4dadfc6695b38 Tianrui Zhao 2023-02-20  646  
a4dadfc6695b38 Tianrui Zhao 2023-02-20  647  	if (intr > 0)
a4dadfc6695b38 Tianrui Zhao 2023-02-20  648  		_kvm_queue_irq(dvcpu, intr);
a4dadfc6695b38 Tianrui Zhao 2023-02-20  649  	else if (intr < 0)
a4dadfc6695b38 Tianrui Zhao 2023-02-20  650  		_kvm_dequeue_irq(dvcpu, -intr);
a4dadfc6695b38 Tianrui Zhao 2023-02-20  651  	else {
a4dadfc6695b38 Tianrui Zhao 2023-02-20  652  		kvm_err("%s: invalid interrupt ioctl (%d:%d)\n", __func__,
a4dadfc6695b38 Tianrui Zhao 2023-02-20  653  				irq->cpu, irq->irq);
a4dadfc6695b38 Tianrui Zhao 2023-02-20  654  		return -EINVAL;
a4dadfc6695b38 Tianrui Zhao 2023-02-20  655  	}
a4dadfc6695b38 Tianrui Zhao 2023-02-20  656  
a4dadfc6695b38 Tianrui Zhao 2023-02-20  657  	kvm_vcpu_kick(dvcpu);
a4dadfc6695b38 Tianrui Zhao 2023-02-20  658  	return 0;
a4dadfc6695b38 Tianrui Zhao 2023-02-20  659  }
a4dadfc6695b38 Tianrui Zhao 2023-02-20  660  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile
  2023-02-20  6:57 ` [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
  2023-02-20  9:47   ` kernel test robot
@ 2023-02-20 11:09   ` kernel test robot
  1 sibling, 0 replies; 70+ messages in thread
From: kernel test robot @ 2023-02-20 11:09 UTC (permalink / raw)
  To: Tianrui Zhao, Paolo Bonzini
  Cc: oe-kbuild-all, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, linux-kernel, kvm, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo

Hi Tianrui,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v6.2]
[cannot apply to kvm/queue kvm/linux-next next-20230220]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Tianrui-Zhao/LoongArch-KVM-Add-kvm-related-header-files/20230220-151305
patch link:    https://lore.kernel.org/r/20230220065735.1282809-30-zhaotianrui%40loongson.cn
patch subject: [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile
config: loongarch-allmodconfig (https://download.01.org/0day-ci/archive/20230220/202302201849.mzOz4b5p-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/55ee4e26440ad32966cf3ee796b8a519c77ac66b
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Tianrui-Zhao/LoongArch-KVM-Add-kvm-related-header-files/20230220-151305
        git checkout 55ee4e26440ad32966cf3ee796b8a519c77ac66b
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash arch/loongarch/kvm/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302201849.mzOz4b5p-lkp@intel.com/

All error/warnings (new ones prefixed by >>):

   In file included from include/trace/define_trace.h:102,
                    from arch/loongarch/kvm/trace.h:137,
                    from arch/loongarch/kvm/vcpu.c:14:
   arch/loongarch/kvm/./trace.h: In function 'trace_raw_output_kvm_exit':
   arch/loongarch/kvm/./trace.h:54:31: warning: left-hand operand of comma expression has no effect [-Wunused-value]
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |                               ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> arch/loongarch/kvm/./trace.h:54:48: error: expected ';' before '}' token
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |                                                ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> arch/loongarch/kvm/./trace.h:54:49: error: expected ')' before ',' token
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |                                                 ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> arch/loongarch/kvm/./trace.h:54:9: warning: initialization of 'long unsigned int' from 'char *' makes integer from pointer without a cast [-Wint-conversion]
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:54:9: note: (near initialization for 'symbols[0].mask')
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> arch/loongarch/kvm/./trace.h:54:9: error: initializer element is not constant
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:54:9: note: (near initialization for 'symbols[0].mask')
      54 |         ({ KVM_TRACE_EXIT_IDLE,         "IDLE" },                       \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:73:40: note: in expansion of macro 'kvm_trace_symbol_exit_types'
      73 |                                        kvm_trace_symbol_exit_types),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:77:37: warning: braces around scalar initializer
      77 |                 static const struct trace_print_flags symbols[] =       \
         |                                     ^~~~~~~~~~~~~~~~~
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   include/trace/stages/stage3_trace_output.h:77:37: note: (near initialization for 'symbols[0].name')
      77 |                 static const struct trace_print_flags symbols[] =       \
         |                                     ^~~~~~~~~~~~~~~~~
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:78:43: warning: initialization of 'const char *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                           ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   include/trace/stages/stage3_trace_output.h:78:43: note: (near initialization for 'symbols[0].name')
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                           ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
>> include/linux/stddef.h:8:14: warning: excess elements in scalar initializer
       8 | #define NULL ((void *)0)
         |              ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   include/trace/stages/stage3_trace_output.h:78:47: note: in expansion of macro 'NULL'
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                               ^~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   include/linux/stddef.h:8:14: note: (near initialization for 'symbols[0].name')
       8 | #define NULL ((void *)0)
         |              ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   include/trace/stages/stage3_trace_output.h:78:47: note: in expansion of macro 'NULL'
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                               ^~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:78:25: warning: missing braces around initializer [-Wmissing-braces]
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:58:1: note: in expansion of macro 'TRACE_EVENT'
      58 | TRACE_EVENT(kvm_exit,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:71:13: note: in expansion of macro 'TP_printk'
      71 |             TP_printk("[%s]PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:72:23: note: in expansion of macro '__print_symbolic'
      72 |                       __print_symbolic(__entry->reason,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h: In function 'trace_raw_output_kvm_aux':
   arch/loongarch/kvm/./trace.h:86:33: warning: left-hand operand of comma expression has no effect [-Wunused-value]
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |                                 ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:86:45: error: expected ';' before '}' token
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |                                             ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:86:46: error: expected ')' before ',' token
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |                                              ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:86:9: warning: initialization of 'long unsigned int' from 'char *' makes integer from pointer without a cast [-Wint-conversion]
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:86:9: note: (near initialization for 'symbols[0].mask')
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:86:9: error: initializer element is not constant
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:86:9: note: (near initialization for 'symbols[0].mask')
      86 |         ({ KVM_TRACE_AUX_RESTORE, "restore" },          \
         |         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:113:40: note: in expansion of macro 'kvm_trace_symbol_aux_op'
     113 |                                        kvm_trace_symbol_aux_op),
         |                                        ^~~~~~~~~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:77:37: warning: braces around scalar initializer
      77 |                 static const struct trace_print_flags symbols[] =       \
         |                                     ^~~~~~~~~~~~~~~~~
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   include/trace/stages/stage3_trace_output.h:77:37: note: (near initialization for 'symbols[0].name')
      77 |                 static const struct trace_print_flags symbols[] =       \
         |                                     ^~~~~~~~~~~~~~~~~
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:78:43: warning: initialization of 'const char *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                           ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   include/trace/stages/stage3_trace_output.h:78:43: note: (near initialization for 'symbols[0].name')
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                           ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
>> include/linux/stddef.h:8:14: warning: excess elements in scalar initializer
       8 | #define NULL ((void *)0)
         |              ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   include/trace/stages/stage3_trace_output.h:78:47: note: in expansion of macro 'NULL'
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                               ^~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
   include/linux/stddef.h:8:14: note: (near initialization for 'symbols[0].name')
       8 | #define NULL ((void *)0)
         |              ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   include/trace/stages/stage3_trace_output.h:78:47: note: in expansion of macro 'NULL'
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                               ^~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:78:25: warning: missing braces around initializer [-Wmissing-braces]
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                         ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:112:23: note: in expansion of macro '__print_symbolic'
     112 |                       __print_symbolic(__entry->op,
         |                       ^~~~~~~~~~~~~~~~
>> include/trace/stages/stage3_trace_output.h:78:39: error: expected expression before ',' token
      78 |                         { symbol_array, { -1, NULL }};                  \
         |                                       ^
   include/trace/trace_events.h:203:34: note: in definition of macro 'DECLARE_EVENT_CLASS'
     203 |         trace_event_printf(iter, print);                                \
         |                                  ^~~~~
   include/trace/trace_events.h:45:30: note: in expansion of macro 'PARAMS'
      45 |                              PARAMS(print));                   \
         |                              ^~~~~~
   arch/loongarch/kvm/./trace.h:95:1: note: in expansion of macro 'TRACE_EVENT'
      95 | TRACE_EVENT(kvm_aux,
         | ^~~~~~~~~~~
   arch/loongarch/kvm/./trace.h:111:13: note: in expansion of macro 'TP_printk'
     111 |             TP_printk("%s %s PC: 0x%08lx",
         |             ^~~~~~~~~
   arch/loongarch/kvm/./trace.h:114:23: note: in expansion of macro '__print_symbolic'
     114 |                       __print_symbolic(__entry->state,
         |                       ^~~~~~~~~~~~~~~~
   arch/loongarch/kvm/vcpu.c: In function 'kvm_own_fpu':
   arch/loongarch/kvm/vcpu.c:595:23: warning: variable 'sr' set but not used [-Wunused-but-set-variable]
     595 |         unsigned long sr;
         |                       ^~
   arch/loongarch/kvm/vcpu.c: At top level:
   arch/loongarch/kvm/vcpu.c:636:5: warning: no previous prototype for 'kvm_vcpu_ioctl_interrupt' [-Wmissing-prototypes]
     636 | int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
         |     ^~~~~~~~~~~~~~~~~~~~~~~~
   In file included from include/linux/huge_mm.h:8,
                    from include/linux/mm.h:746,
                    from include/linux/kvm_host.h:16,
                    from arch/loongarch/kvm/vcpu.c:6:
   arch/loongarch/kvm/vcpu.c:126:25: warning: 'vcpu_pid_fops' defined but not used [-Wunused-const-variable=]
     126 | DEFINE_SIMPLE_ATTRIBUTE(vcpu_pid_fops, vcpu_pid_get, NULL, "%llu\n");
         |                         ^~~~~~~~~~~~~
   include/linux/fs.h:3496:37: note: in definition of macro 'DEFINE_SIMPLE_ATTRIBUTE_XSIGNED'
    3496 | static const struct file_operations __fops = {                          \
         |                                     ^~~~~~
   arch/loongarch/kvm/vcpu.c:126:1: note: in expansion of macro 'DEFINE_SIMPLE_ATTRIBUTE'
     126 | DEFINE_SIMPLE_ATTRIBUTE(vcpu_pid_fops, vcpu_pid_get, NULL, "%llu\n");
         | ^~~~~~~~~~~~~~~~~~~~~~~
   arch/loongarch/kvm/vcpu.c:116:25: warning: 'lvcpu_stat_fops' defined but not used [-Wunused-const-variable=]
     116 | DEFINE_SIMPLE_ATTRIBUTE(lvcpu_stat_fops, lvcpu_stat_get, NULL, "%llu\n");
         |                         ^~~~~~~~~~~~~~~
   include/linux/fs.h:3496:37: note: in definition of macro 'DEFINE_SIMPLE_ATTRIBUTE_XSIGNED'
    3496 | static const struct file_operations __fops = {                          \
         |                                     ^~~~~~
   arch/loongarch/kvm/vcpu.c:116:1: note: in expansion of macro 'DEFINE_SIMPLE_ATTRIBUTE'
     116 | DEFINE_SIMPLE_ATTRIBUTE(lvcpu_stat_fops, lvcpu_stat_get, NULL, "%llu\n");
         | ^~~~~~~~~~~~~~~~~~~~~~~
..


vim +54 arch/loongarch/kvm/./trace.h

d73e24865c7b73 Tianrui Zhao 2023-02-20  51  
d73e24865c7b73 Tianrui Zhao 2023-02-20  52  /* Tracepoints for VM exits */
d73e24865c7b73 Tianrui Zhao 2023-02-20  53  #define kvm_trace_symbol_exit_types					\
d73e24865c7b73 Tianrui Zhao 2023-02-20 @54  	({ KVM_TRACE_EXIT_IDLE,		"IDLE" },			\
d73e24865c7b73 Tianrui Zhao 2023-02-20  55  	{ KVM_TRACE_EXIT_CACHE,		"CACHE" },			\
d73e24865c7b73 Tianrui Zhao 2023-02-20  56  	{ KVM_TRACE_EXIT_SIGNAL,	"Signal" })
d73e24865c7b73 Tianrui Zhao 2023-02-20  57  
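All of the diagnostics above trace back to the extra parentheses around
the symbol lists: __print_symbolic() expects a bare brace-enclosed list
of { value, name } pairs, and wrapping the list in "({ ... })" turns it
into a GNU statement expression inside a static initializer, hence the
cascade of initializer errors. A minimal sketch of the likely fix for
the excerpt above (kvm_trace_symbol_aux_op at line 86 would need the
same treatment):

#define kvm_trace_symbol_exit_types					\
	{ KVM_TRACE_EXIT_IDLE,		"IDLE" },			\
	{ KVM_TRACE_EXIT_CACHE,		"CACHE" },			\
	{ KVM_TRACE_EXIT_SIGNAL,	"Signal" }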

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-20  6:57 ` [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
@ 2023-02-20 17:46   ` Paolo Bonzini
  2023-02-21  3:02     ` Tianrui Zhao
  2023-02-21  6:59     ` maobibo
  0 siblings, 2 replies; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 17:46 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +	order = get_order(kvm_vector_size + kvm_enter_guest_size);
> +	addr = (void *)__get_free_pages(GFP_KERNEL, order);
> +	if (!addr) {
> +		free_percpu(vmcs);
> +		return -ENOMEM;
> +	}
> +
> +	memcpy(addr, kvm_vector_entry, kvm_vector_size);
> +	memcpy(addr + kvm_vector_size, kvm_enter_guest, kvm_enter_guest_size);
> +	flush_icache_range((unsigned long)addr, (unsigned long)addr +
> +				kvm_vector_size + kvm_enter_guest_size);
> +
> +	vpid_mask = read_csr_gstat();
> +	vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT;
> +	if (vpid_mask)
> +		vpid_mask = GENMASK(vpid_mask - 1, 0);
> +
> +	for_each_possible_cpu(cpu) {
> +		context = per_cpu_ptr(vmcs, cpu);
> +		context->vpid_mask = vpid_mask;
> +		context->vpid_cache = context->vpid_mask + 1;
> +		context->last_vcpu = NULL;
> +		context->kvm_eentry = addr;
> +		context->kvm_enter_guest = addr + kvm_vector_size;
> +		context->page_order = order;
> +	}

A lot of these variables are constant across all pCPUs; any reason to 
have them in a per-CPU variable?  Likewise, since they are all the same 
as the constant global vmcs variable, why make them part of struct 
kvm_context instead of just making them globals?
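
For illustration, a sketch of that split (names are illustrative and
chosen not to clash with the kvm_vector_entry/kvm_enter_guest asm
symbols; this assumes vpid_cache and last_vcpu are the only fields that
genuinely vary per physical CPU):

/* Identical on every pCPU: plain file-scope variables are enough. */
static unsigned long vpid_mask;
static unsigned long switch_page_order;
static void *kvm_vector_copy;		/* copy of kvm_vector_entry */
static void *kvm_enter_guest_copy;	/* copy of kvm_enter_guest */

/* Only genuinely per-CPU state stays in the per-CPU struct. */
struct kvm_context {
	unsigned long vpid_cache;
	struct kvm_vcpu *last_vcpu;
};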

Also, why does the world switch code need a copy?

Paolo


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface
  2023-02-20  6:57 ` [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
@ 2023-02-20 17:46   ` Paolo Bonzini
  2023-02-20 18:45   ` Paolo Bonzini
  1 sibling, 0 replies; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 17:46 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +	if (ret == RESUME_GUEST)
> +		kvm_acquire_timer(vcpu);
> +
> +	if (!(ret & RESUME_HOST)) {
> +		_kvm_deliver_intr(vcpu);
> +		/* Only check for signals if not already exiting to userspace */
> +		if (signal_pending(current)) {
> +			run->exit_reason = KVM_EXIT_INTR;
> +			ret = (-EINTR << 2) | RESUME_HOST;
> +			++vcpu->stat.signal_exits;
> +			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
> +		}
> +	}
> +
> +	if (ret == RESUME_GUEST) {
> +		trace_kvm_reenter(vcpu);
> +
> +		/*
> +		 * Make sure the read of VCPU requests in vcpu_reenter()
> +		 * callback is not reordered ahead of the write to vcpu->mode,
> +		 * or we could miss a TLB flush request while the requester sees
> +		 * the VCPU as outside of guest mode and not needing an IPI.
> +		 */
> +		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
> +
> +		cpu = smp_processor_id();
> +		_kvm_check_requests(vcpu, cpu);
> +		_kvm_check_vmid(vcpu, cpu);
> +		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
> +
> +		/*
> +		 * If FPU are enabled (i.e. the guest's FPU context
> +		 * is live), restore FCSR0.
> +		 */
> +		if (_kvm_guest_has_fpu(&vcpu->arch) &&
> +			read_csr_euen() & (CSR_EUEN_FPEN)) {
> +			kvm_restore_fcsr(&vcpu->arch.fpu);
> +		}
> +	}

Please avoid copying code from arch/mips/kvm since it's already pretty ugly.


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface
  2023-02-20  6:57 ` [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
@ 2023-02-20 17:53   ` Paolo Bonzini
  2023-02-22  1:52     ` Tianrui Zhao
  0 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 17:53 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +	vcpu->arch.guest_eentry = (unsigned long)kvm_context->kvm_eentry;
> +	vcpu->arch.vcpu_run = kvm_context->kvm_enter_guest;
> +	vcpu->arch.handle_exit = _kvm_handle_exit;

Here as well, whatever is constant must not be stored in struct 
kvm_arch_vcpu.

Paolo


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
@ 2023-02-20 18:22   ` Paolo Bonzini
  2023-02-21  2:56     ` Tianrui Zhao
  2023-02-20 18:54   ` WANG Xuerui
  2023-02-21  4:36   ` Xi Ruoyao
  2 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 18:22 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +
> +/* Resume Flags */
> +#define RESUME_FLAG_DR		(1<<0)	/* Reload guest nonvolatile state? */
> +#define RESUME_FLAG_HOST	(1<<1)	/* Resume host? */
> +
> +#define RESUME_GUEST		0
> +#define RESUME_GUEST_DR		RESUME_FLAG_DR
> +#define RESUME_HOST		RESUME_FLAG_HOST
> +

Most of this code is dead, I'll give more instructions in a reply to 
patch 8.

> +	unsigned long guest_eentry;
> +	unsigned long host_eentry;
> +	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +	int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +
> +	/* Host registers preserved across guest mode execution */
> +	unsigned long host_stack;
> +	unsigned long host_gp;
> +	unsigned long host_pgd;
> +	unsigned long host_pgdhi;
> +	unsigned long host_entryhi;
> +
> +	/* Host CSR registers used when handling exits from guest */
> +	unsigned long badv;
> +	unsigned long host_estat;
> +	unsigned long badi;
> +	unsigned long host_ecfg;
> +	unsigned long host_percpu;
> +
> +	/* GPRS */
> +	unsigned long gprs[32];
> +	unsigned long pc;
> +
> +	/* FPU State */
> +	struct loongarch_fpu fpu FPU_ALIGN;
> +	/* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
> +	unsigned int aux_inuse;
> +
> +	/* CSR State */
> +	struct loongarch_csrs *csr;
> +
> +	/* GPR used as IO source/target */
> +	u32 io_gpr;
> +
> +	struct hrtimer swtimer;
> +	/* Count timer control KVM register */
> +	u32 count_ctl;
> +
> +	/* Bitmask of exceptions that are pending */
> +	unsigned long irq_pending;
> +	/* Bitmask of pending exceptions to be cleared */
> +	unsigned long irq_clear;
> +
> +	/* Cache some mmu pages needed inside spinlock regions */
> +	struct kvm_mmu_memory_cache mmu_page_cache;
> +
> +	/* vcpu's vpid is different on each host cpu in an smp system */
> +	u64 vpid[NR_CPUS];

In _kvm_check_vmid(), you already have

+	if (migrated || (ver != old)) {
+		_kvm_update_vpid(vcpu, cpu);
+		trace_kvm_vpid_change(vcpu, vcpu->arch.vpid[cpu]);
+	}

so a vpid will never be recycled if a vCPU migrates from physical CPU A 
to B and back to A.

So please keep the current VPID in the per-cpu struct vmcs, and you can 
just copy it from there in _kvm_check_vmid().
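
A sketch of what that could look like (field names from the quoted
code; this assumes vcpu->arch.vpid becomes a single u64 rather than a
u64[NR_CPUS] array):

/* Revalidate the vCPU's single VPID against this pCPU's allocator. */
static void _kvm_check_vmid(struct kvm_vcpu *vcpu, int cpu)
{
	struct kvm_context *context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
	bool migrated = (vcpu->cpu != cpu);
	unsigned long ver = vcpu->arch.vpid & ~context->vpid_mask;
	unsigned long old = context->vpid_cache & ~context->vpid_mask;

	if (migrated || ver != old) {
		/* Refresh from this pCPU's vpid_cache. */
		_kvm_update_vpid(vcpu, cpu);
		trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
	}
}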

> +	/* Period of stable timer tick in ns */
> +	u64 timer_period;
> +	/* Frequency of stable timer in Hz */
> +	u64 timer_mhz;
> +	/* Stable bias from the raw time */
> +	u64 timer_bias;
> +	/* Dynamic nanosecond bias (multiple of timer_period) to avoid overflow */
> +	s64 timer_dyn_bias;
> +	/* Save ktime */
> +	ktime_t stable_ktime_saved;
> +
> +	u64 core_ext_ioisr[4];
> +
> +	/* Last CPU the VCPU state was loaded on */
> +	int last_sched_cpu;
> +	/* Last CPU the VCPU actually executed guest code on */
> +	int last_exec_cpu;
> +
> +	u8 fpu_enabled;

This field is always true, please remove it.

> +	struct kvm_guest_debug_arch guest_debug;

This struct is empty, please remove it.

Paolo


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception
  2023-02-20  6:57 ` [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
@ 2023-02-20 18:40   ` Paolo Bonzini
  2023-02-21  9:48     ` Tianrui Zhao
  0 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 18:40 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +int _kvm_emu_idle(struct kvm_vcpu *vcpu)
> +{
> +	++vcpu->stat.idle_exits;
> +	trace_kvm_exit(vcpu, KVM_TRACE_EXIT_IDLE);

Please add a separate tracepoint, don't overload trace_kvm_exit().

Likewise for _kvm_trap_handle_gspr().

I think _kvm_trap_handle_gspr() should have a tracepoint whose parameter 
is inst.word.
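
Something along these lines, for instance (a sketch; the event name is
illustrative):

TRACE_EVENT(kvm_exit_gspr,
	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
	    TP_ARGS(vcpu, inst_word),
	    TP_STRUCT__entry(
			__field(unsigned int, inst_word)
	    ),
	    TP_fast_assign(
			__entry->inst_word = inst_word;
	    ),
	    TP_printk("inst word: 0x%08x", __entry->inst_word)
);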

Paolo

> +	if (!vcpu->arch.irq_pending) {
> +		kvm_save_timer(vcpu);
> +		kvm_vcpu_block(vcpu);
> +	}
> +
> +	return EMULATE_DONE;


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface
  2023-02-20  6:57 ` [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
@ 2023-02-20 18:44   ` Paolo Bonzini
  2023-02-22  2:08     ` Tianrui Zhao
  0 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 18:44 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +	lose_fpu(1);

Is this enough to clear CSR_EUEN_FPEN?

Paolo


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface
  2023-02-20  6:57 ` [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
  2023-02-20 17:46   ` Paolo Bonzini
@ 2023-02-20 18:45   ` Paolo Bonzini
  2023-02-21  3:17     ` Tianrui Zhao
  1 sibling, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 18:45 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> + * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)

As far as I can see, RESUME_FLAG_NV does not exist anymore and this is 
just copied from arch/mips?

You can keep RESUME_HOST/RESUME_GUEST for the individual functions, but 
here please make it just "1" for resume guest, and "<= 0" for resume 
host.  This is easy enough to check from assembly and removes the srai by 2.
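
Sketched out, the convention would be (dropping the flag bits and the
"<< 2" packing entirely):

#define RESUME_GUEST	1	/* > 0: reenter the guest */
#define RESUME_HOST	0	/* <= 0: return to host; < 0 carries -errno */

	if (signal_pending(current)) {
		run->exit_reason = KVM_EXIT_INTR;
		ret = -EINTR;	/* instead of (-EINTR << 2) | RESUME_HOST */
	}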

> +static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +{
> +	unsigned long exst = vcpu->arch.host_estat;
> +	u32 intr = exst & 0x1fff; /* ignore NMI */
> +	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
> +	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
> +	int ret = RESUME_GUEST, cpu;
> +
> +	vcpu->mode = OUTSIDE_GUEST_MODE;
> +
> +	/* Set a default exit reason */
> +	run->exit_reason = KVM_EXIT_UNKNOWN;
> +	run->ready_for_interrupt_injection = 1;
> +
> +	/*
> +	 * Set the appropriate status bits based on host CPU features,
> +	 * before we hit the scheduler
> +	 */

Stale comment?

> +	local_irq_enable();

Please add guest_state_exit_irqoff() here.
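
That is, roughly:

	vcpu->mode = OUTSIDE_GUEST_MODE;

	/* Pair with the guest_state_enter_irqoff() done before entry. */
	guest_state_exit_irqoff();
	local_irq_enable();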

> +	kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
> +			__func__, exst, opc, run, vcpu);

Please add the information to the kvm_exit tracepoint (thus also 
removing variables such as "exst" or "opc" from this function) instead 
of calling kvm_debug().
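
For example, a sketch of carrying those values in the tracepoint itself
(the field set is illustrative; both fields exist in the quoted
kvm_vcpu_arch):

TRACE_EVENT(kvm_exit,
	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
	    TP_ARGS(vcpu, reason),
	    TP_STRUCT__entry(
			__field(unsigned long, pc)
			__field(unsigned long, estat)
			__field(unsigned int, reason)
	    ),
	    TP_fast_assign(
			__entry->pc	= vcpu->arch.pc;
			__entry->estat	= vcpu->arch.host_estat;
			__entry->reason	= reason;
	    ),
	    TP_printk("PC: 0x%08lx ESTAT: 0x%08lx reason: %u",
		      __entry->pc, __entry->estat, __entry->reason)
);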

> +	trace_kvm_exit(vcpu, exccode);
> +	if (exccode) {
> +		ret = _kvm_handle_fault(vcpu, exccode);
> +	} else {
> +		WARN(!intr, "suspicious vm exiting");
> +		++vcpu->stat.int_exits;
> +
> +		if (need_resched())
> +			cond_resched();

This "if" is not necessary because there is already a cond_resched() below.

> +		ret = RESUME_GUEST;

This "ret" is not necessary because "ret" is already initialized to 
RESUME_GUEST above, you can either remove it or remove the initializer.

> +	}
> +
> +	cond_resched();
> +	local_irq_disable();

At this point, ret is either RESUME_GUEST or RESUME_HOST.  So, the "if"s 
below are either all taken or all not taken, and most of this code:

	kvm_acquire_timer(vcpu);
	_kvm_deliver_intr(vcpu);

	if (signal_pending(current)) {
		run->exit_reason = KVM_EXIT_INTR;
		ret = (-EINTR << 2) | RESUME_HOST;
		++vcpu->stat.signal_exits;
		// no need for a tracepoint here
		// trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
	}

	trace_kvm_reenter(vcpu);

	/*
	 * Make sure the read of VCPU requests in vcpu_reenter()
	 * callback is not reordered ahead of the write to vcpu->mode,
	 * or we could miss a TLB flush request while the requester sees
	 * the VCPU as outside of guest mode and not needing an IPI.
	 */
	smp_store_mb(vcpu->mode, IN_GUEST_MODE);

	cpu = smp_processor_id();
	_kvm_check_requests(vcpu, cpu);
	_kvm_check_vmid(vcpu, cpu);
	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);

	/*
	 * If FPU are enabled (i.e. the guest's FPU context
	 * is live), restore FCSR0.
	 */
	if (_kvm_guest_has_fpu(&vcpu->arch) &&
		read_csr_euen() & (CSR_EUEN_FPEN)) {
		kvm_restore_fcsr(&vcpu->arch.fpu);
	}

(all except for the "if (signal_pending(current))" and the final "if") 
is pretty much duplicated with kvm_arch_vcpu_ioctl_run(); the remaining 
code can also be done from kvm_arch_vcpu_ioctl_run(), and the cost is small.
Please move it to a separate function, for example:

int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
{
	if (signal_pending(current)) {
		run->exit_reason = KVM_EXIT_INTR;
		++vcpu->stat.signal_exits;
		return -EINTR;
	}

	kvm_acquire_timer(vcpu);
	_kvm_deliver_intr(vcpu);

	...

	if (_kvm_guest_has_fpu(&vcpu->arch) &&
		read_csr_euen() & (CSR_EUEN_FPEN)) {
		kvm_restore_fcsr(&vcpu->arch.fpu);
	}
	return 1;
}

Call it from _kvm_handle_exit():

	if (ret == RESUME_HOST)
		return 0;

	r = kvm_pre_enter_guest(vcpu);
	if (r > 0) {
		trace_kvm_reenter(vcpu);
		guest_state_enter_irqoff();
	}

	return r;

and from kvm_arch_vcpu_ioctl_run():

	local_irq_disable();
	guest_timing_enter_irqoff();
	r = kvm_pre_enter_guest(vcpu);
	if (r > 0) {
		trace_kvm_enter(vcpu);
		/*
		 * This should actually not be a function pointer, but
		 * is kept as one here just for clarity.
		 */
		guest_state_enter_irqoff();
		r = vcpu->arch.vcpu_run(run, vcpu);
		/* guest_state_exit_irqoff() already done.  */
		trace_kvm_out(vcpu);
	}
	guest_timing_exit_irqoff();
	local_irq_enable();

out:
	kvm_sigset_deactivate(vcpu);

	vcpu_put(vcpu);
	return r;

Paolo

> +	if (ret == RESUME_GUEST)
> +		kvm_acquire_timer(vcpu);
> +
> +	if (!(ret & RESUME_HOST)) {
> +		_kvm_deliver_intr(vcpu);
> +		/* Only check for signals if not already exiting to userspace */
> +		if (signal_pending(current)) {
> +			run->exit_reason = KVM_EXIT_INTR;
> +			ret = (-EINTR << 2) | RESUME_HOST;
> +			++vcpu->stat.signal_exits;
> +			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
> +		}
> +	}
> +
> +	if (ret == RESUME_GUEST) {
> +		trace_kvm_reenter(vcpu);
> +
> +		/*
> +		 * Make sure the read of VCPU requests in vcpu_reenter()
> +		 * callback is not reordered ahead of the write to vcpu->mode,
> +		 * or we could miss a TLB flush request while the requester sees
> +		 * the VCPU as outside of guest mode and not needing an IPI.
> +		 */
> +		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
> +
> +		cpu = smp_processor_id();
> +		_kvm_check_requests(vcpu, cpu);
> +		_kvm_check_vmid(vcpu, cpu);
> +		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
> +
> +		/*
> +		 * If FPU are enabled (i.e. the guest's FPU context
> +		 * is live), restore FCSR0.
> +		 */
> +		if (_kvm_guest_has_fpu(&vcpu->arch) &&
> +			read_csr_euen() & (CSR_EUEN_FPEN)) {
> +			kvm_restore_fcsr(&vcpu->arch.fpu);
> +		}
> +	}
> +
> +	return ret;
> +}
> +
>   int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>   {
>   	int i;


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces
  2023-02-20  6:57 ` [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
@ 2023-02-20 18:50   ` Paolo Bonzini
  2023-02-21  3:19     ` Tianrui Zhao
  0 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-20 18:50 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +
> +int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
> +				  struct kvm_translation *tr)
> +{
> +	return 0;
> +}
> +

Please return -EINVAL instead.
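
That is:

int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
				  struct kvm_translation *tr)
{
	/* Not supported on LoongArch; let userspace know. */
	return -EINVAL;
}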

Paolo


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
  2023-02-20 18:22   ` Paolo Bonzini
@ 2023-02-20 18:54   ` WANG Xuerui
  2023-02-21  4:36   ` Xi Ruoyao
  2 siblings, 0 replies; 70+ messages in thread
From: WANG Xuerui @ 2023-02-20 18:54 UTC (permalink / raw)
  To: Tianrui Zhao, Paolo Bonzini
  Cc: Huacai Chen, Greg Kroah-Hartman, loongarch, linux-kernel, kvm,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo

Hi,

On 2/20/23 14:57, Tianrui Zhao wrote:
> Add LoongArch KVM related header files, including kvm.h,
> kvm_host.h, kvm_types.h. All of those are about LoongArch
> virtualization features and kvm interfaces.
>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   arch/loongarch/include/asm/cpu-features.h |  22 ++
>   arch/loongarch/include/asm/kvm_host.h     | 257 ++++++++++++++++++++++
>   arch/loongarch/include/asm/kvm_types.h    |  11 +
>   arch/loongarch/include/uapi/asm/kvm.h     | 107 +++++++++
>   include/uapi/linux/kvm.h                  |  12 +
>   5 files changed, 409 insertions(+)
>   create mode 100644 arch/loongarch/include/asm/kvm_host.h
>   create mode 100644 arch/loongarch/include/asm/kvm_types.h
>   create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
>
> diff --git a/arch/loongarch/include/asm/cpu-features.h b/arch/loongarch/include/asm/cpu-features.h
> index b07974218..345b7674a 100644
> --- a/arch/loongarch/include/asm/cpu-features.h
> +++ b/arch/loongarch/include/asm/cpu-features.h
> @@ -64,5 +64,27 @@
>   #define cpu_has_guestid		cpu_opt(LOONGARCH_CPU_GUESTID)
>   #define cpu_has_hypervisor	cpu_opt(LOONGARCH_CPU_HYPERVISOR)
>   
> +#define cpu_has_matc_guest	(cpu_data[0].guest_cfg & BIT(0))
> +#define cpu_has_matc_root	(cpu_data[0].guest_cfg & BIT(1))
> +#define cpu_has_matc_nest	(cpu_data[0].guest_cfg & BIT(2))
> +#define cpu_has_sitp		(cpu_data[0].guest_cfg & BIT(6))
> +#define cpu_has_titp		(cpu_data[0].guest_cfg & BIT(8))
> +#define cpu_has_toep		(cpu_data[0].guest_cfg & BIT(10))
> +#define cpu_has_topp		(cpu_data[0].guest_cfg & BIT(12))
> +#define cpu_has_torup		(cpu_data[0].guest_cfg & BIT(14))
> +#define cpu_has_gcip_all	(cpu_data[0].guest_cfg & BIT(16))
> +#define cpu_has_gcip_hit	(cpu_data[0].guest_cfg & BIT(17))
> +#define cpu_has_gcip_secure	(cpu_data[0].guest_cfg & BIT(18))

Where can we get the public documentation for all this fresh LoongArch 
Virtualization Extension? Without documentation it's hard for outsiders 
to make effective reviews...

Same for large swaths of code below.

> +
> +/*
> + * Guest capabilities
> + */
> +#define cpu_guest_has_conf1	(cpu_data[0].guest.conf & BIT(1))
> +#define cpu_guest_has_conf2	(cpu_data[0].guest.conf & BIT(2))
> +#define cpu_guest_has_conf3	(cpu_data[0].guest.conf & BIT(3))
> +#define cpu_guest_has_fpu	(cpu_data[0].guest.options & LOONGARCH_CPU_FPU)
> +#define cpu_guest_has_perf	(cpu_data[0].guest.options & LOONGARCH_CPU_PMP)
> +#define cpu_guest_has_watch	(cpu_data[0].guest.options & LOONGARCH_CPU_WATCH)
> +#define cpu_guest_has_lsx	(cpu_data[0].guest.ases & LOONGARCH_ASE_LSX)
>   
>   #endif /* __ASM_CPU_FEATURES_H */
> diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
> new file mode 100644
> index 000000000..fa464e476
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_host.h
> @@ -0,0 +1,257 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_HOST_H__
> +#define __ASM_LOONGARCH_KVM_HOST_H__
> +
> +#include <linux/cpumask.h>
> +#include <linux/mutex.h>
> +#include <linux/hrtimer.h>
> +#include <linux/interrupt.h>
> +#include <linux/types.h>
> +#include <linux/kvm.h>
> +#include <linux/kvm_types.h>
> +#include <linux/threads.h>
> +#include <linux/spinlock.h>
> +
> +#include <asm/inst.h>
> +#include <asm/loongarch.h>
> +
> +/* Loongarch KVM register ids */
> +#define LOONGARCH_CSR_32(_R, _S)	\
> +	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
> +
> +#define LOONGARCH_CSR_64(_R, _S)	\
> +	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
> +
> +#define KVM_IOC_CSRID(id)		LOONGARCH_CSR_64(id, 0)
> +#define KVM_GET_IOC_CSRIDX(id)		((id & KVM_CSR_IDX_MASK) >> 3)
> +
> +#define KVM_MAX_VCPUS			256
> +/* memory slots that does not exposed to userspace */
> +#define KVM_PRIVATE_MEM_SLOTS		0
> +
> +#define KVM_HALT_POLL_NS_DEFAULT	500000
> +
> +struct kvm_vm_stat {
> +	struct kvm_vm_stat_generic generic;
> +};
> +
> +struct kvm_vcpu_stat {
> +	struct kvm_vcpu_stat_generic generic;
> +	u64 idle_exits;
> +	u64 signal_exits;
> +	u64 int_exits;
> +	u64 cpucfg_exits;
> +};
> +
> +struct kvm_arch_memory_slot {
> +};
> +
> +struct kvm_context {
> +	unsigned long vpid_mask;
> +	unsigned long vpid_cache;
> +	void *kvm_eentry;
> +	void *kvm_enter_guest;
> +	unsigned long page_order;
> +	struct kvm_vcpu *last_vcpu;
> +};
> +
> +struct kvm_arch {
> +	/* Guest physical mm */
> +	struct mm_struct gpa_mm;
> +	/* Mask of CPUs needing GPA ASID flush */
> +	cpumask_t asid_flush_mask;
> +
> +	unsigned char online_vcpus;
> +	unsigned char is_migrate;
> +	s64 time_offset;
> +	struct kvm_context __percpu *vmcs;
> +};
> +
> +
> +#define LOONGARCH_CSRS		0x100
> +#define CSR_UCWIN_BASE		0x100
> +#define CSR_UCWIN_SIZE		0x10
> +#define CSR_DMWIN_BASE		0x180
> +#define CSR_DMWIN_SIZE		0x4
> +#define CSR_PERF_BASE		0x200
> +#define CSR_PERF_SIZE		0x8
> +#define CSR_DEBUG_BASE		0x500
> +#define CSR_DEBUG_SIZE		0x3
> +#define CSR_ALL_SIZE		0x800
> +
> +struct loongarch_csrs {
> +	unsigned long csrs[CSR_ALL_SIZE];
> +};
> +
> +/* Resume Flags */
> +#define RESUME_FLAG_DR		(1<<0)	/* Reload guest nonvolatile state? */
> +#define RESUME_FLAG_HOST	(1<<1)	/* Resume host? */
> +
> +#define RESUME_GUEST		0
> +#define RESUME_GUEST_DR		RESUME_FLAG_DR
> +#define RESUME_HOST		RESUME_FLAG_HOST
> +
> +enum emulation_result {
> +	EMULATE_DONE,		/* no further processing */
> +	EMULATE_DO_MMIO,	/* kvm_run filled with MMIO request */
> +	EMULATE_FAIL,		/* can't emulate this instruction */
> +	EMULATE_WAIT,		/* WAIT instruction */
> +	EMULATE_EXCEPT,		/* A guest exception has been generated */
> +	EMULATE_DO_IOCSR,	/* handle IOCSR request */
> +};
> +
> +#define KVM_NR_MEM_OBJS		4
> +#define KVM_LARCH_FPU		(0x1 << 0)
> +
> +struct kvm_vcpu_arch {
> +	unsigned long guest_eentry;
> +	unsigned long host_eentry;
> +	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +	int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +
> +	/* Host registers preserved across guest mode execution */
> +	unsigned long host_stack;
> +	unsigned long host_gp;
> +	unsigned long host_pgd;
> +	unsigned long host_pgdhi;
> +	unsigned long host_entryhi;
> +
> +	/* Host CSR registers used when handling exits from guest */
> +	unsigned long badv;
> +	unsigned long host_estat;
> +	unsigned long badi;
> +	unsigned long host_ecfg;
> +	unsigned long host_percpu;
> +
> +	/* GPRS */
> +	unsigned long gprs[32];
> +	unsigned long pc;
> +
> +	/* FPU State */
> +	struct loongarch_fpu fpu FPU_ALIGN;
> +	/* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
> +	unsigned int aux_inuse;
> +
> +	/* CSR State */
> +	struct loongarch_csrs *csr;
> +
> +	/* GPR used as IO source/target */
> +	u32 io_gpr;
> +
> +	struct hrtimer swtimer;
> +	/* Count timer control KVM register */
> +	u32 count_ctl;
> +
> +	/* Bitmask of exceptions that are pending */
> +	unsigned long irq_pending;
> +	/* Bitmask of pending exceptions to be cleared */
> +	unsigned long irq_clear;
> +
> +	/* Cache some mmu pages needed inside spinlock regions */
> +	struct kvm_mmu_memory_cache mmu_page_cache;
> +
> +	/* vcpu's vpid is different on each host cpu in an smp system */
> +	u64 vpid[NR_CPUS];
> +
> +	/* Period of stable timer tick in ns */
> +	u64 timer_period;
> +	/* Frequency of stable timer in Hz */
> +	u64 timer_mhz;
> +	/* Stable bias from the raw time */
> +	u64 timer_bias;
> +	/* Dynamic nanosecond bias (multiple of timer_period) to avoid overflow */
> +	s64 timer_dyn_bias;
> +	/* Save ktime */
> +	ktime_t stable_ktime_saved;
> +
> +	u64 core_ext_ioisr[4];
> +
> +	/* Last CPU the VCPU state was loaded on */
> +	int last_sched_cpu;
> +	/* Last CPU the VCPU actually executed guest code on */
> +	int last_exec_cpu;
> +
> +	u8 fpu_enabled;
> +	struct kvm_guest_debug_arch guest_debug;
> +};
> +
> +static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
> +{
> +	return csr->csrs[reg];
> +}
> +
> +static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg,
> +		unsigned long val)
> +{
> +	csr->csrs[reg] = val;
> +}
> +
> +/* Helpers */
> +static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch)
> +{
> +	return cpu_has_fpu && arch->fpu_enabled;
> +}
> +
> +void _kvm_init_fault(void);
> +
> +/* Debug: dump vcpu state */
> +int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
> +
> +/* MMU handling */
> +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
> +void kvm_flush_tlb_all(void);
> +void _kvm_destroy_mm(struct kvm *kvm);
> +pgd_t *kvm_pgd_alloc(void);
> +
> +#define KVM_ARCH_WANT_MMU_NOTIFIER
> +int kvm_unmap_hva_range(struct kvm *kvm,
> +			unsigned long start, unsigned long end, bool blockable);
> +void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
> +int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
> +int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> +
> +static inline void update_pc(struct kvm_vcpu_arch *arch)
> +{
> +	arch->pc += 4;
> +}
> +
> +/**
> + * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
> + * @vcpu:	Virtual CPU.
> + *
> + * Returns:	Whether the TLBL exception was likely due to an instruction
> + *		fetch fault rather than a data load fault.
> + */
> +static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
> +{
> +	if (arch->pc == arch->badv)
> +		return true;
> +
> +	return false;
> +}
> +
> +/* Misc */
> +static inline void kvm_arch_hardware_unsetup(void) {}
> +static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> +static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
> +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> +static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arch_free_memslot(struct kvm *kvm,
> +				   struct kvm_memory_slot *slot) {}
> +void _kvm_check_vmid(struct kvm_vcpu *vcpu, int cpu);
> +enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
> +int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
> +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
> +					const struct kvm_memory_slot *memslot);
> +void kvm_init_vmcs(struct kvm *kvm);
> +void kvm_vector_entry(void);
> +int  kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +extern const unsigned long kvm_vector_size;
> +extern const unsigned long kvm_enter_guest_size;
> +#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
> diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h
> new file mode 100644
> index 000000000..060647b5f
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_types.h
> @@ -0,0 +1,11 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef _ASM_LOONGARCH_KVM_TYPES_H
> +#define _ASM_LOONGARCH_KVM_TYPES_H
> +
> +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE	4
> +
> +#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
> diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
> new file mode 100644
> index 000000000..4192a5120
> --- /dev/null
> +++ b/arch/loongarch/include/uapi/asm/kvm.h
> @@ -0,0 +1,107 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __UAPI_ASM_LOONGARCH_KVM_H
> +#define __UAPI_ASM_LOONGARCH_KVM_H
> +
> +#include <linux/types.h>
> +
> +/*
> + * KVM Loongarch specific structures and definitions.
> + *
> + * Some parts derived from the x86 version of this file.
> + */
> +
> +#define __KVM_HAVE_READONLY_MEM
> +
> +#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
> +
> +/*
> + * for KVM_GET_REGS and KVM_SET_REGS
> + */
> +struct kvm_regs {
> +	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
> +	__u64 gpr[32];
> +	__u64 pc;
> +};
> +
> +/*
> + * for KVM_GET_FPU and KVM_SET_FPU
> + */
> +struct kvm_fpu {
> +	__u32 fcsr;
> +	__u32 none;
> +	__u64 fcc;    /* 8x8 */
> +	struct kvm_fpureg {
> +		__u64 val64[4];	//support max 256 bits
> +	} fpr[32];
> +};
> +
> +/*
> + * For LOONGARCH, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
> + * registers.  The id field is broken down as follows:
> + *
> + *  bits[63..52] - As per linux/kvm.h
> + *  bits[51..32] - Must be zero.
> + *  bits[31..16] - Register set.
> + *
> + * Register set = 0: GP registers from kvm_regs (see definitions below).
> + *
> + * Register set = 1: CSR registers.
> + *
> + * Register set = 2: KVM specific registers (see definitions below).
> + *
> + * Register set = 3: FPU / SIMD registers (see definitions below).
> + *
> + * Other sets registers may be added in the future.  Each set would
> + * have its own identifier in bits[31..16].
> + */
> +
> +#define KVM_REG_LOONGARCH_GP		(KVM_REG_LOONGARCH | 0x00000ULL)
> +#define KVM_REG_LOONGARCH_CSR		(KVM_REG_LOONGARCH | 0x10000ULL)
> +#define KVM_REG_LOONGARCH_KVM		(KVM_REG_LOONGARCH | 0x20000ULL)
> +#define KVM_REG_LOONGARCH_FPU		(KVM_REG_LOONGARCH | 0x30000ULL)
> +#define KVM_REG_LOONGARCH_MASK		(KVM_REG_LOONGARCH | 0x30000ULL)
> +#define KVM_CSR_IDX_MASK		(0x10000 - 1)
> +
> +/*
> + * KVM_REG_LOONGARCH_KVM - KVM specific control registers.
> + */
> +
> +#define KVM_REG_LOONGARCH_COUNTER	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3)
> +#define KVM_REG_LOONGARCH_VCPU_RESET	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4)
> +
> +struct kvm_debug_exit_arch {
> +};
> +
> +/* for KVM_SET_GUEST_DEBUG */
> +struct kvm_guest_debug_arch {
> +};
> +
> +/* definition of registers in kvm_run */
> +struct kvm_sync_regs {
> +};
> +
> +/* dummy definition */
> +struct kvm_sregs {
> +};
> +
> +struct kvm_iocsr_entry {
> +	__u32 addr;
> +	__u32 pad;
> +	__u64 data;
> +};
> +
> +struct kvm_loongarch_interrupt {
> +	/* in */
> +	__u32 cpu;
> +	__u32 irq;
> +};
> +
> +#define KVM_NR_IRQCHIPS		1
> +#define KVM_IRQCHIP_NUM_PINS	64
> +#define KVM_MAX_CORES		256
> +
> +#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 55155e262..fa9d0e18f 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -264,6 +264,7 @@ struct kvm_xen_exit {
>   #define KVM_EXIT_RISCV_SBI        35
>   #define KVM_EXIT_RISCV_CSR        36
>   #define KVM_EXIT_NOTIFY           37
> +#define KVM_EXIT_LOONGARCH_IOCSR  38
>   
>   /* For KVM_EXIT_INTERNAL_ERROR */
>   /* Emulate instruction failed. */
> @@ -336,6 +337,13 @@ struct kvm_run {
>   			__u32 len;
>   			__u8  is_write;
>   		} mmio;
> +		/* KVM_EXIT_LOONGARCH_IOCSR */
> +		struct {
> +			__u64 phys_addr;
> +			__u8  data[8];
> +			__u32 len;
> +			__u8  is_write;
> +		} iocsr_io;
>   		/* KVM_EXIT_HYPERCALL */
>   		struct {
>   			__u64 nr;
> @@ -1175,6 +1183,9 @@ struct kvm_ppc_resize_hpt {
>   #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
>   #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224
>   #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
> +#define KVM_CAP_LOONGARCH_FPU 226
> +#define KVM_CAP_LOONGARCH_LSX 227
> +#define KVM_CAP_LOONGARCH_VZ 228
>   
>   #ifdef KVM_CAP_IRQ_ROUTING
>   
> @@ -1345,6 +1356,7 @@ struct kvm_dirty_tlb {
>   #define KVM_REG_ARM64		0x6000000000000000ULL
>   #define KVM_REG_MIPS		0x7000000000000000ULL
>   #define KVM_REG_RISCV		0x8000000000000000ULL
> +#define KVM_REG_LOONGARCH	0x9000000000000000ULL
>   
>   #define KVM_REG_SIZE_SHIFT	52
>   #define KVM_REG_SIZE_MASK	0x00f0000000000000ULL

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-20  6:57 ` [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
@ 2023-02-20 18:57   ` WANG Xuerui
  2023-02-27  1:39     ` Tianrui Zhao
  2023-02-21  4:44   ` Xi Ruoyao
  1 sibling, 1 reply; 70+ messages in thread
From: WANG Xuerui @ 2023-02-20 18:57 UTC (permalink / raw)
  To: Tianrui Zhao, Paolo Bonzini
  Cc: Huacai Chen, Greg Kroah-Hartman, loongarch, linux-kernel, kvm,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo

Hi,

On 2/20/23 14:57, Tianrui Zhao wrote:
> Add LoongArch vcpu related header files, including vcpu csr
> information, irq number defines, and some vcpu interfaces.
>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   arch/loongarch/include/asm/cpu-info.h  |  13 ++
>   arch/loongarch/include/asm/kvm_vcpu.h  | 112 ++++++++++++++
>   arch/loongarch/include/asm/loongarch.h | 195 ++++++++++++++++++++++++-
>   arch/loongarch/kvm/trace.h             | 137 +++++++++++++++++
>   4 files changed, 451 insertions(+), 6 deletions(-)
>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>   create mode 100644 arch/loongarch/kvm/trace.h
>
> diff --git a/arch/loongarch/include/asm/cpu-info.h b/arch/loongarch/include/asm/cpu-info.h
> index cd73a6f57..1b426a2ca 100644
> --- a/arch/loongarch/include/asm/cpu-info.h
> +++ b/arch/loongarch/include/asm/cpu-info.h
> @@ -32,6 +32,15 @@ struct cache_desc {
>   #define CACHE_LEVEL_MAX		3
>   #define CACHE_LEAVES_MAX	6
>   
> +struct guest_info {
> +	unsigned long		ases;
> +	unsigned long		ases_dyn;
> +	unsigned long		options;
> +	unsigned long		options_dyn;
> +	unsigned char		conf;
> +	unsigned int		kscratch_mask;
> +};
> +
>   struct cpuinfo_loongarch {
>   	u64			asid_cache;
>   	unsigned long		asid_mask;
> @@ -60,6 +69,10 @@ struct cpuinfo_loongarch {
>   	unsigned int		watch_dreg_count;   /* Number data breakpoints */
>   	unsigned int		watch_ireg_count;   /* Number instruction breakpoints */
>   	unsigned int		watch_reg_use_cnt; /* min(NUM_WATCH_REGS, watch_dreg_count + watch_ireg_count), Usable by ptrace */
> +
> +	/* VZ & Guest features */
> +	struct guest_info	guest;
> +	unsigned long		guest_cfg;
>   } __aligned(SMP_CACHE_BYTES);
>   
>   extern struct cpuinfo_loongarch cpu_data[];
> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
> new file mode 100644
> index 000000000..66ec9bc52
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
> @@ -0,0 +1,112 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
> +#define __ASM_LOONGARCH_KVM_VCPU_H__
> +
> +#include <linux/kvm_host.h>
> +#include <asm/loongarch.h>
> +#include <asm/kvm_host.h>
> +
> +#define LARCH_INT_SIP0				0
> +#define LARCH_INT_SIP1				1
> +#define LARCH_INT_IP0				2
> +#define LARCH_INT_IP1				3
> +#define LARCH_INT_IP2				4
> +#define LARCH_INT_IP3				5
> +#define LARCH_INT_IP4				6
> +#define LARCH_INT_IP5				7
> +#define LARCH_INT_IP6				8
> +#define LARCH_INT_IP7				9
> +#define LARCH_INT_PMU				10
> +#define LARCH_INT_TIMER				11
> +#define LARCH_INT_IPI				12
> +#define LOONGARCH_EXC_MAX			(LARCH_INT_IPI + 1)
> +#define LOONGARCH_EXC_IPNUM			(LOONGARCH_EXC_MAX)
There are effectively identical definitions in <asm/loongarch.h>; given 
that these don't deviate from the architectural numbering anyway, why 
re-define all of them here?
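A sketch of deriving them from the architectural subcodes instead
(assuming the EXCCODE_* interrupt definitions in <asm/loongarch.h>,
i.e. EXCCODE_SIP0 through EXCCODE_IPI starting at EXCCODE_INT_START):

/* Express the KVM irq numbers via the architectural subcodes. */
#define LARCH_INT_SIP0		(EXCCODE_SIP0  - EXCCODE_INT_START)
#define LARCH_INT_TIMER		(EXCCODE_TIMER - EXCCODE_INT_START)
#define LARCH_INT_IPI		(EXCCODE_IPI   - EXCCODE_INT_START)
#define LOONGARCH_EXC_MAX	(LARCH_INT_IPI + 1)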
> +
> +/* Controlled by 0x5 guest exst */
> +#define CPU_SIP0				(_ULCAST_(1))
> +#define CPU_SIP1				(_ULCAST_(1) << 1)
> +#define CPU_PMU					(_ULCAST_(1) << 10)
> +#define CPU_TIMER				(_ULCAST_(1) << 11)
> +#define CPU_IPI					(_ULCAST_(1) << 12)
> +
> +/* Controlled by 0x52 guest exception VIP
> + * aligned to exst bit 5~12
> + */
> +#define CPU_IP0					(_ULCAST_(1))
> +#define CPU_IP1					(_ULCAST_(1) << 1)
> +#define CPU_IP2					(_ULCAST_(1) << 2)
> +#define CPU_IP3					(_ULCAST_(1) << 3)
> +#define CPU_IP4					(_ULCAST_(1) << 4)
> +#define CPU_IP5					(_ULCAST_(1) << 5)
> +#define CPU_IP6					(_ULCAST_(1) << 6)
> +#define CPU_IP7					(_ULCAST_(1) << 7)
> +
> +#define MNSEC_PER_SEC				(NSEC_PER_SEC >> 20)
> +
> +/* KVM_IRQ_LINE irq field index values */
> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT		24
> +#define KVM_LOONGSON_IRQ_TYPE_MASK		0xff
> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT		16
> +#define KVM_LOONGSON_IRQ_VCPU_MASK		0xff
> +#define KVM_LOONGSON_IRQ_NUM_SHIFT		0
> +#define KVM_LOONGSON_IRQ_NUM_MASK		0xffff
> +
> +/* irq_type field */
> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP		0
> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO		1
> +#define KVM_LOONGSON_IRQ_TYPE_HT		2
> +#define KVM_LOONGSON_IRQ_TYPE_MSI		3
> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC		4
> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE		5
> +
> +/* out-of-kernel GIC cpu interrupt injection irq_number field */
> +#define KVM_LOONGSON_IRQ_CPU_IRQ		0
> +#define KVM_LOONGSON_IRQ_CPU_FIQ		1
> +#define KVM_LOONGSON_CPU_IP_NUM			8
> +
> +typedef union loongarch_instruction  larch_inst;
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
> +
> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
> +
> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
> +void kvm_save_fpu(struct loongarch_fpu *fpu);
> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
> +
> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
> +enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
> +void kvm_save_timer(struct kvm_vcpu *vcpu);
> +
> +/*
> + * Loongarch KVM guest interrupt handling.
> + */
> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +	set_bit(irq, &vcpu->arch.irq_pending);
> +	clear_bit(irq, &vcpu->arch.irq_clear);
> +}
> +
> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +	clear_bit(irq, &vcpu->arch.irq_pending);
> +	set_bit(irq, &vcpu->arch.irq_clear);
> +}
> +
> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
> diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
> index 7f8d57a61..7b74605dd 100644
> --- a/arch/loongarch/include/asm/loongarch.h
> +++ b/arch/loongarch/include/asm/loongarch.h
> @@ -236,6 +236,44 @@ static __always_inline u64 csr_xchg64(u64 val, u64 mask, u32 reg)
>   	return __csrxchg_d(val, mask, reg);
>   }
>   
> +/* GCSR */
> +static inline u64 gcsr_read(u32 reg)
> +{
> +	u64 val = 0;
> +
> +	asm volatile (
> +		"parse_r __reg, %[val]\n\t"
> +		".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
Ah. MIPS (LoongISA) memories strike back hard. Where's the public ISA 
manual so we aren't forced to blindly trust your code drop?
> +		: [val] "+r" (val)
> +		: [reg] "i" (reg)
> +		: "memory");
> +
> +	return val;
> +}
> +
> +static inline void gcsr_write(u64 val, u32 reg)
> +{
> +	asm volatile (
> +		"parse_r __reg, %[val]\n\t"
> +		".word 0x5 << 24 | %[reg] << 10 | 1 << 5 | __reg\n\t"
> +		: [val] "+r" (val)
> +		: [reg] "i" (reg)
> +		: "memory");
> +}
> +
> +static inline u64 gcsr_xchg(u64 val, u64 mask, u32 reg)
> +{
> +	asm volatile (
> +		"parse_r __rd, %[val]\n\t"
> +		"parse_r __rj, %[mask]\n\t"
> +		".word 0x5 << 24 | %[reg] << 10 | __rj << 5 | __rd\n\t"
> +		: [val] "+r" (val)
> +		: [mask] "r" (mask), [reg] "i" (reg)
> +		: "memory");
> +
> +	return val;
> +}
> +
>   /* IOCSR */
>   static __always_inline u32 iocsr_read32(u32 reg)
>   {
> @@ -309,6 +347,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>   #define LOONGARCH_CSR_ECFG		0x4	/* Exception config */
>   #define  CSR_ECFG_VS_SHIFT		16
>   #define  CSR_ECFG_VS_WIDTH		3
> +#define  CSR_ECFG_VS_SHIFT_END		(CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
>   #define  CSR_ECFG_VS			(_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
>   #define  CSR_ECFG_IM_SHIFT		0
>   #define  CSR_ECFG_IM_WIDTH		13
> @@ -397,13 +436,14 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>   #define  CSR_TLBLO1_V			(_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
>   
>   #define LOONGARCH_CSR_GTLBC		0x15	/* Guest TLB control */
> -#define  CSR_GTLBC_RID_SHIFT		16
> -#define  CSR_GTLBC_RID_WIDTH		8
> -#define  CSR_GTLBC_RID			(_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
> +#define  CSR_GTLBC_TGID_SHIFT		16
> +#define  CSR_GTLBC_TGID_WIDTH		8
> +#define  CSR_GTLBC_TGID_SHIFT_END	(CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
> +#define  CSR_GTLBC_TGID			(_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
>   #define  CSR_GTLBC_TOTI_SHIFT		13
>   #define  CSR_GTLBC_TOTI			(_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
> -#define  CSR_GTLBC_USERID_SHIFT		12
> -#define  CSR_GTLBC_USERID		(_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
> +#define  CSR_GTLBC_USETGID_SHIFT	12
> +#define  CSR_GTLBC_USETGID		(_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
>   #define  CSR_GTLBC_GMTLBSZ_SHIFT	0
>   #define  CSR_GTLBC_GMTLBSZ_WIDTH	6
>   #define  CSR_GTLBC_GMTLBSZ		(_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
> @@ -555,6 +595,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>   #define LOONGARCH_CSR_GSTAT		0x50	/* Guest status */
>   #define  CSR_GSTAT_GID_SHIFT		16
>   #define  CSR_GSTAT_GID_WIDTH		8
> +#define  CSR_GSTAT_GID_SHIFT_END	(CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
>   #define  CSR_GSTAT_GID			(_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
>   #define  CSR_GSTAT_GIDBIT_SHIFT		4
>   #define  CSR_GSTAT_GIDBIT_WIDTH		6
> @@ -605,6 +646,12 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>   #define  CSR_GCFG_MATC_GUEST		(_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
>   #define  CSR_GCFG_MATC_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
>   #define  CSR_GCFG_MATC_NEST		(_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
> +#define  CSR_GCFG_MATP_SHITF		0
> +#define  CSR_GCFG_MATP_WIDTH		4
> +#define  CSR_GCFG_MATP_MASK		(_ULCAST_(0x3) << CSR_GCFG_MATP_SHITF)
> +#define  CSR_GCFG_MATP_GUEST		(_ULCAST_(0x0) << CSR_GCFG_MATP_SHITF)
> +#define  CSR_GCFG_MATP_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATP_SHITF)
> +#define  CSR_GCFG_MATP_NEST		(_ULCAST_(0x2) << CSR_GCFG_MATP_SHITF)
>   
>   #define LOONGARCH_CSR_GINTC		0x52	/* Guest interrupt control */
>   #define  CSR_GINTC_HC_SHIFT		16
> @@ -1273,6 +1320,131 @@ static inline void write_csr_tlbrefill_pagesize(unsigned int size)
>   #define write_csr_perfctrl3(val)	csr_write64(val, LOONGARCH_CSR_PERFCTRL3)
>   #define write_csr_perfcntr3(val)	csr_write64(val, LOONGARCH_CSR_PERFCNTR3)
>   
> +/* Guest related CSRS */
> +#define read_csr_gtlbc()		csr_read64(LOONGARCH_CSR_GTLBC)
> +#define write_csr_gtlbc(val)		csr_write64(val, LOONGARCH_CSR_GTLBC)
> +#define read_csr_trgp()			csr_read64(LOONGARCH_CSR_TRGP)
> +#define read_csr_gcfg()			csr_read64(LOONGARCH_CSR_GCFG)
> +#define write_csr_gcfg(val)		csr_write64(val, LOONGARCH_CSR_GCFG)
> +#define read_csr_gstat()		csr_read64(LOONGARCH_CSR_GSTAT)
> +#define write_csr_gstat(val)		csr_write64(val, LOONGARCH_CSR_GSTAT)
> +#define read_csr_gintc()		csr_read64(LOONGARCH_CSR_GINTC)
> +#define write_csr_gintc(val)		csr_write64(val, LOONGARCH_CSR_GINTC)
> +#define read_csr_gcntc()		csr_read64(LOONGARCH_CSR_GCNTC)
> +#define write_csr_gcntc(val)		csr_write64(val, LOONGARCH_CSR_GCNTC)
> +
> +/* Guest CSRS read and write */
> +#define read_gcsr_crmd()		gcsr_read(LOONGARCH_CSR_CRMD)
> +#define write_gcsr_crmd(val)		gcsr_write(val, LOONGARCH_CSR_CRMD)
> +#define read_gcsr_prmd()		gcsr_read(LOONGARCH_CSR_PRMD)
> +#define write_gcsr_prmd(val)		gcsr_write(val, LOONGARCH_CSR_PRMD)
> +#define read_gcsr_euen()		gcsr_read(LOONGARCH_CSR_EUEN)
> +#define write_gcsr_euen(val)		gcsr_write(val, LOONGARCH_CSR_EUEN)
> +#define read_gcsr_misc()		gcsr_read(LOONGARCH_CSR_MISC)
> +#define write_gcsr_misc(val)		gcsr_write(val, LOONGARCH_CSR_MISC)
> +#define read_gcsr_ecfg()		gcsr_read(LOONGARCH_CSR_ECFG)
> +#define write_gcsr_ecfg(val)		gcsr_write(val, LOONGARCH_CSR_ECFG)
> +#define read_gcsr_estat()		gcsr_read(LOONGARCH_CSR_ESTAT)
> +#define write_gcsr_estat(val)		gcsr_write(val, LOONGARCH_CSR_ESTAT)
> +#define read_gcsr_era()			gcsr_read(LOONGARCH_CSR_ERA)
> +#define write_gcsr_era(val)		gcsr_write(val, LOONGARCH_CSR_ERA)
> +#define read_gcsr_badv()		gcsr_read(LOONGARCH_CSR_BADV)
> +#define write_gcsr_badv(val)		gcsr_write(val, LOONGARCH_CSR_BADV)
> +#define read_gcsr_badi()		gcsr_read(LOONGARCH_CSR_BADI)
> +#define write_gcsr_badi(val)		gcsr_write(val, LOONGARCH_CSR_BADI)
> +#define read_gcsr_eentry()		gcsr_read(LOONGARCH_CSR_EENTRY)
> +#define write_gcsr_eentry(val)		gcsr_write(val, LOONGARCH_CSR_EENTRY)
> +
> +#define read_gcsr_tlbidx()		gcsr_read(LOONGARCH_CSR_TLBIDX)
> +#define write_gcsr_tlbidx(val)		gcsr_write(val, LOONGARCH_CSR_TLBIDX)
> +#define read_gcsr_tlbhi()		gcsr_read(LOONGARCH_CSR_TLBEHI)
> +#define write_gcsr_tlbhi(val)		gcsr_write(val, LOONGARCH_CSR_TLBEHI)
> +#define read_gcsr_tlblo0()		gcsr_read(LOONGARCH_CSR_TLBELO0)
> +#define write_gcsr_tlblo0(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO0)
> +#define read_gcsr_tlblo1()		gcsr_read(LOONGARCH_CSR_TLBELO1)
> +#define write_gcsr_tlblo1(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO1)
> +
> +#define read_gcsr_asid()		gcsr_read(LOONGARCH_CSR_ASID)
> +#define write_gcsr_asid(val)		gcsr_write(val, LOONGARCH_CSR_ASID)
> +#define read_gcsr_pgdl()		gcsr_read(LOONGARCH_CSR_PGDL)
> +#define write_gcsr_pgdl(val)		gcsr_write(val, LOONGARCH_CSR_PGDL)
> +#define read_gcsr_pgdh()		gcsr_read(LOONGARCH_CSR_PGDH)
> +#define write_gcsr_pgdh(val)		gcsr_write(val, LOONGARCH_CSR_PGDH)
> +#define write_gcsr_pgd(val)		gcsr_write(val, LOONGARCH_CSR_PGD)
> +#define read_gcsr_pgd()			gcsr_read(LOONGARCH_CSR_PGD)
> +#define read_gcsr_pwctl0()		gcsr_read(LOONGARCH_CSR_PWCTL0)
> +#define write_gcsr_pwctl0(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL0)
> +#define read_gcsr_pwctl1()		gcsr_read(LOONGARCH_CSR_PWCTL1)
> +#define write_gcsr_pwctl1(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL1)
> +#define read_gcsr_stlbpgsize()		gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
> +#define write_gcsr_stlbpgsize(val)	gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
> +#define read_gcsr_rvacfg()		gcsr_read(LOONGARCH_CSR_RVACFG)
> +#define write_gcsr_rvacfg(val)		gcsr_write(val, LOONGARCH_CSR_RVACFG)
> +
> +#define read_gcsr_cpuid()		gcsr_read(LOONGARCH_CSR_CPUID)
> +#define write_gcsr_cpuid(val)		gcsr_write(val, LOONGARCH_CSR_CPUID)
> +#define read_gcsr_prcfg1()		gcsr_read(LOONGARCH_CSR_PRCFG1)
> +#define write_gcsr_prcfg1(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG1)
> +#define read_gcsr_prcfg2()		gcsr_read(LOONGARCH_CSR_PRCFG2)
> +#define write_gcsr_prcfg2(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG2)
> +#define read_gcsr_prcfg3()		gcsr_read(LOONGARCH_CSR_PRCFG3)
> +#define write_gcsr_prcfg3(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG3)
> +
> +#define read_gcsr_kscratch0()		gcsr_read(LOONGARCH_CSR_KS0)
> +#define write_gcsr_kscratch0(val)	gcsr_write(val, LOONGARCH_CSR_KS0)
> +#define read_gcsr_kscratch1()		gcsr_read(LOONGARCH_CSR_KS1)
> +#define write_gcsr_kscratch1(val)	gcsr_write(val, LOONGARCH_CSR_KS1)
> +#define read_gcsr_kscratch2()		gcsr_read(LOONGARCH_CSR_KS2)
> +#define write_gcsr_kscratch2(val)	gcsr_write(val, LOONGARCH_CSR_KS2)
> +#define read_gcsr_kscratch3()		gcsr_read(LOONGARCH_CSR_KS3)
> +#define write_gcsr_kscratch3(val)	gcsr_write(val, LOONGARCH_CSR_KS3)
> +#define read_gcsr_kscratch4()		gcsr_read(LOONGARCH_CSR_KS4)
> +#define write_gcsr_kscratch4(val)	gcsr_write(val, LOONGARCH_CSR_KS4)
> +#define read_gcsr_kscratch5()		gcsr_read(LOONGARCH_CSR_KS5)
> +#define write_gcsr_kscratch5(val)	gcsr_write(val, LOONGARCH_CSR_KS5)
> +#define read_gcsr_kscratch6()		gcsr_read(LOONGARCH_CSR_KS6)
> +#define write_gcsr_kscratch6(val)	gcsr_write(val, LOONGARCH_CSR_KS6)
> +#define read_gcsr_kscratch7()		gcsr_read(LOONGARCH_CSR_KS7)
> +#define write_gcsr_kscratch7(val)	gcsr_write(val, LOONGARCH_CSR_KS7)
> +
> +#define read_gcsr_timerid()		gcsr_read(LOONGARCH_CSR_TMID)
> +#define write_gcsr_timerid(val)		gcsr_write(val, LOONGARCH_CSR_TMID)
> +#define read_gcsr_timercfg()		gcsr_read(LOONGARCH_CSR_TCFG)
> +#define write_gcsr_timercfg(val)	gcsr_write(val, LOONGARCH_CSR_TCFG)
> +#define read_gcsr_timertick()		gcsr_read(LOONGARCH_CSR_TVAL)
> +#define write_gcsr_timertick(val)	gcsr_write(val, LOONGARCH_CSR_TVAL)
> +#define read_gcsr_timeroffset()		gcsr_read(LOONGARCH_CSR_CNTC)
> +#define write_gcsr_timeroffset(val)	gcsr_write(val, LOONGARCH_CSR_CNTC)
> +
> +#define read_gcsr_llbctl()		gcsr_read(LOONGARCH_CSR_LLBCTL)
> +#define write_gcsr_llbctl(val)		gcsr_write(val, LOONGARCH_CSR_LLBCTL)
> +
> +#define read_gcsr_tlbrentry()		gcsr_read(LOONGARCH_CSR_TLBRENTRY)
> +#define write_gcsr_tlbrentry(val)	gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
> +#define read_gcsr_tlbrbadv()		gcsr_read(LOONGARCH_CSR_TLBRBADV)
> +#define write_gcsr_tlbrbadv(val)	gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
> +#define read_gcsr_tlbrera()		gcsr_read(LOONGARCH_CSR_TLBRERA)
> +#define write_gcsr_tlbrera(val)		gcsr_write(val, LOONGARCH_CSR_TLBRERA)
> +#define read_gcsr_tlbrsave()		gcsr_read(LOONGARCH_CSR_TLBRSAVE)
> +#define write_gcsr_tlbrsave(val)	gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
> +#define read_gcsr_tlbrelo0()		gcsr_read(LOONGARCH_CSR_TLBRELO0)
> +#define write_gcsr_tlbrelo0(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
> +#define read_gcsr_tlbrelo1()		gcsr_read(LOONGARCH_CSR_TLBRELO1)
> +#define write_gcsr_tlbrelo1(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
> +#define read_gcsr_tlbrehi()		gcsr_read(LOONGARCH_CSR_TLBREHI)
> +#define write_gcsr_tlbrehi(val)		gcsr_write(val, LOONGARCH_CSR_TLBREHI)
> +#define read_gcsr_tlbrprmd()		gcsr_read(LOONGARCH_CSR_TLBRPRMD)
> +#define write_gcsr_tlbrprmd(val)	gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
> +
> +#define read_gcsr_directwin0()		gcsr_read(LOONGARCH_CSR_DMWIN0)
> +#define write_gcsr_directwin0(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN0)
> +#define read_gcsr_directwin1()		gcsr_read(LOONGARCH_CSR_DMWIN1)
> +#define write_gcsr_directwin1(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN1)
> +#define read_gcsr_directwin2()		gcsr_read(LOONGARCH_CSR_DMWIN2)
> +#define write_gcsr_directwin2(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN2)
> +#define read_gcsr_directwin3()		gcsr_read(LOONGARCH_CSR_DMWIN3)
> +#define write_gcsr_directwin3(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN3)
> +
>   /*
>    * Manipulate bits in a register.
>    */
> @@ -1315,15 +1487,26 @@ change_##name(unsigned long change, unsigned long val)		\
>   }
>   
>   #define __BUILD_CSR_OP(name)	__BUILD_CSR_COMMON(csr_##name)
> +#define __BUILD_GCSR_OP(name)	__BUILD_CSR_COMMON(gcsr_##name)
>   
>   __BUILD_CSR_OP(euen)
>   __BUILD_CSR_OP(ecfg)
>   __BUILD_CSR_OP(tlbidx)
> +__BUILD_CSR_OP(gcfg)
> +__BUILD_CSR_OP(gstat)
> +__BUILD_CSR_OP(gtlbc)
> +__BUILD_CSR_OP(gintc)
> +__BUILD_GCSR_OP(llbctl)
> +__BUILD_GCSR_OP(tlbidx)
>   
>   #define set_csr_estat(val)	\
>   	csr_xchg32(val, val, LOONGARCH_CSR_ESTAT)
>   #define clear_csr_estat(val)	\
>   	csr_xchg32(~(val), val, LOONGARCH_CSR_ESTAT)
> +#define set_gcsr_estat(val)	\
> +	gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
> +#define clear_gcsr_estat(val)	\
> +	gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
>   
>   #endif /* __ASSEMBLY__ */
>   
> @@ -1408,7 +1591,7 @@ __BUILD_CSR_OP(tlbidx)
>   #define EXCCODE_WATCH		19	/* Watch address reference */
>   #define EXCCODE_BTDIS		20	/* Binary Trans. Disabled */
>   #define EXCCODE_BTE		21	/* Binary Trans. Exception */
> -#define EXCCODE_PSI		22	/* Guest Privileged Error */
> +#define EXCCODE_GSPR		22	/* Guest Privileged Error */
>   #define EXCCODE_HYP		23	/* Hypercall */
>   #define EXCCODE_GCM		24	/* Guest CSR modified */
>   	#define EXCSUBCODE_GCSC		0	/* Software caused */
> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
> new file mode 100644
> index 000000000..1813410e2
> --- /dev/null
> +++ b/arch/loongarch/kvm/trace.h
> @@ -0,0 +1,137 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_KVM_H
> +
> +#include <linux/tracepoint.h>
> +#include <asm/kvm_csr.h>
> +
> +#undef	TRACE_SYSTEM
> +#define TRACE_SYSTEM		kvm
> +#define TRACE_INCLUDE_PATH	.
> +#define TRACE_INCLUDE_FILE	trace
> +
> +/*
> + * Tracepoints for VM enters
> + */
> +DECLARE_EVENT_CLASS(kvm_transition,
> +	TP_PROTO(struct kvm_vcpu *vcpu),
> +	TP_ARGS(vcpu),
> +	TP_STRUCT__entry(
> +		__field(unsigned long, pc)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->pc = vcpu->arch.pc;
> +	),
> +
> +	TP_printk("PC: 0x%08lx",
> +		  __entry->pc)
> +);
> +
> +DEFINE_EVENT(kvm_transition, kvm_enter,
> +	     TP_PROTO(struct kvm_vcpu *vcpu),
> +	     TP_ARGS(vcpu));
> +
> +DEFINE_EVENT(kvm_transition, kvm_reenter,
> +	     TP_PROTO(struct kvm_vcpu *vcpu),
> +	     TP_ARGS(vcpu));
> +
> +DEFINE_EVENT(kvm_transition, kvm_out,
> +	     TP_PROTO(struct kvm_vcpu *vcpu),
> +	     TP_ARGS(vcpu));
> +
> +/* Further exit reasons */
> +#define KVM_TRACE_EXIT_IDLE		64
> +#define KVM_TRACE_EXIT_CACHE		65
> +#define KVM_TRACE_EXIT_SIGNAL		66
> +
> +/* Tracepoints for VM exits */
> +#define kvm_trace_symbol_exit_types					\
> +	({ KVM_TRACE_EXIT_IDLE,		"IDLE" },			\
> +	{ KVM_TRACE_EXIT_CACHE,		"CACHE" },			\
> +	{ KVM_TRACE_EXIT_SIGNAL,	"Signal" })
> +
> +TRACE_EVENT(kvm_exit,
> +	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +	    TP_ARGS(vcpu, reason),
> +	    TP_STRUCT__entry(
> +			__field(unsigned long, pc)
> +			__field(unsigned int, reason)
> +	    ),
> +
> +	    TP_fast_assign(
> +			__entry->pc = vcpu->arch.pc;
> +			__entry->reason = reason;
> +	    ),
> +
> +	    TP_printk("[%s]PC: 0x%08lx",
> +		      __print_symbolic(__entry->reason,
> +				       kvm_trace_symbol_exit_types),
> +		      __entry->pc)
> +);
> +
> +#define KVM_TRACE_AUX_RESTORE		0
> +#define KVM_TRACE_AUX_SAVE		1
> +#define KVM_TRACE_AUX_ENABLE		2
> +#define KVM_TRACE_AUX_DISABLE		3
> +#define KVM_TRACE_AUX_DISCARD		4
> +
> +#define KVM_TRACE_AUX_FPU		1
> +
> +#define kvm_trace_symbol_aux_op				\
> +	({ KVM_TRACE_AUX_RESTORE, "restore" },		\
> +	{ KVM_TRACE_AUX_SAVE,    "save" },		\
> +	{ KVM_TRACE_AUX_ENABLE,  "enable" },		\
> +	{ KVM_TRACE_AUX_DISABLE, "disable" },		\
> +	{ KVM_TRACE_AUX_DISCARD, "discard" })
> +
> +#define kvm_trace_symbol_aux_state			\
> +	{ KVM_TRACE_AUX_FPU,     "FPU" },		\
> +
> +TRACE_EVENT(kvm_aux,
> +	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
> +		     unsigned int state),
> +	    TP_ARGS(vcpu, op, state),
> +	    TP_STRUCT__entry(
> +			__field(unsigned long, pc)
> +			__field(u8, op)
> +			__field(u8, state)
> +	    ),
> +
> +	    TP_fast_assign(
> +			__entry->pc = vcpu->arch.pc;
> +			__entry->op = op;
> +			__entry->state = state;
> +	    ),
> +
> +	    TP_printk("%s %s PC: 0x%08lx",
> +		      __print_symbolic(__entry->op,
> +				       kvm_trace_symbol_aux_op),
> +		      __print_symbolic(__entry->state,
> +				       kvm_trace_symbol_aux_state),
> +		      __entry->pc)
> +);
> +
> +TRACE_EVENT(kvm_vpid_change,
> +	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
> +	    TP_ARGS(vcpu, vpid),
> +	    TP_STRUCT__entry(
> +			__field(unsigned long, vpid)
> +	    ),
> +
> +	    TP_fast_assign(
> +			__entry->vpid = vpid;
> +	    ),
> +
> +	    TP_printk("vpid: 0x%08lx",
> +		      __entry->vpid)
> +);
> +
> +#endif /* _TRACE_KVM_H */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/



* Re: [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-20 18:22   ` Paolo Bonzini
@ 2023-02-21  2:56     ` Tianrui Zhao
  2023-02-21  6:49       ` Paolo Bonzini
  0 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21  2:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 02:22, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +
>> +/* Resume Flags */
>> +#define RESUME_FLAG_DR        (1<<0)    /* Reload guest nonvolatile 
>> state? */
>> +#define RESUME_FLAG_HOST    (1<<1)    /* Resume host? */
>> +
>> +#define RESUME_GUEST        0
>> +#define RESUME_GUEST_DR        RESUME_FLAG_DR
>> +#define RESUME_HOST        RESUME_FLAG_HOST
>> +
>
> Most of this code is dead; I'll give more instructions in a reply to 
> patch 8.
>
>> +    unsigned long guest_eentry;
>> +    unsigned long host_eentry;
>> +    int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +    int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +
>> +    /* Host registers preserved across guest mode execution */
>> +    unsigned long host_stack;
>> +    unsigned long host_gp;
>> +    unsigned long host_pgd;
>> +    unsigned long host_pgdhi;
>> +    unsigned long host_entryhi;
>> +
>> +    /* Host CSR registers used when handling exits from guest */
>> +    unsigned long badv;
>> +    unsigned long host_estat;
>> +    unsigned long badi;
>> +    unsigned long host_ecfg;
>> +    unsigned long host_percpu;
>> +
>> +    /* GPRS */
>> +    unsigned long gprs[32];
>> +    unsigned long pc;
>> +
>> +    /* FPU State */
>> +    struct loongarch_fpu fpu FPU_ALIGN;
>> +    /* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
>> +    unsigned int aux_inuse;
>> +
>> +    /* CSR State */
>> +    struct loongarch_csrs *csr;
>> +
>> +    /* GPR used as IO source/target */
>> +    u32 io_gpr;
>> +
>> +    struct hrtimer swtimer;
>> +    /* Count timer control KVM register */
>> +    u32 count_ctl;
>> +
>> +    /* Bitmask of exceptions that are pending */
>> +    unsigned long irq_pending;
>> +    /* Bitmask of pending exceptions to be cleared */
>> +    unsigned long irq_clear;
>> +
>> +    /* Cache some mmu pages needed inside spinlock regions */
>> +    struct kvm_mmu_memory_cache mmu_page_cache;
>> +
>> +    /* vcpu's vpid is different on each host cpu in an smp system */
>> +    u64 vpid[NR_CPUS];
>
> In _kvm_check_vmid(), you already have
>
> +    if (migrated || (ver != old)) {
> +        _kvm_update_vpid(vcpu, cpu);
> +        trace_kvm_vpid_change(vcpu, vcpu->arch.vpid[cpu]);
> +    }
>
> so a vpid will never be recycled if a vCPU migrates from physical CPU 
> A to B and back to A.
>
> So please keep the current VPID in the per-cpu struct vmcs, and you 
> can just copy it from there in _kvm_check_vmid().
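>
> E.g. (untested sketch; "context" is the per-cpu struct kvm_context):
>
>     if (migrated || (ver != old)) {
>         _kvm_update_vpid(vcpu, cpu);
>         trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
>     }
>     /* remember the VPID currently live on this physical CPU */
>     context->vpid = vcpu->arch.vpid;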

Thanks. That is to say, we should remove the vpid[NR_CPUS] array, and it 
is enough to use only one vpid variable?

Thanks
Tianrui Zhao

>
>> +    /* Period of stable timer tick in ns */
>> +    u64 timer_period;
>> +    /* Frequency of stable timer in Hz */
>> +    u64 timer_mhz;
>> +    /* Stable bias from the raw time */
>> +    u64 timer_bias;
>> +    /* Dynamic nanosecond bias (multiple of timer_period) to avoid 
>> overflow */
>> +    s64 timer_dyn_bias;
>> +    /* Save ktime */
>> +    ktime_t stable_ktime_saved;
>> +
>> +    u64 core_ext_ioisr[4];
>> +
>> +    /* Last CPU the VCPU state was loaded on */
>> +    int last_sched_cpu;
>> +    /* Last CPU the VCPU actually executed guest code on */
>> +    int last_exec_cpu;
>> +
>> +    u8 fpu_enabled;
>
> This field is always true, please remove it.

Thanks, I will remove this variable.

Thanks
Tianrui Zhao

>
>> +    struct kvm_guest_debug_arch guest_debug;
>
> This struct is empty, please remove it.

Ok, I will remove it.

Thanks
Tianrui Zhao

>
> Paolo



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-20 17:46   ` Paolo Bonzini
@ 2023-02-21  3:02     ` Tianrui Zhao
  2023-02-21  6:59     ` maobibo
  1 sibling, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21  3:02 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 01:46, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +    order = get_order(kvm_vector_size + kvm_enter_guest_size);
>> +    addr = (void *)__get_free_pages(GFP_KERNEL, order);
>> +    if (!addr) {
>> +        free_percpu(vmcs);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    memcpy(addr, kvm_vector_entry, kvm_vector_size);
>> +    memcpy(addr + kvm_vector_size, kvm_enter_guest, 
>> kvm_enter_guest_size);
>> +    flush_icache_range((unsigned long)addr, (unsigned long)addr +
>> +                kvm_vector_size + kvm_enter_guest_size);
>> +
>> +    vpid_mask = read_csr_gstat();
>> +    vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> 
>> CSR_GSTAT_GIDBIT_SHIFT;
>> +    if (vpid_mask)
>> +        vpid_mask = GENMASK(vpid_mask - 1, 0);
>> +
>> +    for_each_possible_cpu(cpu) {
>> +        context = per_cpu_ptr(vmcs, cpu);
>> +        context->vpid_mask = vpid_mask;
>> +        context->vpid_cache = context->vpid_mask + 1;
>> +        context->last_vcpu = NULL;
>> +        context->kvm_eentry = addr;
>> +        context->kvm_enter_guest = addr + kvm_vector_size;
>> +        context->page_order = order;
>> +    }
>
> A lot of these variables are constant across all pCPUs, any reason to 
> have them in a per-CPU variable?  Likewise, since they are all the 
> same as the constant global vmcs variable, why make them part of 
> struct kvm_context instead of just making them globals?
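>
> E.g. (sketch; the names are only illustrative):
>
>     static unsigned long kvm_vpid_mask;
>     static void *kvm_world_switch;     /* copied vector + enter-guest code */
>     static int kvm_switch_page_order;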

OK, thanks; it is more appropriate to use global variables to hold that 
information.

Thanks
Tianrui Zhao

>
> Also, why does the world switch code need a copy?
>
> Paolo



* Re: [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface
  2023-02-20 18:45   ` Paolo Bonzini
@ 2023-02-21  3:17     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21  3:17 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 02:45, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> + * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | 
>> RESUME_FLAG_NV)
>
> As far as I can see, RESUME_FLAG_NV does not exist anymore and this is 
> just copied from arch/mips?
>
> You can keep RESUME_HOST/RESUME_GUEST for the individual functions, 
> but here please make it just "1" for resume guest, and "<= 0" for 
> resume host.  This is easy enough to check from assembly and removes 
> the srai by 2.
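>
> For example, the assembly-side check could then be just (sketch):
>
>     jirl    ra, t8, 0                # a0 = _kvm_handle_exit(...)
>     bge     zero, a0, ret_to_host    # a0 <= 0: back to host
>     # otherwise fall through and resume the guest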
>
>> +static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>> +{
>> +    unsigned long exst = vcpu->arch.host_estat;
>> +    u32 intr = exst & 0x1fff; /* ignore NMI */
>> +    u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
>> +    u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>> +    int ret = RESUME_GUEST, cpu;
>> +
>> +    vcpu->mode = OUTSIDE_GUEST_MODE;
>> +
>> +    /* Set a default exit reason */
>> +    run->exit_reason = KVM_EXIT_UNKNOWN;
>> +    run->ready_for_interrupt_injection = 1;
>> +
>> +    /*
>> +     * Set the appropriate status bits based on host CPU features,
>> +     * before we hit the scheduler
>> +     */
>
> Stale comment?

I will remove this comment.

Thanks
Tianrui Zhao

>
>> +    local_irq_enable();
>
> Please add guest_state_exit_irqoff() here.

I will add this function here.

Thanks
Tianrui Zhao

>
>> +    kvm_debug("%s: exst: %lx, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
>> +            __func__, exst, opc, run, vcpu);
>
> Please add the information to the kvm_exit tracepoint (thus also 
> removing variables such as "exst" or "opc" from this function) instead 
> of calling kvm_debug().
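>
> E.g. (sketch) the kvm_exit event could additionally record ESTAT:
>
>     TP_STRUCT__entry(
>             __field(unsigned long, pc)
>             __field(unsigned long, estat)
>             __field(unsigned int, reason)
>     ),
>     TP_fast_assign(
>             __entry->pc = vcpu->arch.pc;
>             __entry->estat = vcpu->arch.host_estat;
>             __entry->reason = reason;
>     ),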

OK, I will fix the kvm_exit tracepoint.

Thanks
Tianrui Zhao

>
>> +    trace_kvm_exit(vcpu, exccode);
>> +    if (exccode) {
>> +        ret = _kvm_handle_fault(vcpu, exccode);
>> +    } else {
>> +        WARN(!intr, "suspicious vm exiting");
>> +        ++vcpu->stat.int_exits;
>> +
>> +        if (need_resched())
>> +            cond_resched();
>
> This "if" is not necessary because there is already a cond_resched() 
> below.

Thanks, I will remove this cond_resched() call.

Thanks
Tianrui Zhao

>
>> +        ret = RESUME_GUEST;
>
> This "ret" is not necessary because "ret" is already initialized to 
> RESUME_GUEST above, you can either remove it or remove the initializer.

OK, I will remove this "ret".

Thanks
Tianrui Zhao

>
>> +    }
>> +
>> +    cond_resched();
>> +    local_irq_disable();
>
> At this point, ret is either RESUME_GUEST or RESUME_HOST.  So, the 
> "if"s below are either all taken or all not taken, and most of this code:
>
>     kvm_acquire_timer(vcpu);
>     _kvm_deliver_intr(vcpu);
>
>     if (signal_pending(current)) {
>         run->exit_reason = KVM_EXIT_INTR;
>         ret = (-EINTR << 2) | RESUME_HOST;
>         ++vcpu->stat.signal_exits;
>         // no need for a tracepoint here
>         // trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
>     }
>
>     trace_kvm_reenter(vcpu);
>
>     /*
>      * Make sure the read of VCPU requests in vcpu_reenter()
>      * callback is not reordered ahead of the write to vcpu->mode,
>      * or we could miss a TLB flush request while the requester sees
>      * the VCPU as outside of guest mode and not needing an IPI.
>      */
>     smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>
>     cpu = smp_processor_id();
>     _kvm_check_requests(vcpu, cpu);
>     _kvm_check_vmid(vcpu, cpu);
>     vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
>
>     /*
>      * If FPU are enabled (i.e. the guest's FPU context
>      * is live), restore FCSR0.
>      */
>     if (_kvm_guest_has_fpu(&vcpu->arch) &&
>         read_csr_euen() & (CSR_EUEN_FPEN)) {
>         kvm_restore_fcsr(&vcpu->arch.fpu);
>     }
>
> (all except for the "if (signal_pending(current))" and the final "if") 
> is pretty much duplicated with kvm_arch_vcpu_ioctl_run(); the 
> remaining code can also be done from kvm_arch_vcpu_ioctl_run(), the 
> cost is small.  Please move it to a separate function, for example:
>
> int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
> {
>     if (signal_pending(current)) {
>         run->exit_reason = KVM_EXIT_INTR;
>         ++vcpu->stat.signal_exits;
>         return -EINTR;
>     }
>
>     kvm_acquire_timer(vcpu);
>     _kvm_deliver_intr(vcpu);
>
>     ...
>
>     if (_kvm_guest_has_fpu(&vcpu->arch) &&
>         read_csr_euen() & (CSR_EUEN_FPEN)) {
>         kvm_restore_fcsr(&vcpu->arch.fpu);
>     }
>     return 1;
> }
>
> Call it from _kvm_handle_exit():
>
>     if (ret == RESUME_HOST)
>         return 0;
>
>     r = kvm_pre_enter_guest(vcpu);
>     if (r > 0) {
>         trace_kvm_reenter(vcpu);
>         guest_state_enter_irqoff();
>     }
>
>     return r;
>
> and from kvm_arch_vcpu_ioctl_run():
>
>     local_irq_disable();
>     guest_timing_enter_irqoff();
>     r = kvm_pre_enter_guest(vcpu);
>     if (r > 0) {
>         trace_kvm_enter(vcpu);
>         /*
>          * This should actually not be a function pointer, but
>          * just for clarity */
>          */
>         guest_state_enter_irqoff();
>         r = vcpu->arch.vcpu_run(run, vcpu);
>         /* guest_state_exit_irqoff() already done.  */
>         trace_kvm_out(vcpu);
>     }
>     guest_timing_exit_irqoff();
>     local_irq_enable();
>
> out:
>     kvm_sigset_deactivate(vcpu);
>
>     vcpu_put(vcpu);
>     return r;
>
> Paolo

Thanks, I will reorganize this code, add the kvm_pre_enter_guest function, 
and call it from vcpu_handle_exit and vcpu_run.

Thanks
Tianrui Zhao

>
>> +    if (ret == RESUME_GUEST)
>> +        kvm_acquire_timer(vcpu);
>> +
>> +    if (!(ret & RESUME_HOST)) {
>> +        _kvm_deliver_intr(vcpu);
>> +        /* Only check for signals if not already exiting to 
>> userspace */
>> +        if (signal_pending(current)) {
>> +            run->exit_reason = KVM_EXIT_INTR;
>> +            ret = (-EINTR << 2) | RESUME_HOST;
>> +            ++vcpu->stat.signal_exits;
>> +            trace_kvm_exit(vcpu, KVM_TRACE_EXIT_SIGNAL);
>> +        }
>> +    }
>> +
>> +    if (ret == RESUME_GUEST) {
>> +        trace_kvm_reenter(vcpu);
>> +
>> +        /*
>> +         * Make sure the read of VCPU requests in vcpu_reenter()
>> +         * callback is not reordered ahead of the write to vcpu->mode,
>> +         * or we could miss a TLB flush request while the requester 
>> sees
>> +         * the VCPU as outside of guest mode and not needing an IPI.
>> +         */
>> +        smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>> +
>> +        cpu = smp_processor_id();
>> +        _kvm_check_requests(vcpu, cpu);
>> +        _kvm_check_vmid(vcpu, cpu);
>> +        vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
>> +
>> +        /*
>> +         * If FPU are enabled (i.e. the guest's FPU context
>> +         * is live), restore FCSR0.
>> +         */
>> +        if (_kvm_guest_has_fpu(&vcpu->arch) &&
>> +            read_csr_euen() & (CSR_EUEN_FPEN)) {
>> +            kvm_restore_fcsr(&vcpu->arch.fpu);
>> +        }
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>>   int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>>   {
>>       int i;



* Re: [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces
  2023-02-20 18:50   ` Paolo Bonzini
@ 2023-02-21  3:19     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21  3:19 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 02:50, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +
>> +int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
>> +                  struct kvm_translation *tr)
>> +{
>> +    return 0;
>> +}
>> +
>
> Please return -EINVAL instead.
>

Thanks, I will fix this issue.

Thanks
Tianrui Zhao

> Paolo



* Re: [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
  2023-02-20 18:22   ` Paolo Bonzini
  2023-02-20 18:54   ` WANG Xuerui
@ 2023-02-21  4:36   ` Xi Ruoyao
  2023-02-24  1:27     ` Tianrui Zhao
  2 siblings, 1 reply; 70+ messages in thread
From: Xi Ruoyao @ 2023-02-21  4:36 UTC (permalink / raw)
  To: Tianrui Zhao, Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On Mon, 2023-02-20 at 14:57 +0800, Tianrui Zhao wrote:

/* snip */

> +/*
> + * for KVM_GET_FPU and KVM_SET_FPU
> + */
> +struct kvm_fpu {
> +	__u32 fcsr;
> +	__u32 none;
> +	__u64 fcc;    /* 8x8 */
> +	struct kvm_fpureg {
> +		__u64 val64[4];	//support max 256 bits
> +	} fpr[32];

Do we need __attribute__((__aligned__(16))) for fpureg (like
sc_extcontext in struct sigcontext)?
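
I.e. something like (sketch):

	struct kvm_fpureg {
		__u64 val64[4];	/* support max 256 bits */
	} __attribute__((__aligned__(16))) fpr[32];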

> +};
> +
> +/*
> + * For LOONGARCH, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to
          ^^^^^^^^^
          LoongArch

> access various
> + * registers.  The id field is broken down as follows:
> + *
> + *  bits[63..52] - As per linux/kvm.h
> + *  bits[51..32] - Must be zero.
> + *  bits[31..16] - Register set.

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-20  6:57 ` [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
  2023-02-20 18:57   ` WANG Xuerui
@ 2023-02-21  4:44   ` Xi Ruoyao
  2023-02-21  6:46     ` maobibo
  1 sibling, 1 reply; 70+ messages in thread
From: Xi Ruoyao @ 2023-02-21  4:44 UTC (permalink / raw)
  To: Tianrui Zhao, Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On Mon, 2023-02-20 at 14:57 +0800, Tianrui Zhao wrote:
> +/* GCSR */
> +static inline u64 gcsr_read(u32 reg)
> +{
> +       u64 val = 0;
> +
> +       asm volatile (
> +               "parse_r __reg, %[val]\n\t"
> +               ".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"

Don't do this.  You should add the instruction to binutils first, then
make CONFIG_KVM depend on the assembler supporting this instruction. 
This is completely unreadable and only fine for an internal PoC.

> +               : [val] "+r" (val)
> +               : [reg] "i" (reg)
> +               : "memory");
> +
> +       return val;
> +}

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-21  4:44   ` Xi Ruoyao
@ 2023-02-21  6:46     ` maobibo
  2023-02-21  6:48       ` Paolo Bonzini
  2023-02-21  7:12       ` Xi Ruoyao
  0 siblings, 2 replies; 70+ messages in thread
From: maobibo @ 2023-02-21  6:46 UTC (permalink / raw)
  To: Xi Ruoyao, Tianrui Zhao, Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton



On 2023/2/21 12:44, Xi Ruoyao wrote:
> On Mon, 2023-02-20 at 14:57 +0800, Tianrui Zhao wrote:
>> +/* GCSR */
>> +static inline u64 gcsr_read(u32 reg)
>> +{
>> +       u64 val = 0;
>> +
>> +       asm volatile (
>> +               "parse_r __reg, %[val]\n\t"
>> +               ".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
> 
> Don't do this.  You should add the instruction to binutils first, then
> make CONFIG_KVM depend on the assembler supporting this instruction. 
> This is completely unreadable and only fine for an internal PoC.

We are preparing to submit support for these instructions to binutils; 
however, the raw encoding is still necessary. Even supposing a future 
gcc version supports them, we cannot drop support for compiling the 
LoongArch kernel with the existing gcc 12/13 toolchains.

Maybe there can be human-readable code like this:
#if GCC_SUPPORT_KVM_INSTR
  ...
#else
  asm volatile (".word   "
  ...
#endif

Regards
Bibo, Mao
> 
>> +               : [val] "+r" (val)
>> +               : [reg] "i" (reg)
>> +               : "memory");
>> +
>> +       return val;
>> +}
> 



* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-21  6:46     ` maobibo
@ 2023-02-21  6:48       ` Paolo Bonzini
  2023-02-21  7:12       ` Xi Ruoyao
  1 sibling, 0 replies; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-21  6:48 UTC (permalink / raw)
  To: maobibo, Xi Ruoyao, Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton

On 2/21/23 07:46, maobibo wrote:
>>> +       asm volatile (
>>> +               "parse_r __reg, %[val]\n\t"
>>> +               ".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
>> Don't do this.  You should add the instruction to binutils first, then
>> make CONFIG_KVM depend on the assembler supporting this instruction.
>> This is completely unreadable and only fine for an internal PoC.
> We are preparing to submit support for these instructions to binutils;
> however, the raw encoding is still necessary. Even supposing a future
> gcc version supports them, we cannot drop support for compiling the
> LoongArch kernel with the existing gcc 12/13 toolchains.
> 
> Maybe there can be human-readable code like this:
> #if GCC_SUPPORT_KVM_INSTR
>    ...
> #else
>    asm volatile (".word   "
>    ...
> #endif

I agree; just add a comment with what the assembly code would be, i.e. 
something like:

	/* Instructions only available in binutils v.... or later */
	asm volatile (
                "parse_r __reg, %[val]\n\t"
	       /* instrname %[val], %[reg] */
                ".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"

Paolo



* Re: [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-21  2:56     ` Tianrui Zhao
@ 2023-02-21  6:49       ` Paolo Bonzini
  0 siblings, 0 replies; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-21  6:49 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/21/23 03:56, Tianrui Zhao wrote:
>>
>> In _kvm_check_vmid(), you already have
>>
>> +    if (migrated || (ver != old)) {
>> +        _kvm_update_vpid(vcpu, cpu);
>> +        trace_kvm_vpid_change(vcpu, vcpu->arch.vpid[cpu]);
>> +    }
>>
>> so a vpid will never be recycled if a vCPU migrates from physical CPU 
>> A to B and back to A.
>>
>> So please keep the current VPID in the per-cpu struct vmcs, and you 
>> can just copy it from there in _kvm_check_vmid().
> 
> Thanks. That is to say, we should remove the vpid[NR_CPUS] array, and it 
> is enough to use only one vpid variable?

Yes, you need a vpid variable here and one in struct kvm_context.
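
I.e. something like (sketch):

	struct kvm_vcpu_arch {
		/* ... */
		u64 vpid;	/* replaces the vpid[NR_CPUS] array */
	};

	struct kvm_context {
		/* ... */
		u64 vpid;	/* VPID currently live on this pCPU */
	};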

Paolo



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-20 17:46   ` Paolo Bonzini
  2023-02-21  3:02     ` Tianrui Zhao
@ 2023-02-21  6:59     ` maobibo
  2023-02-21  8:14       ` Paolo Bonzini
  1 sibling, 1 reply; 70+ messages in thread
From: maobibo @ 2023-02-21  6:59 UTC (permalink / raw)
  To: Paolo Bonzini, Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton



On 2023/2/21 01:46, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +    order = get_order(kvm_vector_size + kvm_enter_guest_size);
>> +    addr = (void *)__get_free_pages(GFP_KERNEL, order);
>> +    if (!addr) {
>> +        free_percpu(vmcs);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    memcpy(addr, kvm_vector_entry, kvm_vector_size);
>> +    memcpy(addr + kvm_vector_size, kvm_enter_guest, kvm_enter_guest_size);
>> +    flush_icache_range((unsigned long)addr, (unsigned long)addr +
>> +                kvm_vector_size + kvm_enter_guest_size);
>> +
>> +    vpid_mask = read_csr_gstat();
>> +    vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT;
>> +    if (vpid_mask)
>> +        vpid_mask = GENMASK(vpid_mask - 1, 0);
>> +
>> +    for_each_possible_cpu(cpu) {
>> +        context = per_cpu_ptr(vmcs, cpu);
>> +        context->vpid_mask = vpid_mask;
>> +        context->vpid_cache = context->vpid_mask + 1;
>> +        context->last_vcpu = NULL;
>> +        context->kvm_eentry = addr;
>> +        context->kvm_enter_guest = addr + kvm_vector_size;
>> +        context->page_order = order;
>> +    }
> 
> A lot of these variables are constant across all pCPUs, any reason to have them in a per-CPU variable?  Likewise, since they are all the same as the constant global vmcs variable, why make them part of struct kvm_context instead of just making them globals?
> 
Paolo,

Thanks for reviewing these patches.

Originally we thought that global variables make the C files depend on 
each other, and that globals are no faster than percpu data, so we 
removed the global variables. We are OK with making them globals.

> Also, why does the world switch code need a copy?
There will be a problem in the world switch code if a page fault re-enters 
it, since the pgd register is shared between the root kernel and the kvm 
hypervisor. The world switch entry needs to be in an unmapped area; it 
cannot be in a TLB-mapped area.

In the future, if hardware page table walking is supported, or if there are 
separate pgd registers for the root kernel and the kvm hypervisor, copying 
the world switch code will no longer be needed.

Regards
Bibo, Mao
> 
> Paolo



* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-21  6:46     ` maobibo
  2023-02-21  6:48       ` Paolo Bonzini
@ 2023-02-21  7:12       ` Xi Ruoyao
  2023-02-21  7:35         ` Paolo Bonzini
  1 sibling, 1 reply; 70+ messages in thread
From: Xi Ruoyao @ 2023-02-21  7:12 UTC (permalink / raw)
  To: maobibo, Tianrui Zhao, Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton

On Tue, 2023-02-21 at 14:46 +0800, maobibo wrote:
> > On Mon, 2023-02-20 at 14:57 +0800, Tianrui Zhao wrote:
> > > +/* GCSR */
> > > +static inline u64 gcsr_read(u32 reg)
> > > +{
> > > +       u64 val = 0;
> > > +
> > > +       asm volatile (
> > > +               "parse_r __reg, %[val]\n\t"
> > > +               ".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
> > 
> > Don't do this.  You should add the instruction to binutils first, then
> > make CONFIG_KVM depend on the assembler supporting this instruction.
> > This is completely unreadable and only fine for an internal PoC.
> 
> We are preparing to submit support for these instructions to binutils;
> however, the raw encoding is still necessary. Even supposing a future
> gcc version supports them, we cannot drop support for compiling the
> LoongArch kernel with the existing gcc 12/13 toolchains.

You can drop support for KVM with less capable Binutils versions, like:

config AS_HAS_LVZ
    def_bool $(as-instr some_gcsr_insn \$r0, \$gcsr0)

config KVM
    depends on AS_HAS_LVZ

> 
> Maybe there can be human-readable code like this:
> #if GCC_SUPPORT_KVM_INSTR
>   ...
> #else
>   asm volatile (".word   "
>   ...
> #endif
> 
> Regards
> Bibo, Mao

-- 
Xi Ruoyao <xry111@xry111.site>
School of Aerospace Science and Technology, Xidian University


* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-21  7:12       ` Xi Ruoyao
@ 2023-02-21  7:35         ` Paolo Bonzini
  0 siblings, 0 replies; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-21  7:35 UTC (permalink / raw)
  To: Xi Ruoyao, maobibo, Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton

On 2/21/23 08:12, Xi Ruoyao wrote:
>> We are preparing to submit these instruction support for binutils,
>> however it is still necessary. Supposing that it is supported in future
>> gcc version, we can not drop existing gcc 12/13 supporting to compiling
>> kernel with LoongArch architecture.
> You can drop the support for KVM with less capable Binutils versions,
> like:
> 
> config AS_HAS_LVZ
>      def_bool $(as-instr some_gcsr_insn \$r0, \$gcsr0)
> 
> config KVM
>      depends on AS_HAS_LVZ
> 

There are precedents in Linux for using .word when necessary.  There's 
no reason to prevent using KVM on old Binutils versions.

Paolo



* Re: [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch
  2023-02-20  6:57 ` [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
@ 2023-02-21  7:45   ` Paolo Bonzini
  2023-02-21 13:00     ` Tianrui Zhao
  2023-02-21  8:18   ` Paolo Bonzini
  1 sibling, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-21  7:45 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +	/* Load Guest gprs */
> +	ld.d    $r1,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 1)
> +	ld.d    $r2,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 2)
> +	ld.d    $r3,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 3)
> +	ld.d    $r4,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 4)
> +	ld.d    $r5,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 5)
> +	ld.d    $r7,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 7)
> +	ld.d    $r8,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 8)
> +	ld.d    $r9,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 9)
> +	ld.d    $r10,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 10)
> +	ld.d    $r11,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 11)
> +	ld.d    $r12,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 12)
> +	ld.d    $r13,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 13)
> +	ld.d    $r14,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 14)
> +	ld.d    $r15,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 15)
> +	ld.d    $r16,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 16)
> +	ld.d    $r17,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 17)
> +	ld.d    $r18,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 18)
> +	ld.d    $r19,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 19)
> +	ld.d    $r20,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 20)
> +	ld.d    $r21,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 21)
> +	ld.d    $r22,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 22)
> +	ld.d    $r23,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 23)
> +	ld.d    $r24,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 24)
> +	ld.d    $r25,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 25)
> +	ld.d    $r26,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 26)
> +	ld.d    $r27,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 27)
> +	ld.d    $r28,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 28)
> +	ld.d    $r29,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 29)
> +	ld.d    $r30,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 30)
> +	ld.d    $r31,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 31)
> +	/* Load KVM_ARCH register */
> +	ld.d	\KVM_ARCH, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * \GPRNUM)

This in practice relies on KVM_ARCH being a2 so please remove the 
KVM_ARCH and GPRNUM arguments from the macro; just replace \KVM_ARCH 
with a2 as needed.

Also, in these ld.d and st.d sequences you may want to use the ABI names 
instead of the rNN names, so it's clearer that you are skipping the 
KVM_ARCH register.
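
I.e. something like (sketch; a2 is r6, so it is skipped and restored last):

	ld.d	ra, a2, (KVM_ARCH_GGPR + 8 * 1)
	ld.d	tp, a2, (KVM_ARCH_GGPR + 8 * 2)
	ld.d	sp, a2, (KVM_ARCH_GGPR + 8 * 3)
	ld.d	a0, a2, (KVM_ARCH_GGPR + 8 * 4)
	ld.d	a1, a2, (KVM_ARCH_GGPR + 8 * 5)
	...
	ld.d	a2, a2, (KVM_ARCH_GGPR + 8 * 6)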

Paolo



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-21  6:59     ` maobibo
@ 2023-02-21  8:14       ` Paolo Bonzini
  2023-02-21 10:18         ` maobibo
  0 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-21  8:14 UTC (permalink / raw)
  To: maobibo, Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton

On 2/21/23 07:59, maobibo wrote:
>> Also, why does the world switch code need a copy?
> There will be a problem in the world switch code if a page fault re-enters
> it, since the pgd register is shared between the root kernel and the kvm
> hypervisor. The world switch entry needs to be in an unmapped area; it
> cannot be in a TLB-mapped area.

So if I understand correctly the processor is in direct address 
translation mode until the "csrwr t0, LOONGARCH_CSR_CRMD" instruction. 
Where does it leave paged mode?

Can you please also add comments to kvm_vector_entry explaining the 
processor state after a VZ exception entry (interrupts, paging, ...)?

Paolo



* Re: [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch
  2023-02-20  6:57 ` [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
  2023-02-21  7:45   ` Paolo Bonzini
@ 2023-02-21  8:18   ` Paolo Bonzini
  2023-02-21 12:58     ` Tianrui Zhao
  1 sibling, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-21  8:18 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/20/23 07:57, Tianrui Zhao wrote:
> +	or	a0, s0, zero
> +	or	a1, s1, zero
> +	ld.d	t8, a2, KVM_ARCH_HANDLE_EXIT
> +	jirl	ra,t8, 0
> +	ori	t0, zero, CSR_CRMD_IE
> +	csrxchg	zero, t0, LOONGARCH_CSR_CRMD

_kvm_handle_exit returns with the interrupts disabled.

Can you please add a comment to explain why CRMD.IE needs to be cleared 
here, or remove these two instructions if unnecessary?

Paolo

> +	or	a2, s1, zero
> +	addi.d	a2, a2, KVM_VCPU_ARCH
> +
> +	andi	t0, a0, RESUME_HOST
> +	bnez	t0, ret_to_host



* Re: [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception
  2023-02-20 18:40   ` Paolo Bonzini
@ 2023-02-21  9:48     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21  9:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 02:40, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +int _kvm_emu_idle(struct kvm_vcpu *vcpu)
>> +{
>> +    ++vcpu->stat.idle_exits;
>> +    trace_kvm_exit(vcpu, KVM_TRACE_EXIT_IDLE);
>
> Please add a separate tracepoint, don't overload trace_kvm_exit().
>
> Likewise for _kvm_trap_handle_gspr().
>
> I think _kvm_trap_handle_gspr() should have a tracepoint whose 
> parameter is inst.word.
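>
> Something like (untested sketch):
>
>     TRACE_EVENT(kvm_exit_gspr,
>             TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
>             TP_ARGS(vcpu, inst_word),
>             TP_STRUCT__entry(
>                     __field(unsigned int, inst_word)
>             ),
>             TP_fast_assign(
>                     __entry->inst_word = inst_word;
>             ),
>             TP_printk("inst word: 0x%08x", __entry->inst_word)
>     );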

Thanks, I will add the tracepoints for _kvm_emu_idle and 
_kvm_trap_handle_gspr.

Thanks
Tianrui Zhao

>
> Paolo
>
>> +    if (!vcpu->arch.irq_pending) {
>> +        kvm_save_timer(vcpu);
>> +        kvm_vcpu_block(vcpu);
>> +    }
>> +
>> +    return EMULATE_DONE;



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-21  8:14       ` Paolo Bonzini
@ 2023-02-21 10:18         ` maobibo
  2023-02-21 10:37           ` WANG Xuerui
  0 siblings, 1 reply; 70+ messages in thread
From: maobibo @ 2023-02-21 10:18 UTC (permalink / raw)
  To: Paolo Bonzini, Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton



On 2023/2/21 16:14, Paolo Bonzini wrote:
> On 2/21/23 07:59, maobibo wrote:
>>> Also, why does the world switch code need a copy?
>> There will be a problem in the world switch code if a page fault re-enters
>> it, since the pgd register is shared between the root kernel and the kvm
>> hypervisor. The world switch entry needs to be in an unmapped area; it
>> cannot be in a TLB-mapped area.
> 
> So if I understand correctly the processor is in direct address translation mode until the "csrwr t0, LOONGARCH_CSR_CRMD" instruction. Where does it leave paged mode?
The processor is still in paged mode during the world switch. For example, 
when a VM exits from guest mode to root mode, it executes the world switch 
code at kvm_vector_entry with the PC register pointing to an HVA address; 
however, the vmid in LOONGARCH_CSR_GTLBC has not yet been cleared for root 
mode. If a page fault exception occurs, the hardware treats it as a 
GPA-->HPA fault rather than an HVA-->HPA fault, since the vmid field in 
CSR_GTLBC is non-zero.

In paged mode, there are two kinds of addresses: unmapped addresses and 
TLB-mapped addresses. An unmapped address has only a cacheable/uncacheable 
attribute, not RWX attributes, and there is no TLB handling for it. For 
simplicity, an unmapped address can be treated as a window-filtered address.

It is fully in root mode only after this piece of code has executed during 
the world switch; then the vmid is zero and the PC points to an HVA.
        ori     t0, zero, CSR_GSTAT_PVM
        csrxchg zero, t0, LOONGARCH_CSR_GSTAT
        /* Clear GTLBC.TGID field */
        csrrd   t0, LOONGARCH_CSR_GTLBC
        bstrins.w       t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
        csrwr   t0, LOONGARCH_CSR_GTLBC

> 
> Can you please also add comments to kvm_vector_entry explaining the processor state after a VZ exception entry (interrupts, paging, ...)?
Yes, we will add more comments about these critical exception entries.

Regards
Bibo, Mao
> 
> Paolo



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-21 10:18         ` maobibo
@ 2023-02-21 10:37           ` WANG Xuerui
  2023-02-21 11:39             ` maobibo
  0 siblings, 1 reply; 70+ messages in thread
From: WANG Xuerui @ 2023-02-21 10:37 UTC (permalink / raw)
  To: maobibo, Paolo Bonzini, Tianrui Zhao
  Cc: Huacai Chen, Greg Kroah-Hartman, loongarch, linux-kernel, kvm,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton

On 2023/2/21 18:18, maobibo wrote:
> 
> 
> 在 2023/2/21 16:14, Paolo Bonzini 写道:
>> On 2/21/23 07:59, maobibo wrote:
>>>> Also, why does the world switch code need a copy?
>>> There will be a problem in the world switch code if a page fault re-enters
>>> it, since the pgd register is shared between the root kernel and the kvm
>>> hypervisor. The world switch entry needs to be in an unmapped area; it
>>> cannot be in a TLB-mapped area.
>>
>> So if I understand correctly the processor is in direct address translation mode until the "csrwr t0, LOONGARCH_CSR_CRMD" instruction. Where does it leave paged mode?
> The processor is still in paged mode during the world switch. For example,
> when a VM exits from guest mode to root mode, it executes the world switch
> code at kvm_vector_entry with the PC register pointing to an HVA address;
> however, the vmid in LOONGARCH_CSR_GTLBC has not yet been cleared for root
> mode. If a page fault exception occurs, the hardware treats it as a
> GPA-->HPA fault rather than an HVA-->HPA fault, since the vmid field in
> CSR_GTLBC is non-zero.
> 
> In paged mode, there are two kinds of addresses: unmapped addresses and
> TLB-mapped addresses. An unmapped address has only a cacheable/uncacheable
> attribute, not RWX attributes, and there is no TLB handling for it. For
> simplicity, an unmapped address can be treated as a window-filtered address.
> 
> It is fully in root mode only after this piece of code has executed during
> the world switch; then the vmid is zero and the PC points to an HVA.
>          ori     t0, zero, CSR_GSTAT_PVM
>          csrxchg zero, t0, LOONGARCH_CSR_GSTAT
>          /* Clear GTLBC.TGID field */
>          csrrd   t0, LOONGARCH_CSR_GTLBC
>          bstrins.w       t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
>          csrwr   t0, LOONGARCH_CSR_GTLBC

AFAIK all of this is probably coming from Volume 3 of the LoongArch ISA 
Manual, which is unfortunately not publicly available at the moment. For 
the sake of meaningful reviews, when can we expect to get our hands on the 
manuals?

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-21 10:37           ` WANG Xuerui
@ 2023-02-21 11:39             ` maobibo
  2023-02-21 12:38               ` WANG Xuerui
  0 siblings, 1 reply; 70+ messages in thread
From: maobibo @ 2023-02-21 11:39 UTC (permalink / raw)
  To: WANG Xuerui, Paolo Bonzini, Tianrui Zhao
  Cc: Huacai Chen, Greg Kroah-Hartman, loongarch, linux-kernel, kvm,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton



On 2023/2/21 18:37, WANG Xuerui wrote:
> On 2023/2/21 18:18, maobibo wrote:
>>
>>
> On 2023/2/21 16:14, Paolo Bonzini wrote:
>>> On 2/21/23 07:59, maobibo wrote:
>>>>> Also, why does the world switch code need a copy?
>>>> There will be a problem in the world switch code if a page fault re-enters,
>>>> since the pgd register is shared between the root kernel and the kvm hypervisor.
>>>> The world switch entry needs to be in an unmapped area; it cannot be in a TLB-mapped area.
>>>
>>> So if I understand correctly, the processor is in direct address translation mode until the "csrwr t0, LOONGARCH_CSR_CRMD" instruction. Where does it leave paged mode?
>> The processor is still in paged mode during the world switch context. For
>> example, when the vm exits from guest mode to root mode, it executes world
>> switch code from kvm_vector_entry and the PC register points to an HVA
>> address, but the vmid in LOONGARCH_CSR_GTLBC has not been cleared for root
>> mode. If a page fault exception occurs, the hardware treats it as a
>> GPA-->HPA fault rather than an HVA-->HPA fault, since the vmid info in
>> CSR_GTLBC is not zero.
>>
>> In paged mode, there are two kinds of addresses: unmapped and TLB-mapped.
>> An unmapped address has only a cachable/uncachable attribute but no RWX
>> attributes, and there is no TLB handling for it. For simplicity, an
>> unmapped address can be treated as a window-filtered address.
>>
>> The processor is fully in root mode only after the following piece of code
>> is executed during the world switch context; then the vmid is zero and the
>> PC points to an HVA.
>>          ori     t0, zero, CSR_GSTAT_PVM
>>          csrxchg zero, t0, LOONGARCH_CSR_GSTAT
>>          /* Clear GTLBC.TGID field */
>>          csrrd   t0, LOONGARCH_CSR_GTLBC
>>          bstrins.w       t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
>>          csrwr   t0, LOONGARCH_CSR_GTLBC
> 
> AFAIK all of this probably comes from Volume 3 of the LoongArch ISA Manual, which is unfortunately not publicly available at the moment. For the sake of meaningful reviews, when can we expect to get our hands on the manuals?
We are pushing internally to make the virtualization manual public; it would
be convenient for software developers reviewing the code. However, I am not sure about the date :(

Regards
Bibo, Mao
> 



* Re: [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface
  2023-02-21 11:39             ` maobibo
@ 2023-02-21 12:38               ` WANG Xuerui
  0 siblings, 0 replies; 70+ messages in thread
From: WANG Xuerui @ 2023-02-21 12:38 UTC (permalink / raw)
  To: maobibo, Paolo Bonzini, Tianrui Zhao
  Cc: Huacai Chen, Greg Kroah-Hartman, loongarch, linux-kernel, kvm,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton

On 2023/2/21 19:39, maobibo wrote:
> 
> 
> On 2023/2/21 18:37, WANG Xuerui wrote:
>> On 2023/2/21 18:18, maobibo wrote:
>>>
>>>
>>> On 2023/2/21 16:14, Paolo Bonzini wrote:
>>>> On 2/21/23 07:59, maobibo wrote:
>>>>>> Also, why does the world switch code need a copy?
>>>>> There will be a problem in the world switch code if a page fault re-enters,
>>>>> since the pgd register is shared between the root kernel and the kvm hypervisor.
>>>>> The world switch entry needs to be in an unmapped area; it cannot be in a TLB-mapped area.
>>>>
>>>> So if I understand correctly, the processor is in direct address translation mode until the "csrwr t0, LOONGARCH_CSR_CRMD" instruction. Where does it leave paged mode?
>>> The processor is still in paged mode during the world switch context. For
>>> example, when the vm exits from guest mode to root mode, it executes world
>>> switch code from kvm_vector_entry and the PC register points to an HVA
>>> address, but the vmid in LOONGARCH_CSR_GTLBC has not been cleared for root
>>> mode. If a page fault exception occurs, the hardware treats it as a
>>> GPA-->HPA fault rather than an HVA-->HPA fault, since the vmid info in
>>> CSR_GTLBC is not zero.
>>>
>>> In paged mode, there are two kinds of addresses: unmapped and TLB-mapped.
>>> An unmapped address has only a cachable/uncachable attribute but no RWX
>>> attributes, and there is no TLB handling for it. For simplicity, an
>>> unmapped address can be treated as a window-filtered address.
>>>
>>> The processor is fully in root mode only after the following piece of code
>>> is executed during the world switch context; then the vmid is zero and the
>>> PC points to an HVA.
>>>           ori     t0, zero, CSR_GSTAT_PVM
>>>           csrxchg zero, t0, LOONGARCH_CSR_GSTAT
>>>           /* Clear GTLBC.TGID field */
>>>           csrrd   t0, LOONGARCH_CSR_GTLBC
>>>           bstrins.w       t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
>>>           csrwr   t0, LOONGARCH_CSR_GTLBC
>>
>> AFAIK all of this probably comes from Volume 3 of the LoongArch ISA Manual, which is unfortunately not publicly available at the moment. For the sake of meaningful reviews, when can we expect to get our hands on the manuals?
> We are pushing internally to make the virtualization manual public; it would
> be convenient for software developers reviewing the code. However, I am not sure about the date :(

Well, that's kinda expected, but it's nice to see some progress, and 
certainly your open attitude to this matter is constructive. Thanks for 
sharing this, and looking forward to the eventual docs release!

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/



* Re: [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch
  2023-02-21  8:18   ` Paolo Bonzini
@ 2023-02-21 12:58     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21 12:58 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 16:18, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +    or    a0, s0, zero
>> +    or    a1, s1, zero
>> +    ld.d    t8, a2, KVM_ARCH_HANDLE_EXIT
>> +    jirl    ra,t8, 0
>> +    ori    t0, zero, CSR_CRMD_IE
>> +    csrxchg    zero, t0, LOONGARCH_CSR_CRMD
>
> _kvm_handle_exit returns with the interrupts disabled.
>
> Can you please add a comment to explain why CRMD.IE needs to be 
> cleared here, or remove these two instructions if unnecessary?
>
> Paolo

Thanks, interrupts have already been disabled when _kvm_handle_exit 
returns, so I will remove the two instructions.
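
For reference, the dropped pair was just the assembly form of disabling
local interrupts; as a sketch in C (csr_xchg32() and the CSR constants
are the existing helpers from asm/loongarch.h):

        static inline void sketch_disable_irqs(void)
        {
                /* Clear CRMD.IE -- redundant at this point, because
                 * _kvm_handle_exit already returns with IRQs off. */
                csr_xchg32(0, CSR_CRMD_IE, LOONGARCH_CSR_CRMD);
        }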

Thanks
Tianrui Zhao

>
>> +    or    a2, s1, zero
>> +    addi.d    a2, a2, KVM_VCPU_ARCH
>> +
>> +    andi    t0, a0, RESUME_HOST
>> +    bnez    t0, ret_to_host



* Re: [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch
  2023-02-21  7:45   ` Paolo Bonzini
@ 2023-02-21 13:00     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-21 13:00 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 15:45, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +    /* Load Guest gprs */
>> +    ld.d    $r1,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 1)
>> +    ld.d    $r2,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 2)
>> +    ld.d    $r3,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 3)
>> +    ld.d    $r4,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 4)
>> +    ld.d    $r5,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 5)
>> +    ld.d    $r7,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 7)
>> +    ld.d    $r8,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 8)
>> +    ld.d    $r9,   \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 9)
>> +    ld.d    $r10,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 10)
>> +    ld.d    $r11,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 11)
>> +    ld.d    $r12,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 12)
>> +    ld.d    $r13,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 13)
>> +    ld.d    $r14,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 14)
>> +    ld.d    $r15,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 15)
>> +    ld.d    $r16,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 16)
>> +    ld.d    $r17,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 17)
>> +    ld.d    $r18,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 18)
>> +    ld.d    $r19,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 19)
>> +    ld.d    $r20,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 20)
>> +    ld.d    $r21,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 21)
>> +    ld.d    $r22,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 22)
>> +    ld.d    $r23,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 23)
>> +    ld.d    $r24,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 24)
>> +    ld.d    $r25,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 25)
>> +    ld.d    $r26,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 26)
>> +    ld.d    $r27,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 27)
>> +    ld.d    $r28,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 28)
>> +    ld.d    $r29,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 29)
>> +    ld.d    $r30,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 30)
>> +    ld.d    $r31,  \KVM_ARCH,  (KVM_ARCH_GGPR + 8 * 31)
>> +    /* Load KVM_ARCH register */
>> +    ld.d    \KVM_ARCH, \KVM_ARCH, (KVM_ARCH_GGPR + 8 * \GPRNUM)
>
> This in practice relies on KVM_ARCH being a2, so please remove the 
> KVM_ARCH and GPRNUM arguments from the macro; just replace \KVM_ARCH 
> with a2 as needed.
>
> Also, in these ld.d and st.d sequences you may want to use the ABI 
> names instead of the rNN names, so it's clearer that you are skipping 
> the KVM_ARCH register.
>
> Paolo

Thanks, I will replace KVM_ARCH with a2 and remove the GPRNUM 
argument.
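
For reference, the psABI register mapping that makes the skipped
register obvious ($r6 is a2, the register holding the vcpu_arch
pointer):

        /* $r0 zero, $r1 ra, $r2 tp, $r3 sp, $r4-$r11 a0-a7 ($r6 == a2),
         * $r12-$r20 t0-t8, $r21 reserved, $r22 fp/s9, $r23-$r31 s0-s8 */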

Thanks
Tianrui Zhao




* Re: [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface
  2023-02-20 17:53   ` Paolo Bonzini
@ 2023-02-22  1:52     ` Tianrui Zhao
  2023-02-22 12:17       ` Paolo Bonzini
  0 siblings, 1 reply; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-22  1:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 01:53, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +    vcpu->arch.guest_eentry = (unsigned long)kvm_context->kvm_eentry;
>> +    vcpu->arch.vcpu_run = kvm_context->kvm_enter_guest;
>> +    vcpu->arch.handle_exit = _kvm_handle_exit;
>
> Here as well, whatever is constant must not be stored in struct 
> kvm_arch_vcpu.
>
> Paolo

Thanks, we keep these in vcpu_arch because vcpu_arch is passed as an 
argument to the routines in switch.S; that way we can quickly access 
guest_eentry and handle_exit through the KVM_ARCH_GEENTRY and 
KVM_ARCH_HANDLE_EXIT offsets. If we changed them to global variables, we 
would have to relocate them in switch.S, which may lead to slower access.
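
Those constants are ordinary kbuild asm-offsets; roughly, as a sketch
(struct and field names follow this series and are not verified against
the final patch):

        #include <linux/kbuild.h>
        #include <asm/kvm_host.h>

        void output_kvm_defines(void)
        {
                /* Byte offsets that switch.S applies to the vcpu_arch
                 * pointer it receives in a register. */
                OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
                OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
        }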



* Re: [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface
  2023-02-20 18:44   ` Paolo Bonzini
@ 2023-02-22  2:08     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-22  2:08 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 02:44, Paolo Bonzini wrote:
> On 2/20/23 07:57, Tianrui Zhao wrote:
>> +    lose_fpu(1);
>
> Is this enough to clear CSR_EUEN_FPEN?

Yes, in the lose_fpu() function, the task's CSR_EUEN_FPEN flag is cleared.
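
Roughly, the disabling step amounts to this sketch (clear_csr_euen()
comes from the __BUILD_CSR_OP(euen) helper in asm/loongarch.h):

        static inline void sketch_disable_fpu(void)
        {
                /* Clear EUEN.FPE so any further FP use traps. */
                clear_csr_euen(CSR_EUEN_FPEN);
        }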

Thanks
Tianrui Zhao

>
>
> Paolo



* Re: [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface
  2023-02-22  1:52     ` Tianrui Zhao
@ 2023-02-22 12:17       ` Paolo Bonzini
  2023-02-23  1:23         ` Tianrui Zhao
  0 siblings, 1 reply; 70+ messages in thread
From: Paolo Bonzini @ 2023-02-22 12:17 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo

On 2/22/23 02:52, Tianrui Zhao wrote:
>>
>>> +    vcpu->arch.guest_eentry = (unsigned long)kvm_context->kvm_eentry;
>>> +    vcpu->arch.vcpu_run = kvm_context->kvm_enter_guest;
>>> +    vcpu->arch.handle_exit = _kvm_handle_exit;
>>
>> Here as well, whatever is constant must not be stored in struct 
>> kvm_arch_vcpu.
>>
>> Paolo
> 
> Thanks, we keep these in vcpu_arch because vcpu_arch is passed as an 
> argument to the routines in switch.S; that way we can quickly access 
> guest_eentry and handle_exit through the KVM_ARCH_GEENTRY and 
> KVM_ARCH_HANDLE_EXIT offsets. If we changed them to global variables, we 
> would have to relocate them in switch.S, which may lead to slower access.

For guest_eentry and handle_exit this is correct, so you can add a 
comment in kvm_host.h, like

	/* Pointers stored here for easy access from assembly code.  */

However, vcpu->arch.vcpu_run is not used in switch.S, so there is no need 
to store it in struct kvm_arch_vcpu.  Since you're already going to move 
kvm_enter_guest out of kvm_context and into a global variable, please 
give it the right pointer-to-function type instead of using unsigned long.

Paolo



* Re: [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface
  2023-02-22 12:17       ` Paolo Bonzini
@ 2023-02-23  1:23         ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-23  1:23 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-22 20:17, Paolo Bonzini wrote:
> On 2/22/23 02:52, Tianrui Zhao wrote:
>>>
>>>> +    vcpu->arch.guest_eentry = (unsigned long)kvm_context->kvm_eentry;
>>>> +    vcpu->arch.vcpu_run = kvm_context->kvm_enter_guest;
>>>> +    vcpu->arch.handle_exit = _kvm_handle_exit;
>>>
>>> Here as well, whatever is constant must not be stored in struct 
>>> kvm_arch_vcpu.
>>>
>>> Paolo
>>
>> Thanks, we keep these in vcpu_arch because vcpu_arch is passed as an
>> argument to the routines in switch.S; that way we can quickly access
>> guest_eentry and handle_exit through the KVM_ARCH_GEENTRY and
>> KVM_ARCH_HANDLE_EXIT offsets. If we changed them to global variables, we
>> would have to relocate them in switch.S, which may lead to slower access.
>
> For guest_eentry and handle_exit this is correct, so you can add a 
> comment in kvm_host.h, like
>
>     /* Pointers stored here for easy access from assembly code. */
>
> However, vcpu->arch.vcpu_run is not used in switch.S, so there is no 
> need to store it in struct kvm_arch_vcpu.  Since you're already going 
> to move kvm_enter_guest out of kvm_context and into a global variable, 
> please give it the right pointer-to-function type instead of using 
> unsigned long.
>
> Paolo

Thanks, I will remove vcpu_run, replace it with the new global 
variable, and give it the proper pointer-to-function type.
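
Something like this sketch (the exact signature is an assumption, not
the actual patch):

        /* World switch entry point, set once at module init; a properly
         * typed global replaces the per-vcpu unsigned long copy. */
        typedef int (*kvm_enter_guest_fn)(struct kvm_run *run,
                                          struct kvm_vcpu *vcpu);
        extern kvm_enter_guest_fn kvm_enter_guest;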

Thanks
Tianrui Zhao




* Re: [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files
  2023-02-21  4:36   ` Xi Ruoyao
@ 2023-02-24  1:27     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-24  1:27 UTC (permalink / raw)
  To: Xi Ruoyao, Paolo Bonzini
  Cc: Huacai Chen, WANG Xuerui, Greg Kroah-Hartman, loongarch,
	linux-kernel, kvm, Jens Axboe, Mark Brown, Alex Deucher,
	Oliver Upton, maobibo



On 2023-02-21 12:36, Xi Ruoyao wrote:
> On Mon, 2023-02-20 at 14:57 +0800, Tianrui Zhao wrote:
>
> /* snip */
>
>> +/*
>> + * for KVM_GET_FPU and KVM_SET_FPU
>> + */
>> +struct kvm_fpu {
>> +	__u32 fcsr;
>> +	__u32 none;
>> +	__u64 fcc;    /* 8x8 */
>> +	struct kvm_fpureg {
>> +		__u64 val64[4];	//support max 256 bits
>> +	} fpr[32];
> Do we need __attribute__((__aligned__(16))) for fpureg (like
> sc_extcontext in struct sigcontext)?

Thanks, the loongarch_fpu member in the kvm_vcpu_arch structure already 
has the FPU_ALIGN attribute, so it does not need __aligned__(16).
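
That is, roughly (a declaration reconstructed from the description
above; FPU_ALIGN is assumed to expand to an __aligned() attribute):

        struct kvm_vcpu_arch {
                /* ... */
                /* In-kernel FPU state; the member itself carries the
                 * alignment attribute, so the uapi struct need not. */
                struct loongarch_fpu fpu FPU_ALIGN;
        };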

Thanks
Tianrui Zhao

>
>> +};
>> +
>> +/*
>> + * For LOONGARCH, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to
>            ^^^^^^^^^
>            LoongArch

Thanks, I will fix it

Thanks
Tianrui Zhao

>
>> access various
>> + * registers.  The id field is broken down as follows:
>> + *
>> + *  bits[63..52] - As per linux/kvm.h
>> + *  bits[51..32] - Must be zero.
>> + *  bits[31..16] - Register set.



* Re: [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files
  2023-02-20 18:57   ` WANG Xuerui
@ 2023-02-27  1:39     ` Tianrui Zhao
  0 siblings, 0 replies; 70+ messages in thread
From: Tianrui Zhao @ 2023-02-27  1:39 UTC (permalink / raw)
  To: WANG Xuerui, Paolo Bonzini
  Cc: Huacai Chen, Greg Kroah-Hartman, loongarch, linux-kernel, kvm,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo



On 2023-02-21 02:57, WANG Xuerui wrote:
> Hi,
>
> On 2/20/23 14:57, Tianrui Zhao wrote:
>> Add LoongArch vcpu related header files, including vcpu csr
>> information, irq number defines, and some vcpu interfaces.
>>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/include/asm/cpu-info.h  |  13 ++
>>   arch/loongarch/include/asm/kvm_vcpu.h  | 112 ++++++++++++++
>>   arch/loongarch/include/asm/loongarch.h | 195 ++++++++++++++++++++++++-
>>   arch/loongarch/kvm/trace.h             | 137 +++++++++++++++++
>>   4 files changed, 451 insertions(+), 6 deletions(-)
>>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>>   create mode 100644 arch/loongarch/kvm/trace.h
>>
>> diff --git a/arch/loongarch/include/asm/cpu-info.h 
>> b/arch/loongarch/include/asm/cpu-info.h
>> index cd73a6f57..1b426a2ca 100644
>> --- a/arch/loongarch/include/asm/cpu-info.h
>> +++ b/arch/loongarch/include/asm/cpu-info.h
>> @@ -32,6 +32,15 @@ struct cache_desc {
>>   #define CACHE_LEVEL_MAX        3
>>   #define CACHE_LEAVES_MAX    6
>>   +struct guest_info {
>> +    unsigned long        ases;
>> +    unsigned long        ases_dyn;
>> +    unsigned long        options;
>> +    unsigned long        options_dyn;
>> +    unsigned char        conf;
>> +    unsigned int        kscratch_mask;
>> +};
>> +
>>   struct cpuinfo_loongarch {
>>       u64            asid_cache;
>>       unsigned long        asid_mask;
>> @@ -60,6 +69,10 @@ struct cpuinfo_loongarch {
>>       unsigned int        watch_dreg_count;   /* Number data 
>> breakpoints */
>>       unsigned int        watch_ireg_count;   /* Number instruction 
>> breakpoints */
>>       unsigned int        watch_reg_use_cnt; /* min(NUM_WATCH_REGS, 
>> watch_dreg_count + watch_ireg_count), Usable by ptrace */
>> +
>> +    /* VZ & Guest features */
>> +    struct guest_info    guest;
>> +    unsigned long        guest_cfg;
>>   } __aligned(SMP_CACHE_BYTES);
>>     extern struct cpuinfo_loongarch cpu_data[];
>> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h 
>> b/arch/loongarch/include/asm/kvm_vcpu.h
>> new file mode 100644
>> index 000000000..66ec9bc52
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
>> @@ -0,0 +1,112 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
>> +#define __ASM_LOONGARCH_KVM_VCPU_H__
>> +
>> +#include <linux/kvm_host.h>
>> +#include <asm/loongarch.h>
>> +#include <asm/kvm_host.h>
>> +
>> +#define LARCH_INT_SIP0                0
>> +#define LARCH_INT_SIP1                1
>> +#define LARCH_INT_IP0                2
>> +#define LARCH_INT_IP1                3
>> +#define LARCH_INT_IP2                4
>> +#define LARCH_INT_IP3                5
>> +#define LARCH_INT_IP4                6
>> +#define LARCH_INT_IP5                7
>> +#define LARCH_INT_IP6                8
>> +#define LARCH_INT_IP7                9
>> +#define LARCH_INT_PMU                10
>> +#define LARCH_INT_TIMER                11
>> +#define LARCH_INT_IPI                12
>> +#define LOONGARCH_EXC_MAX            (LARCH_INT_IPI + 1)
>> +#define LOONGARCH_EXC_IPNUM            (LOONGARCH_EXC_MAX)
> There are effectively identical definitions in <asm/loongarch.h>; why 
> do you choose to re-define all of these without deviating from the 
> architectural standards?

Thanks, we are going to fix those definitions.

Thanks
Tianrui Zhao

>
>> +
>> +/* Controlled by 0x5 guest exst */
>> +#define CPU_SIP0                (_ULCAST_(1))
>> +#define CPU_SIP1                (_ULCAST_(1) << 1)
>> +#define CPU_PMU                    (_ULCAST_(1) << 10)
>> +#define CPU_TIMER                (_ULCAST_(1) << 11)
>> +#define CPU_IPI                    (_ULCAST_(1) << 12)
>> +
>> +/* Controlled by 0x52 guest exception VIP
>> + * aligned to exst bit 5~12
>> + */
>> +#define CPU_IP0                    (_ULCAST_(1))
>> +#define CPU_IP1                    (_ULCAST_(1) << 1)
>> +#define CPU_IP2                    (_ULCAST_(1) << 2)
>> +#define CPU_IP3                    (_ULCAST_(1) << 3)
>> +#define CPU_IP4                    (_ULCAST_(1) << 4)
>> +#define CPU_IP5                    (_ULCAST_(1) << 5)
>> +#define CPU_IP6                    (_ULCAST_(1) << 6)
>> +#define CPU_IP7                    (_ULCAST_(1) << 7)
>> +
>> +#define MNSEC_PER_SEC                (NSEC_PER_SEC >> 20)
>> +
>> +/* KVM_IRQ_LINE irq field index values */
>> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT        24
>> +#define KVM_LOONGSON_IRQ_TYPE_MASK        0xff
>> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT        16
>> +#define KVM_LOONGSON_IRQ_VCPU_MASK        0xff
>> +#define KVM_LOONGSON_IRQ_NUM_SHIFT        0
>> +#define KVM_LOONGSON_IRQ_NUM_MASK        0xffff
>> +
>> +/* irq_type field */
>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP        0
>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO        1
>> +#define KVM_LOONGSON_IRQ_TYPE_HT        2
>> +#define KVM_LOONGSON_IRQ_TYPE_MSI        3
>> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC        4
>> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE        5
>> +
>> +/* out-of-kernel GIC cpu interrupt injection irq_number field */
>> +#define KVM_LOONGSON_IRQ_CPU_IRQ        0
>> +#define KVM_LOONGSON_IRQ_CPU_FIQ        1
>> +#define KVM_LOONGSON_CPU_IP_NUM            8
>> +
>> +typedef union loongarch_instruction  larch_inst;
>> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
>> +
>> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
>> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
>> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run 
>> *run);
>> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run 
>> *run);
>> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
>> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
>> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
>> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
>> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
>> +
>> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
>> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
>> +void kvm_save_fpu(struct loongarch_fpu *fpu);
>> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
>> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
>> +
>> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
>> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
>> +enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
>> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
>> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
>> +void kvm_save_timer(struct kvm_vcpu *vcpu);
>> +
>> +/*
>> + * Loongarch KVM guest interrupt handling.
>> + */
>> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned 
>> int irq)
>> +{
>> +    set_bit(irq, &vcpu->arch.irq_pending);
>> +    clear_bit(irq, &vcpu->arch.irq_clear);
>> +}
>> +
>> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned 
>> int irq)
>> +{
>> +    clear_bit(irq, &vcpu->arch.irq_pending);
>> +    set_bit(irq, &vcpu->arch.irq_clear);
>> +}
>> +
>> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
>> diff --git a/arch/loongarch/include/asm/loongarch.h 
>> b/arch/loongarch/include/asm/loongarch.h
>> index 7f8d57a61..7b74605dd 100644
>> --- a/arch/loongarch/include/asm/loongarch.h
>> +++ b/arch/loongarch/include/asm/loongarch.h
>> @@ -236,6 +236,44 @@ static __always_inline u64 csr_xchg64(u64 val, 
>> u64 mask, u32 reg)
>>       return __csrxchg_d(val, mask, reg);
>>   }
>>   +/* GCSR */
>> +static inline u64 gcsr_read(u32 reg)
>> +{
>> +    u64 val = 0;
>> +
>> +    asm volatile (
>> +        "parse_r __reg, %[val]\n\t"
>> +        ".word 0x5 << 24 | %[reg] << 10 | 0 << 5 | __reg\n\t"
> Ah. MIPS (LoongISA) memories strike back hard. Where's the public ISA 
> manual so we aren't forced to blindly trust your code drop?
>> +        : [val] "+r" (val)
>> +        : [reg] "i" (reg)
>> +        : "memory");
>> +
>> +    return val;
>> +}
>> +
>> +static inline void gcsr_write(u64 val, u32 reg)
>> +{
>> +    asm volatile (
>> +        "parse_r __reg, %[val]\n\t"
>> +        ".word 0x5 << 24 | %[reg] << 10 | 1 << 5 | __reg\n\t"
>> +        : [val] "+r" (val)
>> +        : [reg] "i" (reg)
>> +        : "memory");
>> +}
>> +
>> +static inline u64 gcsr_xchg(u64 val, u64 mask, u32 reg)
>> +{
>> +    asm volatile (
>> +        "parse_r __rd, %[val]\n\t"
>> +        "parse_r __rj, %[mask]\n\t"
>> +        ".word 0x5 << 24 | %[reg] << 10 | __rj << 5 | __rd\n\t"
>> +        : [val] "+r" (val)
>> +        : [mask] "r" (mask), [reg] "i" (reg)
>> +        : "memory");
>> +
>> +    return val;
>> +}
>> +
>>   /* IOCSR */
>>   static __always_inline u32 iocsr_read32(u32 reg)
>>   {
>> @@ -309,6 +347,7 @@ static __always_inline void iocsr_write64(u64 
>> val, u32 reg)
>>   #define LOONGARCH_CSR_ECFG        0x4    /* Exception config */
>>   #define  CSR_ECFG_VS_SHIFT        16
>>   #define  CSR_ECFG_VS_WIDTH        3
>> +#define  CSR_ECFG_VS_SHIFT_END        (CSR_ECFG_VS_SHIFT + 
>> CSR_ECFG_VS_WIDTH - 1)
>>   #define  CSR_ECFG_VS            (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
>>   #define  CSR_ECFG_IM_SHIFT        0
>>   #define  CSR_ECFG_IM_WIDTH        13
>> @@ -397,13 +436,14 @@ static __always_inline void iocsr_write64(u64 
>> val, u32 reg)
>>   #define  CSR_TLBLO1_V            (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
>>     #define LOONGARCH_CSR_GTLBC        0x15    /* Guest TLB control */
>> -#define  CSR_GTLBC_RID_SHIFT        16
>> -#define  CSR_GTLBC_RID_WIDTH        8
>> -#define  CSR_GTLBC_RID            (_ULCAST_(0xff) << 
>> CSR_GTLBC_RID_SHIFT)
>> +#define  CSR_GTLBC_TGID_SHIFT        16
>> +#define  CSR_GTLBC_TGID_WIDTH        8
>> +#define  CSR_GTLBC_TGID_SHIFT_END    (CSR_GTLBC_TGID_SHIFT + 
>> CSR_GTLBC_TGID_WIDTH - 1)
>> +#define  CSR_GTLBC_TGID            (_ULCAST_(0xff) << 
>> CSR_GTLBC_TGID_SHIFT)
>>   #define  CSR_GTLBC_TOTI_SHIFT        13
>>   #define  CSR_GTLBC_TOTI            (_ULCAST_(0x1) << 
>> CSR_GTLBC_TOTI_SHIFT)
>> -#define  CSR_GTLBC_USERID_SHIFT        12
>> -#define  CSR_GTLBC_USERID        (_ULCAST_(0x1) << 
>> CSR_GTLBC_USERID_SHIFT)
>> +#define  CSR_GTLBC_USETGID_SHIFT    12
>> +#define  CSR_GTLBC_USETGID        (_ULCAST_(0x1) << 
>> CSR_GTLBC_USETGID_SHIFT)
>>   #define  CSR_GTLBC_GMTLBSZ_SHIFT    0
>>   #define  CSR_GTLBC_GMTLBSZ_WIDTH    6
>>   #define  CSR_GTLBC_GMTLBSZ        (_ULCAST_(0x3f) << 
>> CSR_GTLBC_GMTLBSZ_SHIFT)
>> @@ -555,6 +595,7 @@ static __always_inline void iocsr_write64(u64 
>> val, u32 reg)
>>   #define LOONGARCH_CSR_GSTAT        0x50    /* Guest status */
>>   #define  CSR_GSTAT_GID_SHIFT        16
>>   #define  CSR_GSTAT_GID_WIDTH        8
>> +#define  CSR_GSTAT_GID_SHIFT_END    (CSR_GSTAT_GID_SHIFT + 
>> CSR_GSTAT_GID_WIDTH - 1)
>>   #define  CSR_GSTAT_GID            (_ULCAST_(0xff) << 
>> CSR_GSTAT_GID_SHIFT)
>>   #define  CSR_GSTAT_GIDBIT_SHIFT        4
>>   #define  CSR_GSTAT_GIDBIT_WIDTH        6
>> @@ -605,6 +646,12 @@ static __always_inline void iocsr_write64(u64 
>> val, u32 reg)
>>   #define  CSR_GCFG_MATC_GUEST        (_ULCAST_(0x0) << 
>> CSR_GCFG_MATC_SHITF)
>>   #define  CSR_GCFG_MATC_ROOT        (_ULCAST_(0x1) << 
>> CSR_GCFG_MATC_SHITF)
>>   #define  CSR_GCFG_MATC_NEST        (_ULCAST_(0x2) << 
>> CSR_GCFG_MATC_SHITF)
>> +#define  CSR_GCFG_MATP_SHITF        0
>> +#define  CSR_GCFG_MATP_WIDTH        4
>> +#define  CSR_GCFG_MATP_MASK        (_ULCAST_(0x3) << 
>> CSR_GCFG_MATP_SHITF)
>> +#define  CSR_GCFG_MATP_GUEST        (_ULCAST_(0x0) << 
>> CSR_GCFG_MATP_SHITF)
>> +#define  CSR_GCFG_MATP_ROOT        (_ULCAST_(0x1) << 
>> CSR_GCFG_MATP_SHITF)
>> +#define  CSR_GCFG_MATP_NEST        (_ULCAST_(0x2) << 
>> CSR_GCFG_MATP_SHITF)
>>     #define LOONGARCH_CSR_GINTC        0x52    /* Guest interrupt 
>> control */
>>   #define  CSR_GINTC_HC_SHIFT        16
>> @@ -1273,6 +1320,131 @@ static inline void 
>> write_csr_tlbrefill_pagesize(unsigned int size)
>>   #define write_csr_perfctrl3(val)    csr_write64(val, 
>> LOONGARCH_CSR_PERFCTRL3)
>>   #define write_csr_perfcntr3(val)    csr_write64(val, 
>> LOONGARCH_CSR_PERFCNTR3)
>>   +/* Guest related CSRS */
>> +#define read_csr_gtlbc()        csr_read64(LOONGARCH_CSR_GTLBC)
>> +#define write_csr_gtlbc(val)        csr_write64(val, 
>> LOONGARCH_CSR_GTLBC)
>> +#define read_csr_trgp() csr_read64(LOONGARCH_CSR_TRGP)
>> +#define read_csr_gcfg() csr_read64(LOONGARCH_CSR_GCFG)
>> +#define write_csr_gcfg(val)        csr_write64(val, LOONGARCH_CSR_GCFG)
>> +#define read_csr_gstat()        csr_read64(LOONGARCH_CSR_GSTAT)
>> +#define write_csr_gstat(val)        csr_write64(val, 
>> LOONGARCH_CSR_GSTAT)
>> +#define read_csr_gintc()        csr_read64(LOONGARCH_CSR_GINTC)
>> +#define write_csr_gintc(val)        csr_write64(val, 
>> LOONGARCH_CSR_GINTC)
>> +#define read_csr_gcntc()        csr_read64(LOONGARCH_CSR_GCNTC)
>> +#define write_csr_gcntc(val)        csr_write64(val, 
>> LOONGARCH_CSR_GCNTC)
>> +
>> +/* Guest CSRS read and write */
>> +#define read_gcsr_crmd()        gcsr_read(LOONGARCH_CSR_CRMD)
>> +#define write_gcsr_crmd(val)        gcsr_write(val, LOONGARCH_CSR_CRMD)
>> +#define read_gcsr_prmd()        gcsr_read(LOONGARCH_CSR_PRMD)
>> +#define write_gcsr_prmd(val)        gcsr_write(val, LOONGARCH_CSR_PRMD)
>> +#define read_gcsr_euen()        gcsr_read(LOONGARCH_CSR_EUEN)
>> +#define write_gcsr_euen(val)        gcsr_write(val, LOONGARCH_CSR_EUEN)
>> +#define read_gcsr_misc()        gcsr_read(LOONGARCH_CSR_MISC)
>> +#define write_gcsr_misc(val)        gcsr_write(val, LOONGARCH_CSR_MISC)
>> +#define read_gcsr_ecfg()        gcsr_read(LOONGARCH_CSR_ECFG)
>> +#define write_gcsr_ecfg(val)        gcsr_write(val, LOONGARCH_CSR_ECFG)
>> +#define read_gcsr_estat()        gcsr_read(LOONGARCH_CSR_ESTAT)
>> +#define write_gcsr_estat(val)        gcsr_write(val, 
>> LOONGARCH_CSR_ESTAT)
>> +#define read_gcsr_era()            gcsr_read(LOONGARCH_CSR_ERA)
>> +#define write_gcsr_era(val)        gcsr_write(val, LOONGARCH_CSR_ERA)
>> +#define read_gcsr_badv()        gcsr_read(LOONGARCH_CSR_BADV)
>> +#define write_gcsr_badv(val)        gcsr_write(val, LOONGARCH_CSR_BADV)
>> +#define read_gcsr_badi()        gcsr_read(LOONGARCH_CSR_BADI)
>> +#define write_gcsr_badi(val)        gcsr_write(val, LOONGARCH_CSR_BADI)
>> +#define read_gcsr_eentry() gcsr_read(LOONGARCH_CSR_EENTRY)
>> +#define write_gcsr_eentry(val)        gcsr_write(val, 
>> LOONGARCH_CSR_EENTRY)
>> +
>> +#define read_gcsr_tlbidx() gcsr_read(LOONGARCH_CSR_TLBIDX)
>> +#define write_gcsr_tlbidx(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TLBIDX)
>> +#define read_gcsr_tlbhi() gcsr_read(LOONGARCH_CSR_TLBEHI)
>> +#define write_gcsr_tlbhi(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TLBEHI)
>> +#define read_gcsr_tlblo0() gcsr_read(LOONGARCH_CSR_TLBELO0)
>> +#define write_gcsr_tlblo0(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TLBELO0)
>> +#define read_gcsr_tlblo1() gcsr_read(LOONGARCH_CSR_TLBELO1)
>> +#define write_gcsr_tlblo1(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TLBELO1)
>> +
>> +#define read_gcsr_asid()        gcsr_read(LOONGARCH_CSR_ASID)
>> +#define write_gcsr_asid(val)        gcsr_write(val, LOONGARCH_CSR_ASID)
>> +#define read_gcsr_pgdl()        gcsr_read(LOONGARCH_CSR_PGDL)
>> +#define write_gcsr_pgdl(val)        gcsr_write(val, LOONGARCH_CSR_PGDL)
>> +#define read_gcsr_pgdh()        gcsr_read(LOONGARCH_CSR_PGDH)
>> +#define write_gcsr_pgdh(val)        gcsr_write(val, LOONGARCH_CSR_PGDH)
>> +#define write_gcsr_pgd(val)        gcsr_write(val, LOONGARCH_CSR_PGD)
>> +#define read_gcsr_pgd()            gcsr_read(LOONGARCH_CSR_PGD)
>> +#define read_gcsr_pwctl0() gcsr_read(LOONGARCH_CSR_PWCTL0)
>> +#define write_gcsr_pwctl0(val)        gcsr_write(val, 
>> LOONGARCH_CSR_PWCTL0)
>> +#define read_gcsr_pwctl1() gcsr_read(LOONGARCH_CSR_PWCTL1)
>> +#define write_gcsr_pwctl1(val)        gcsr_write(val, 
>> LOONGARCH_CSR_PWCTL1)
>> +#define read_gcsr_stlbpgsize() gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
>> +#define write_gcsr_stlbpgsize(val)    gcsr_write(val, 
>> LOONGARCH_CSR_STLBPGSIZE)
>> +#define read_gcsr_rvacfg() gcsr_read(LOONGARCH_CSR_RVACFG)
>> +#define write_gcsr_rvacfg(val)        gcsr_write(val, 
>> LOONGARCH_CSR_RVACFG)
>> +
>> +#define read_gcsr_cpuid()        gcsr_read(LOONGARCH_CSR_CPUID)
>> +#define write_gcsr_cpuid(val)        gcsr_write(val, 
>> LOONGARCH_CSR_CPUID)
>> +#define read_gcsr_prcfg1() gcsr_read(LOONGARCH_CSR_PRCFG1)
>> +#define write_gcsr_prcfg1(val)        gcsr_write(val, 
>> LOONGARCH_CSR_PRCFG1)
>> +#define read_gcsr_prcfg2() gcsr_read(LOONGARCH_CSR_PRCFG2)
>> +#define write_gcsr_prcfg2(val)        gcsr_write(val, 
>> LOONGARCH_CSR_PRCFG2)
>> +#define read_gcsr_prcfg3() gcsr_read(LOONGARCH_CSR_PRCFG3)
>> +#define write_gcsr_prcfg3(val)        gcsr_write(val, 
>> LOONGARCH_CSR_PRCFG3)
>> +
>> +#define read_gcsr_kscratch0() gcsr_read(LOONGARCH_CSR_KS0)
>> +#define write_gcsr_kscratch0(val)    gcsr_write(val, LOONGARCH_CSR_KS0)
>> +#define read_gcsr_kscratch1() gcsr_read(LOONGARCH_CSR_KS1)
>> +#define write_gcsr_kscratch1(val)    gcsr_write(val, LOONGARCH_CSR_KS1)
>> +#define read_gcsr_kscratch2() gcsr_read(LOONGARCH_CSR_KS2)
>> +#define write_gcsr_kscratch2(val)    gcsr_write(val, LOONGARCH_CSR_KS2)
>> +#define read_gcsr_kscratch3() gcsr_read(LOONGARCH_CSR_KS3)
>> +#define write_gcsr_kscratch3(val)    gcsr_write(val, LOONGARCH_CSR_KS3)
>> +#define read_gcsr_kscratch4() gcsr_read(LOONGARCH_CSR_KS4)
>> +#define write_gcsr_kscratch4(val)    gcsr_write(val, LOONGARCH_CSR_KS4)
>> +#define read_gcsr_kscratch5() gcsr_read(LOONGARCH_CSR_KS5)
>> +#define write_gcsr_kscratch5(val)    gcsr_write(val, LOONGARCH_CSR_KS5)
>> +#define read_gcsr_kscratch6() gcsr_read(LOONGARCH_CSR_KS6)
>> +#define write_gcsr_kscratch6(val)    gcsr_write(val, LOONGARCH_CSR_KS6)
>> +#define read_gcsr_kscratch7() gcsr_read(LOONGARCH_CSR_KS7)
>> +#define write_gcsr_kscratch7(val)    gcsr_write(val, LOONGARCH_CSR_KS7)
>> +
>> +#define read_gcsr_timerid() gcsr_read(LOONGARCH_CSR_TMID)
>> +#define write_gcsr_timerid(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TMID)
>> +#define read_gcsr_timercfg() gcsr_read(LOONGARCH_CSR_TCFG)
>> +#define write_gcsr_timercfg(val)    gcsr_write(val, LOONGARCH_CSR_TCFG)
>> +#define read_gcsr_timertick() gcsr_read(LOONGARCH_CSR_TVAL)
>> +#define write_gcsr_timertick(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TVAL)
>> +#define read_gcsr_timeroffset() gcsr_read(LOONGARCH_CSR_CNTC)
>> +#define write_gcsr_timeroffset(val)    gcsr_write(val, 
>> LOONGARCH_CSR_CNTC)
>> +
>> +#define read_gcsr_llbctl() gcsr_read(LOONGARCH_CSR_LLBCTL)
>> +#define write_gcsr_llbctl(val)        gcsr_write(val, 
>> LOONGARCH_CSR_LLBCTL)
>> +
>> +#define read_gcsr_tlbrentry() gcsr_read(LOONGARCH_CSR_TLBRENTRY)
>> +#define write_gcsr_tlbrentry(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TLBRENTRY)
>> +#define read_gcsr_tlbrbadv() gcsr_read(LOONGARCH_CSR_TLBRBADV)
>> +#define write_gcsr_tlbrbadv(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TLBRBADV)
>> +#define read_gcsr_tlbrera() gcsr_read(LOONGARCH_CSR_TLBRERA)
>> +#define write_gcsr_tlbrera(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TLBRERA)
>> +#define read_gcsr_tlbrsave() gcsr_read(LOONGARCH_CSR_TLBRSAVE)
>> +#define write_gcsr_tlbrsave(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TLBRSAVE)
>> +#define read_gcsr_tlbrelo0() gcsr_read(LOONGARCH_CSR_TLBRELO0)
>> +#define write_gcsr_tlbrelo0(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TLBRELO0)
>> +#define read_gcsr_tlbrelo1() gcsr_read(LOONGARCH_CSR_TLBRELO1)
>> +#define write_gcsr_tlbrelo1(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TLBRELO1)
>> +#define read_gcsr_tlbrehi() gcsr_read(LOONGARCH_CSR_TLBREHI)
>> +#define write_gcsr_tlbrehi(val)        gcsr_write(val, 
>> LOONGARCH_CSR_TLBREHI)
>> +#define read_gcsr_tlbrprmd() gcsr_read(LOONGARCH_CSR_TLBRPRMD)
>> +#define write_gcsr_tlbrprmd(val)    gcsr_write(val, 
>> LOONGARCH_CSR_TLBRPRMD)
>> +
>> +#define read_gcsr_directwin0() gcsr_read(LOONGARCH_CSR_DMWIN0)
>> +#define write_gcsr_directwin0(val)    gcsr_write(val, 
>> LOONGARCH_CSR_DMWIN0)
>> +#define read_gcsr_directwin1() gcsr_read(LOONGARCH_CSR_DMWIN1)
>> +#define write_gcsr_directwin1(val)    gcsr_write(val, 
>> LOONGARCH_CSR_DMWIN1)
>> +#define read_gcsr_directwin2() gcsr_read(LOONGARCH_CSR_DMWIN2)
>> +#define write_gcsr_directwin2(val)    gcsr_write(val, 
>> LOONGARCH_CSR_DMWIN2)
>> +#define read_gcsr_directwin3() gcsr_read(LOONGARCH_CSR_DMWIN3)
>> +#define write_gcsr_directwin3(val)    gcsr_write(val, 
>> LOONGARCH_CSR_DMWIN3)
>> +
>>   /*
>>    * Manipulate bits in a register.
>>    */
>> @@ -1315,15 +1487,26 @@ change_##name(unsigned long change, unsigned 
>> long val)        \
>>   }
>>     #define __BUILD_CSR_OP(name) __BUILD_CSR_COMMON(csr_##name)
>> +#define __BUILD_GCSR_OP(name) __BUILD_CSR_COMMON(gcsr_##name)
>>     __BUILD_CSR_OP(euen)
>>   __BUILD_CSR_OP(ecfg)
>>   __BUILD_CSR_OP(tlbidx)
>> +__BUILD_CSR_OP(gcfg)
>> +__BUILD_CSR_OP(gstat)
>> +__BUILD_CSR_OP(gtlbc)
>> +__BUILD_CSR_OP(gintc)
>> +__BUILD_GCSR_OP(llbctl)
>> +__BUILD_GCSR_OP(tlbidx)
>>     #define set_csr_estat(val)    \
>>       csr_xchg32(val, val, LOONGARCH_CSR_ESTAT)
>>   #define clear_csr_estat(val)    \
>>       csr_xchg32(~(val), val, LOONGARCH_CSR_ESTAT)
>> +#define set_gcsr_estat(val)    \
>> +    gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
>> +#define clear_gcsr_estat(val)    \
>> +    gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
>>     #endif /* __ASSEMBLY__ */
>>   @@ -1408,7 +1591,7 @@ __BUILD_CSR_OP(tlbidx)
>>   #define EXCCODE_WATCH        19    /* Watch address reference */
>>   #define EXCCODE_BTDIS        20    /* Binary Trans. Disabled */
>>   #define EXCCODE_BTE        21    /* Binary Trans. Exception */
>> -#define EXCCODE_PSI        22    /* Guest Privileged Error */
>> +#define EXCCODE_GSPR        22    /* Guest Privileged Error */
>>   #define EXCCODE_HYP        23    /* Hypercall */
>>   #define EXCCODE_GCM        24    /* Guest CSR modified */
>>       #define EXCSUBCODE_GCSC        0    /* Software caused */
>> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
>> new file mode 100644
>> index 000000000..1813410e2
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/trace.h
>> @@ -0,0 +1,137 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
>> +#define _TRACE_KVM_H
>> +
>> +#include <linux/tracepoint.h>
>> +#include <asm/kvm_csr.h>
>> +
>> +#undef    TRACE_SYSTEM
>> +#define TRACE_SYSTEM        kvm
>> +#define TRACE_INCLUDE_PATH    .
>> +#define TRACE_INCLUDE_FILE    trace
>> +
>> +/*
>> + * Tracepoints for VM enters
>> + */
>> +DECLARE_EVENT_CLASS(kvm_transition,
>> +    TP_PROTO(struct kvm_vcpu *vcpu),
>> +    TP_ARGS(vcpu),
>> +    TP_STRUCT__entry(
>> +        __field(unsigned long, pc)
>> +    ),
>> +
>> +    TP_fast_assign(
>> +        __entry->pc = vcpu->arch.pc;
>> +    ),
>> +
>> +    TP_printk("PC: 0x%08lx",
>> +          __entry->pc)
>> +);
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_enter,
>> +         TP_PROTO(struct kvm_vcpu *vcpu),
>> +         TP_ARGS(vcpu));
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_reenter,
>> +         TP_PROTO(struct kvm_vcpu *vcpu),
>> +         TP_ARGS(vcpu));
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_out,
>> +         TP_PROTO(struct kvm_vcpu *vcpu),
>> +         TP_ARGS(vcpu));
>> +
>> +/* Further exit reasons */
>> +#define KVM_TRACE_EXIT_IDLE        64
>> +#define KVM_TRACE_EXIT_CACHE        65
>> +#define KVM_TRACE_EXIT_SIGNAL        66
>> +
>> +/* Tracepoints for VM exits */
>> +#define kvm_trace_symbol_exit_types                    \
>> +    ({ KVM_TRACE_EXIT_IDLE,        "IDLE" },            \
>> +    { KVM_TRACE_EXIT_CACHE,        "CACHE" },            \
>> +    { KVM_TRACE_EXIT_SIGNAL,    "Signal" })
>> +
>> +TRACE_EVENT(kvm_exit,
>> +        TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +        TP_ARGS(vcpu, reason),
>> +        TP_STRUCT__entry(
>> +            __field(unsigned long, pc)
>> +            __field(unsigned int, reason)
>> +        ),
>> +
>> +        TP_fast_assign(
>> +            __entry->pc = vcpu->arch.pc;
>> +            __entry->reason = reason;
>> +        ),
>> +
>> +        TP_printk("[%s]PC: 0x%08lx",
>> +              __print_symbolic(__entry->reason,
>> +                       kvm_trace_symbol_exit_types),
>> +              __entry->pc)
>> +);
>> +
>> +#define KVM_TRACE_AUX_RESTORE        0
>> +#define KVM_TRACE_AUX_SAVE        1
>> +#define KVM_TRACE_AUX_ENABLE        2
>> +#define KVM_TRACE_AUX_DISABLE        3
>> +#define KVM_TRACE_AUX_DISCARD        4
>> +
>> +#define KVM_TRACE_AUX_FPU        1
>> +
>> +#define kvm_trace_symbol_aux_op                \
>> +    ({ KVM_TRACE_AUX_RESTORE, "restore" },        \
>> +    { KVM_TRACE_AUX_SAVE,    "save" },        \
>> +    { KVM_TRACE_AUX_ENABLE,  "enable" },        \
>> +    { KVM_TRACE_AUX_DISABLE, "disable" },        \
>> +    { KVM_TRACE_AUX_DISCARD, "discard" })
>> +
>> +#define kvm_trace_symbol_aux_state            \
>> +    { KVM_TRACE_AUX_FPU,     "FPU" },        \
>> +
>> +TRACE_EVENT(kvm_aux,
>> +        TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
>> +             unsigned int state),
>> +        TP_ARGS(vcpu, op, state),
>> +        TP_STRUCT__entry(
>> +            __field(unsigned long, pc)
>> +            __field(u8, op)
>> +            __field(u8, state)
>> +        ),
>> +
>> +        TP_fast_assign(
>> +            __entry->pc = vcpu->arch.pc;
>> +            __entry->op = op;
>> +            __entry->state = state;
>> +        ),
>> +
>> +        TP_printk("%s %s PC: 0x%08lx",
>> +              __print_symbolic(__entry->op,
>> +                       kvm_trace_symbol_aux_op),
>> +              __print_symbolic(__entry->state,
>> +                       kvm_trace_symbol_aux_state),
>> +              __entry->pc)
>> +);
>> +
>> +TRACE_EVENT(kvm_vpid_change,
>> +        TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
>> +        TP_ARGS(vcpu, vpid),
>> +        TP_STRUCT__entry(
>> +            __field(unsigned long, vpid)
>> +        ),
>> +
>> +        TP_fast_assign(
>> +            __entry->vpid = vpid;
>> +        ),
>> +
>> +        TP_printk("vpid: 0x%08lx",
>> +              __entry->vpid)
>> +);
>> +
>> +#endif /* _TRACE_KVM_H */
>> +
>> +/* This part must be outside protection */
>> +#include <trace/define_trace.h>
>




Thread overview: 70+ messages
2023-02-20  6:57 [PATCH v2 00/29] Add KVM LoongArch support Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 01/29] LoongArch: KVM: Add kvm related header files Tianrui Zhao
2023-02-20 18:22   ` Paolo Bonzini
2023-02-21  2:56     ` Tianrui Zhao
2023-02-21  6:49       ` Paolo Bonzini
2023-02-20 18:54   ` WANG Xuerui
2023-02-21  4:36   ` Xi Ruoyao
2023-02-24  1:27     ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 02/29] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
2023-02-20 17:46   ` Paolo Bonzini
2023-02-21  3:02     ` Tianrui Zhao
2023-02-21  6:59     ` maobibo
2023-02-21  8:14       ` Paolo Bonzini
2023-02-21 10:18         ` maobibo
2023-02-21 10:37           ` WANG Xuerui
2023-02-21 11:39             ` maobibo
2023-02-21 12:38               ` WANG Xuerui
2023-02-20  6:57 ` [PATCH v2 03/29] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 04/29] LoongArch: KVM: Implement VM related functions Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 05/29] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
2023-02-20 18:57   ` WANG Xuerui
2023-02-27  1:39     ` Tianrui Zhao
2023-02-21  4:44   ` Xi Ruoyao
2023-02-21  6:46     ` maobibo
2023-02-21  6:48       ` Paolo Bonzini
2023-02-21  7:12       ` Xi Ruoyao
2023-02-21  7:35         ` Paolo Bonzini
2023-02-20  6:57 ` [PATCH v2 06/29] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
2023-02-20 17:53   ` Paolo Bonzini
2023-02-22  1:52     ` Tianrui Zhao
2023-02-22 12:17       ` Paolo Bonzini
2023-02-23  1:23         ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 07/29] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
2023-02-20 18:44   ` Paolo Bonzini
2023-02-22  2:08     ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 08/29] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
2023-02-20 17:46   ` Paolo Bonzini
2023-02-20 18:45   ` Paolo Bonzini
2023-02-21  3:17     ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 09/29] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 10/29] LoongArch: KVM: Implement vcpu ENABLE_CAP, CHECK_EXTENSION ioctl interface Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 11/29] LoongArch: KVM: Implement fpu related operations for vcpu Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 12/29] LoongArch: KVM: Implement vcpu interrupt operations Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 13/29] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
2023-02-20 18:50   ` Paolo Bonzini
2023-02-21  3:19     ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 14/29] LoongArch: KVM: Implement vcpu load and vcpu put operations Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 15/29] LoongArch: KVM: Implement vcpu status description Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 16/29] LoongArch: KVM: Implement update VM id function Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 17/29] LoongArch: KVM: Implement virtual machine tlb operations Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 18/29] LoongArch: KVM: Implement vcpu timer operations Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 19/29] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 20/29] LoongArch: KVM: Implement handle csr excption Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 21/29] LoongArch: KVM: Implement handle iocsr exception Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 22/29] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
2023-02-20 18:40   ` Paolo Bonzini
2023-02-21  9:48     ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 23/29] LoongArch: KVM: Implement handle gspr exception Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 24/29] LoongArch: KVM: Implement handle mmio exception Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 25/29] LoongArch: KVM: Implement handle fpu exception Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 26/29] LoongArch: KVM: Implement kvm exception vector Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 27/29] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
2023-02-21  7:45   ` Paolo Bonzini
2023-02-21 13:00     ` Tianrui Zhao
2023-02-21  8:18   ` Paolo Bonzini
2023-02-21 12:58     ` Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 28/29] LoongArch: KVM: Implement probe virtualization when loongarch cpu init Tianrui Zhao
2023-02-20  6:57 ` [PATCH v2 29/29] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
2023-02-20  9:47   ` kernel test robot
2023-02-20 11:09   ` kernel test robot
