* [PATCH v13 00/30] Add KVM LoongArch support
@ 2023-06-09  9:04 Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
                   ` (8 more replies)
  0 siblings, 9 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:04 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

This series adds KVM LoongArch support. Loongson 3A5000 supports
hardware-assisted virtualization. With cpu virtualization, guest mode
has separate hardware-supported user mode and kernel mode. With memory
virtualization, there is a two-level hardware mmu table for guest mode
and host mode. There is also a separate hardware cpu timer with a
constant frequency in guest mode, so a vm can migrate between hosts
with different frequencies. Currently, we are able to boot LoongArch
Linux guests.

A few key aspects of KVM LoongArch added by this series are:
1. Enable the kvm hardware function when the kvm module is loaded.
2. Implement VM and vcpu related ioctl interfaces such as vcpu create,
   vcpu run, etc. The GET_ONE_REG/SET_ONE_REG ioctl commands are used to
   access general registers one by one (see the sketch after this list).
3. Hardware accesses to the MMU, timer and csr are emulated in the kernel.
4. Devices accessed via mmio and iocsr, such as the APIC, IPI and pci
   devices, are emulated in user space.
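
As a minimal userspace sketch of the ONE_REG access mentioned in item 2
(the vcpu fd handling and the register id value are placeholders, not
code from this series):

   #include <linux/kvm.h>
   #include <sys/ioctl.h>

   /* Read one guest register; vcpu_fd is an open KVM vcpu descriptor. */
   static int get_one_reg(int vcpu_fd, __u64 id, __u64 *val)
   {
           struct kvm_one_reg reg = {
                   .id   = id,                /* arch-specific register id */
                   .addr = (__u64)(unsigned long)val, /* userspace buffer */
           };

           return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
   }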

The running environment of the LoongArch virt machine:
1. Cross toolchain to build the kernel and uefi:
   $ wget https://github.com/loongson/build-tools/releases/download/2022.09.06/loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz
   tar -vxf loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz  -C /opt
   export PATH=/opt/cross-tools/bin:$PATH
   export LD_LIBRARY_PATH=/opt/cross-tools/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=/opt/cross-tools/loongarch64-unknown-linux-gnu/lib/:$LD_LIBRARY_PATH
2. This series is based on the linux source code:
   https://github.com/loongson/linux-loongarch-kvm
   Build command:
   git checkout kvm-loongarch
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- loongson3_defconfig
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu-
3. QEMU hypervisor with LoongArch support:
   https://github.com/loongson/qemu
   Build command:
   git checkout kvm-loongarch
   ./configure --target-list="loongarch64-softmmu"  --enable-kvm
   make
4. UEFI BIOS of the LoongArch virt machine:
   Link: https://github.com/tianocore/edk2-platforms/tree/master/Platform/Loongson/LoongArchQemuPkg#readme
5. You can also download the binary files we have already built:
   https://github.com/yangxiaojuan-loongson/qemu-binary
The command to boot the loongarch virt machine:
   $ qemu-system-loongarch64 -machine virt -m 4G -cpu la464 \
   -smp 1 -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd ramdisk \
   -serial stdio   -monitor telnet:localhost:4495,server,nowait \
   -append "root=/dev/ram rdinit=/sbin/init console=ttyS0,115200" \
   --nographic

Changes for v13:
1. Remove patch-28 "Implement probe virtualization when cpu init": the
virtualization information about FPU, PMP and LSX in
guest.options/options_dyn is not used, and the gcfg reg value can be
read in kvm_hardware_enable, so the previous cpu_probe_lvz function is
removed.
2. Fix the vcpu_enable_cap interface: it should return -EINVAL
directly, as the FPU cap is enabled by default and no other caps are
supported yet.
3. Simplify the jirl instruction to jr when there is no return address,
and simplify the case HW0 ... HW7 statement in interrupt.c.
4. Rename host_stack, host_gp in kvm_vcpu_arch to host_sp, host_tp.
5. Remove the 'cpu' parameter from _kvm_check_requests, as 'cpu' is not
used, and from the kvm_check_vmid function, as it can get the cpu
number by itself.

Changes for v12:
1. Improve the gcsr write/read/xchg interface to avoid the previous
instruction statements like parse_r and make the code easier to
understand; the instructions are implemented in asm/insn-def.h and
consist of "opcode", "rj", "rd" and "simm14" arguments.
2. Fix the maintainers list of LoongArch KVM.

Changes for v11:
1. Add maintainers for LoongArch KVM.

Changes for v10:
1. Fix grammatical problems in LoongArch documentation.
2. It is not necessary to save or restore LOONGARCH_CSR_PGD on vcpu
put and vcpu load, so we remove it.

Changes for v9:
1. Apply the newly defined interrupt number macros in loongarch.h to
kvm, such as INT_SWI0, INT_HWI0, INT_TI, INT_IPI, etc., and remove the
previous unused macros.
2. Remove unused variables in kvm_vcpu_arch, and reorder the variables
to make them more standard.

Changes for v8:
1. Adjust the cpu_data.guest.options structure: add the ases flag into
it, and remove the previous guest.ases. We do this to keep it
consistent with the host cpu_data.options structure.
2. Remove the "#include <asm/kvm_host.h>" from files which also
include <linux/kvm_host.h>, as linux/kvm_host.h already includes
asm/kvm_host.h.
3. Fix some nonstandard spellings and grammar errors in comments, and
improve the code format a little.

Changes for v7:
1. Fix the kvm_save/restore_hw_gcsr compile warnings reported by the
kernel test robot. The report link is:
https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
2. Fix loongarch kvm trace-related compile problems.

Changes for v6:
1. Fix the Documentation/virt/kvm/api.rst compile warning about the
loongarch parts.

Changes for v5:
1. Implement the get/set mp_state ioctl interface; only the
KVM_MP_STATE_RUNNABLE state is supported for now, and other states
will be added in the future. The state is also used when the vcpu runs
an idle instruction: if the vcpu state is changed to RUNNABLE, the
vcpu can be woken up (see the sketch after this list).
2. Supplement the kvm document about the loongarch-specific parts, such
as adding api introductions for GET/SET_ONE_REG, GET/SET_FPU,
GET/SET_MP_STATE, etc.
3. Improve the kvm_switch_to_guest function in switch.S: remove the
previous tmp, tmp1 arguments and replace them with the t0, t1 registers.
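
A rough userspace sketch of the mp_state interface from item 1 (vcpu_fd
is assumed to be an open vcpu file descriptor):

   #include <linux/kvm.h>
   #include <sys/ioctl.h>

   struct kvm_mp_state mp_state;

   /* Only KVM_MP_STATE_RUNNABLE is supported on LoongArch for now. */
   ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp_state);
   mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
   ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp_state);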

Changes for v4:
1. Add a csr_need_update flag in _vcpu_put: most csr registers stay
unchanged across a process context switch, so we need not update them
every time, only when the soft csr differs from the hardware. That is
to say, all csrs should be updated after the vcpu enters the guest; as
for set_csr_ioctl, we have written the soft csr to keep it consistent
with the hardware.
2. Improve the get/set_csr_ioctl interface: we set a SW, HW or INVALID
flag for every csr according to its features at kvm init. In
get/set_csr_ioctl, if the csr is HW we use the gcsrrd/gcsrwr
instructions to access it, if it is SW we emulate it in software, and
others return an error (see the sketch after this list).
3. Add a set_hw_gcsr function in csr_ops.S, used in set_csr_ioctl. We
have split the hw gcsr into three parts, so we can calculate the code
offset from the gcsrid and jump there to run the gcsrwr instruction.
This makes the code simpler and avoids the previous SET_HW_GCSR(XXX)
interface.
4. Improve the kvm mmu functions, such as the flush page table and
make-clean page table interfaces.
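
A condensed sketch of the flag dispatch from item 2 (the helper and flag
names are stand-ins for the real implementations in this series):

   static int get_csr(struct kvm_vcpu *vcpu, unsigned int id, u64 *val)
   {
           if (get_gcsr_flag(id) & HW_GCSR)        /* hardware-backed csr */
                   *val = kvm_read_hw_gcsr(id);    /* gcsrrd instruction */
           else if (get_gcsr_flag(id) & SW_GCSR)   /* software-emulated csr */
                   *val = kvm_read_sw_gcsr(vcpu->arch.csr, id);
           else                                    /* INVALID flag */
                   return -EINVAL;
           return 0;
   }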

Changes for v3:
1. Remove the vpid array list in kvm_vcpu_arch and use a single vpid
variable, because a vpid will never be recycled if a vCPU migrates from
physical CPU A to B and back to A.
2. Make some constant variables in kvm_context global, such as
vpid_mask, guest_eentry, enter_guest, etc.
3. Add some new tracepoints, such as kvm_trace_idle, kvm_trace_cache,
kvm_trace_gspr, etc.
4. There is some duplicated code in kvm_handle_exit and kvm_vcpu_run,
so we move it into a new function, kvm_pre_enter_guest.
5. Change the RESUME_HOST and RESUME_GUEST values: return 1 to resume
the guest and "<= 0" to resume the host.
6. Fcsr and fpu registers are saved/restored together.

Changes for v2:
1. Separate the original patch-01 and patch-03 into small patches; the
patches mainly cover kvm module init, module exit, vcpu create, vcpu
run, etc.
2. Remove the original KVM_{GET,SET}_CSRS ioctl from the kvm uapi
header, and use the common KVM_{GET,SET}_ONE_REG to access registers.
3. Use BIT(x) to replace the "1 << n_bits" statement.

Tianrui Zhao (30):
  LoongArch: KVM: Add kvm related header files
  LoongArch: KVM: Implement kvm module related interface
  LoongArch: KVM: Implement kvm hardware enable, disable interface
  LoongArch: KVM: Implement VM related functions
  LoongArch: KVM: Add vcpu related header files
  LoongArch: KVM: Implement vcpu create and destroy interface
  LoongArch: KVM: Implement vcpu run interface
  LoongArch: KVM: Implement vcpu handle exit interface
  LoongArch: KVM: Implement vcpu get, vcpu set registers
  LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
  LoongArch: KVM: Implement fpu related operations for vcpu
  LoongArch: KVM: Implement vcpu interrupt operations
  LoongArch: KVM: Implement misc vcpu related interfaces
  LoongArch: KVM: Implement vcpu load and vcpu put operations
  LoongArch: KVM: Implement vcpu status description
  LoongArch: KVM: Implement update VM id function
  LoongArch: KVM: Implement virtual machine tlb operations
  LoongArch: KVM: Implement vcpu timer operations
  LoongArch: KVM: Implement kvm mmu operations
  LoongArch: KVM: Implement handle csr exception
  LoongArch: KVM: Implement handle iocsr exception
  LoongArch: KVM: Implement handle idle exception
  LoongArch: KVM: Implement handle gspr exception
  LoongArch: KVM: Implement handle mmio exception
  LoongArch: KVM: Implement handle fpu exception
  LoongArch: KVM: Implement kvm exception vector
  LoongArch: KVM: Implement vcpu world switch
  LoongArch: KVM: Enable kvm config and add the makefile
  LoongArch: KVM: Supplement kvm document about LoongArch-specific part
  LoongArch: KVM: Add maintainers for LoongArch KVM

 Documentation/virt/kvm/api.rst             |  71 +-
 MAINTAINERS                                |  12 +
 arch/loongarch/Kbuild                      |   1 +
 arch/loongarch/Kconfig                     |   2 +
 arch/loongarch/configs/loongson3_defconfig |   2 +
 arch/loongarch/include/asm/insn-def.h      |  55 ++
 arch/loongarch/include/asm/inst.h          |  16 +
 arch/loongarch/include/asm/kvm_csr.h       | 231 ++++++
 arch/loongarch/include/asm/kvm_host.h      | 253 ++++++
 arch/loongarch/include/asm/kvm_types.h     |  11 +
 arch/loongarch/include/asm/kvm_vcpu.h      |  97 +++
 arch/loongarch/include/asm/loongarch.h     |  20 +-
 arch/loongarch/include/uapi/asm/kvm.h      | 106 +++
 arch/loongarch/kernel/asm-offsets.c        |  32 +
 arch/loongarch/kvm/Kconfig                 |  38 +
 arch/loongarch/kvm/Makefile                |  22 +
 arch/loongarch/kvm/csr_ops.S               |  76 ++
 arch/loongarch/kvm/exit.c                  | 707 +++++++++++++++++
 arch/loongarch/kvm/interrupt.c             | 113 +++
 arch/loongarch/kvm/main.c                  | 347 ++++++++
 arch/loongarch/kvm/mmu.c                   | 725 +++++++++++++++++
 arch/loongarch/kvm/switch.S                | 301 +++++++
 arch/loongarch/kvm/timer.c                 | 266 +++++++
 arch/loongarch/kvm/tlb.c                   |  32 +
 arch/loongarch/kvm/trace.h                 | 168 ++++
 arch/loongarch/kvm/vcpu.c                  | 869 +++++++++++++++++++++
 arch/loongarch/kvm/vm.c                    |  76 ++
 arch/loongarch/kvm/vmid.c                  |  66 ++
 include/uapi/linux/kvm.h                   |   9 +
 29 files changed, 4710 insertions(+), 14 deletions(-)
 create mode 100644 arch/loongarch/include/asm/insn-def.h
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/include/asm/kvm_host.h
 create mode 100644 arch/loongarch/include/asm/kvm_types.h
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
 create mode 100644 arch/loongarch/kvm/Kconfig
 create mode 100644 arch/loongarch/kvm/Makefile
 create mode 100644 arch/loongarch/kvm/csr_ops.S
 create mode 100644 arch/loongarch/kvm/exit.c
 create mode 100644 arch/loongarch/kvm/interrupt.c
 create mode 100644 arch/loongarch/kvm/main.c
 create mode 100644 arch/loongarch/kvm/mmu.c
 create mode 100644 arch/loongarch/kvm/switch.S
 create mode 100644 arch/loongarch/kvm/timer.c
 create mode 100644 arch/loongarch/kvm/tlb.c
 create mode 100644 arch/loongarch/kvm/trace.h
 create mode 100644 arch/loongarch/kvm/vcpu.c
 create mode 100644 arch/loongarch/kvm/vm.c
 create mode 100644 arch/loongarch/kvm/vmid.c

-- 
2.39.1



* [PATCH v13 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
@ 2023-06-09  9:04 ` Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:04 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Implement the kvm hardware enable and disable interface, setting the
guest config register to enable virtualization features when the
interface is called.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/main.c | 64 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
index f98c1619725f..5ebae1ea7565 100644
--- a/arch/loongarch/kvm/main.c
+++ b/arch/loongarch/kvm/main.c
@@ -195,6 +195,70 @@ static void _kvm_init_gcsr_flag(void)
 	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR3);
 }
 
+void kvm_init_vmcs(struct kvm *kvm)
+{
+	kvm->arch.vmcs = vmcs;
+}
+
+long kvm_arch_dev_ioctl(struct file *filp,
+			unsigned int ioctl, unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
+
+#ifdef CONFIG_KVM_GENERIC_HARDWARE_ENABLING
+int kvm_arch_hardware_enable(void)
+{
+	unsigned long env, gcfg = 0;
+
+	env = read_csr_gcfg();
+	/* First init gtlbc, gcfg, gstat, gintc. All guests use the same config */
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/*
+	 * Enable virtualization features granting guest direct control of
+	 * certain features:
+	 * GCI=2:       Trap on init or unimplemented cache instructions.
+	 * TORU=0:      Trap on Root Unimplemented.
+	 * CACTRL=1:    Root control cache.
+	 * TOP=0:       Trap on Privilege.
+	 * TOE=0:       Trap on Exception.
+	 * TIT=0:       Trap on Timer.
+	 */
+	if (env & CSR_GCFG_GCIP_ALL)
+		gcfg |= CSR_GCFG_GCI_SECURE;
+	if (env & CSR_GCFG_MATC_ROOT)
+		gcfg |= CSR_GCFG_MATC_ROOT;
+
+	gcfg |= CSR_GCFG_TIT;
+	write_csr_gcfg(gcfg);
+
+	kvm_flush_tlb_all();
+
+	/* Enable using TGID  */
+	set_csr_gtlbc(CSR_GTLBC_USETGID);
+	kvm_debug("gtlbc:%llx gintc:%llx gstat:%llx gcfg:%llx",
+			read_csr_gtlbc(), read_csr_gintc(),
+			read_csr_gstat(), read_csr_gcfg());
+
+	return 0;
+}
+
+void kvm_arch_hardware_disable(void)
+{
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/* Flush any remaining guest TLB entries */
+	kvm_flush_tlb_all();
+}
+#endif
+
 static int kvm_loongarch_env_init(void)
 {
 	struct kvm_context *context;
-- 
2.39.1



* [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
@ 2023-06-09  9:04 ` Tianrui Zhao
  2023-06-15  9:51   ` Huacai Chen
  2023-06-09  9:04 ` [PATCH v13 07/30] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:04 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Add LoongArch vcpu-related header files, including vcpu csr
information, irq number definitions, and some vcpu interfaces.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/insn-def.h  |  55 ++++++
 arch/loongarch/include/asm/kvm_csr.h   | 231 +++++++++++++++++++++++++
 arch/loongarch/include/asm/kvm_vcpu.h  |  97 +++++++++++
 arch/loongarch/include/asm/loongarch.h |  20 ++-
 arch/loongarch/kvm/trace.h             | 168 ++++++++++++++++++
 5 files changed, 566 insertions(+), 5 deletions(-)
 create mode 100644 arch/loongarch/include/asm/insn-def.h
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/kvm/trace.h

diff --git a/arch/loongarch/include/asm/insn-def.h b/arch/loongarch/include/asm/insn-def.h
new file mode 100644
index 000000000000..e285ee108fb0
--- /dev/null
+++ b/arch/loongarch/include/asm/insn-def.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_INSN_DEF_H
+#define __ASM_INSN_DEF_H
+
+#include <linux/stringify.h>
+#include <asm/gpr-num.h>
+#include <asm/asm.h>
+
+#define INSN_STR(x)		__stringify(x)
+#define CSR_RD_SHIFT		0
+#define CSR_RJ_SHIFT		5
+#define CSR_SIMM14_SHIFT	10
+#define CSR_OPCODE_SHIFT	24
+
+#define DEFINE_INSN_CSR							\
+	__DEFINE_ASM_GPR_NUMS						\
+"	.macro insn_csr, opcode, rj, rd, simm14\n"			\
+"	.4byte	((\\opcode << " INSN_STR(CSR_OPCODE_SHIFT) ") |"	\
+"		 (.L__gpr_num_\\rj << " INSN_STR(CSR_RJ_SHIFT) ") |"	\
+"		 (.L__gpr_num_\\rd << " INSN_STR(CSR_RD_SHIFT) ") |"	\
+"		 (\\simm14 << " INSN_STR(CSR_SIMM14_SHIFT) "))\n"	\
+"	.endm\n"
+
+#define UNDEFINE_INSN_CSR						\
+"	.purgem insn_csr\n"
+
+#define __INSN_CSR(opcode, rj, rd, simm14)				\
+	DEFINE_INSN_CSR							\
+	"insn_csr " opcode ", " rj ", " rd ", " simm14 "\n"		\
+	UNDEFINE_INSN_CSR
+
+
+#define INSN_CSR(opcode, rj, rd, simm14)				\
+	__INSN_CSR(LARCH_##opcode, LARCH_##rj, LARCH_##rd,		\
+		   LARCH_##simm14)
+
+#define __ASM_STR(x)		#x
+#define LARCH_OPCODE(v)		__ASM_STR(v)
+#define LARCH_SIMM14(v)		__ASM_STR(v)
+#define __LARCH_REG(v)		__ASM_STR(v)
+#define LARCH___RD(v)		__LARCH_REG(v)
+#define LARCH___RJ(v)		__LARCH_REG(v)
+#define LARCH_OPCODE_GCSR	LARCH_OPCODE(5)
+
+#define GCSR_read(csr, rd)						\
+	INSN_CSR(OPCODE_GCSR, __RJ(zero), __RD(rd), SIMM14(csr))
+
+#define GCSR_write(csr, rd)						\
+	INSN_CSR(OPCODE_GCSR, __RJ($r1), __RD(rd), SIMM14(csr))
+
+#define GCSR_xchg(csr, rj, rd)						\
+	INSN_CSR(OPCODE_GCSR, __RJ(rj), __RD(rd), SIMM14(csr))
+
+#endif /* __ASM_INSN_DEF_H */
diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
new file mode 100644
index 000000000000..10dba5bc6df1
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_csr.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_CSR_H__
+#define __ASM_LOONGARCH_KVM_CSR_H__
+#include <asm/loongarch.h>
+#include <asm/kvm_vcpu.h>
+#include <linux/uaccess.h>
+#include <linux/kvm_host.h>
+
+/*
+ * Instructions will be available in binutils later
+ * read val from guest csr register %[csr]
+ * gcsrrd %[val], %[csr]
+ */
+#define gcsr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ (GCSR_read(csr, %0)		\
+				: "=r" (__v) :			\
+				: "memory");			\
+	__v;							\
+})
+
+/*
+ * Instructions will be available in binutils later
+ * write val to guest csr register %[csr]
+ * gcsrwr %[val], %[csr]
+ */
+#define gcsr_write(val, csr)					\
+({								\
+	register unsigned long __v = val;			\
+	__asm__ __volatile__ (GCSR_write(csr, %0)		\
+				: "+r" (__v) :			\
+				: "memory");			\
+})
+
+/*
+ * Instructions will be available in binutils later
+ * replace masked bits of guest csr register %[csr] with val
+ * gcsrxchg %[val], %[mask], %[csr]
+ */
+#define gcsr_xchg(val, mask, csr)				\
+({								\
+	register unsigned long __v = val;			\
+	__asm__ __volatile__ (GCSR_xchg(csr, %1, %0)		\
+				: "+r" (__v)			\
+				: "r"  (mask)			\
+				: "memory");			\
+	__v;							\
+})
+
+/* Guest CSRS read and write */
+#define read_gcsr_crmd()		gcsr_read(LOONGARCH_CSR_CRMD)
+#define write_gcsr_crmd(val)		gcsr_write(val, LOONGARCH_CSR_CRMD)
+#define read_gcsr_prmd()		gcsr_read(LOONGARCH_CSR_PRMD)
+#define write_gcsr_prmd(val)		gcsr_write(val, LOONGARCH_CSR_PRMD)
+#define read_gcsr_euen()		gcsr_read(LOONGARCH_CSR_EUEN)
+#define write_gcsr_euen(val)		gcsr_write(val, LOONGARCH_CSR_EUEN)
+#define read_gcsr_misc()		gcsr_read(LOONGARCH_CSR_MISC)
+#define write_gcsr_misc(val)		gcsr_write(val, LOONGARCH_CSR_MISC)
+#define read_gcsr_ecfg()		gcsr_read(LOONGARCH_CSR_ECFG)
+#define write_gcsr_ecfg(val)		gcsr_write(val, LOONGARCH_CSR_ECFG)
+#define read_gcsr_estat()		gcsr_read(LOONGARCH_CSR_ESTAT)
+#define write_gcsr_estat(val)		gcsr_write(val, LOONGARCH_CSR_ESTAT)
+#define read_gcsr_era()			gcsr_read(LOONGARCH_CSR_ERA)
+#define write_gcsr_era(val)		gcsr_write(val, LOONGARCH_CSR_ERA)
+#define read_gcsr_badv()		gcsr_read(LOONGARCH_CSR_BADV)
+#define write_gcsr_badv(val)		gcsr_write(val, LOONGARCH_CSR_BADV)
+#define read_gcsr_badi()		gcsr_read(LOONGARCH_CSR_BADI)
+#define write_gcsr_badi(val)		gcsr_write(val, LOONGARCH_CSR_BADI)
+#define read_gcsr_eentry()		gcsr_read(LOONGARCH_CSR_EENTRY)
+#define write_gcsr_eentry(val)		gcsr_write(val, LOONGARCH_CSR_EENTRY)
+
+#define read_gcsr_tlbidx()		gcsr_read(LOONGARCH_CSR_TLBIDX)
+#define write_gcsr_tlbidx(val)		gcsr_write(val, LOONGARCH_CSR_TLBIDX)
+#define read_gcsr_tlbhi()		gcsr_read(LOONGARCH_CSR_TLBEHI)
+#define write_gcsr_tlbhi(val)		gcsr_write(val, LOONGARCH_CSR_TLBEHI)
+#define read_gcsr_tlblo0()		gcsr_read(LOONGARCH_CSR_TLBELO0)
+#define write_gcsr_tlblo0(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO0)
+#define read_gcsr_tlblo1()		gcsr_read(LOONGARCH_CSR_TLBELO1)
+#define write_gcsr_tlblo1(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO1)
+
+#define read_gcsr_asid()		gcsr_read(LOONGARCH_CSR_ASID)
+#define write_gcsr_asid(val)		gcsr_write(val, LOONGARCH_CSR_ASID)
+#define read_gcsr_pgdl()		gcsr_read(LOONGARCH_CSR_PGDL)
+#define write_gcsr_pgdl(val)		gcsr_write(val, LOONGARCH_CSR_PGDL)
+#define read_gcsr_pgdh()		gcsr_read(LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgdh(val)		gcsr_write(val, LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgd(val)		gcsr_write(val, LOONGARCH_CSR_PGD)
+#define read_gcsr_pgd()			gcsr_read(LOONGARCH_CSR_PGD)
+#define read_gcsr_pwctl0()		gcsr_read(LOONGARCH_CSR_PWCTL0)
+#define write_gcsr_pwctl0(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL0)
+#define read_gcsr_pwctl1()		gcsr_read(LOONGARCH_CSR_PWCTL1)
+#define write_gcsr_pwctl1(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL1)
+#define read_gcsr_stlbpgsize()		gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
+#define write_gcsr_stlbpgsize(val)	gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
+#define read_gcsr_rvacfg()		gcsr_read(LOONGARCH_CSR_RVACFG)
+#define write_gcsr_rvacfg(val)		gcsr_write(val, LOONGARCH_CSR_RVACFG)
+
+#define read_gcsr_cpuid()		gcsr_read(LOONGARCH_CSR_CPUID)
+#define write_gcsr_cpuid(val)		gcsr_write(val, LOONGARCH_CSR_CPUID)
+#define read_gcsr_prcfg1()		gcsr_read(LOONGARCH_CSR_PRCFG1)
+#define write_gcsr_prcfg1(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG1)
+#define read_gcsr_prcfg2()		gcsr_read(LOONGARCH_CSR_PRCFG2)
+#define write_gcsr_prcfg2(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG2)
+#define read_gcsr_prcfg3()		gcsr_read(LOONGARCH_CSR_PRCFG3)
+#define write_gcsr_prcfg3(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG3)
+
+#define read_gcsr_kscratch0()		gcsr_read(LOONGARCH_CSR_KS0)
+#define write_gcsr_kscratch0(val)	gcsr_write(val, LOONGARCH_CSR_KS0)
+#define read_gcsr_kscratch1()		gcsr_read(LOONGARCH_CSR_KS1)
+#define write_gcsr_kscratch1(val)	gcsr_write(val, LOONGARCH_CSR_KS1)
+#define read_gcsr_kscratch2()		gcsr_read(LOONGARCH_CSR_KS2)
+#define write_gcsr_kscratch2(val)	gcsr_write(val, LOONGARCH_CSR_KS2)
+#define read_gcsr_kscratch3()		gcsr_read(LOONGARCH_CSR_KS3)
+#define write_gcsr_kscratch3(val)	gcsr_write(val, LOONGARCH_CSR_KS3)
+#define read_gcsr_kscratch4()		gcsr_read(LOONGARCH_CSR_KS4)
+#define write_gcsr_kscratch4(val)	gcsr_write(val, LOONGARCH_CSR_KS4)
+#define read_gcsr_kscratch5()		gcsr_read(LOONGARCH_CSR_KS5)
+#define write_gcsr_kscratch5(val)	gcsr_write(val, LOONGARCH_CSR_KS5)
+#define read_gcsr_kscratch6()		gcsr_read(LOONGARCH_CSR_KS6)
+#define write_gcsr_kscratch6(val)	gcsr_write(val, LOONGARCH_CSR_KS6)
+#define read_gcsr_kscratch7()		gcsr_read(LOONGARCH_CSR_KS7)
+#define write_gcsr_kscratch7(val)	gcsr_write(val, LOONGARCH_CSR_KS7)
+
+#define read_gcsr_timerid()		gcsr_read(LOONGARCH_CSR_TMID)
+#define write_gcsr_timerid(val)		gcsr_write(val, LOONGARCH_CSR_TMID)
+#define read_gcsr_timercfg()		gcsr_read(LOONGARCH_CSR_TCFG)
+#define write_gcsr_timercfg(val)	gcsr_write(val, LOONGARCH_CSR_TCFG)
+#define read_gcsr_timertick()		gcsr_read(LOONGARCH_CSR_TVAL)
+#define write_gcsr_timertick(val)	gcsr_write(val, LOONGARCH_CSR_TVAL)
+#define read_gcsr_timeroffset()		gcsr_read(LOONGARCH_CSR_CNTC)
+#define write_gcsr_timeroffset(val)	gcsr_write(val, LOONGARCH_CSR_CNTC)
+
+#define read_gcsr_llbctl()		gcsr_read(LOONGARCH_CSR_LLBCTL)
+#define write_gcsr_llbctl(val)		gcsr_write(val, LOONGARCH_CSR_LLBCTL)
+
+#define read_gcsr_tlbrentry()		gcsr_read(LOONGARCH_CSR_TLBRENTRY)
+#define write_gcsr_tlbrentry(val)	gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
+#define read_gcsr_tlbrbadv()		gcsr_read(LOONGARCH_CSR_TLBRBADV)
+#define write_gcsr_tlbrbadv(val)	gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
+#define read_gcsr_tlbrera()		gcsr_read(LOONGARCH_CSR_TLBRERA)
+#define write_gcsr_tlbrera(val)		gcsr_write(val, LOONGARCH_CSR_TLBRERA)
+#define read_gcsr_tlbrsave()		gcsr_read(LOONGARCH_CSR_TLBRSAVE)
+#define write_gcsr_tlbrsave(val)	gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
+#define read_gcsr_tlbrelo0()		gcsr_read(LOONGARCH_CSR_TLBRELO0)
+#define write_gcsr_tlbrelo0(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
+#define read_gcsr_tlbrelo1()		gcsr_read(LOONGARCH_CSR_TLBRELO1)
+#define write_gcsr_tlbrelo1(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
+#define read_gcsr_tlbrehi()		gcsr_read(LOONGARCH_CSR_TLBREHI)
+#define write_gcsr_tlbrehi(val)		gcsr_write(val, LOONGARCH_CSR_TLBREHI)
+#define read_gcsr_tlbrprmd()		gcsr_read(LOONGARCH_CSR_TLBRPRMD)
+#define write_gcsr_tlbrprmd(val)	gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
+
+#define read_gcsr_directwin0()		gcsr_read(LOONGARCH_CSR_DMWIN0)
+#define write_gcsr_directwin0(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN0)
+#define read_gcsr_directwin1()		gcsr_read(LOONGARCH_CSR_DMWIN1)
+#define write_gcsr_directwin1(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN1)
+#define read_gcsr_directwin2()		gcsr_read(LOONGARCH_CSR_DMWIN2)
+#define write_gcsr_directwin2(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN2)
+#define read_gcsr_directwin3()		gcsr_read(LOONGARCH_CSR_DMWIN3)
+#define write_gcsr_directwin3(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN3)
+
+/* Guest related CSRs */
+#define read_csr_gtlbc()		csr_read64(LOONGARCH_CSR_GTLBC)
+#define write_csr_gtlbc(val)		csr_write64(val, LOONGARCH_CSR_GTLBC)
+#define read_csr_trgp()			csr_read64(LOONGARCH_CSR_TRGP)
+#define read_csr_gcfg()			csr_read64(LOONGARCH_CSR_GCFG)
+#define write_csr_gcfg(val)		csr_write64(val, LOONGARCH_CSR_GCFG)
+#define read_csr_gstat()		csr_read64(LOONGARCH_CSR_GSTAT)
+#define write_csr_gstat(val)		csr_write64(val, LOONGARCH_CSR_GSTAT)
+#define read_csr_gintc()		csr_read64(LOONGARCH_CSR_GINTC)
+#define write_csr_gintc(val)		csr_write64(val, LOONGARCH_CSR_GINTC)
+#define read_csr_gcntc()		csr_read64(LOONGARCH_CSR_GCNTC)
+#define write_csr_gcntc(val)		csr_write64(val, LOONGARCH_CSR_GCNTC)
+
+#define __BUILD_GCSR_OP(name)		__BUILD_CSR_COMMON(gcsr_##name)
+
+__BUILD_GCSR_OP(llbctl)
+__BUILD_GCSR_OP(tlbidx)
+__BUILD_CSR_OP(gcfg)
+__BUILD_CSR_OP(gstat)
+__BUILD_CSR_OP(gtlbc)
+__BUILD_CSR_OP(gintc)
+
+#define set_gcsr_estat(val)	\
+	gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
+#define clear_gcsr_estat(val)	\
+	gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
+
+#define kvm_read_hw_gcsr(id)		gcsr_read(id)
+#define kvm_write_hw_gcsr(csr, id, val)	gcsr_write(val, id)
+
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+#define kvm_save_hw_gcsr(csr, gid)	(csr->csrs[gid] = gcsr_read(gid))
+#define kvm_restore_hw_gcsr(csr, gid)	(gcsr_write(csr->csrs[gid], gid))
+
+static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+	return csr->csrs[gid];
+}
+
+static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr,
+					      int gid, unsigned long val)
+{
+	csr->csrs[gid] = val;
+}
+
+static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
+					    int gid, unsigned long val)
+{
+	csr->csrs[gid] |= val;
+}
+
+static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
+					       int gid, unsigned long mask,
+					       unsigned long val)
+{
+	unsigned long _mask = mask;
+
+	csr->csrs[gid] &= ~_mask;
+	csr->csrs[gid] |= val & _mask;
+}
+#endif	/* __ASM_LOONGARCH_KVM_CSR_H__ */
diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
new file mode 100644
index 000000000000..74deaf55d22c
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_vcpu.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
+#define __ASM_LOONGARCH_KVM_VCPU_H__
+
+#include <linux/kvm_host.h>
+#include <asm/loongarch.h>
+
+/* Controlled by 0x5 guest exst */
+#define CPU_SIP0			(_ULCAST_(1))
+#define CPU_SIP1			(_ULCAST_(1) << 1)
+#define CPU_PMU				(_ULCAST_(1) << 10)
+#define CPU_TIMER			(_ULCAST_(1) << 11)
+#define CPU_IPI				(_ULCAST_(1) << 12)
+
+/*
+ * Controlled by 0x52 guest exception VIP, aligned to exst bits 5~12
+ */
+#define CPU_IP0				(_ULCAST_(1))
+#define CPU_IP1				(_ULCAST_(1) << 1)
+#define CPU_IP2				(_ULCAST_(1) << 2)
+#define CPU_IP3				(_ULCAST_(1) << 3)
+#define CPU_IP4				(_ULCAST_(1) << 4)
+#define CPU_IP5				(_ULCAST_(1) << 5)
+#define CPU_IP6				(_ULCAST_(1) << 6)
+#define CPU_IP7				(_ULCAST_(1) << 7)
+
+#define MNSEC_PER_SEC			(NSEC_PER_SEC >> 20)
+
+/* KVM_IRQ_LINE irq field index values */
+#define KVM_LOONGSON_IRQ_TYPE_SHIFT	24
+#define KVM_LOONGSON_IRQ_TYPE_MASK	0xff
+#define KVM_LOONGSON_IRQ_VCPU_SHIFT	16
+#define KVM_LOONGSON_IRQ_VCPU_MASK	0xff
+#define KVM_LOONGSON_IRQ_NUM_SHIFT	0
+#define KVM_LOONGSON_IRQ_NUM_MASK	0xffff
+
+/* Irq_type field */
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IP	0
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IO	1
+#define KVM_LOONGSON_IRQ_TYPE_HT	2
+#define KVM_LOONGSON_IRQ_TYPE_MSI	3
+#define KVM_LOONGSON_IRQ_TYPE_IOAPIC	4
+#define KVM_LOONGSON_IRQ_TYPE_ROUTE	5
+
+/* Out-of-kernel GIC cpu interrupt injection irq_number field */
+#define KVM_LOONGSON_IRQ_CPU_IRQ	0
+#define KVM_LOONGSON_IRQ_CPU_FIQ	1
+#define KVM_LOONGSON_CPU_IP_NUM		8
+
+typedef union loongarch_instruction  larch_inst;
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
+int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
+int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
+int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
+int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
+int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
+
+void kvm_own_fpu(struct kvm_vcpu *vcpu);
+void kvm_lose_fpu(struct kvm_vcpu *vcpu);
+void kvm_save_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fcsr(struct loongarch_fpu *fpu);
+
+void kvm_acquire_timer(struct kvm_vcpu *vcpu);
+void kvm_reset_timer(struct kvm_vcpu *vcpu);
+enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
+void kvm_restore_timer(struct kvm_vcpu *vcpu);
+void kvm_save_timer(struct kvm_vcpu *vcpu);
+
+int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
+			struct kvm_loongarch_interrupt *irq);
+/*
+ * LoongArch KVM guest interrupt handling
+ */
+static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	set_bit(irq, &vcpu->arch.irq_pending);
+	clear_bit(irq, &vcpu->arch.irq_clear);
+}
+
+static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	clear_bit(irq, &vcpu->arch.irq_pending);
+	set_bit(irq, &vcpu->arch.irq_clear);
+}
+
+#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
index b3323ab5b78d..35ae5c2be8b6 100644
--- a/arch/loongarch/include/asm/loongarch.h
+++ b/arch/loongarch/include/asm/loongarch.h
@@ -11,6 +11,7 @@
 
 #ifndef __ASSEMBLY__
 #include <larchintrin.h>
+#include <asm/insn-def.h>
 
 /*
  * parse_r var, r - Helper assembler macro for parsing register names.
@@ -309,6 +310,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define LOONGARCH_CSR_ECFG		0x4	/* Exception config */
 #define  CSR_ECFG_VS_SHIFT		16
 #define  CSR_ECFG_VS_WIDTH		3
+#define  CSR_ECFG_VS_SHIFT_END		(CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
 #define  CSR_ECFG_VS			(_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
 #define  CSR_ECFG_IM_SHIFT		0
 #define  CSR_ECFG_IM_WIDTH		14
@@ -397,13 +399,14 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define  CSR_TLBLO1_V			(_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
 
 #define LOONGARCH_CSR_GTLBC		0x15	/* Guest TLB control */
-#define  CSR_GTLBC_RID_SHIFT		16
-#define  CSR_GTLBC_RID_WIDTH		8
-#define  CSR_GTLBC_RID			(_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
+#define  CSR_GTLBC_TGID_SHIFT		16
+#define  CSR_GTLBC_TGID_WIDTH		8
+#define  CSR_GTLBC_TGID_SHIFT_END	(CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
+#define  CSR_GTLBC_TGID			(_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
 #define  CSR_GTLBC_TOTI_SHIFT		13
 #define  CSR_GTLBC_TOTI			(_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
-#define  CSR_GTLBC_USERID_SHIFT		12
-#define  CSR_GTLBC_USERID		(_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
+#define  CSR_GTLBC_USETGID_SHIFT	12
+#define  CSR_GTLBC_USETGID		(_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
 #define  CSR_GTLBC_GMTLBSZ_SHIFT	0
 #define  CSR_GTLBC_GMTLBSZ_WIDTH	6
 #define  CSR_GTLBC_GMTLBSZ		(_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
@@ -555,6 +558,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define LOONGARCH_CSR_GSTAT		0x50	/* Guest status */
 #define  CSR_GSTAT_GID_SHIFT		16
 #define  CSR_GSTAT_GID_WIDTH		8
+#define  CSR_GSTAT_GID_SHIFT_END	(CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
 #define  CSR_GSTAT_GID			(_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
 #define  CSR_GSTAT_GIDBIT_SHIFT		4
 #define  CSR_GSTAT_GIDBIT_WIDTH		6
@@ -605,6 +609,12 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
 #define  CSR_GCFG_MATC_GUEST		(_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
 #define  CSR_GCFG_MATC_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
 #define  CSR_GCFG_MATC_NEST		(_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
+#define  CSR_GCFG_MATP_NEST_SHIFT	2
+#define  CSR_GCFG_MATP_NEST		(_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
+#define  CSR_GCFG_MATP_ROOT_SHIFT	1
+#define  CSR_GCFG_MATP_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
+#define  CSR_GCFG_MATP_GUEST_SHIFT	0
+#define  CSR_GCFG_MATP_GUEST		(_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
 
 #define LOONGARCH_CSR_GINTC		0x52	/* Guest interrupt control */
 #define  CSR_GINTC_HC_SHIFT		16
diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
new file mode 100644
index 000000000000..17b28d94d569
--- /dev/null
+++ b/arch/loongarch/kvm/trace.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_KVM_H
+
+#include <linux/tracepoint.h>
+#include <asm/kvm_csr.h>
+
+#undef	TRACE_SYSTEM
+#define TRACE_SYSTEM	kvm
+
+/*
+ * Tracepoints for VM enters
+ */
+DECLARE_EVENT_CLASS(kvm_transition,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu),
+	TP_STRUCT__entry(
+		__field(unsigned long, pc)
+	),
+
+	TP_fast_assign(
+		__entry->pc = vcpu->arch.pc;
+	),
+
+	TP_printk("PC: 0x%08lx",
+		  __entry->pc)
+);
+
+DEFINE_EVENT(kvm_transition, kvm_enter,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_reenter,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_out,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+/* Further exit reasons */
+#define KVM_TRACE_EXIT_IDLE		64
+#define KVM_TRACE_EXIT_CACHE		65
+#define KVM_TRACE_EXIT_SIGNAL		66
+
+/* Tracepoints for VM exits */
+#define kvm_trace_symbol_exit_types			\
+	{ KVM_TRACE_EXIT_IDLE,		"IDLE" },	\
+	{ KVM_TRACE_EXIT_CACHE,		"CACHE" },	\
+	{ KVM_TRACE_EXIT_SIGNAL,	"Signal" }
+
+TRACE_EVENT(kvm_exit_gspr,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
+	    TP_ARGS(vcpu, inst_word),
+	    TP_STRUCT__entry(
+			__field(unsigned int, inst_word)
+	    ),
+
+	    TP_fast_assign(
+			__entry->inst_word = inst_word;
+	    ),
+
+	    TP_printk("inst word: 0x%08x",
+		      __entry->inst_word)
+);
+
+
+DECLARE_EVENT_CLASS(kvm_exit,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	    TP_ARGS(vcpu, reason),
+	    TP_STRUCT__entry(
+			__field(unsigned long, pc)
+			__field(unsigned int, reason)
+	    ),
+
+	    TP_fast_assign(
+			__entry->pc = vcpu->arch.pc;
+			__entry->reason = reason;
+	    ),
+
+	    TP_printk("[%s]PC: 0x%08lx",
+		      __print_symbolic(__entry->reason,
+				       kvm_trace_symbol_exit_types),
+		      __entry->pc)
+);
+
+DEFINE_EVENT(kvm_exit, kvm_exit_idle,
+	     TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	     TP_ARGS(vcpu, reason));
+
+DEFINE_EVENT(kvm_exit, kvm_exit_cache,
+	     TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	     TP_ARGS(vcpu, reason));
+
+DEFINE_EVENT(kvm_exit, kvm_exit,
+	     TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	     TP_ARGS(vcpu, reason));
+
+#define KVM_TRACE_AUX_RESTORE		0
+#define KVM_TRACE_AUX_SAVE		1
+#define KVM_TRACE_AUX_ENABLE		2
+#define KVM_TRACE_AUX_DISABLE		3
+#define KVM_TRACE_AUX_DISCARD		4
+
+#define KVM_TRACE_AUX_FPU		1
+
+#define kvm_trace_symbol_aux_op				\
+	{ KVM_TRACE_AUX_RESTORE,	"restore" },	\
+	{ KVM_TRACE_AUX_SAVE,		"save" },	\
+	{ KVM_TRACE_AUX_ENABLE,		"enable" },	\
+	{ KVM_TRACE_AUX_DISABLE,	"disable" },	\
+	{ KVM_TRACE_AUX_DISCARD,	"discard" }
+
+#define kvm_trace_symbol_aux_state			\
+	{ KVM_TRACE_AUX_FPU,     "FPU" }
+
+TRACE_EVENT(kvm_aux,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
+		     unsigned int state),
+	    TP_ARGS(vcpu, op, state),
+	    TP_STRUCT__entry(
+			__field(unsigned long, pc)
+			__field(u8, op)
+			__field(u8, state)
+	    ),
+
+	    TP_fast_assign(
+			__entry->pc = vcpu->arch.pc;
+			__entry->op = op;
+			__entry->state = state;
+	    ),
+
+	    TP_printk("%s %s PC: 0x%08lx",
+		      __print_symbolic(__entry->op,
+				       kvm_trace_symbol_aux_op),
+		      __print_symbolic(__entry->state,
+				       kvm_trace_symbol_aux_state),
+		      __entry->pc)
+);
+
+TRACE_EVENT(kvm_vpid_change,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
+	    TP_ARGS(vcpu, vpid),
+	    TP_STRUCT__entry(
+			__field(unsigned long, vpid)
+	    ),
+
+	    TP_fast_assign(
+			__entry->vpid = vpid;
+	    ),
+
+	    TP_printk("vpid: 0x%08lx",
+		      __entry->vpid)
+);
+
+#endif /* _TRACE_KVM_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
-- 
2.39.1
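
Since binutils cannot yet assemble gcsrrd/gcsrwr directly, the
insn-def.h macros above emit the raw instruction word with .4byte. A
worked example of the same encoding (the csr id 0x1 is arbitrary; $a0
is GPR 4 and $zero is GPR 0):

   /* word = (opcode << 24) | (simm14 << 10) | (rj << 5) | rd */
   /* gcsrrd $a0, 0x1: opcode GCSR = 5, csr = 0x1, rj = 0, rd = 4 */
   unsigned int word = (5u << 24) | (0x1u << 10) | (0u << 5) | 4u;
   /* word == 0x05000404, the value the insn_csr macro would emit */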



* [PATCH v13 07/30] LoongArch: KVM: Implement vcpu run interface
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
@ 2023-06-09  9:04 ` Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 08/30] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:04 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Implement the vcpu run interface: handle mmio and iocsr read faults,
deliver interrupts, and lose the fpu before the vcpu enters the guest.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 83 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 24b5b00266a1..eba5c07b8be3 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -17,6 +17,41 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 	return 0;
 }
 
+/* Returns 1 if the guest TLB may be clobbered */
+static int _kvm_check_requests(struct kvm_vcpu *vcpu)
+{
+	int ret = 0;
+
+	if (!kvm_request_pending(vcpu))
+		return 0;
+
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
+		/* Drop vpid for this vCPU */
+		vcpu->arch.vpid = 0;
+		/* This will clobber guest TLB contents too */
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static void kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Handle vcpu timer, interrupts, check requests and
+	 * check vmid before the vcpu enters the guest
+	 */
+	kvm_acquire_timer(vcpu);
+	_kvm_deliver_intr(vcpu);
+	/* make sure the vcpu mode has been written */
+	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+	_kvm_check_requests(vcpu);
+	_kvm_check_vmid(vcpu);
+	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+	/* Clear KVM_LARCH_CSR as csrs will change when entering the guest */
+	vcpu->arch.aux_inuse &= ~KVM_LARCH_CSR;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	unsigned long timer_hz;
@@ -86,3 +121,51 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 			context->last_vcpu = NULL;
 	}
 }
+
+int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+{
+	int r = -EINTR;
+	struct kvm_run *run = vcpu->run;
+
+	vcpu_load(vcpu);
+
+	kvm_sigset_activate(vcpu);
+
+	if (vcpu->mmio_needed) {
+		if (!vcpu->mmio_is_write)
+			_kvm_complete_mmio_read(vcpu, run);
+		vcpu->mmio_needed = 0;
+	}
+
+	if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) {
+		if (!run->iocsr_io.is_write)
+			_kvm_complete_iocsr_read(vcpu, run);
+	}
+
+	/* clear exit_reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	if (run->immediate_exit)
+		goto out;
+
+	lose_fpu(1);
+
+	local_irq_disable();
+	guest_timing_enter_irqoff();
+
+	kvm_pre_enter_guest(vcpu);
+	trace_kvm_enter(vcpu);
+
+	guest_state_enter_irqoff();
+	r = kvm_loongarch_ops->enter_guest(run, vcpu);
+
+	/* guest_state_exit_irqoff() already done.  */
+	trace_kvm_out(vcpu);
+	guest_timing_exit_irqoff();
+	local_irq_enable();
+
+out:
+	kvm_sigset_deactivate(vcpu);
+
+	vcpu_put(vcpu);
+	return r;
+}
-- 
2.39.1
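
For context, a minimal userspace run loop that this interface serves (a
generic KVM sketch, not code from this series; vcpu_fd and run are
assumed to come from KVM_CREATE_VCPU and an mmap of the vcpu fd):

   for (;;) {
           ioctl(vcpu_fd, KVM_RUN, NULL);
           switch (run->exit_reason) {
           case KVM_EXIT_MMIO:
                   /* Emulate the access here; on the next KVM_RUN the
                    * kernel completes a pending mmio read via
                    * _kvm_complete_mmio_read(). */
                   break;
           case KVM_EXIT_INTR:
                   /* Interrupted by a signal; just re-enter. */
                   break;
           }
   }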



* [PATCH v13 08/30] LoongArch: KVM: Implement vcpu handle exit interface
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (2 preceding siblings ...)
  2023-06-09  9:04 ` [PATCH v13 07/30] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
@ 2023-06-09  9:04 ` Tianrui Zhao
  2023-06-09  9:04 ` [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:04 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Implement the vcpu handle-exit interface, reading the exit code from
the ESTAT register and using the kvm exception vector to handle it.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 45 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index eba5c07b8be3..a45e9d9efe5b 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -52,6 +52,51 @@ static void kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
 	vcpu->arch.aux_inuse &= ~KVM_LARCH_CSR;
 }
 
+/*
+ * Return 1 for resume guest and "<= 0" for resume host.
+ */
+static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	unsigned long exst = vcpu->arch.host_estat;
+	u32 intr = exst & 0x1fff; /* ignore NMI */
+	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	int ret = RESUME_GUEST;
+
+	vcpu->mode = OUTSIDE_GUEST_MODE;
+
+	/* Set a default exit reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+
+	local_irq_enable();
+	guest_state_exit_irqoff();
+
+	trace_kvm_exit(vcpu, exccode);
+	if (exccode) {
+		ret = _kvm_handle_fault(vcpu, exccode);
+	} else {
+		WARN(!intr, "vm exiting with suspicious irq\n");
+		++vcpu->stat.int_exits;
+	}
+
+	cond_resched();
+	local_irq_disable();
+
+	if (ret == RESUME_HOST)
+		return ret;
+
+	/* Only check for signals if not already exiting to userspace */
+	if (signal_pending(current)) {
+		vcpu->run->exit_reason = KVM_EXIT_INTR;
+		++vcpu->stat.signal_exits;
+		return -EINTR;
+	}
+
+	kvm_pre_enter_guest(vcpu);
+	trace_kvm_reenter(vcpu);
+	guest_state_enter_irqoff();
+	return RESUME_GUEST;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	unsigned long timer_hz;
-- 
2.39.1



* [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (3 preceding siblings ...)
  2023-06-09  9:04 ` [PATCH v13 08/30] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
@ 2023-06-09  9:04 ` Tianrui Zhao
  2023-06-09  9:05 ` [PATCH v13 20/30] LoongArch: KVM: Implement handle csr exception Tianrui Zhao
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:04 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Implement LoongArch vcpu KVM_ENABLE_CAP ioctl interface.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index b0cce413762d..da97b77da8eb 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -186,6 +186,16 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	/*
+	 * FPU is enabled by default; no other caps are supported yet,
+	 * and caps such as LSX will be supported later.
+	 */
+	return -EINVAL;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -209,6 +219,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			r = _kvm_get_reg(vcpu, &reg);
 		break;
 	}
+	case KVM_ENABLE_CAP: {
+		struct kvm_enable_cap cap;
+
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			break;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -ENOIOCTLCMD;
 		break;
-- 
2.39.1
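
A minimal userspace sketch of the resulting behavior (vcpu_fd is
assumed to be an open vcpu file descriptor):

   struct kvm_enable_cap cap = { .cap = 0 /* any capability id */ };

   /* Fails with EINVAL until caps such as LSX are supported. */
   int ret = ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);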



* [PATCH v13 20/30] LoongArch: KVM: Implement handle csr exception
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (4 preceding siblings ...)
  2023-06-09  9:04 ` [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
@ 2023-06-09  9:05 ` Tianrui Zhao
  2023-06-09  9:05 ` [PATCH v13 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Tianrui Zhao
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:05 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Implement kvm handling of LoongArch vcpu exits caused by reading and
writing csr registers, using the csr structure to emulate them.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 98 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)
 create mode 100644 arch/loongarch/kvm/exit.c

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
new file mode 100644
index 000000000000..18635333fc9a
--- /dev/null
+++ b/arch/loongarch/kvm/exit.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/preempt.h>
+#include <linux/vmalloc.h>
+#include <asm/fpu.h>
+#include <asm/inst.h>
+#include <asm/time.h>
+#include <asm/tlb.h>
+#include <asm/loongarch.h>
+#include <asm/numa.h>
+#include <asm/kvm_vcpu.h>
+#include <asm/kvm_csr.h>
+#include <linux/kvm_host.h>
+#include <asm/mmzone.h>
+#include "trace.h"
+
+static unsigned long _kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long val = 0;
+
+	if (get_gcsr_flag(csrid) & SW_GCSR)
+		val = kvm_read_sw_gcsr(csr, csrid);
+	else
+		pr_warn_once("Unsupported csrread 0x%x with pc %lx\n",
+			csrid, vcpu->arch.pc);
+	return val;
+}
+
+static void _kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (get_gcsr_flag(csrid) & SW_GCSR)
+		kvm_write_sw_gcsr(csr, csrid, val);
+	else
+		pr_warn_once("Unsupported csrwrite 0x%x with pc %lx\n",
+				csrid, vcpu->arch.pc);
+}
+
+static void _kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long csr_mask, unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (get_gcsr_flag(csrid) & SW_GCSR) {
+		unsigned long orig;
+
+		orig = kvm_read_sw_gcsr(csr, csrid);
+		orig &= ~csr_mask;
+		orig |= val & csr_mask;
+		kvm_write_sw_gcsr(csr, csrid, orig);
+	} else
+		pr_warn_once("Unsupported csrxchg 0x%x with pc %lx\n",
+				csrid, vcpu->arch.pc);
+}
+
+static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int rd, rj, csrid;
+	unsigned long csr_mask;
+	unsigned long val = 0;
+
+	/*
+	 * CSR value mask imm
+	 * rj = 0 means csrrd
+	 * rj = 1 means csrwr
+	 * rj != 0,1 means csrxchg
+	 */
+	rd = inst.reg2csr_format.rd;
+	rj = inst.reg2csr_format.rj;
+	csrid = inst.reg2csr_format.csr;
+
+	/* Process CSR ops */
+	if (rj == 0) {
+		/* process csrrd */
+		val = _kvm_emu_read_csr(vcpu, csrid);
+		vcpu->arch.gprs[rd] = val;
+	} else if (rj == 1) {
+		/* process csrwr */
+		val = vcpu->arch.gprs[rd];
+		_kvm_emu_write_csr(vcpu, csrid, val);
+	} else {
+		/* process csrxchg */
+		val = vcpu->arch.gprs[rd];
+		csr_mask = vcpu->arch.gprs[rj];
+		_kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val);
+	}
+
+	return EMULATE_DONE;
+}
-- 
2.39.1



* [PATCH v13 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (5 preceding siblings ...)
  2023-06-09  9:05 ` [PATCH v13 20/30] LoongArch: KVM: Implement handle csr exception Tianrui Zhao
@ 2023-06-09  9:05 ` Tianrui Zhao
  2023-06-09  9:05 ` [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
  2023-06-09  9:46 ` [PATCH v13 00/30] Add KVM LoongArch support zhaotianrui
  8 siblings, 0 replies; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:05 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Supplement the kvm document with the LoongArch-specific parts, such as
adding api introductions for GET/SET_ONE_REG, GET/SET_FPU,
GET/SET_MP_STATE, etc.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 Documentation/virt/kvm/api.rst | 71 +++++++++++++++++++++++++++++-----
 1 file changed, 62 insertions(+), 9 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index add067793b90..ad8e13eab48d 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -416,6 +416,12 @@ Reads the general purpose registers from the vcpu.
 	__u64 pc;
   };
 
+  /* LoongArch */
+  struct kvm_regs {
+        unsigned long gpr[32];
+        unsigned long pc;
+  };
+
 
 4.12 KVM_SET_REGS
 -----------------
@@ -506,7 +512,7 @@ translation mode.
 ------------------
 
 :Capability: basic
-:Architectures: x86, ppc, mips, riscv
+:Architectures: x86, ppc, mips, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_interrupt (in)
 :Returns: 0 on success, negative on failure.
@@ -592,6 +598,14 @@ b) KVM_INTERRUPT_UNSET
 
 This is an asynchronous vcpu ioctl and can be invoked from any thread.
 
+LOONGARCH:
+^^^^^^^^^^
+
+Queues an external interrupt to be injected into the virtual CPU. A negative
+interrupt number dequeues the interrupt.
+
+This is an asynchronous vcpu ioctl and can be invoked from any thread.
+
 
 4.17 KVM_DEBUG_GUEST
 --------------------
@@ -737,7 +751,7 @@ signal mask.
 ----------------
 
 :Capability: basic
-:Architectures: x86
+:Architectures: x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_fpu (out)
 :Returns: 0 on success, -1 on error
@@ -746,7 +760,7 @@ Reads the floating point state from the vcpu.
 
 ::
 
-  /* for KVM_GET_FPU and KVM_SET_FPU */
+  /* x86: for KVM_GET_FPU and KVM_SET_FPU */
   struct kvm_fpu {
 	__u8  fpr[8][16];
 	__u16 fcw;
@@ -761,12 +775,22 @@ Reads the floating point state from the vcpu.
 	__u32 pad2;
   };
 
+  /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
+  struct kvm_fpu {
+        __u32 fcsr;
+        __u32 none;
+        __u64 fcc;
+        struct kvm_fpureg {
+                __u64 val64[4];
+        } fpr[32];
+  };
+
 
 4.23 KVM_SET_FPU
 ----------------
 
 :Capability: basic
-:Architectures: x86
+:Architectures: x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_fpu (in)
 :Returns: 0 on success, -1 on error
@@ -775,7 +799,7 @@ Writes the floating point state to the vcpu.
 
 ::
 
-  /* for KVM_GET_FPU and KVM_SET_FPU */
+  /* x86: for KVM_GET_FPU and KVM_SET_FPU */
   struct kvm_fpu {
 	__u8  fpr[8][16];
 	__u16 fcw;
@@ -790,6 +814,16 @@ Writes the floating point state to the vcpu.
 	__u32 pad2;
   };
 
+  /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
+  struct kvm_fpu {
+        __u32 fcsr;
+        __u32 none;
+        __u64 fcc;
+        struct kvm_fpureg {
+                __u64 val64[4];
+        } fpr[32];
+  };
+
 
 4.24 KVM_CREATE_IRQCHIP
 -----------------------
@@ -1387,7 +1421,7 @@ documentation when it pops into existence).
 -------------------
 
 :Capability: KVM_CAP_ENABLE_CAP
-:Architectures: mips, ppc, s390, x86
+:Architectures: mips, ppc, s390, x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_enable_cap (in)
 :Returns: 0 on success; -1 on error
@@ -1442,7 +1476,7 @@ for vm-wide capabilities.
 ---------------------
 
 :Capability: KVM_CAP_MP_STATE
-:Architectures: x86, s390, arm64, riscv
+:Architectures: x86, s390, arm64, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_mp_state (out)
 :Returns: 0 on success; -1 on error
@@ -1460,7 +1494,7 @@ Possible values are:
 
    ==========================    ===============================================
    KVM_MP_STATE_RUNNABLE         the vcpu is currently running
-                                 [x86,arm64,riscv]
+                                 [x86,arm64,riscv,loongarch]
    KVM_MP_STATE_UNINITIALIZED    the vcpu is an application processor (AP)
                                  which has not yet received an INIT signal [x86]
    KVM_MP_STATE_INIT_RECEIVED    the vcpu has received an INIT signal, and is
@@ -1516,11 +1550,14 @@ For riscv:
 The only states that are valid are KVM_MP_STATE_STOPPED and
 KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
 
+On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
+whether the vcpu is runnable.
+
 4.39 KVM_SET_MP_STATE
 ---------------------
 
 :Capability: KVM_CAP_MP_STATE
-:Architectures: x86, s390, arm64, riscv
+:Architectures: x86, s390, arm64, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_mp_state (in)
 :Returns: 0 on success; -1 on error
@@ -1538,6 +1575,9 @@ For arm64/riscv:
 The only states that are valid are KVM_MP_STATE_STOPPED and
 KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
 
+On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
+whether the vcpu is runnable.
+
 4.40 KVM_SET_IDENTITY_MAP_ADDR
 ------------------------------
 
@@ -2839,6 +2879,19 @@ Following are the RISC-V D-extension registers:
   0x8020 0000 0600 0020 fcsr      Floating point control and status register
 ======================= ========= =============================================
 
+LoongArch registers are mapped using the lower 32 bits. The upper 16 bits of
+that encode the register group type.
+
+LoongArch CSR registers are used to control the guest CPU or to get its
+status, and they have the following id bit patterns::
+
+  0x9030 0000 0001 00 <reg:5> <sel:3>   (64-bit)
+
+LoongArch KVM control registers are used to implement newly defined functions,
+such as setting the vcpu counter or resetting the vcpu, and they have the following id bit patterns::
+
+  0x9030 0000 0002 <reg:16>
+
 
 4.69 KVM_GET_ONE_REG
 --------------------
-- 
2.39.1
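
For readers of this documentation change, a minimal userspace sketch may help
tie the new pieces together. It is not part of the patch itself: the
register-id encoding below just follows the "0x9030 0000 0001 00 <reg:5>
<sel:3>" pattern documented above, CSR number 0 (CRMD) is only an
illustration, and compiling it assumes the LoongArch uapi headers so that
struct kvm_fpu carries the fcsr field.

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* id layout from the api.rst hunk above: base | csr number in the low byte */
#define LOONGARCH_CSR_ID(reg, sel) \
	(0x9030000000010000ULL | ((uint64_t)(reg) << 3) | (sel))

static int read_guest_csr(int vcpu_fd, uint64_t id, uint64_t *val)
{
	struct kvm_one_reg one_reg = {
		.id   = id,
		.addr = (uintptr_t)val,	/* the kernel writes the value here */
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &one_reg);
}

int dump_vcpu_state(int vcpu_fd)
{
	uint64_t crmd;
	struct kvm_fpu fpu;

	/* CSR <reg=0, sel=0> is CRMD; any documented pair works the same way */
	if (read_guest_csr(vcpu_fd, LOONGARCH_CSR_ID(0, 0), &crmd) < 0)
		return -1;

	/* KVM_GET_FPU fills the LoongArch layout of struct kvm_fpu */
	if (ioctl(vcpu_fd, KVM_GET_FPU, &fpu) < 0)
		return -1;

	printf("guest CRMD 0x%llx, fcsr 0x%x\n",
	       (unsigned long long)crmd, fpu.fcsr);
	return 0;
}

The same id, with the value buffer pre-filled, works for KVM_SET_ONE_REG, and
KVM_SET_FPU mirrors KVM_GET_FPU.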


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (6 preceding siblings ...)
  2023-06-09  9:05 ` [PATCH v13 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Tianrui Zhao
@ 2023-06-09  9:05 ` Tianrui Zhao
  2023-06-15  9:27   ` Huacai Chen
  2023-06-09  9:46 ` [PATCH v13 00/30] Add KVM LoongArch support zhaotianrui
  8 siblings, 1 reply; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:05 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Add maintainers for LoongArch KVM.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 27ef11624748..c2fbfd6ad4e5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11357,6 +11357,18 @@ F:	include/kvm/arm_*
 F:	tools/testing/selftests/kvm/*/aarch64/
 F:	tools/testing/selftests/kvm/aarch64/
 
+KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
+M:	Tianrui Zhao <zhaotianrui@loongson.cn>
+M:	Bibo Mao <maobibo@loongson.cn>
+M:	Huacai Chen <chenhuacai@kernel.org>
+L:	kvm@vger.kernel.org
+L:	loongarch@lists.linux.dev
+S:	Maintained
+T:	git https://github.com/loongson/linux-loongarch-kvm
+F:	arch/loongarch/include/asm/kvm*
+F:	arch/loongarch/include/uapi/asm/kvm*
+F:	arch/loongarch/kvm/
+
 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
 M:	Huacai Chen <chenhuacai@kernel.org>
 M:	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 00/30] Add KVM LoongArch support
  2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (7 preceding siblings ...)
  2023-06-09  9:05 ` [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
@ 2023-06-09  9:46 ` zhaotianrui
  8 siblings, 0 replies; 17+ messages in thread
From: zhaotianrui @ 2023-06-09  9:46 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, tangyouling

Please ignore this patch series, as it is incomplete.

Thanks

Tianrui Zhao

On 2023/6/9 at 5:04 PM, Tianrui Zhao wrote:
> This series adds KVM LoongArch support. Loongson 3A5000 supports hardware
> assisted virtualization. With cpu virtualization, there are separate
> hw-supported user mode and kernel mode in guest mode. With memory
> virtualization, there are two-level hw mmu table for guest mode and host
> mode. Also there is separate hw cpu timer with consant frequency in
> guest mode, so that vm can migrate between hosts with different freq.
> Currently, we are able to boot LoongArch Linux Guests.
>
> Few key aspects of KVM LoongArch added by this series are:
> 1. Enable kvm hardware function when kvm module is loaded.
> 2. Implement VM and vcpu related ioctl interface such as vcpu create,
>     vcpu run etc. GET_ONE_REG/SET_ONE_REG ioctl commands are use to
>     get general registers one by one.
> 3. Hardware access about MMU, timer and csr are emulated in kernel.
> 4. Hardwares such as mmio and iocsr device are emulated in user space
>     such as APIC, IPI, pci devices etc.
>
> The running environment of LoongArch virt machine:
> 1. Cross tools to build kernel and uefi:
>     $ wget https://github.com/loongson/build-tools/releases/download/2022.09.06/loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz
>     tar -vxf loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz  -C /opt
>     export PATH=/opt/cross-tools/bin:$PATH
>     export LD_LIBRARY_PATH=/opt/cross-tools/lib:$LD_LIBRARY_PATH
>     export LD_LIBRARY_PATH=/opt/cross-tools/loongarch64-unknown-linux-gnu/lib/:$LD_LIBRARY_PATH
> 2. This series is based on the linux source code:
>     https://github.com/loongson/linux-loongarch-kvm
>     Build command:
>     git checkout kvm-loongarch
>     make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- loongson3_defconfig
>     make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu-
> 3. QEMU hypervisor with LoongArch supported:
>     https://github.com/loongson/qemu
>     Build command:
>     git checkout kvm-loongarch
>     ./configure --target-list="loongarch64-softmmu"  --enable-kvm
>     make
> 4. Uefi bios of LoongArch virt machine:
>     Link: https://github.com/tianocore/edk2-platforms/tree/master/Platform/Loongson/LoongArchQemuPkg#readme
> 5. you can also access the binary files we have already build:
>     https://github.com/yangxiaojuan-loongson/qemu-binary
> The command to boot loongarch virt machine:
>     $ qemu-system-loongarch64 -machine virt -m 4G -cpu la464 \
>     -smp 1 -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd ramdisk \
>     -serial stdio   -monitor telnet:localhost:4495,server,nowait \
>     -append "root=/dev/ram rdinit=/sbin/init console=ttyS0,115200" \
>     --nographic
>
> Changes for v13:
> 1. Remove patch-28 "Implement probe virtualization when cpu init", as the
> virtualization information about FPU,PMP,LSX in guest.options,options_dyn
> is not used and the gcfg reg value can be read in kvm_hardware_enable, so
> remove the previous cpu_probe_lvz function.
> 2. Fix vcpu_enable_cap interface, it should return -EINVAL directly, as
> FPU cap is enable by default, and do not support any other caps now.
> 3. Replace the jirl instruction with jr when there is no return address,
> and simplify the case HW0 ... HW7 statements in interrupt.c.
> 4. Rename host_stack, host_gp in kvm_vcpu_arch to host_sp, host_tp.
> 5. Remove the 'cpu' parameter from _kvm_check_requests, as 'cpu' is not
> used, and remove the 'cpu' parameter from kvm_check_vmid, as it can get
> the cpu number by itself.
>
> Changes for v12:
> 1. Improve the gcsr write/read/xchg interfaces to avoid the previous
> instruction statements like parse_r and to make the code easier to
> understand. They are implemented in asm/insn-def.h, and the instructions
> consist of "opcode" "rj" "rd" "simm14" arguments.
> 2. Fix the maintainers list of LoongArch KVM.
>
> Changes for v11:
> 1. Add maintainers for LoongArch KVM.
>
> Changes for v10:
> 1. Fix grammatical problems in the LoongArch documentation.
> 2. It is not necessary to save or restore the LOONGARCH_CSR_PGD register
> during vcpu_put and vcpu_load, so we remove it.
>
> Changes for v9:
> 1. Apply the newly defined interrupt number macros in loongarch.h to kvm,
> such as INT_SWI0, INT_HWI0, INT_TI, INT_IPI, etc., and remove the
> previous unused macros.
> 2. Remove unused variables in kvm_vcpu_arch, and reorder the variables
> to make them more standard.
>
> Changes for v8:
> 1. Adjust the cpu_data.guest.options structure, add the ases flag into
> it, and remove the previous guest.ases. We do this to keep it consistent
> with the host cpu_data.options structure.
> 2. Remove the "#include <asm/kvm_host.h>" from some files which also
> include "<linux/kvm_host.h>", as linux/kvm_host.h already includes
> asm/kvm_host.h.
> 3. Fix some nonstandard spellings and grammar errors in comments, and
> slightly improve the code format to make it easier to read and more
> standard.
>
> Changes for v7:
> 1. Fix the kvm_save/restore_hw_gcsr compile warnings reported by the
> kernel test robot. The report link is:
> https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
> 2. Fix loongarch kvm trace-related compile problems.
>
> Changes for v6:
> 1. Fix the Documentation/virt/kvm/api.rst compile warning about
> loongarch parts.
>
> Changes for v5:
> 1. Implement the get/set mp_state ioctl interface; only the
> KVM_MP_STATE_RUNNABLE state is supported now, and other states
> will be completed in the future. The state is also used when the vcpu
> runs the idle instruction: if the vcpu state is changed to RUNNABLE,
> the vcpu has the possibility of being woken up.
> 2. Supplement the kvm document with the loongarch-specific parts, such
> as adding api introductions for GET/SET_ONE_REG, GET/SET_FPU,
> GET/SET_MP_STATE, etc.
> 3. Improve the kvm_switch_to_guest function in switch.S, remove the
> previous tmp, tmp1 arguments and replace them with the t0, t1 registers.
>
> Changes for v4:
> 1. Add a csr_need_update flag in _vcpu_put, as most csr registers stay
> unchanged during a process context switch, so we need not update them
> every time; we only do so when the soft csr differs from the hardware.
> That is to say, all csrs should be updated after the vcpu enters the
> guest; as for set_csr_ioctl, we have written the soft csr to keep it
> consistent with the hardware.
> 2. Improve the get/set_csr_ioctl interface: we set a SW, HW or INVALID
> flag for every csr according to its features at kvm init time. In
> get/set_csr_ioctl, if a csr is flagged HW, we use the gcsrrd/gcsrwr
> instructions to access it; if it is flagged SW, we emulate it in
> software; anything else returns false.
> 3. Add a set_hw_gcsr function in csr_ops.S, which is used in
> set_csr_ioctl. We have split the hw gcsrs into three parts, so we can
> calculate the code offset from the gcsrid and jump there to run the
> gcsrwr instruction. We use this function to make the code simpler and
> to avoid the previous SET_HW_GCSR(XXX) interface.
> 4. Improve the kvm mmu functions, such as the flush page table and
> make-clean page table interfaces.
>
> Changes for v3:
> 1. Remove the vpid array list in kvm_vcpu_arch and use a vpid variable here,
> because a vpid will never be recycled if a vCPU migrates from physical CPU A
> to B and back to A.
> 2. Make some constant variables in kvm_context global, such as vpid_mask,
> guest_eentry, enter_guest, etc.
> 3. Add some new tracepoints, such as kvm_trace_idle, kvm_trace_cache,
> kvm_trace_gspr, etc.
> 4. There is some duplicated code in kvm_handle_exit and kvm_vcpu_run,
> so we move it into a new function, kvm_pre_enter_guest.
> 5. Change the RESUME_HOST, RESUME_GUEST values: return 1 to resume the
> guest and "<= 0" to resume the host.
> 6. Fcsr and fpu registers are saved/restored together.
>
> Changes for v2:
> 1. Separate the original patch-01 and patch-03 into small patches; the
> patches mainly contain kvm module init, module exit, vcpu create, vcpu run,
> etc.
> 2. Remove the original KVM_{GET,SET}_CSRS ioctl from the kvm uapi header,
> and use the common KVM_{GET,SET}_ONE_REG to access registers.
> 3. Use BIT(x) to replace the "1 << n_bits" statement.
>
> Tianrui Zhao (30):
>    LoongArch: KVM: Add kvm related header files
>    LoongArch: KVM: Implement kvm module related interface
>    LoongArch: KVM: Implement kvm hardware enable, disable interface
>    LoongArch: KVM: Implement VM related functions
>    LoongArch: KVM: Add vcpu related header files
>    LoongArch: KVM: Implement vcpu create and destroy interface
>    LoongArch: KVM: Implement vcpu run interface
>    LoongArch: KVM: Implement vcpu handle exit interface
>    LoongArch: KVM: Implement vcpu get, vcpu set registers
>    LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
>    LoongArch: KVM: Implement fpu related operations for vcpu
>    LoongArch: KVM: Implement vcpu interrupt operations
>    LoongArch: KVM: Implement misc vcpu related interfaces
>    LoongArch: KVM: Implement vcpu load and vcpu put operations
>    LoongArch: KVM: Implement vcpu status description
>    LoongArch: KVM: Implement update VM id function
>    LoongArch: KVM: Implement virtual machine tlb operations
>    LoongArch: KVM: Implement vcpu timer operations
>    LoongArch: KVM: Implement kvm mmu operations
>    LoongArch: KVM: Implement handle csr exception
>    LoongArch: KVM: Implement handle iocsr exception
>    LoongArch: KVM: Implement handle idle exception
>    LoongArch: KVM: Implement handle gspr exception
>    LoongArch: KVM: Implement handle mmio exception
>    LoongArch: KVM: Implement handle fpu exception
>    LoongArch: KVM: Implement kvm exception vector
>    LoongArch: KVM: Implement vcpu world switch
>    LoongArch: KVM: Enable kvm config and add the makefile
>    LoongArch: KVM: Supplement kvm document about LoongArch-specific part
>    LoongArch: KVM: Add maintainers for LoongArch KVM
>
>   Documentation/virt/kvm/api.rst             |  71 +-
>   MAINTAINERS                                |  12 +
>   arch/loongarch/Kbuild                      |   1 +
>   arch/loongarch/Kconfig                     |   2 +
>   arch/loongarch/configs/loongson3_defconfig |   2 +
>   arch/loongarch/include/asm/insn-def.h      |  55 ++
>   arch/loongarch/include/asm/inst.h          |  16 +
>   arch/loongarch/include/asm/kvm_csr.h       | 231 ++++++
>   arch/loongarch/include/asm/kvm_host.h      | 253 ++++++
>   arch/loongarch/include/asm/kvm_types.h     |  11 +
>   arch/loongarch/include/asm/kvm_vcpu.h      |  97 +++
>   arch/loongarch/include/asm/loongarch.h     |  20 +-
>   arch/loongarch/include/uapi/asm/kvm.h      | 106 +++
>   arch/loongarch/kernel/asm-offsets.c        |  32 +
>   arch/loongarch/kvm/Kconfig                 |  38 +
>   arch/loongarch/kvm/Makefile                |  22 +
>   arch/loongarch/kvm/csr_ops.S               |  76 ++
>   arch/loongarch/kvm/exit.c                  | 707 +++++++++++++++++
>   arch/loongarch/kvm/interrupt.c             | 113 +++
>   arch/loongarch/kvm/main.c                  | 347 ++++++++
>   arch/loongarch/kvm/mmu.c                   | 725 +++++++++++++++++
>   arch/loongarch/kvm/switch.S                | 301 +++++++
>   arch/loongarch/kvm/timer.c                 | 266 +++++++
>   arch/loongarch/kvm/tlb.c                   |  32 +
>   arch/loongarch/kvm/trace.h                 | 168 ++++
>   arch/loongarch/kvm/vcpu.c                  | 869 +++++++++++++++++++++
>   arch/loongarch/kvm/vm.c                    |  76 ++
>   arch/loongarch/kvm/vmid.c                  |  66 ++
>   include/uapi/linux/kvm.h                   |   9 +
>   29 files changed, 4710 insertions(+), 14 deletions(-)
>   create mode 100644 arch/loongarch/include/asm/insn-def.h
>   create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>   create mode 100644 arch/loongarch/include/asm/kvm_host.h
>   create mode 100644 arch/loongarch/include/asm/kvm_types.h
>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>   create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
>   create mode 100644 arch/loongarch/kvm/Kconfig
>   create mode 100644 arch/loongarch/kvm/Makefile
>   create mode 100644 arch/loongarch/kvm/csr_ops.S
>   create mode 100644 arch/loongarch/kvm/exit.c
>   create mode 100644 arch/loongarch/kvm/interrupt.c
>   create mode 100644 arch/loongarch/kvm/main.c
>   create mode 100644 arch/loongarch/kvm/mmu.c
>   create mode 100644 arch/loongarch/kvm/switch.S
>   create mode 100644 arch/loongarch/kvm/timer.c
>   create mode 100644 arch/loongarch/kvm/tlb.c
>   create mode 100644 arch/loongarch/kvm/trace.h
>   create mode 100644 arch/loongarch/kvm/vcpu.c
>   create mode 100644 arch/loongarch/kvm/vm.c
>   create mode 100644 arch/loongarch/kvm/vmid.c
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM
  2023-06-09  9:05 ` [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
@ 2023-06-15  9:27   ` Huacai Chen
  2023-06-16  2:50     ` zhaotianrui
  0 siblings, 1 reply; 17+ messages in thread
From: Huacai Chen @ 2023-06-15  9:27 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao, tangyouling

Hi, Tianrui,

On Fri, Jun 9, 2023 at 5:06 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> Add maintainers for LoongArch KVM.
>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>  MAINTAINERS | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 27ef11624748..c2fbfd6ad4e5 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -11357,6 +11357,18 @@ F:     include/kvm/arm_*
>  F:     tools/testing/selftests/kvm/*/aarch64/
>  F:     tools/testing/selftests/kvm/aarch64/
>
> +KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
> +M:     Tianrui Zhao <zhaotianrui@loongson.cn>
> +M:     Bibo Mao <maobibo@loongson.cn>
> +M:     Huacai Chen <chenhuacai@kernel.org>
> +L:     kvm@vger.kernel.org
> +L:     loongarch@lists.linux.dev
> +S:     Maintained
> +T:     git https://github.com/loongson/linux-loongarch-kvm
I'm not sure, but I think this should be a tree which can be used to
send pull requests to the upstream maintainers. If there is no other
choice, we should use
git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
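
Concretely, if the common kvm.git tree is chosen, the T: entry would
presumably become:

T:	git git://git.kernel.org/pub/scm/virt/kvm/kvm.git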

Huacai
> +F:     arch/loongarch/include/asm/kvm*
> +F:     arch/loongarch/include/uapi/asm/kvm*
> +F:     arch/loongarch/kvm/
> +
>  KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
>  M:     Huacai Chen <chenhuacai@kernel.org>
>  M:     Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
> --
> 2.39.1
>
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files
  2023-06-09  9:04 ` [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
@ 2023-06-15  9:51   ` Huacai Chen
  2023-06-17  3:05     ` zhaotianrui
  0 siblings, 1 reply; 17+ messages in thread
From: Huacai Chen @ 2023-06-15  9:51 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao, tangyouling

Hi, Tianrui,

I suggest using a similar method to the vector support:
https://lore.kernel.org/loongarch/20230613151918.2039498-1-chenhuacai@loongson.cn/T/#u

Introduce CC_HAS_LVZ_EXTENSION to detect whether the toolchain
supports the LVZ-specific instructions, and make the KVM config option
depend on CC_HAS_LVZ_EXTENSION. In this way the code will be more
maintainable. Of course this makes old toolchains unable to build a
KVM-enabled kernel, but I think that is not a big problem.

Huacai

On Fri, Jun 9, 2023 at 5:05 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> Add LoongArch vcpu related header files, including vcpu csr
> information, irq number defines, and some vcpu interfaces.
>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>  arch/loongarch/include/asm/insn-def.h  |  55 ++++++
>  arch/loongarch/include/asm/kvm_csr.h   | 231 +++++++++++++++++++++++++
>  arch/loongarch/include/asm/kvm_vcpu.h  |  97 +++++++++++
>  arch/loongarch/include/asm/loongarch.h |  20 ++-
>  arch/loongarch/kvm/trace.h             | 168 ++++++++++++++++++
>  5 files changed, 566 insertions(+), 5 deletions(-)
>  create mode 100644 arch/loongarch/include/asm/insn-def.h
>  create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>  create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>  create mode 100644 arch/loongarch/kvm/trace.h
>
> diff --git a/arch/loongarch/include/asm/insn-def.h b/arch/loongarch/include/asm/insn-def.h
> new file mode 100644
> index 000000000000..e285ee108fb0
> --- /dev/null
> +++ b/arch/loongarch/include/asm/insn-def.h
> @@ -0,0 +1,55 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#ifndef __ASM_INSN_DEF_H
> +#define __ASM_INSN_DEF_H
> +
> +#include <linux/stringify.h>
> +#include <asm/gpr-num.h>
> +#include <asm/asm.h>
> +
> +#define INSN_STR(x)            __stringify(x)
> +#define CSR_RD_SHIFT           0
> +#define CSR_RJ_SHIFT           5
> +#define CSR_SIMM14_SHIFT       10
> +#define CSR_OPCODE_SHIFT       24
> +
> +#define DEFINE_INSN_CSR                                                        \
> +       __DEFINE_ASM_GPR_NUMS                                           \
> +"      .macro insn_csr, opcode, rj, rd, simm14\n"                      \
> +"      .4byte  ((\\opcode << " INSN_STR(CSR_OPCODE_SHIFT) ") |"        \
> +"               (.L__gpr_num_\\rj << " INSN_STR(CSR_RJ_SHIFT) ") |"    \
> +"               (.L__gpr_num_\\rd << " INSN_STR(CSR_RD_SHIFT) ") |"    \
> +"               (\\simm14 << " INSN_STR(CSR_SIMM14_SHIFT) "))\n"       \
> +"      .endm\n"
> +
> +#define UNDEFINE_INSN_CSR                                              \
> +"      .purgem insn_csr\n"
> +
> +#define __INSN_CSR(opcode, rj, rd, simm14)                             \
> +       DEFINE_INSN_CSR                                                 \
> +       "insn_csr " opcode ", " rj ", " rd ", " simm14 "\n"             \
> +       UNDEFINE_INSN_CSR
> +
> +
> +#define INSN_CSR(opcode, rj, rd, simm14)                               \
> +       __INSN_CSR(LARCH_##opcode, LARCH_##rj, LARCH_##rd,              \
> +                  LARCH_##simm14)
> +
> +#define __ASM_STR(x)           #x
> +#define LARCH_OPCODE(v)                __ASM_STR(v)
> +#define LARCH_SIMM14(v)                __ASM_STR(v)
> +#define __LARCH_REG(v)         __ASM_STR(v)
> +#define LARCH___RD(v)          __LARCH_REG(v)
> +#define LARCH___RJ(v)          __LARCH_REG(v)
> +#define LARCH_OPCODE_GCSR      LARCH_OPCODE(5)
> +
> +#define GCSR_read(csr, rd)                                             \
> +       INSN_CSR(OPCODE_GCSR, __RJ(zero), __RD(rd), SIMM14(csr))
> +
> +#define GCSR_write(csr, rd)                                            \
> +       INSN_CSR(OPCODE_GCSR, __RJ($r1), __RD(rd), SIMM14(csr))
> +
> +#define GCSR_xchg(csr, rj, rd)                                         \
> +       INSN_CSR(OPCODE_GCSR, __RJ(rj), __RD(rd), SIMM14(csr))
> +
> +#endif /* __ASM_INSN_DEF_H */
> diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
> new file mode 100644
> index 000000000000..10dba5bc6df1
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_csr.h
> @@ -0,0 +1,231 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_CSR_H__
> +#define __ASM_LOONGARCH_KVM_CSR_H__
> +#include <asm/loongarch.h>
> +#include <asm/kvm_vcpu.h>
> +#include <linux/uaccess.h>
> +#include <linux/kvm_host.h>
> +
> +/*
> + * Instructions will be available in binutils later
> + * read val from guest csr register %[csr]
> + * gcsrrd %[val], %[csr]
> + */
> +#define gcsr_read(csr)                                         \
> +({                                                             \
> +       register unsigned long __v;                             \
> +       __asm__ __volatile__ (GCSR_read(csr, %0)                \
> +                               : "=r" (__v) :                  \
> +                               : "memory");                    \
> +       __v;                                                    \
> +})
> +
> +/*
> + * Instructions will be available in binutils later
> + * write val to guest csr register %[csr]
> + * gcsrwr %[val], %[csr]
> + */
> +#define gcsr_write(val, csr)                                   \
> +({                                                             \
> +       register unsigned long __v = val;                       \
> +       __asm__ __volatile__ (GCSR_write(csr, %0)               \
> +                               : "+r" (__v) :                  \
> +                               : "memory");                    \
> +})
> +
> +/*
> + * Instructions will be available in binutils later
> + * replace masked bits of guest csr register %[csr] with val
> + * gcsrxchg %[val], %[mask], %[csr]
> + */
> +#define gcsr_xchg(val, mask, csr)                              \
> +({                                                             \
> +       register unsigned long __v = val;                       \
> +       __asm__ __volatile__ (GCSR_xchg(csr, %1, %0)            \
> +                               : "+r" (__v)                    \
> +                               : "r"  (mask)                   \
> +                               : "memory");                    \
> +       __v;                                                    \
> +})
> +
> +/* Guest CSRS read and write */
> +#define read_gcsr_crmd()               gcsr_read(LOONGARCH_CSR_CRMD)
> +#define write_gcsr_crmd(val)           gcsr_write(val, LOONGARCH_CSR_CRMD)
> +#define read_gcsr_prmd()               gcsr_read(LOONGARCH_CSR_PRMD)
> +#define write_gcsr_prmd(val)           gcsr_write(val, LOONGARCH_CSR_PRMD)
> +#define read_gcsr_euen()               gcsr_read(LOONGARCH_CSR_EUEN)
> +#define write_gcsr_euen(val)           gcsr_write(val, LOONGARCH_CSR_EUEN)
> +#define read_gcsr_misc()               gcsr_read(LOONGARCH_CSR_MISC)
> +#define write_gcsr_misc(val)           gcsr_write(val, LOONGARCH_CSR_MISC)
> +#define read_gcsr_ecfg()               gcsr_read(LOONGARCH_CSR_ECFG)
> +#define write_gcsr_ecfg(val)           gcsr_write(val, LOONGARCH_CSR_ECFG)
> +#define read_gcsr_estat()              gcsr_read(LOONGARCH_CSR_ESTAT)
> +#define write_gcsr_estat(val)          gcsr_write(val, LOONGARCH_CSR_ESTAT)
> +#define read_gcsr_era()                        gcsr_read(LOONGARCH_CSR_ERA)
> +#define write_gcsr_era(val)            gcsr_write(val, LOONGARCH_CSR_ERA)
> +#define read_gcsr_badv()               gcsr_read(LOONGARCH_CSR_BADV)
> +#define write_gcsr_badv(val)           gcsr_write(val, LOONGARCH_CSR_BADV)
> +#define read_gcsr_badi()               gcsr_read(LOONGARCH_CSR_BADI)
> +#define write_gcsr_badi(val)           gcsr_write(val, LOONGARCH_CSR_BADI)
> +#define read_gcsr_eentry()             gcsr_read(LOONGARCH_CSR_EENTRY)
> +#define write_gcsr_eentry(val)         gcsr_write(val, LOONGARCH_CSR_EENTRY)
> +
> +#define read_gcsr_tlbidx()             gcsr_read(LOONGARCH_CSR_TLBIDX)
> +#define write_gcsr_tlbidx(val)         gcsr_write(val, LOONGARCH_CSR_TLBIDX)
> +#define read_gcsr_tlbhi()              gcsr_read(LOONGARCH_CSR_TLBEHI)
> +#define write_gcsr_tlbhi(val)          gcsr_write(val, LOONGARCH_CSR_TLBEHI)
> +#define read_gcsr_tlblo0()             gcsr_read(LOONGARCH_CSR_TLBELO0)
> +#define write_gcsr_tlblo0(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO0)
> +#define read_gcsr_tlblo1()             gcsr_read(LOONGARCH_CSR_TLBELO1)
> +#define write_gcsr_tlblo1(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO1)
> +
> +#define read_gcsr_asid()               gcsr_read(LOONGARCH_CSR_ASID)
> +#define write_gcsr_asid(val)           gcsr_write(val, LOONGARCH_CSR_ASID)
> +#define read_gcsr_pgdl()               gcsr_read(LOONGARCH_CSR_PGDL)
> +#define write_gcsr_pgdl(val)           gcsr_write(val, LOONGARCH_CSR_PGDL)
> +#define read_gcsr_pgdh()               gcsr_read(LOONGARCH_CSR_PGDH)
> +#define write_gcsr_pgdh(val)           gcsr_write(val, LOONGARCH_CSR_PGDH)
> +#define write_gcsr_pgd(val)            gcsr_write(val, LOONGARCH_CSR_PGD)
> +#define read_gcsr_pgd()                        gcsr_read(LOONGARCH_CSR_PGD)
> +#define read_gcsr_pwctl0()             gcsr_read(LOONGARCH_CSR_PWCTL0)
> +#define write_gcsr_pwctl0(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL0)
> +#define read_gcsr_pwctl1()             gcsr_read(LOONGARCH_CSR_PWCTL1)
> +#define write_gcsr_pwctl1(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL1)
> +#define read_gcsr_stlbpgsize()         gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
> +#define write_gcsr_stlbpgsize(val)     gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
> +#define read_gcsr_rvacfg()             gcsr_read(LOONGARCH_CSR_RVACFG)
> +#define write_gcsr_rvacfg(val)         gcsr_write(val, LOONGARCH_CSR_RVACFG)
> +
> +#define read_gcsr_cpuid()              gcsr_read(LOONGARCH_CSR_CPUID)
> +#define write_gcsr_cpuid(val)          gcsr_write(val, LOONGARCH_CSR_CPUID)
> +#define read_gcsr_prcfg1()             gcsr_read(LOONGARCH_CSR_PRCFG1)
> +#define write_gcsr_prcfg1(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG1)
> +#define read_gcsr_prcfg2()             gcsr_read(LOONGARCH_CSR_PRCFG2)
> +#define write_gcsr_prcfg2(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG2)
> +#define read_gcsr_prcfg3()             gcsr_read(LOONGARCH_CSR_PRCFG3)
> +#define write_gcsr_prcfg3(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG3)
> +
> +#define read_gcsr_kscratch0()          gcsr_read(LOONGARCH_CSR_KS0)
> +#define write_gcsr_kscratch0(val)      gcsr_write(val, LOONGARCH_CSR_KS0)
> +#define read_gcsr_kscratch1()          gcsr_read(LOONGARCH_CSR_KS1)
> +#define write_gcsr_kscratch1(val)      gcsr_write(val, LOONGARCH_CSR_KS1)
> +#define read_gcsr_kscratch2()          gcsr_read(LOONGARCH_CSR_KS2)
> +#define write_gcsr_kscratch2(val)      gcsr_write(val, LOONGARCH_CSR_KS2)
> +#define read_gcsr_kscratch3()          gcsr_read(LOONGARCH_CSR_KS3)
> +#define write_gcsr_kscratch3(val)      gcsr_write(val, LOONGARCH_CSR_KS3)
> +#define read_gcsr_kscratch4()          gcsr_read(LOONGARCH_CSR_KS4)
> +#define write_gcsr_kscratch4(val)      gcsr_write(val, LOONGARCH_CSR_KS4)
> +#define read_gcsr_kscratch5()          gcsr_read(LOONGARCH_CSR_KS5)
> +#define write_gcsr_kscratch5(val)      gcsr_write(val, LOONGARCH_CSR_KS5)
> +#define read_gcsr_kscratch6()          gcsr_read(LOONGARCH_CSR_KS6)
> +#define write_gcsr_kscratch6(val)      gcsr_write(val, LOONGARCH_CSR_KS6)
> +#define read_gcsr_kscratch7()          gcsr_read(LOONGARCH_CSR_KS7)
> +#define write_gcsr_kscratch7(val)      gcsr_write(val, LOONGARCH_CSR_KS7)
> +
> +#define read_gcsr_timerid()            gcsr_read(LOONGARCH_CSR_TMID)
> +#define write_gcsr_timerid(val)                gcsr_write(val, LOONGARCH_CSR_TMID)
> +#define read_gcsr_timercfg()           gcsr_read(LOONGARCH_CSR_TCFG)
> +#define write_gcsr_timercfg(val)       gcsr_write(val, LOONGARCH_CSR_TCFG)
> +#define read_gcsr_timertick()          gcsr_read(LOONGARCH_CSR_TVAL)
> +#define write_gcsr_timertick(val)      gcsr_write(val, LOONGARCH_CSR_TVAL)
> +#define read_gcsr_timeroffset()                gcsr_read(LOONGARCH_CSR_CNTC)
> +#define write_gcsr_timeroffset(val)    gcsr_write(val, LOONGARCH_CSR_CNTC)
> +
> +#define read_gcsr_llbctl()             gcsr_read(LOONGARCH_CSR_LLBCTL)
> +#define write_gcsr_llbctl(val)         gcsr_write(val, LOONGARCH_CSR_LLBCTL)
> +
> +#define read_gcsr_tlbrentry()          gcsr_read(LOONGARCH_CSR_TLBRENTRY)
> +#define write_gcsr_tlbrentry(val)      gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
> +#define read_gcsr_tlbrbadv()           gcsr_read(LOONGARCH_CSR_TLBRBADV)
> +#define write_gcsr_tlbrbadv(val)       gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
> +#define read_gcsr_tlbrera()            gcsr_read(LOONGARCH_CSR_TLBRERA)
> +#define write_gcsr_tlbrera(val)                gcsr_write(val, LOONGARCH_CSR_TLBRERA)
> +#define read_gcsr_tlbrsave()           gcsr_read(LOONGARCH_CSR_TLBRSAVE)
> +#define write_gcsr_tlbrsave(val)       gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
> +#define read_gcsr_tlbrelo0()           gcsr_read(LOONGARCH_CSR_TLBRELO0)
> +#define write_gcsr_tlbrelo0(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
> +#define read_gcsr_tlbrelo1()           gcsr_read(LOONGARCH_CSR_TLBRELO1)
> +#define write_gcsr_tlbrelo1(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
> +#define read_gcsr_tlbrehi()            gcsr_read(LOONGARCH_CSR_TLBREHI)
> +#define write_gcsr_tlbrehi(val)                gcsr_write(val, LOONGARCH_CSR_TLBREHI)
> +#define read_gcsr_tlbrprmd()           gcsr_read(LOONGARCH_CSR_TLBRPRMD)
> +#define write_gcsr_tlbrprmd(val)       gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
> +
> +#define read_gcsr_directwin0()         gcsr_read(LOONGARCH_CSR_DMWIN0)
> +#define write_gcsr_directwin0(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN0)
> +#define read_gcsr_directwin1()         gcsr_read(LOONGARCH_CSR_DMWIN1)
> +#define write_gcsr_directwin1(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN1)
> +#define read_gcsr_directwin2()         gcsr_read(LOONGARCH_CSR_DMWIN2)
> +#define write_gcsr_directwin2(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN2)
> +#define read_gcsr_directwin3()         gcsr_read(LOONGARCH_CSR_DMWIN3)
> +#define write_gcsr_directwin3(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN3)
> +
> +/* Guest related CSRs */
> +#define read_csr_gtlbc()               csr_read64(LOONGARCH_CSR_GTLBC)
> +#define write_csr_gtlbc(val)           csr_write64(val, LOONGARCH_CSR_GTLBC)
> +#define read_csr_trgp()                        csr_read64(LOONGARCH_CSR_TRGP)
> +#define read_csr_gcfg()                        csr_read64(LOONGARCH_CSR_GCFG)
> +#define write_csr_gcfg(val)            csr_write64(val, LOONGARCH_CSR_GCFG)
> +#define read_csr_gstat()               csr_read64(LOONGARCH_CSR_GSTAT)
> +#define write_csr_gstat(val)           csr_write64(val, LOONGARCH_CSR_GSTAT)
> +#define read_csr_gintc()               csr_read64(LOONGARCH_CSR_GINTC)
> +#define write_csr_gintc(val)           csr_write64(val, LOONGARCH_CSR_GINTC)
> +#define read_csr_gcntc()               csr_read64(LOONGARCH_CSR_GCNTC)
> +#define write_csr_gcntc(val)           csr_write64(val, LOONGARCH_CSR_GCNTC)
> +
> +#define __BUILD_GCSR_OP(name)          __BUILD_CSR_COMMON(gcsr_##name)
> +
> +__BUILD_GCSR_OP(llbctl)
> +__BUILD_GCSR_OP(tlbidx)
> +__BUILD_CSR_OP(gcfg)
> +__BUILD_CSR_OP(gstat)
> +__BUILD_CSR_OP(gtlbc)
> +__BUILD_CSR_OP(gintc)
> +
> +#define set_gcsr_estat(val)    \
> +       gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
> +#define clear_gcsr_estat(val)  \
> +       gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
> +
> +#define kvm_read_hw_gcsr(id)           gcsr_read(id)
> +#define kvm_write_hw_gcsr(csr, id, val)        gcsr_write(val, id)
> +
> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
> +
> +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
> +
> +#define kvm_save_hw_gcsr(csr, gid)     (csr->csrs[gid] = gcsr_read(gid))
> +#define kvm_restore_hw_gcsr(csr, gid)  (gcsr_write(csr->csrs[gid], gid))
> +
> +static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
> +{
> +       return csr->csrs[gid];
> +}
> +
> +static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr,
> +                                             int gid, unsigned long val)
> +{
> +       csr->csrs[gid] = val;
> +}
> +
> +static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
> +                                           int gid, unsigned long val)
> +{
> +       csr->csrs[gid] |= val;
> +}
> +
> +static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
> +                                              int gid, unsigned long mask,
> +                                              unsigned long val)
> +{
> +       unsigned long _mask = mask;
> +
> +       csr->csrs[gid] &= ~_mask;
> +       csr->csrs[gid] |= val & _mask;
> +}
> +#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
> new file mode 100644
> index 000000000000..74deaf55d22c
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
> @@ -0,0 +1,97 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
> +#define __ASM_LOONGARCH_KVM_VCPU_H__
> +
> +#include <linux/kvm_host.h>
> +#include <asm/loongarch.h>
> +
> +/* Controlled by 0x5 guest exst */
> +#define CPU_SIP0                       (_ULCAST_(1))
> +#define CPU_SIP1                       (_ULCAST_(1) << 1)
> +#define CPU_PMU                                (_ULCAST_(1) << 10)
> +#define CPU_TIMER                      (_ULCAST_(1) << 11)
> +#define CPU_IPI                                (_ULCAST_(1) << 12)
> +
> +/* Controlled by 0x52 guest exception VIP
> + * aligned to exst bit 5~12
> + */
> +#define CPU_IP0                                (_ULCAST_(1))
> +#define CPU_IP1                                (_ULCAST_(1) << 1)
> +#define CPU_IP2                                (_ULCAST_(1) << 2)
> +#define CPU_IP3                                (_ULCAST_(1) << 3)
> +#define CPU_IP4                                (_ULCAST_(1) << 4)
> +#define CPU_IP5                                (_ULCAST_(1) << 5)
> +#define CPU_IP6                                (_ULCAST_(1) << 6)
> +#define CPU_IP7                                (_ULCAST_(1) << 7)
> +
> +#define MNSEC_PER_SEC                  (NSEC_PER_SEC >> 20)
> +
> +/* KVM_IRQ_LINE irq field index values */
> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT    24
> +#define KVM_LOONGSON_IRQ_TYPE_MASK     0xff
> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT    16
> +#define KVM_LOONGSON_IRQ_VCPU_MASK     0xff
> +#define KVM_LOONGSON_IRQ_NUM_SHIFT     0
> +#define KVM_LOONGSON_IRQ_NUM_MASK      0xffff
> +
> +/* Irq_type field */
> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP   0
> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO   1
> +#define KVM_LOONGSON_IRQ_TYPE_HT       2
> +#define KVM_LOONGSON_IRQ_TYPE_MSI      3
> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC   4
> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE    5
> +
> +/* Out-of-kernel GIC cpu interrupt injection irq_number field */
> +#define KVM_LOONGSON_IRQ_CPU_IRQ       0
> +#define KVM_LOONGSON_IRQ_CPU_FIQ       1
> +#define KVM_LOONGSON_CPU_IP_NUM                8
> +
> +typedef union loongarch_instruction  larch_inst;
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
> +
> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
> +
> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
> +void kvm_save_fpu(struct loongarch_fpu *fpu);
> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
> +
> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
> +enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
> +void kvm_save_timer(struct kvm_vcpu *vcpu);
> +
> +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
> +                       struct kvm_loongarch_interrupt *irq);
> +/*
> + * Loongarch KVM guest interrupt handling
> + */
> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +       set_bit(irq, &vcpu->arch.irq_pending);
> +       clear_bit(irq, &vcpu->arch.irq_clear);
> +}
> +
> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +       clear_bit(irq, &vcpu->arch.irq_pending);
> +       set_bit(irq, &vcpu->arch.irq_clear);
> +}
> +
> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
> diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
> index b3323ab5b78d..35ae5c2be8b6 100644
> --- a/arch/loongarch/include/asm/loongarch.h
> +++ b/arch/loongarch/include/asm/loongarch.h
> @@ -11,6 +11,7 @@
>
>  #ifndef __ASSEMBLY__
>  #include <larchintrin.h>
> +#include <asm/insn-def.h>
>
>  /*
>   * parse_r var, r - Helper assembler macro for parsing register names.
> @@ -309,6 +310,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>  #define LOONGARCH_CSR_ECFG             0x4     /* Exception config */
>  #define  CSR_ECFG_VS_SHIFT             16
>  #define  CSR_ECFG_VS_WIDTH             3
> +#define  CSR_ECFG_VS_SHIFT_END         (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
>  #define  CSR_ECFG_VS                   (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
>  #define  CSR_ECFG_IM_SHIFT             0
>  #define  CSR_ECFG_IM_WIDTH             14
> @@ -397,13 +399,14 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>  #define  CSR_TLBLO1_V                  (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
>
>  #define LOONGARCH_CSR_GTLBC            0x15    /* Guest TLB control */
> -#define  CSR_GTLBC_RID_SHIFT           16
> -#define  CSR_GTLBC_RID_WIDTH           8
> -#define  CSR_GTLBC_RID                 (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
> +#define  CSR_GTLBC_TGID_SHIFT          16
> +#define  CSR_GTLBC_TGID_WIDTH          8
> +#define  CSR_GTLBC_TGID_SHIFT_END      (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
> +#define  CSR_GTLBC_TGID                        (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
>  #define  CSR_GTLBC_TOTI_SHIFT          13
>  #define  CSR_GTLBC_TOTI                        (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
> -#define  CSR_GTLBC_USERID_SHIFT                12
> -#define  CSR_GTLBC_USERID              (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
> +#define  CSR_GTLBC_USETGID_SHIFT       12
> +#define  CSR_GTLBC_USETGID             (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
>  #define  CSR_GTLBC_GMTLBSZ_SHIFT       0
>  #define  CSR_GTLBC_GMTLBSZ_WIDTH       6
>  #define  CSR_GTLBC_GMTLBSZ             (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
> @@ -555,6 +558,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>  #define LOONGARCH_CSR_GSTAT            0x50    /* Guest status */
>  #define  CSR_GSTAT_GID_SHIFT           16
>  #define  CSR_GSTAT_GID_WIDTH           8
> +#define  CSR_GSTAT_GID_SHIFT_END       (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
>  #define  CSR_GSTAT_GID                 (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
>  #define  CSR_GSTAT_GIDBIT_SHIFT                4
>  #define  CSR_GSTAT_GIDBIT_WIDTH                6
> @@ -605,6 +609,12 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>  #define  CSR_GCFG_MATC_GUEST           (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
>  #define  CSR_GCFG_MATC_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
>  #define  CSR_GCFG_MATC_NEST            (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
> +#define  CSR_GCFG_MATP_NEST_SHIFT      2
> +#define  CSR_GCFG_MATP_NEST            (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
> +#define  CSR_GCFG_MATP_ROOT_SHIFT      1
> +#define  CSR_GCFG_MATP_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
> +#define  CSR_GCFG_MATP_GUEST_SHIFT     0
> +#define  CSR_GCFG_MATP_GUEST           (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
>
>  #define LOONGARCH_CSR_GINTC            0x52    /* Guest interrupt control */
>  #define  CSR_GINTC_HC_SHIFT            16
> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
> new file mode 100644
> index 000000000000..17b28d94d569
> --- /dev/null
> +++ b/arch/loongarch/kvm/trace.h
> @@ -0,0 +1,168 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_KVM_H
> +
> +#include <linux/tracepoint.h>
> +#include <asm/kvm_csr.h>
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM   kvm
> +
> +/*
> + * Tracepoints for VM enters
> + */
> +DECLARE_EVENT_CLASS(kvm_transition,
> +       TP_PROTO(struct kvm_vcpu *vcpu),
> +       TP_ARGS(vcpu),
> +       TP_STRUCT__entry(
> +               __field(unsigned long, pc)
> +       ),
> +
> +       TP_fast_assign(
> +               __entry->pc = vcpu->arch.pc;
> +       ),
> +
> +       TP_printk("PC: 0x%08lx",
> +                 __entry->pc)
> +);
> +
> +DEFINE_EVENT(kvm_transition, kvm_enter,
> +            TP_PROTO(struct kvm_vcpu *vcpu),
> +            TP_ARGS(vcpu));
> +
> +DEFINE_EVENT(kvm_transition, kvm_reenter,
> +            TP_PROTO(struct kvm_vcpu *vcpu),
> +            TP_ARGS(vcpu));
> +
> +DEFINE_EVENT(kvm_transition, kvm_out,
> +            TP_PROTO(struct kvm_vcpu *vcpu),
> +            TP_ARGS(vcpu));
> +
> +/* Further exit reasons */
> +#define KVM_TRACE_EXIT_IDLE            64
> +#define KVM_TRACE_EXIT_CACHE           65
> +#define KVM_TRACE_EXIT_SIGNAL          66
> +
> +/* Tracepoints for VM exits */
> +#define kvm_trace_symbol_exit_types                    \
> +       { KVM_TRACE_EXIT_IDLE,          "IDLE" },       \
> +       { KVM_TRACE_EXIT_CACHE,         "CACHE" },      \
> +       { KVM_TRACE_EXIT_SIGNAL,        "Signal" }
> +
> +TRACE_EVENT(kvm_exit_gspr,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
> +           TP_ARGS(vcpu, inst_word),
> +           TP_STRUCT__entry(
> +                       __field(unsigned int, inst_word)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->inst_word = inst_word;
> +           ),
> +
> +           TP_printk("inst word: 0x%08x",
> +                     __entry->inst_word)
> +);
> +
> +
> +DECLARE_EVENT_CLASS(kvm_exit,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +           TP_ARGS(vcpu, reason),
> +           TP_STRUCT__entry(
> +                       __field(unsigned long, pc)
> +                       __field(unsigned int, reason)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->pc = vcpu->arch.pc;
> +                       __entry->reason = reason;
> +           ),
> +
> +           TP_printk("[%s]PC: 0x%08lx",
> +                     __print_symbolic(__entry->reason,
> +                                      kvm_trace_symbol_exit_types),
> +                     __entry->pc)
> +);
> +
> +DEFINE_EVENT(kvm_exit, kvm_exit_idle,
> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +            TP_ARGS(vcpu, reason));
> +
> +DEFINE_EVENT(kvm_exit, kvm_exit_cache,
> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +            TP_ARGS(vcpu, reason));
> +
> +DEFINE_EVENT(kvm_exit, kvm_exit,
> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +            TP_ARGS(vcpu, reason));
> +
> +#define KVM_TRACE_AUX_RESTORE          0
> +#define KVM_TRACE_AUX_SAVE             1
> +#define KVM_TRACE_AUX_ENABLE           2
> +#define KVM_TRACE_AUX_DISABLE          3
> +#define KVM_TRACE_AUX_DISCARD          4
> +
> +#define KVM_TRACE_AUX_FPU              1
> +
> +#define kvm_trace_symbol_aux_op                                \
> +       { KVM_TRACE_AUX_RESTORE,        "restore" },    \
> +       { KVM_TRACE_AUX_SAVE,           "save" },       \
> +       { KVM_TRACE_AUX_ENABLE,         "enable" },     \
> +       { KVM_TRACE_AUX_DISABLE,        "disable" },    \
> +       { KVM_TRACE_AUX_DISCARD,        "discard" }
> +
> +#define kvm_trace_symbol_aux_state                     \
> +       { KVM_TRACE_AUX_FPU,     "FPU" }
> +
> +TRACE_EVENT(kvm_aux,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
> +                    unsigned int state),
> +           TP_ARGS(vcpu, op, state),
> +           TP_STRUCT__entry(
> +                       __field(unsigned long, pc)
> +                       __field(u8, op)
> +                       __field(u8, state)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->pc = vcpu->arch.pc;
> +                       __entry->op = op;
> +                       __entry->state = state;
> +           ),
> +
> +           TP_printk("%s %s PC: 0x%08lx",
> +                     __print_symbolic(__entry->op,
> +                                      kvm_trace_symbol_aux_op),
> +                     __print_symbolic(__entry->state,
> +                                      kvm_trace_symbol_aux_state),
> +                     __entry->pc)
> +);
> +
> +TRACE_EVENT(kvm_vpid_change,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
> +           TP_ARGS(vcpu, vpid),
> +           TP_STRUCT__entry(
> +                       __field(unsigned long, vpid)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->vpid = vpid;
> +           ),
> +
> +           TP_printk("vpid: 0x%08lx",
> +                     __entry->vpid)
> +);
> +
> +#endif /* _TRACE_LOONGARCH64_KVM_H */
> +
> +#undef TRACE_INCLUDE_PATH
> +#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_FILE trace
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
> --
> 2.39.1
>
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM
  2023-06-15  9:27   ` Huacai Chen
@ 2023-06-16  2:50     ` zhaotianrui
  0 siblings, 0 replies; 17+ messages in thread
From: zhaotianrui @ 2023-06-16  2:50 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao, tangyouling


On 2023/6/15 at 5:27 PM, Huacai Chen wrote:
> Hi, Tianrui,
>
> On Fri, Jun 9, 2023 at 5:06 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>> Add maintainers for LoongArch KVM.
>>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   MAINTAINERS | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 27ef11624748..c2fbfd6ad4e5 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -11357,6 +11357,18 @@ F:     include/kvm/arm_*
>>   F:     tools/testing/selftests/kvm/*/aarch64/
>>   F:     tools/testing/selftests/kvm/aarch64/
>>
>> +KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
>> +M:     Tianrui Zhao <zhaotianrui@loongson.cn>
>> +M:     Bibo Mao <maobibo@loongson.cn>
>> +M:     Huacai Chen <chenhuacai@kernel.org>
>> +L:     kvm@vger.kernel.org
>> +L:     loongarch@lists.linux.dev
>> +S:     Maintained
>> +T:     git https://github.com/loongson/linux-loongarch-kvm
> I'm not sure, but I think this should be a tree which can be used to
> send pull requests to the upstream maintainers. If there is no other
> choice, we should use
> git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
>
> Huacai

Thanks, I will use this kvm tree link instead.

Tianrui Zhao

>> +F:     arch/loongarch/include/asm/kvm*
>> +F:     arch/loongarch/include/uapi/asm/kvm*
>> +F:     arch/loongarch/kvm/
>> +
>>   KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
>>   M:     Huacai Chen <chenhuacai@kernel.org>
>>   M:     Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
>> --
>> 2.39.1
>>
>>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files
  2023-06-15  9:51   ` Huacai Chen
@ 2023-06-17  3:05     ` zhaotianrui
  2023-06-17  3:54       ` bibo, mao
  0 siblings, 1 reply; 17+ messages in thread
From: zhaotianrui @ 2023-06-17  3:05 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao, tangyouling


On 2023/6/15 at 5:51 PM, Huacai Chen wrote:
> Hi, Tianrui,
>
> I suggest using a similar method to the vector support:
> https://lore.kernel.org/loongarch/20230613151918.2039498-1-chenhuacai@loongson.cn/T/#u
>
> Introduce CC_HAS_LVZ_EXTENSION to detect whether the toolchain
> supports the LVZ-specific instructions, and make the KVM config option
> depend on CC_HAS_LVZ_EXTENSION. In this way the code will be more
> maintainable. Of course this makes old toolchains unable to build a
> KVM-enabled kernel, but I think that is not a big problem.
>
> Huacai
Hi, Huacai,

Should the code look like this if we add the toolchain condition?

kvm/Kconfig:

+config CC_HAS_LVZ_EXTENSION
+	def_bool $(cc-option,-mlvz)
 config KVM
 	tristate "Kernel-based Virtual Machine (KVM) support"
 	depends on HAVE_KVM
+	depends on CC_HAS_LVZ_EXTENSION
 	select MMU_NOTIFIER
 	...
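
As an aside, $(cc-option,-mlvz) amounts to asking whether the compiler
accepts the flag, i.e. roughly the following check (using the cross-compiler
from the cover letter; -Werror mirrors what Kbuild does):

$ loongarch64-unknown-linux-gnu-gcc -Werror -mlvz -c -x c /dev/null -o /dev/null

If the toolchain lacks LVZ support, the KVM option would then simply be
hidden in menuconfig rather than failing at build time.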
Thanks
Tianrui Zhao
>
> On Fri, Jun 9, 2023 at 5:05 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>> Add LoongArch vcpu related header files, including vcpu csr
>> information, irq number defines, and some vcpu interfaces.
>>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/include/asm/insn-def.h  |  55 ++++++
>>   arch/loongarch/include/asm/kvm_csr.h   | 231 +++++++++++++++++++++++++
>>   arch/loongarch/include/asm/kvm_vcpu.h  |  97 +++++++++++
>>   arch/loongarch/include/asm/loongarch.h |  20 ++-
>>   arch/loongarch/kvm/trace.h             | 168 ++++++++++++++++++
>>   5 files changed, 566 insertions(+), 5 deletions(-)
>>   create mode 100644 arch/loongarch/include/asm/insn-def.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>>   create mode 100644 arch/loongarch/kvm/trace.h
>>
>> diff --git a/arch/loongarch/include/asm/insn-def.h b/arch/loongarch/include/asm/insn-def.h
>> new file mode 100644
>> index 000000000000..e285ee108fb0
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/insn-def.h
>> @@ -0,0 +1,55 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +
>> +#ifndef __ASM_INSN_DEF_H
>> +#define __ASM_INSN_DEF_H
>> +
>> +#include <linux/stringify.h>
>> +#include <asm/gpr-num.h>
>> +#include <asm/asm.h>
>> +
>> +#define INSN_STR(x)            __stringify(x)
>> +#define CSR_RD_SHIFT           0
>> +#define CSR_RJ_SHIFT           5
>> +#define CSR_SIMM14_SHIFT       10
>> +#define CSR_OPCODE_SHIFT       24
>> +
>> +#define DEFINE_INSN_CSR                                                        \
>> +       __DEFINE_ASM_GPR_NUMS                                           \
>> +"      .macro insn_csr, opcode, rj, rd, simm14\n"                      \
>> +"      .4byte  ((\\opcode << " INSN_STR(CSR_OPCODE_SHIFT) ") |"        \
>> +"               (.L__gpr_num_\\rj << " INSN_STR(CSR_RJ_SHIFT) ") |"    \
>> +"               (.L__gpr_num_\\rd << " INSN_STR(CSR_RD_SHIFT) ") |"    \
>> +"               (\\simm14 << " INSN_STR(CSR_SIMM14_SHIFT) "))\n"       \
>> +"      .endm\n"
>> +
>> +#define UNDEFINE_INSN_CSR                                              \
>> +"      .purgem insn_csr\n"
>> +
>> +#define __INSN_CSR(opcode, rj, rd, simm14)                             \
>> +       DEFINE_INSN_CSR                                                 \
>> +       "insn_csr " opcode ", " rj ", " rd ", " simm14 "\n"             \
>> +       UNDEFINE_INSN_CSR
>> +
>> +
>> +#define INSN_CSR(opcode, rj, rd, simm14)                               \
>> +       __INSN_CSR(LARCH_##opcode, LARCH_##rj, LARCH_##rd,              \
>> +                  LARCH_##simm14)
>> +
>> +#define __ASM_STR(x)           #x
>> +#define LARCH_OPCODE(v)                __ASM_STR(v)
>> +#define LARCH_SIMM14(v)                __ASM_STR(v)
>> +#define __LARCH_REG(v)         __ASM_STR(v)
>> +#define LARCH___RD(v)          __LARCH_REG(v)
>> +#define LARCH___RJ(v)          __LARCH_REG(v)
>> +#define LARCH_OPCODE_GCSR      LARCH_OPCODE(5)
>> +
>> +#define GCSR_read(csr, rd)                                             \
>> +       INSN_CSR(OPCODE_GCSR, __RJ(zero), __RD(rd), SIMM14(csr))
>> +
>> +#define GCSR_write(csr, rd)                                            \
>> +       INSN_CSR(OPCODE_GCSR, __RJ($r1), __RD(rd), SIMM14(csr))
>> +
>> +#define GCSR_xchg(csr, rj, rd)                                         \
>> +       INSN_CSR(OPCODE_GCSR, __RJ(rj), __RD(rd), SIMM14(csr))
>> +
>> +#endif /* __ASM_INSN_DEF_H */
>> diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
>> new file mode 100644
>> index 000000000000..10dba5bc6df1
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_csr.h
>> @@ -0,0 +1,231 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __ASM_LOONGARCH_KVM_CSR_H__
>> +#define __ASM_LOONGARCH_KVM_CSR_H__
>> +#include <asm/loongarch.h>
>> +#include <asm/kvm_vcpu.h>
>> +#include <linux/uaccess.h>
>> +#include <linux/kvm_host.h>
>> +
>> +/*
>> + * Instructions will be available in binutils later
>> + * read val from guest csr register %[csr]
>> + * gcsrrd %[val], %[csr]
>> + */
>> +#define gcsr_read(csr)                                         \
>> +({                                                             \
>> +       register unsigned long __v;                             \
>> +       __asm__ __volatile__ (GCSR_read(csr, %0)                \
>> +                               : "=r" (__v) :                  \
>> +                               : "memory");                    \
>> +       __v;                                                    \
>> +})
>> +
>> +/*
>> + * Instructions will be available in binutils later
>> + * write val to guest csr register %[csr]
>> + * gcsrwr %[val], %[csr]
>> + */
>> +#define gcsr_write(val, csr)                                   \
>> +({                                                             \
>> +       register unsigned long __v = val;                       \
>> +       __asm__ __volatile__ (GCSR_write(csr, %0)               \
>> +                               : "+r" (__v) :                  \
>> +                               : "memory");                    \
>> +})
>> +
>> +/*
>> + * Instructions will be available in binutils later
>> + * replace masked bits of guest csr register %[csr] with val
>> + * gcsrxchg %[val], %[mask], %[csr]
>> + */
>> +#define gcsr_xchg(val, mask, csr)                              \
>> +({                                                             \
>> +       register unsigned long __v = val;                       \
>> +       __asm__ __volatile__ (GCSR_xchg(csr, %1, %0)            \
>> +                               : "+r" (__v)                    \
>> +                               : "r"  (mask)                   \
>> +                               : "memory");                    \
>> +       __v;                                                    \
>> +})
>> +
>> +/* Guest CSRS read and write */
>> +#define read_gcsr_crmd()               gcsr_read(LOONGARCH_CSR_CRMD)
>> +#define write_gcsr_crmd(val)           gcsr_write(val, LOONGARCH_CSR_CRMD)
>> +#define read_gcsr_prmd()               gcsr_read(LOONGARCH_CSR_PRMD)
>> +#define write_gcsr_prmd(val)           gcsr_write(val, LOONGARCH_CSR_PRMD)
>> +#define read_gcsr_euen()               gcsr_read(LOONGARCH_CSR_EUEN)
>> +#define write_gcsr_euen(val)           gcsr_write(val, LOONGARCH_CSR_EUEN)
>> +#define read_gcsr_misc()               gcsr_read(LOONGARCH_CSR_MISC)
>> +#define write_gcsr_misc(val)           gcsr_write(val, LOONGARCH_CSR_MISC)
>> +#define read_gcsr_ecfg()               gcsr_read(LOONGARCH_CSR_ECFG)
>> +#define write_gcsr_ecfg(val)           gcsr_write(val, LOONGARCH_CSR_ECFG)
>> +#define read_gcsr_estat()              gcsr_read(LOONGARCH_CSR_ESTAT)
>> +#define write_gcsr_estat(val)          gcsr_write(val, LOONGARCH_CSR_ESTAT)
>> +#define read_gcsr_era()                        gcsr_read(LOONGARCH_CSR_ERA)
>> +#define write_gcsr_era(val)            gcsr_write(val, LOONGARCH_CSR_ERA)
>> +#define read_gcsr_badv()               gcsr_read(LOONGARCH_CSR_BADV)
>> +#define write_gcsr_badv(val)           gcsr_write(val, LOONGARCH_CSR_BADV)
>> +#define read_gcsr_badi()               gcsr_read(LOONGARCH_CSR_BADI)
>> +#define write_gcsr_badi(val)           gcsr_write(val, LOONGARCH_CSR_BADI)
>> +#define read_gcsr_eentry()             gcsr_read(LOONGARCH_CSR_EENTRY)
>> +#define write_gcsr_eentry(val)         gcsr_write(val, LOONGARCH_CSR_EENTRY)
>> +
>> +#define read_gcsr_tlbidx()             gcsr_read(LOONGARCH_CSR_TLBIDX)
>> +#define write_gcsr_tlbidx(val)         gcsr_write(val, LOONGARCH_CSR_TLBIDX)
>> +#define read_gcsr_tlbhi()              gcsr_read(LOONGARCH_CSR_TLBEHI)
>> +#define write_gcsr_tlbhi(val)          gcsr_write(val, LOONGARCH_CSR_TLBEHI)
>> +#define read_gcsr_tlblo0()             gcsr_read(LOONGARCH_CSR_TLBELO0)
>> +#define write_gcsr_tlblo0(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO0)
>> +#define read_gcsr_tlblo1()             gcsr_read(LOONGARCH_CSR_TLBELO1)
>> +#define write_gcsr_tlblo1(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO1)
>> +
>> +#define read_gcsr_asid()               gcsr_read(LOONGARCH_CSR_ASID)
>> +#define write_gcsr_asid(val)           gcsr_write(val, LOONGARCH_CSR_ASID)
>> +#define read_gcsr_pgdl()               gcsr_read(LOONGARCH_CSR_PGDL)
>> +#define write_gcsr_pgdl(val)           gcsr_write(val, LOONGARCH_CSR_PGDL)
>> +#define read_gcsr_pgdh()               gcsr_read(LOONGARCH_CSR_PGDH)
>> +#define write_gcsr_pgdh(val)           gcsr_write(val, LOONGARCH_CSR_PGDH)
>> +#define write_gcsr_pgd(val)            gcsr_write(val, LOONGARCH_CSR_PGD)
>> +#define read_gcsr_pgd()                        gcsr_read(LOONGARCH_CSR_PGD)
>> +#define read_gcsr_pwctl0()             gcsr_read(LOONGARCH_CSR_PWCTL0)
>> +#define write_gcsr_pwctl0(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL0)
>> +#define read_gcsr_pwctl1()             gcsr_read(LOONGARCH_CSR_PWCTL1)
>> +#define write_gcsr_pwctl1(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL1)
>> +#define read_gcsr_stlbpgsize()         gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
>> +#define write_gcsr_stlbpgsize(val)     gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
>> +#define read_gcsr_rvacfg()             gcsr_read(LOONGARCH_CSR_RVACFG)
>> +#define write_gcsr_rvacfg(val)         gcsr_write(val, LOONGARCH_CSR_RVACFG)
>> +
>> +#define read_gcsr_cpuid()              gcsr_read(LOONGARCH_CSR_CPUID)
>> +#define write_gcsr_cpuid(val)          gcsr_write(val, LOONGARCH_CSR_CPUID)
>> +#define read_gcsr_prcfg1()             gcsr_read(LOONGARCH_CSR_PRCFG1)
>> +#define write_gcsr_prcfg1(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG1)
>> +#define read_gcsr_prcfg2()             gcsr_read(LOONGARCH_CSR_PRCFG2)
>> +#define write_gcsr_prcfg2(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG2)
>> +#define read_gcsr_prcfg3()             gcsr_read(LOONGARCH_CSR_PRCFG3)
>> +#define write_gcsr_prcfg3(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG3)
>> +
>> +#define read_gcsr_kscratch0()          gcsr_read(LOONGARCH_CSR_KS0)
>> +#define write_gcsr_kscratch0(val)      gcsr_write(val, LOONGARCH_CSR_KS0)
>> +#define read_gcsr_kscratch1()          gcsr_read(LOONGARCH_CSR_KS1)
>> +#define write_gcsr_kscratch1(val)      gcsr_write(val, LOONGARCH_CSR_KS1)
>> +#define read_gcsr_kscratch2()          gcsr_read(LOONGARCH_CSR_KS2)
>> +#define write_gcsr_kscratch2(val)      gcsr_write(val, LOONGARCH_CSR_KS2)
>> +#define read_gcsr_kscratch3()          gcsr_read(LOONGARCH_CSR_KS3)
>> +#define write_gcsr_kscratch3(val)      gcsr_write(val, LOONGARCH_CSR_KS3)
>> +#define read_gcsr_kscratch4()          gcsr_read(LOONGARCH_CSR_KS4)
>> +#define write_gcsr_kscratch4(val)      gcsr_write(val, LOONGARCH_CSR_KS4)
>> +#define read_gcsr_kscratch5()          gcsr_read(LOONGARCH_CSR_KS5)
>> +#define write_gcsr_kscratch5(val)      gcsr_write(val, LOONGARCH_CSR_KS5)
>> +#define read_gcsr_kscratch6()          gcsr_read(LOONGARCH_CSR_KS6)
>> +#define write_gcsr_kscratch6(val)      gcsr_write(val, LOONGARCH_CSR_KS6)
>> +#define read_gcsr_kscratch7()          gcsr_read(LOONGARCH_CSR_KS7)
>> +#define write_gcsr_kscratch7(val)      gcsr_write(val, LOONGARCH_CSR_KS7)
>> +
>> +#define read_gcsr_timerid()            gcsr_read(LOONGARCH_CSR_TMID)
>> +#define write_gcsr_timerid(val)                gcsr_write(val, LOONGARCH_CSR_TMID)
>> +#define read_gcsr_timercfg()           gcsr_read(LOONGARCH_CSR_TCFG)
>> +#define write_gcsr_timercfg(val)       gcsr_write(val, LOONGARCH_CSR_TCFG)
>> +#define read_gcsr_timertick()          gcsr_read(LOONGARCH_CSR_TVAL)
>> +#define write_gcsr_timertick(val)      gcsr_write(val, LOONGARCH_CSR_TVAL)
>> +#define read_gcsr_timeroffset()                gcsr_read(LOONGARCH_CSR_CNTC)
>> +#define write_gcsr_timeroffset(val)    gcsr_write(val, LOONGARCH_CSR_CNTC)
>> +
>> +#define read_gcsr_llbctl()             gcsr_read(LOONGARCH_CSR_LLBCTL)
>> +#define write_gcsr_llbctl(val)         gcsr_write(val, LOONGARCH_CSR_LLBCTL)
>> +
>> +#define read_gcsr_tlbrentry()          gcsr_read(LOONGARCH_CSR_TLBRENTRY)
>> +#define write_gcsr_tlbrentry(val)      gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
>> +#define read_gcsr_tlbrbadv()           gcsr_read(LOONGARCH_CSR_TLBRBADV)
>> +#define write_gcsr_tlbrbadv(val)       gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
>> +#define read_gcsr_tlbrera()            gcsr_read(LOONGARCH_CSR_TLBRERA)
>> +#define write_gcsr_tlbrera(val)                gcsr_write(val, LOONGARCH_CSR_TLBRERA)
>> +#define read_gcsr_tlbrsave()           gcsr_read(LOONGARCH_CSR_TLBRSAVE)
>> +#define write_gcsr_tlbrsave(val)       gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
>> +#define read_gcsr_tlbrelo0()           gcsr_read(LOONGARCH_CSR_TLBRELO0)
>> +#define write_gcsr_tlbrelo0(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
>> +#define read_gcsr_tlbrelo1()           gcsr_read(LOONGARCH_CSR_TLBRELO1)
>> +#define write_gcsr_tlbrelo1(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
>> +#define read_gcsr_tlbrehi()            gcsr_read(LOONGARCH_CSR_TLBREHI)
>> +#define write_gcsr_tlbrehi(val)                gcsr_write(val, LOONGARCH_CSR_TLBREHI)
>> +#define read_gcsr_tlbrprmd()           gcsr_read(LOONGARCH_CSR_TLBRPRMD)
>> +#define write_gcsr_tlbrprmd(val)       gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
>> +
>> +#define read_gcsr_directwin0()         gcsr_read(LOONGARCH_CSR_DMWIN0)
>> +#define write_gcsr_directwin0(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN0)
>> +#define read_gcsr_directwin1()         gcsr_read(LOONGARCH_CSR_DMWIN1)
>> +#define write_gcsr_directwin1(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN1)
>> +#define read_gcsr_directwin2()         gcsr_read(LOONGARCH_CSR_DMWIN2)
>> +#define write_gcsr_directwin2(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN2)
>> +#define read_gcsr_directwin3()         gcsr_read(LOONGARCH_CSR_DMWIN3)
>> +#define write_gcsr_directwin3(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN3)
>> +
>> +/* Guest related CSRs */
>> +#define read_csr_gtlbc()               csr_read64(LOONGARCH_CSR_GTLBC)
>> +#define write_csr_gtlbc(val)           csr_write64(val, LOONGARCH_CSR_GTLBC)
>> +#define read_csr_trgp()                        csr_read64(LOONGARCH_CSR_TRGP)
>> +#define read_csr_gcfg()                        csr_read64(LOONGARCH_CSR_GCFG)
>> +#define write_csr_gcfg(val)            csr_write64(val, LOONGARCH_CSR_GCFG)
>> +#define read_csr_gstat()               csr_read64(LOONGARCH_CSR_GSTAT)
>> +#define write_csr_gstat(val)           csr_write64(val, LOONGARCH_CSR_GSTAT)
>> +#define read_csr_gintc()               csr_read64(LOONGARCH_CSR_GINTC)
>> +#define write_csr_gintc(val)           csr_write64(val, LOONGARCH_CSR_GINTC)
>> +#define read_csr_gcntc()               csr_read64(LOONGARCH_CSR_GCNTC)
>> +#define write_csr_gcntc(val)           csr_write64(val, LOONGARCH_CSR_GCNTC)
>> +
>> +#define __BUILD_GCSR_OP(name)          __BUILD_CSR_COMMON(gcsr_##name)
>> +
>> +__BUILD_GCSR_OP(llbctl)
>> +__BUILD_GCSR_OP(tlbidx)
>> +__BUILD_CSR_OP(gcfg)
>> +__BUILD_CSR_OP(gstat)
>> +__BUILD_CSR_OP(gtlbc)
>> +__BUILD_CSR_OP(gintc)
>> +
>> +#define set_gcsr_estat(val)    \
>> +       gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
>> +#define clear_gcsr_estat(val)  \
>> +       gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
>> +
>> +#define kvm_read_hw_gcsr(id)           gcsr_read(id)
>> +#define kvm_write_hw_gcsr(csr, id, val)        gcsr_write(val, id)
>> +
>> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
>> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
>> +
>> +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +
>> +#define kvm_save_hw_gcsr(csr, gid)     (csr->csrs[gid] = gcsr_read(gid))
>> +#define kvm_restore_hw_gcsr(csr, gid)  (gcsr_write(csr->csrs[gid], gid))
>> +
>> +static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
>> +{
>> +       return csr->csrs[gid];
>> +}
>> +
>> +static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr,
>> +                                             int gid, unsigned long val)
>> +{
>> +       csr->csrs[gid] = val;
>> +}
>> +
>> +static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
>> +                                           int gid, unsigned long val)
>> +{
>> +       csr->csrs[gid] |= val;
>> +}
>> +
>> +static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
>> +                                              int gid, unsigned long mask,
>> +                                              unsigned long val)
>> +{
>> +       unsigned long _mask = mask;
>> +
>> +       csr->csrs[gid] &= ~_mask;
>> +       csr->csrs[gid] |= val & _mask;
>> +}
>> +#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
>> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
>> new file mode 100644
>> index 000000000000..74deaf55d22c
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
>> @@ -0,0 +1,97 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
>> +#define __ASM_LOONGARCH_KVM_VCPU_H__
>> +
>> +#include <linux/kvm_host.h>
>> +#include <asm/loongarch.h>
>> +
>> +/* Controlled by 0x5 guest exst */
>> +#define CPU_SIP0                       (_ULCAST_(1))
>> +#define CPU_SIP1                       (_ULCAST_(1) << 1)
>> +#define CPU_PMU                                (_ULCAST_(1) << 10)
>> +#define CPU_TIMER                      (_ULCAST_(1) << 11)
>> +#define CPU_IPI                                (_ULCAST_(1) << 12)
>> +
>> +/* Controlled by 0x52 guest exception VIP
>> + * aligned to exst bit 5~12
>> + */
>> +#define CPU_IP0                                (_ULCAST_(1))
>> +#define CPU_IP1                                (_ULCAST_(1) << 1)
>> +#define CPU_IP2                                (_ULCAST_(1) << 2)
>> +#define CPU_IP3                                (_ULCAST_(1) << 3)
>> +#define CPU_IP4                                (_ULCAST_(1) << 4)
>> +#define CPU_IP5                                (_ULCAST_(1) << 5)
>> +#define CPU_IP6                                (_ULCAST_(1) << 6)
>> +#define CPU_IP7                                (_ULCAST_(1) << 7)
>> +
>> +#define MNSEC_PER_SEC                  (NSEC_PER_SEC >> 20)
>> +
>> +/* KVM_IRQ_LINE irq field index values */
>> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT    24
>> +#define KVM_LOONGSON_IRQ_TYPE_MASK     0xff
>> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT    16
>> +#define KVM_LOONGSON_IRQ_VCPU_MASK     0xff
>> +#define KVM_LOONGSON_IRQ_NUM_SHIFT     0
>> +#define KVM_LOONGSON_IRQ_NUM_MASK      0xffff
>> +
>> +/* Irq_type field */
>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP   0
>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO   1
>> +#define KVM_LOONGSON_IRQ_TYPE_HT       2
>> +#define KVM_LOONGSON_IRQ_TYPE_MSI      3
>> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC   4
>> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE    5
>> +
>> +/* Out-of-kernel GIC cpu interrupt injection irq_number field */
>> +#define KVM_LOONGSON_IRQ_CPU_IRQ       0
>> +#define KVM_LOONGSON_IRQ_CPU_FIQ       1
>> +#define KVM_LOONGSON_CPU_IP_NUM                8
>> +
>> +typedef union loongarch_instruction  larch_inst;
>> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
>> +
>> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
>> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
>> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
>> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
>> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
>> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
>> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
>> +
>> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
>> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
>> +void kvm_save_fpu(struct loongarch_fpu *fpu);
>> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
>> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
>> +
>> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
>> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
>> +enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
>> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
>> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
>> +void kvm_save_timer(struct kvm_vcpu *vcpu);
>> +
>> +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
>> +                       struct kvm_loongarch_interrupt *irq);
>> +/*
>> + * LoongArch KVM guest interrupt handling
>> + */
>> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
>> +{
>> +       set_bit(irq, &vcpu->arch.irq_pending);
>> +       clear_bit(irq, &vcpu->arch.irq_clear);
>> +}
>> +
>> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
>> +{
>> +       clear_bit(irq, &vcpu->arch.irq_pending);
>> +       set_bit(irq, &vcpu->arch.irq_clear);
>> +}
>> +
>> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
>> diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
>> index b3323ab5b78d..35ae5c2be8b6 100644
>> --- a/arch/loongarch/include/asm/loongarch.h
>> +++ b/arch/loongarch/include/asm/loongarch.h
>> @@ -11,6 +11,7 @@
>>
>>   #ifndef __ASSEMBLY__
>>   #include <larchintrin.h>
>> +#include <asm/insn-def.h>
>>
>>   /*
>>    * parse_r var, r - Helper assembler macro for parsing register names.
>> @@ -309,6 +310,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>>   #define LOONGARCH_CSR_ECFG             0x4     /* Exception config */
>>   #define  CSR_ECFG_VS_SHIFT             16
>>   #define  CSR_ECFG_VS_WIDTH             3
>> +#define  CSR_ECFG_VS_SHIFT_END         (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
>>   #define  CSR_ECFG_VS                   (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
>>   #define  CSR_ECFG_IM_SHIFT             0
>>   #define  CSR_ECFG_IM_WIDTH             14
>> @@ -397,13 +399,14 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>>   #define  CSR_TLBLO1_V                  (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
>>
>>   #define LOONGARCH_CSR_GTLBC            0x15    /* Guest TLB control */
>> -#define  CSR_GTLBC_RID_SHIFT           16
>> -#define  CSR_GTLBC_RID_WIDTH           8
>> -#define  CSR_GTLBC_RID                 (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
>> +#define  CSR_GTLBC_TGID_SHIFT          16
>> +#define  CSR_GTLBC_TGID_WIDTH          8
>> +#define  CSR_GTLBC_TGID_SHIFT_END      (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
>> +#define  CSR_GTLBC_TGID                        (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
>>   #define  CSR_GTLBC_TOTI_SHIFT          13
>>   #define  CSR_GTLBC_TOTI                        (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
>> -#define  CSR_GTLBC_USERID_SHIFT                12
>> -#define  CSR_GTLBC_USERID              (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
>> +#define  CSR_GTLBC_USETGID_SHIFT       12
>> +#define  CSR_GTLBC_USETGID             (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
>>   #define  CSR_GTLBC_GMTLBSZ_SHIFT       0
>>   #define  CSR_GTLBC_GMTLBSZ_WIDTH       6
>>   #define  CSR_GTLBC_GMTLBSZ             (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
>> @@ -555,6 +558,7 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>>   #define LOONGARCH_CSR_GSTAT            0x50    /* Guest status */
>>   #define  CSR_GSTAT_GID_SHIFT           16
>>   #define  CSR_GSTAT_GID_WIDTH           8
>> +#define  CSR_GSTAT_GID_SHIFT_END       (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
>>   #define  CSR_GSTAT_GID                 (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
>>   #define  CSR_GSTAT_GIDBIT_SHIFT                4
>>   #define  CSR_GSTAT_GIDBIT_WIDTH                6
>> @@ -605,6 +609,12 @@ static __always_inline void iocsr_write64(u64 val, u32 reg)
>>   #define  CSR_GCFG_MATC_GUEST           (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
>>   #define  CSR_GCFG_MATC_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
>>   #define  CSR_GCFG_MATC_NEST            (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
>> +#define  CSR_GCFG_MATP_NEST_SHIFT      2
>> +#define  CSR_GCFG_MATP_NEST            (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
>> +#define  CSR_GCFG_MATP_ROOT_SHIFT      1
>> +#define  CSR_GCFG_MATP_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
>> +#define  CSR_GCFG_MATP_GUEST_SHIFT     0
>> +#define  CSR_GCFG_MATP_GUEST           (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
>>
>>   #define LOONGARCH_CSR_GINTC            0x52    /* Guest interrupt control */
>>   #define  CSR_GINTC_HC_SHIFT            16
>> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
>> new file mode 100644
>> index 000000000000..17b28d94d569
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/trace.h
>> @@ -0,0 +1,168 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
>> +#define _TRACE_KVM_H
>> +
>> +#include <linux/tracepoint.h>
>> +#include <asm/kvm_csr.h>
>> +
>> +#undef TRACE_SYSTEM
>> +#define TRACE_SYSTEM   kvm
>> +
>> +/*
>> + * Tracepoints for VM enters
>> + */
>> +DECLARE_EVENT_CLASS(kvm_transition,
>> +       TP_PROTO(struct kvm_vcpu *vcpu),
>> +       TP_ARGS(vcpu),
>> +       TP_STRUCT__entry(
>> +               __field(unsigned long, pc)
>> +       ),
>> +
>> +       TP_fast_assign(
>> +               __entry->pc = vcpu->arch.pc;
>> +       ),
>> +
>> +       TP_printk("PC: 0x%08lx",
>> +                 __entry->pc)
>> +);
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_enter,
>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>> +            TP_ARGS(vcpu));
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_reenter,
>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>> +            TP_ARGS(vcpu));
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_out,
>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>> +            TP_ARGS(vcpu));
>> +
>> +/* Further exit reasons */
>> +#define KVM_TRACE_EXIT_IDLE            64
>> +#define KVM_TRACE_EXIT_CACHE           65
>> +#define KVM_TRACE_EXIT_SIGNAL          66
>> +
>> +/* Tracepoints for VM exits */
>> +#define kvm_trace_symbol_exit_types                    \
>> +       { KVM_TRACE_EXIT_IDLE,          "IDLE" },       \
>> +       { KVM_TRACE_EXIT_CACHE,         "CACHE" },      \
>> +       { KVM_TRACE_EXIT_SIGNAL,        "Signal" }
>> +
>> +TRACE_EVENT(kvm_exit_gspr,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
>> +           TP_ARGS(vcpu, inst_word),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned int, inst_word)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->inst_word = inst_word;
>> +           ),
>> +
>> +           TP_printk("inst word: 0x%08x",
>> +                     __entry->inst_word)
>> +);
>> +
>> +
>> +DECLARE_EVENT_CLASS(kvm_exit,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +           TP_ARGS(vcpu, reason),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned long, pc)
>> +                       __field(unsigned int, reason)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->pc = vcpu->arch.pc;
>> +                       __entry->reason = reason;
>> +           ),
>> +
>> +           TP_printk("[%s]PC: 0x%08lx",
>> +                     __print_symbolic(__entry->reason,
>> +                                      kvm_trace_symbol_exit_types),
>> +                     __entry->pc)
>> +);
>> +
>> +DEFINE_EVENT(kvm_exit, kvm_exit_idle,
>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +            TP_ARGS(vcpu, reason));
>> +
>> +DEFINE_EVENT(kvm_exit, kvm_exit_cache,
>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +            TP_ARGS(vcpu, reason));
>> +
>> +DEFINE_EVENT(kvm_exit, kvm_exit,
>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +            TP_ARGS(vcpu, reason));
>> +
>> +#define KVM_TRACE_AUX_RESTORE          0
>> +#define KVM_TRACE_AUX_SAVE             1
>> +#define KVM_TRACE_AUX_ENABLE           2
>> +#define KVM_TRACE_AUX_DISABLE          3
>> +#define KVM_TRACE_AUX_DISCARD          4
>> +
>> +#define KVM_TRACE_AUX_FPU              1
>> +
>> +#define kvm_trace_symbol_aux_op                                \
>> +       { KVM_TRACE_AUX_RESTORE,        "restore" },    \
>> +       { KVM_TRACE_AUX_SAVE,           "save" },       \
>> +       { KVM_TRACE_AUX_ENABLE,         "enable" },     \
>> +       { KVM_TRACE_AUX_DISABLE,        "disable" },    \
>> +       { KVM_TRACE_AUX_DISCARD,        "discard" }
>> +
>> +#define kvm_trace_symbol_aux_state                     \
>> +       { KVM_TRACE_AUX_FPU,     "FPU" }
>> +
>> +TRACE_EVENT(kvm_aux,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
>> +                    unsigned int state),
>> +           TP_ARGS(vcpu, op, state),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned long, pc)
>> +                       __field(u8, op)
>> +                       __field(u8, state)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->pc = vcpu->arch.pc;
>> +                       __entry->op = op;
>> +                       __entry->state = state;
>> +           ),
>> +
>> +           TP_printk("%s %s PC: 0x%08lx",
>> +                     __print_symbolic(__entry->op,
>> +                                      kvm_trace_symbol_aux_op),
>> +                     __print_symbolic(__entry->state,
>> +                                      kvm_trace_symbol_aux_state),
>> +                     __entry->pc)
>> +);
>> +
>> +TRACE_EVENT(kvm_vpid_change,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
>> +           TP_ARGS(vcpu, vpid),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned long, vpid)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->vpid = vpid;
>> +           ),
>> +
>> +           TP_printk("vpid: 0x%08lx",
>> +                     __entry->vpid)
>> +);
>> +
>> +#endif /* _TRACE_KVM_H */
>> +
>> +#undef TRACE_INCLUDE_PATH
>> +#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
>> +#undef TRACE_INCLUDE_FILE
>> +#define TRACE_INCLUDE_FILE trace
>> +
>> +/* This part must be outside protection */
>> +#include <trace/define_trace.h>
>> --
>> 2.39.1
>>
>>
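
As a side note on the .4byte encoding in the patch quoted above: the
instruction word is simply the OR of the shifted fields. A minimal
standalone sketch in plain C (the shift values and opcode 5 come from
insn-def.h above; the example operands, rj = $zero = r0, rd = $a0 = r4,
csr = LOONGARCH_CSR_CRMD = 0x0, are illustrative assumptions):

#include <stdint.h>
#include <stdio.h>

#define CSR_RD_SHIFT		0
#define CSR_RJ_SHIFT		5
#define CSR_SIMM14_SHIFT	10
#define CSR_OPCODE_SHIFT	24

/* Compose one CSR-class instruction word from its fields. */
static uint32_t insn_csr(uint32_t opcode, uint32_t rj, uint32_t rd,
			 uint32_t simm14)
{
	return (opcode << CSR_OPCODE_SHIFT) | (rj << CSR_RJ_SHIFT) |
	       (rd << CSR_RD_SHIFT) | (simm14 << CSR_SIMM14_SHIFT);
}

int main(void)
{
	/* gcsrrd $a0, 0x0 (read guest CSR_CRMD into $a0) */
	printf("0x%08x\n", insn_csr(5, 0, 4, 0x0));	/* prints 0x05000004 */
	return 0;
}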


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files
  2023-06-17  3:05     ` zhaotianrui
@ 2023-06-17  3:54       ` bibo, mao
  0 siblings, 0 replies; 17+ messages in thread
From: bibo, mao @ 2023-06-17  3:54 UTC (permalink / raw)
  To: zhaotianrui, Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, Xi Ruoyao, tangyouling



On 2023/6/17 11:05, zhaotianrui wrote:
>
> On 2023/6/15 5:51 PM, Huacai Chen wrote:
>> Hi, Tianrui,
>>
>> I suggest using a similar method to the vector support:
>> https://lore.kernel.org/loongarch/20230613151918.2039498-1-chenhuacai@loongson.cn/T/#u
>>
>> Introduce CC_HAS_LVZ_EXTENSION to detect whether the toolchain
>> supports LVZ-specific instructions, and make the KVM config option
>> depend on CC_HAS_LVZ_EXTENSION. In this way the code will be more
>> maintainable. Of course this makes old toolchains unable to build a
>> KVM-enabled kernel, but I think that is not a big problem.
>>
>> Huacai
> Hi, Huacai,
> Should the code look like this if we add the toolchain condition?
> kvm/Kconfig:
> +config CC_HAS_LVZ_EXTENSION
> +    def_bool $(cc-option,-mlvz)
> config KVM
>       tristate "Kernel-based Virtual Machine (KVM) support"
>       depends on HAVE_KVM
> +    depends on CC_HAS_LVZ_EXTENSION
>       select MMU_NOTIFIER

No, we should not do so. KVM should support both. Currently popular gcc 
releases have no -mlvz option. If the toolchain does support the LVZ 
extension (CC_HAS_LVZ_EXTENSION), macros like gcsr_read can use the real 
instruction mnemonics, which is the better implementation; otherwise the 
".word" raw encoding will be used.
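
For illustration, the selection could look roughly like this (a sketch
only; the CONFIG_CC_HAS_LVZ_EXTENSION gate, the gcsrrd operand order and
the "i" constraint are assumptions, not code from the posted patch):

#ifdef CONFIG_CC_HAS_LVZ_EXTENSION
/* Toolchain knows the LVZ mnemonics: let the assembler encode them. */
#define gcsr_read(csr)						\
({								\
	register unsigned long __v;				\
	__asm__ __volatile__ ("gcsrrd %0, %1"			\
				: "=r" (__v)			\
				: "i" (csr)			\
				: "memory");			\
	__v;							\
})
#else
/* Old toolchain: emit the raw instruction word, as in the patch. */
#define gcsr_read(csr)						\
({								\
	register unsigned long __v;				\
	__asm__ __volatile__ (GCSR_read(csr, %0)		\
				: "=r" (__v) :			\
				: "memory");			\
	__v;							\
})
#endif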

Regards
Bibo, Mao

>      ...
>      ...
> Thanks
> Tianrui Zhao
>>
>> On Fri, Jun 9, 2023 at 5:05 PM Tianrui Zhao <zhaotianrui@loongson.cn> 
>> wrote:
>>> Add LoongArch vcpu related header files, including vcpu csr
>>> information, irq number defines, and some vcpu interfaces.
>>>
>>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>>> ---
>>>   arch/loongarch/include/asm/insn-def.h  |  55 ++++++
>>>   arch/loongarch/include/asm/kvm_csr.h   | 231 +++++++++++++++++++++++++
>>>   arch/loongarch/include/asm/kvm_vcpu.h  |  97 +++++++++++
>>>   arch/loongarch/include/asm/loongarch.h |  20 ++-
>>>   arch/loongarch/kvm/trace.h             | 168 ++++++++++++++++++
>>>   5 files changed, 566 insertions(+), 5 deletions(-)
>>>   create mode 100644 arch/loongarch/include/asm/insn-def.h
>>>   create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>>>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>>>   create mode 100644 arch/loongarch/kvm/trace.h
>>>
>>> diff --git a/arch/loongarch/include/asm/insn-def.h 
>>> b/arch/loongarch/include/asm/insn-def.h
>>> new file mode 100644
>>> index 000000000000..e285ee108fb0
>>> --- /dev/null
>>> +++ b/arch/loongarch/include/asm/insn-def.h
>>> @@ -0,0 +1,55 @@
>>> +/* SPDX-License-Identifier: GPL-2.0-only */
>>> +
>>> +#ifndef __ASM_INSN_DEF_H
>>> +#define __ASM_INSN_DEF_H
>>> +
>>> +#include <linux/stringify.h>
>>> +#include <asm/gpr-num.h>
>>> +#include <asm/asm.h>
>>> +
>>> +#define INSN_STR(x)            __stringify(x)
>>> +#define CSR_RD_SHIFT           0
>>> +#define CSR_RJ_SHIFT           5
>>> +#define CSR_SIMM14_SHIFT       10
>>> +#define CSR_OPCODE_SHIFT       24
>>> +
>>> +#define 
>>> DEFINE_INSN_CSR                                                        \
>>> +       
>>> __DEFINE_ASM_GPR_NUMS                                           \
>>> +"      .macro insn_csr, opcode, rj, rd, 
>>> simm14\n"                      \
>>> +"      .4byte  ((\\opcode << " INSN_STR(CSR_OPCODE_SHIFT) ") 
>>> |"        \
>>> +"               (.L__gpr_num_\\rj << " INSN_STR(CSR_RJ_SHIFT) ") 
>>> |"    \
>>> +"               (.L__gpr_num_\\rd << " INSN_STR(CSR_RD_SHIFT) ") 
>>> |"    \
>>> +"               (\\simm14 << " INSN_STR(CSR_SIMM14_SHIFT) 
>>> "))\n"       \
>>> +"      .endm\n"
>>> +
>>> +#define 
>>> UNDEFINE_INSN_CSR                                              \
>>> +"      .purgem insn_csr\n"
>>> +
>>> +#define __INSN_CSR(opcode, rj, rd, 
>>> simm14)                             \
>>> +       
>>> DEFINE_INSN_CSR                                                 \
>>> +       "insn_csr " opcode ", " rj ", " rd ", " simm14 
>>> "\n"             \
>>> +       UNDEFINE_INSN_CSR
>>> +
>>> +
>>> +#define INSN_CSR(opcode, rj, rd, 
>>> simm14)                               \
>>> +       __INSN_CSR(LARCH_##opcode, LARCH_##rj, 
>>> LARCH_##rd,              \
>>> +                  LARCH_##simm14)
>>> +
>>> +#define __ASM_STR(x)           #x
>>> +#define LARCH_OPCODE(v)                __ASM_STR(v)
>>> +#define LARCH_SIMM14(v)                __ASM_STR(v)
>>> +#define __LARCH_REG(v)         __ASM_STR(v)
>>> +#define LARCH___RD(v)          __LARCH_REG(v)
>>> +#define LARCH___RJ(v)          __LARCH_REG(v)
>>> +#define LARCH_OPCODE_GCSR      LARCH_OPCODE(5)
>>> +
>>> +#define GCSR_read(csr, 
>>> rd)                                             \
>>> +       INSN_CSR(OPCODE_GCSR, __RJ(zero), __RD(rd), SIMM14(csr))
>>> +
>>> +#define GCSR_write(csr, 
>>> rd)                                            \
>>> +       INSN_CSR(OPCODE_GCSR, __RJ($r1), __RD(rd), SIMM14(csr))
>>> +
>>> +#define GCSR_xchg(csr, rj, 
>>> rd)                                         \
>>> +       INSN_CSR(OPCODE_GCSR, __RJ(rj), __RD(rd), SIMM14(csr))
>>> +
>>> +#endif /* __ASM_INSN_DEF_H */
>>> diff --git a/arch/loongarch/include/asm/kvm_csr.h 
>>> b/arch/loongarch/include/asm/kvm_csr.h
>>> new file mode 100644
>>> index 000000000000..10dba5bc6df1
>>> --- /dev/null
>>> +++ b/arch/loongarch/include/asm/kvm_csr.h
>>> @@ -0,0 +1,231 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>>> + */
>>> +
>>> +#ifndef __ASM_LOONGARCH_KVM_CSR_H__
>>> +#define __ASM_LOONGARCH_KVM_CSR_H__
>>> +#include <asm/loongarch.h>
>>> +#include <asm/kvm_vcpu.h>
>>> +#include <linux/uaccess.h>
>>> +#include <linux/kvm_host.h>
>>> +
>>> +/*
>>> + * Instructions will be available in binutils later
>>> + * read val from guest csr register %[csr]
>>> + * gcsrrd %[val], %[csr]
>>> + */
>>> +#define gcsr_read(csr)                                         \
>>> +({                                                             \
>>> +       register unsigned long __v;                             \
>>> +       __asm__ __volatile__ (GCSR_read(csr, %0)                \
>>> +                               : "=r" (__v) :                  \
>>> +                               : "memory");                    \
>>> +       __v;                                                    \
>>> +})
>>> +
>>> +/*
>>> + * Instructions will be available in binutils later
>>> + * write val to guest csr register %[csr]
>>> + * gcsrwr %[val], %[csr]
>>> + */
>>> +#define gcsr_write(val, csr)                                   \
>>> +({                                                             \
>>> +       register unsigned long __v = val;                       \
>>> +       __asm__ __volatile__ (GCSR_write(csr, %0)               \
>>> +                               : "+r" (__v) :                  \
>>> +                               : "memory");                    \
>>> +})
>>> +
>>> +/*
>>> + * Instructions will be available in binutils later
>>> + * replace masked bits of guest csr register %[csr] with val
>>> + * gcsrxchg %[val], %[mask], %[csr]
>>> + */
>>> +#define gcsr_xchg(val, mask, csr)                              \
>>> +({                                                             \
>>> +       register unsigned long __v = val;                       \
>>> +       __asm__ __volatile__ (GCSR_xchg(csr, %1, %0)            \
>>> +                               : "+r" (__v)                    \
>>> +                               : "r"  (mask)                   \
>>> +                               : "memory");                    \
>>> +       __v;                                                    \
>>> +})
>>> +
>>> +/* Guest CSRS read and write */
>>> +#define read_gcsr_crmd()               gcsr_read(LOONGARCH_CSR_CRMD)
>>> +#define write_gcsr_crmd(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_CRMD)
>>> +#define read_gcsr_prmd()               gcsr_read(LOONGARCH_CSR_PRMD)
>>> +#define write_gcsr_prmd(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_PRMD)
>>> +#define read_gcsr_euen()               gcsr_read(LOONGARCH_CSR_EUEN)
>>> +#define write_gcsr_euen(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_EUEN)
>>> +#define read_gcsr_misc()               gcsr_read(LOONGARCH_CSR_MISC)
>>> +#define write_gcsr_misc(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_MISC)
>>> +#define read_gcsr_ecfg()               gcsr_read(LOONGARCH_CSR_ECFG)
>>> +#define write_gcsr_ecfg(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_ECFG)
>>> +#define read_gcsr_estat()              gcsr_read(LOONGARCH_CSR_ESTAT)
>>> +#define write_gcsr_estat(val)          gcsr_write(val, 
>>> LOONGARCH_CSR_ESTAT)
>>> +#define read_gcsr_era()                        
>>> gcsr_read(LOONGARCH_CSR_ERA)
>>> +#define write_gcsr_era(val)            gcsr_write(val, 
>>> LOONGARCH_CSR_ERA)
>>> +#define read_gcsr_badv()               gcsr_read(LOONGARCH_CSR_BADV)
>>> +#define write_gcsr_badv(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_BADV)
>>> +#define read_gcsr_badi()               gcsr_read(LOONGARCH_CSR_BADI)
>>> +#define write_gcsr_badi(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_BADI)
>>> +#define read_gcsr_eentry()             gcsr_read(LOONGARCH_CSR_EENTRY)
>>> +#define write_gcsr_eentry(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_EENTRY)
>>> +
>>> +#define read_gcsr_tlbidx()             gcsr_read(LOONGARCH_CSR_TLBIDX)
>>> +#define write_gcsr_tlbidx(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_TLBIDX)
>>> +#define read_gcsr_tlbhi()              gcsr_read(LOONGARCH_CSR_TLBEHI)
>>> +#define write_gcsr_tlbhi(val)          gcsr_write(val, 
>>> LOONGARCH_CSR_TLBEHI)
>>> +#define read_gcsr_tlblo0()             gcsr_read(LOONGARCH_CSR_TLBELO0)
>>> +#define write_gcsr_tlblo0(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_TLBELO0)
>>> +#define read_gcsr_tlblo1()             gcsr_read(LOONGARCH_CSR_TLBELO1)
>>> +#define write_gcsr_tlblo1(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_TLBELO1)
>>> +
>>> +#define read_gcsr_asid()               gcsr_read(LOONGARCH_CSR_ASID)
>>> +#define write_gcsr_asid(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_ASID)
>>> +#define read_gcsr_pgdl()               gcsr_read(LOONGARCH_CSR_PGDL)
>>> +#define write_gcsr_pgdl(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_PGDL)
>>> +#define read_gcsr_pgdh()               gcsr_read(LOONGARCH_CSR_PGDH)
>>> +#define write_gcsr_pgdh(val)           gcsr_write(val, 
>>> LOONGARCH_CSR_PGDH)
>>> +#define write_gcsr_pgd(val)            gcsr_write(val, 
>>> LOONGARCH_CSR_PGD)
>>> +#define read_gcsr_pgd()                        
>>> gcsr_read(LOONGARCH_CSR_PGD)
>>> +#define read_gcsr_pwctl0()             gcsr_read(LOONGARCH_CSR_PWCTL0)
>>> +#define write_gcsr_pwctl0(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_PWCTL0)
>>> +#define read_gcsr_pwctl1()             gcsr_read(LOONGARCH_CSR_PWCTL1)
>>> +#define write_gcsr_pwctl1(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_PWCTL1)
>>> +#define read_gcsr_stlbpgsize()         
>>> gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
>>> +#define write_gcsr_stlbpgsize(val)     gcsr_write(val, 
>>> LOONGARCH_CSR_STLBPGSIZE)
>>> +#define read_gcsr_rvacfg()             gcsr_read(LOONGARCH_CSR_RVACFG)
>>> +#define write_gcsr_rvacfg(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_RVACFG)
>>> +
>>> +#define read_gcsr_cpuid()              gcsr_read(LOONGARCH_CSR_CPUID)
>>> +#define write_gcsr_cpuid(val)          gcsr_write(val, 
>>> LOONGARCH_CSR_CPUID)
>>> +#define read_gcsr_prcfg1()             gcsr_read(LOONGARCH_CSR_PRCFG1)
>>> +#define write_gcsr_prcfg1(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_PRCFG1)
>>> +#define read_gcsr_prcfg2()             gcsr_read(LOONGARCH_CSR_PRCFG2)
>>> +#define write_gcsr_prcfg2(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_PRCFG2)
>>> +#define read_gcsr_prcfg3()             gcsr_read(LOONGARCH_CSR_PRCFG3)
>>> +#define write_gcsr_prcfg3(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_PRCFG3)
>>> +
>>> +#define read_gcsr_kscratch0()          gcsr_read(LOONGARCH_CSR_KS0)
>>> +#define write_gcsr_kscratch0(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS0)
>>> +#define read_gcsr_kscratch1()          gcsr_read(LOONGARCH_CSR_KS1)
>>> +#define write_gcsr_kscratch1(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS1)
>>> +#define read_gcsr_kscratch2()          gcsr_read(LOONGARCH_CSR_KS2)
>>> +#define write_gcsr_kscratch2(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS2)
>>> +#define read_gcsr_kscratch3()          gcsr_read(LOONGARCH_CSR_KS3)
>>> +#define write_gcsr_kscratch3(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS3)
>>> +#define read_gcsr_kscratch4()          gcsr_read(LOONGARCH_CSR_KS4)
>>> +#define write_gcsr_kscratch4(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS4)
>>> +#define read_gcsr_kscratch5()          gcsr_read(LOONGARCH_CSR_KS5)
>>> +#define write_gcsr_kscratch5(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS5)
>>> +#define read_gcsr_kscratch6()          gcsr_read(LOONGARCH_CSR_KS6)
>>> +#define write_gcsr_kscratch6(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS6)
>>> +#define read_gcsr_kscratch7()          gcsr_read(LOONGARCH_CSR_KS7)
>>> +#define write_gcsr_kscratch7(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_KS7)
>>> +
>>> +#define read_gcsr_timerid()            gcsr_read(LOONGARCH_CSR_TMID)
>>> +#define write_gcsr_timerid(val)                gcsr_write(val, 
>>> LOONGARCH_CSR_TMID)
>>> +#define read_gcsr_timercfg()           gcsr_read(LOONGARCH_CSR_TCFG)
>>> +#define write_gcsr_timercfg(val)       gcsr_write(val, 
>>> LOONGARCH_CSR_TCFG)
>>> +#define read_gcsr_timertick()          gcsr_read(LOONGARCH_CSR_TVAL)
>>> +#define write_gcsr_timertick(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_TVAL)
>>> +#define read_gcsr_timeroffset()                
>>> gcsr_read(LOONGARCH_CSR_CNTC)
>>> +#define write_gcsr_timeroffset(val)    gcsr_write(val, 
>>> LOONGARCH_CSR_CNTC)
>>> +
>>> +#define read_gcsr_llbctl()             gcsr_read(LOONGARCH_CSR_LLBCTL)
>>> +#define write_gcsr_llbctl(val)         gcsr_write(val, 
>>> LOONGARCH_CSR_LLBCTL)
>>> +
>>> +#define read_gcsr_tlbrentry()          
>>> gcsr_read(LOONGARCH_CSR_TLBRENTRY)
>>> +#define write_gcsr_tlbrentry(val)      gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRENTRY)
>>> +#define read_gcsr_tlbrbadv()           
>>> gcsr_read(LOONGARCH_CSR_TLBRBADV)
>>> +#define write_gcsr_tlbrbadv(val)       gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRBADV)
>>> +#define read_gcsr_tlbrera()            gcsr_read(LOONGARCH_CSR_TLBRERA)
>>> +#define write_gcsr_tlbrera(val)                gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRERA)
>>> +#define read_gcsr_tlbrsave()           
>>> gcsr_read(LOONGARCH_CSR_TLBRSAVE)
>>> +#define write_gcsr_tlbrsave(val)       gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRSAVE)
>>> +#define read_gcsr_tlbrelo0()           
>>> gcsr_read(LOONGARCH_CSR_TLBRELO0)
>>> +#define write_gcsr_tlbrelo0(val)       gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRELO0)
>>> +#define read_gcsr_tlbrelo1()           
>>> gcsr_read(LOONGARCH_CSR_TLBRELO1)
>>> +#define write_gcsr_tlbrelo1(val)       gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRELO1)
>>> +#define read_gcsr_tlbrehi()            gcsr_read(LOONGARCH_CSR_TLBREHI)
>>> +#define write_gcsr_tlbrehi(val)                gcsr_write(val, 
>>> LOONGARCH_CSR_TLBREHI)
>>> +#define read_gcsr_tlbrprmd()           
>>> gcsr_read(LOONGARCH_CSR_TLBRPRMD)
>>> +#define write_gcsr_tlbrprmd(val)       gcsr_write(val, 
>>> LOONGARCH_CSR_TLBRPRMD)
>>> +
>>> +#define read_gcsr_directwin0()         gcsr_read(LOONGARCH_CSR_DMWIN0)
>>> +#define write_gcsr_directwin0(val)     gcsr_write(val, 
>>> LOONGARCH_CSR_DMWIN0)
>>> +#define read_gcsr_directwin1()         gcsr_read(LOONGARCH_CSR_DMWIN1)
>>> +#define write_gcsr_directwin1(val)     gcsr_write(val, 
>>> LOONGARCH_CSR_DMWIN1)
>>> +#define read_gcsr_directwin2()         gcsr_read(LOONGARCH_CSR_DMWIN2)
>>> +#define write_gcsr_directwin2(val)     gcsr_write(val, 
>>> LOONGARCH_CSR_DMWIN2)
>>> +#define read_gcsr_directwin3()         gcsr_read(LOONGARCH_CSR_DMWIN3)
>>> +#define write_gcsr_directwin3(val)     gcsr_write(val, 
>>> LOONGARCH_CSR_DMWIN3)
>>> +
>>> +/* Guest related CSRs */
>>> +#define read_csr_gtlbc()               csr_read64(LOONGARCH_CSR_GTLBC)
>>> +#define write_csr_gtlbc(val)           csr_write64(val, 
>>> LOONGARCH_CSR_GTLBC)
>>> +#define read_csr_trgp()                        
>>> csr_read64(LOONGARCH_CSR_TRGP)
>>> +#define read_csr_gcfg()                        
>>> csr_read64(LOONGARCH_CSR_GCFG)
>>> +#define write_csr_gcfg(val)            csr_write64(val, 
>>> LOONGARCH_CSR_GCFG)
>>> +#define read_csr_gstat()               csr_read64(LOONGARCH_CSR_GSTAT)
>>> +#define write_csr_gstat(val)           csr_write64(val, 
>>> LOONGARCH_CSR_GSTAT)
>>> +#define read_csr_gintc()               csr_read64(LOONGARCH_CSR_GINTC)
>>> +#define write_csr_gintc(val)           csr_write64(val, 
>>> LOONGARCH_CSR_GINTC)
>>> +#define read_csr_gcntc()               csr_read64(LOONGARCH_CSR_GCNTC)
>>> +#define write_csr_gcntc(val)           csr_write64(val, 
>>> LOONGARCH_CSR_GCNTC)
>>> +
>>> +#define __BUILD_GCSR_OP(name)          __BUILD_CSR_COMMON(gcsr_##name)
>>> +
>>> +__BUILD_GCSR_OP(llbctl)
>>> +__BUILD_GCSR_OP(tlbidx)
>>> +__BUILD_CSR_OP(gcfg)
>>> +__BUILD_CSR_OP(gstat)
>>> +__BUILD_CSR_OP(gtlbc)
>>> +__BUILD_CSR_OP(gintc)
>>> +
>>> +#define set_gcsr_estat(val)    \
>>> +       gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
>>> +#define clear_gcsr_estat(val)  \
>>> +       gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
>>> +
>>> +#define kvm_read_hw_gcsr(id)           gcsr_read(id)
>>> +#define kvm_write_hw_gcsr(csr, id, val)        gcsr_write(val, id)
>>> +
>>> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
>>> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
>>> +
>>> +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct 
>>> kvm_vcpu *vcpu);
>>> +
>>> +#define kvm_save_hw_gcsr(csr, gid)     (csr->csrs[gid] = 
>>> gcsr_read(gid))
>>> +#define kvm_restore_hw_gcsr(csr, gid)  (gcsr_write(csr->csrs[gid], 
>>> gid))
>>> +
>>> +static __always_inline unsigned long kvm_read_sw_gcsr(struct 
>>> loongarch_csrs *csr, int gid)
>>> +{
>>> +       return csr->csrs[gid];
>>> +}
>>> +
>>> +static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs 
>>> *csr,
>>> +                                             int gid, unsigned long 
>>> val)
>>> +{
>>> +       csr->csrs[gid] = val;
>>> +}
>>> +
>>> +static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
>>> +                                           int gid, unsigned long val)
>>> +{
>>> +       csr->csrs[gid] |= val;
>>> +}
>>> +
>>> +static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs 
>>> *csr,
>>> +                                              int gid, unsigned long 
>>> mask,
>>> +                                              unsigned long val)
>>> +{
>>> +       unsigned long _mask = mask;
>>> +
>>> +       csr->csrs[gid] &= ~_mask;
>>> +       csr->csrs[gid] |= val & _mask;
>>> +}
>>> +#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
>>> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h 
>>> b/arch/loongarch/include/asm/kvm_vcpu.h
>>> new file mode 100644
>>> index 000000000000..74deaf55d22c
>>> --- /dev/null
>>> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
>>> @@ -0,0 +1,97 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>>> + */
>>> +
>>> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
>>> +#define __ASM_LOONGARCH_KVM_VCPU_H__
>>> +
>>> +#include <linux/kvm_host.h>
>>> +#include <asm/loongarch.h>
>>> +
>>> +/* Controlled by 0x5 guest exst */
>>> +#define CPU_SIP0                       (_ULCAST_(1))
>>> +#define CPU_SIP1                       (_ULCAST_(1) << 1)
>>> +#define CPU_PMU                                (_ULCAST_(1) << 10)
>>> +#define CPU_TIMER                      (_ULCAST_(1) << 11)
>>> +#define CPU_IPI                                (_ULCAST_(1) << 12)
>>> +
>>> +/* Controlled by 0x52 guest exception VIP
>>> + * aligned to exst bit 5~12
>>> + */
>>> +#define CPU_IP0                                (_ULCAST_(1))
>>> +#define CPU_IP1                                (_ULCAST_(1) << 1)
>>> +#define CPU_IP2                                (_ULCAST_(1) << 2)
>>> +#define CPU_IP3                                (_ULCAST_(1) << 3)
>>> +#define CPU_IP4                                (_ULCAST_(1) << 4)
>>> +#define CPU_IP5                                (_ULCAST_(1) << 5)
>>> +#define CPU_IP6                                (_ULCAST_(1) << 6)
>>> +#define CPU_IP7                                (_ULCAST_(1) << 7)
>>> +
>>> +#define MNSEC_PER_SEC                  (NSEC_PER_SEC >> 20)
>>> +
>>> +/* KVM_IRQ_LINE irq field index values */
>>> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT    24
>>> +#define KVM_LOONGSON_IRQ_TYPE_MASK     0xff
>>> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT    16
>>> +#define KVM_LOONGSON_IRQ_VCPU_MASK     0xff
>>> +#define KVM_LOONGSON_IRQ_NUM_SHIFT     0
>>> +#define KVM_LOONGSON_IRQ_NUM_MASK      0xffff
>>> +
>>> +/* Irq_type field */
>>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP   0
>>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO   1
>>> +#define KVM_LOONGSON_IRQ_TYPE_HT       2
>>> +#define KVM_LOONGSON_IRQ_TYPE_MSI      3
>>> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC   4
>>> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE    5
>>> +
>>> +/* Out-of-kernel GIC cpu interrupt injection irq_number field */
>>> +#define KVM_LOONGSON_IRQ_CPU_IRQ       0
>>> +#define KVM_LOONGSON_IRQ_CPU_FIQ       1
>>> +#define KVM_LOONGSON_CPU_IP_NUM                8
>>> +
>>> +typedef union loongarch_instruction  larch_inst;
>>> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
>>> +
>>> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
>>> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
>>> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run 
>>> *run);
>>> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run 
>>> *run);
>>> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
>>> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
>>> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
>>> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
>>> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
>>> +
>>> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
>>> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
>>> +void kvm_save_fpu(struct loongarch_fpu *fpu);
>>> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
>>> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
>>> +
>>> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
>>> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
>>> +enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu);
>>> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
>>> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
>>> +void kvm_save_timer(struct kvm_vcpu *vcpu);
>>> +
>>> +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
>>> +                       struct kvm_loongarch_interrupt *irq);
>>> +/*
>>> + * Loongarch KVM guest interrupt handling
>>> + */
>>> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned 
>>> int irq)
>>> +{
>>> +       set_bit(irq, &vcpu->arch.irq_pending);
>>> +       clear_bit(irq, &vcpu->arch.irq_clear);
>>> +}
>>> +
>>> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned 
>>> int irq)
>>> +{
>>> +       clear_bit(irq, &vcpu->arch.irq_pending);
>>> +       set_bit(irq, &vcpu->arch.irq_clear);
>>> +}
>>> +
>>> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
>>> diff --git a/arch/loongarch/include/asm/loongarch.h 
>>> b/arch/loongarch/include/asm/loongarch.h
>>> index b3323ab5b78d..35ae5c2be8b6 100644
>>> --- a/arch/loongarch/include/asm/loongarch.h
>>> +++ b/arch/loongarch/include/asm/loongarch.h
>>> @@ -11,6 +11,7 @@
>>>
>>>   #ifndef __ASSEMBLY__
>>>   #include <larchintrin.h>
>>> +#include <asm/insn-def.h>
>>>
>>>   /*
>>>    * parse_r var, r - Helper assembler macro for parsing register names.
>>> @@ -309,6 +310,7 @@ static __always_inline void iocsr_write64(u64 
>>> val, u32 reg)
>>>   #define LOONGARCH_CSR_ECFG             0x4     /* Exception config */
>>>   #define  CSR_ECFG_VS_SHIFT             16
>>>   #define  CSR_ECFG_VS_WIDTH             3
>>> +#define  CSR_ECFG_VS_SHIFT_END         (CSR_ECFG_VS_SHIFT + 
>>> CSR_ECFG_VS_WIDTH - 1)
>>>   #define  CSR_ECFG_VS                   (_ULCAST_(0x7) << 
>>> CSR_ECFG_VS_SHIFT)
>>>   #define  CSR_ECFG_IM_SHIFT             0
>>>   #define  CSR_ECFG_IM_WIDTH             14
>>> @@ -397,13 +399,14 @@ static __always_inline void iocsr_write64(u64 
>>> val, u32 reg)
>>>   #define  CSR_TLBLO1_V                  (_ULCAST_(0x1) << 
>>> CSR_TLBLO1_V_SHIFT)
>>>
>>>   #define LOONGARCH_CSR_GTLBC            0x15    /* Guest TLB control */
>>> -#define  CSR_GTLBC_RID_SHIFT           16
>>> -#define  CSR_GTLBC_RID_WIDTH           8
>>> -#define  CSR_GTLBC_RID                 (_ULCAST_(0xff) << 
>>> CSR_GTLBC_RID_SHIFT)
>>> +#define  CSR_GTLBC_TGID_SHIFT          16
>>> +#define  CSR_GTLBC_TGID_WIDTH          8
>>> +#define  CSR_GTLBC_TGID_SHIFT_END      (CSR_GTLBC_TGID_SHIFT + 
>>> CSR_GTLBC_TGID_WIDTH - 1)
>>> +#define  CSR_GTLBC_TGID                        (_ULCAST_(0xff) << 
>>> CSR_GTLBC_TGID_SHIFT)
>>>   #define  CSR_GTLBC_TOTI_SHIFT          13
>>>   #define  CSR_GTLBC_TOTI                        (_ULCAST_(0x1) << 
>>> CSR_GTLBC_TOTI_SHIFT)
>>> -#define  CSR_GTLBC_USERID_SHIFT                12
>>> -#define  CSR_GTLBC_USERID              (_ULCAST_(0x1) << 
>>> CSR_GTLBC_USERID_SHIFT)
>>> +#define  CSR_GTLBC_USETGID_SHIFT       12
>>> +#define  CSR_GTLBC_USETGID             (_ULCAST_(0x1) << 
>>> CSR_GTLBC_USETGID_SHIFT)
>>>   #define  CSR_GTLBC_GMTLBSZ_SHIFT       0
>>>   #define  CSR_GTLBC_GMTLBSZ_WIDTH       6
>>>   #define  CSR_GTLBC_GMTLBSZ             (_ULCAST_(0x3f) << 
>>> CSR_GTLBC_GMTLBSZ_SHIFT)
>>> @@ -555,6 +558,7 @@ static __always_inline void iocsr_write64(u64 
>>> val, u32 reg)
>>>   #define LOONGARCH_CSR_GSTAT            0x50    /* Guest status */
>>>   #define  CSR_GSTAT_GID_SHIFT           16
>>>   #define  CSR_GSTAT_GID_WIDTH           8
>>> +#define  CSR_GSTAT_GID_SHIFT_END       (CSR_GSTAT_GID_SHIFT + 
>>> CSR_GSTAT_GID_WIDTH - 1)
>>>   #define  CSR_GSTAT_GID                 (_ULCAST_(0xff) << 
>>> CSR_GSTAT_GID_SHIFT)
>>>   #define  CSR_GSTAT_GIDBIT_SHIFT                4
>>>   #define  CSR_GSTAT_GIDBIT_WIDTH                6
>>> @@ -605,6 +609,12 @@ static __always_inline void iocsr_write64(u64 
>>> val, u32 reg)
>>>   #define  CSR_GCFG_MATC_GUEST           (_ULCAST_(0x0) << 
>>> CSR_GCFG_MATC_SHITF)
>>>   #define  CSR_GCFG_MATC_ROOT            (_ULCAST_(0x1) << 
>>> CSR_GCFG_MATC_SHITF)
>>>   #define  CSR_GCFG_MATC_NEST            (_ULCAST_(0x2) << 
>>> CSR_GCFG_MATC_SHITF)
>>> +#define  CSR_GCFG_MATP_NEST_SHIFT      2
>>> +#define  CSR_GCFG_MATP_NEST            (_ULCAST_(0x1) << 
>>> CSR_GCFG_MATP_NEST_SHIFT)
>>> +#define  CSR_GCFG_MATP_ROOT_SHIFT      1
>>> +#define  CSR_GCFG_MATP_ROOT            (_ULCAST_(0x1) << 
>>> CSR_GCFG_MATP_ROOT_SHIFT)
>>> +#define  CSR_GCFG_MATP_GUEST_SHIFT     0
>>> +#define  CSR_GCFG_MATP_GUEST           (_ULCAST_(0x1) << 
>>> CSR_GCFG_MATP_GUEST_SHIFT)
>>>
>>>   #define LOONGARCH_CSR_GINTC            0x52    /* Guest interrupt 
>>> control */
>>>   #define  CSR_GINTC_HC_SHIFT            16
>>> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
>>> new file mode 100644
>>> index 000000000000..17b28d94d569
>>> --- /dev/null
>>> +++ b/arch/loongarch/kvm/trace.h
>>> @@ -0,0 +1,168 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>>> + */
>>> +
>>> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
>>> +#define _TRACE_KVM_H
>>> +
>>> +#include <linux/tracepoint.h>
>>> +#include <asm/kvm_csr.h>
>>> +
>>> +#undef TRACE_SYSTEM
>>> +#define TRACE_SYSTEM   kvm
>>> +
>>> +/*
>>> + * Tracepoints for VM enters
>>> + */
>>> +DECLARE_EVENT_CLASS(kvm_transition,
>>> +       TP_PROTO(struct kvm_vcpu *vcpu),
>>> +       TP_ARGS(vcpu),
>>> +       TP_STRUCT__entry(
>>> +               __field(unsigned long, pc)
>>> +       ),
>>> +
>>> +       TP_fast_assign(
>>> +               __entry->pc = vcpu->arch.pc;
>>> +       ),
>>> +
>>> +       TP_printk("PC: 0x%08lx",
>>> +                 __entry->pc)
>>> +);
>>> +
>>> +DEFINE_EVENT(kvm_transition, kvm_enter,
>>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>>> +            TP_ARGS(vcpu));
>>> +
>>> +DEFINE_EVENT(kvm_transition, kvm_reenter,
>>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>>> +            TP_ARGS(vcpu));
>>> +
>>> +DEFINE_EVENT(kvm_transition, kvm_out,
>>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>>> +            TP_ARGS(vcpu));
>>> +
>>> +/* Further exit reasons */
>>> +#define KVM_TRACE_EXIT_IDLE            64
>>> +#define KVM_TRACE_EXIT_CACHE           65
>>> +#define KVM_TRACE_EXIT_SIGNAL          66
>>> +
>>> +/* Tracepoints for VM exits */
>>> +#define kvm_trace_symbol_exit_types                    \
>>> +       { KVM_TRACE_EXIT_IDLE,          "IDLE" },       \
>>> +       { KVM_TRACE_EXIT_CACHE,         "CACHE" },      \
>>> +       { KVM_TRACE_EXIT_SIGNAL,        "Signal" }
>>> +
>>> +TRACE_EVENT(kvm_exit_gspr,
>>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
>>> +           TP_ARGS(vcpu, inst_word),
>>> +           TP_STRUCT__entry(
>>> +                       __field(unsigned int, inst_word)
>>> +           ),
>>> +
>>> +           TP_fast_assign(
>>> +                       __entry->inst_word = inst_word;
>>> +           ),
>>> +
>>> +           TP_printk("inst word: 0x%08x",
>>> +                     __entry->inst_word)
>>> +);
>>> +
>>> +DECLARE_EVENT_CLASS(kvm_exit,
>>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>>> +           TP_ARGS(vcpu, reason),
>>> +           TP_STRUCT__entry(
>>> +                       __field(unsigned long, pc)
>>> +                       __field(unsigned int, reason)
>>> +           ),
>>> +
>>> +           TP_fast_assign(
>>> +                       __entry->pc = vcpu->arch.pc;
>>> +                       __entry->reason = reason;
>>> +           ),
>>> +
>>> +           TP_printk("[%s]PC: 0x%08lx",
>>> +                     __print_symbolic(__entry->reason,
>>> +                                      kvm_trace_symbol_exit_types),
>>> +                     __entry->pc)
>>> +);
>>> +
>>> +DEFINE_EVENT(kvm_exit, kvm_exit_idle,
>>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>>> +            TP_ARGS(vcpu, reason));
>>> +
>>> +DEFINE_EVENT(kvm_exit, kvm_exit_cache,
>>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>>> +            TP_ARGS(vcpu, reason));
>>> +
>>> +DEFINE_EVENT(kvm_exit, kvm_exit,
>>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>>> +            TP_ARGS(vcpu, reason));
>>> +
>>> +#define KVM_TRACE_AUX_RESTORE          0
>>> +#define KVM_TRACE_AUX_SAVE             1
>>> +#define KVM_TRACE_AUX_ENABLE           2
>>> +#define KVM_TRACE_AUX_DISABLE          3
>>> +#define KVM_TRACE_AUX_DISCARD          4
>>> +
>>> +#define KVM_TRACE_AUX_FPU              1
>>> +
>>> +#define kvm_trace_symbol_aux_op                                \
>>> +       { KVM_TRACE_AUX_RESTORE,        "restore" },    \
>>> +       { KVM_TRACE_AUX_SAVE,           "save" },       \
>>> +       { KVM_TRACE_AUX_ENABLE,         "enable" },     \
>>> +       { KVM_TRACE_AUX_DISABLE,        "disable" },    \
>>> +       { KVM_TRACE_AUX_DISCARD,        "discard" }
>>> +
>>> +#define kvm_trace_symbol_aux_state                     \
>>> +       { KVM_TRACE_AUX_FPU,     "FPU" }
>>> +
>>> +TRACE_EVENT(kvm_aux,
>>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
>>> +                    unsigned int state),
>>> +           TP_ARGS(vcpu, op, state),
>>> +           TP_STRUCT__entry(
>>> +                       __field(unsigned long, pc)
>>> +                       __field(u8, op)
>>> +                       __field(u8, state)
>>> +           ),
>>> +
>>> +           TP_fast_assign(
>>> +                       __entry->pc = vcpu->arch.pc;
>>> +                       __entry->op = op;
>>> +                       __entry->state = state;
>>> +           ),
>>> +
>>> +           TP_printk("%s %s PC: 0x%08lx",
>>> +                     __print_symbolic(__entry->op,
>>> +                                      kvm_trace_symbol_aux_op),
>>> +                     __print_symbolic(__entry->state,
>>> +                                      kvm_trace_symbol_aux_state),
>>> +                     __entry->pc)
>>> +);
>>> +
>>> +TRACE_EVENT(kvm_vpid_change,
>>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
>>> +           TP_ARGS(vcpu, vpid),
>>> +           TP_STRUCT__entry(
>>> +                       __field(unsigned long, vpid)
>>> +           ),
>>> +
>>> +           TP_fast_assign(
>>> +                       __entry->vpid = vpid;
>>> +           ),
>>> +
>>> +           TP_printk("vpid: 0x%08lx",
>>> +                     __entry->vpid)
>>> +);
>>> +
>>> +#endif /* _TRACE_KVM_H */
>>> +
>>> +#undef TRACE_INCLUDE_PATH
>>> +#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
>>> +#undef TRACE_INCLUDE_FILE
>>> +#define TRACE_INCLUDE_FILE trace
>>> +
>>> +/* This part must be outside protection */
>>> +#include <trace/define_trace.h>
>>> -- 
>>> 2.39.1
>>>
>>>
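
As a quick illustration, here is a minimal sketch of how the new definitions
in the quoted hunks fit together. The function and its call sequence are
illustrative assumptions (as are the csr_read64()/csr_write64() accessor
names); only the CSR_GSTAT_GID* masks, the KVM_TRACE_* constants and the
trace_kvm_* symbols generated from trace.h come from the patch itself:

   /* Sketch: program a guest ID, then trace one vcpu round trip. */
   static void sketch_gid_and_trace(struct kvm_vcpu *vcpu, unsigned long gid)
   {
           unsigned long gstat;

           gstat = csr_read64(LOONGARCH_CSR_GSTAT);
           gstat &= ~CSR_GSTAT_GID;        /* GID occupies bits 23:16 */
           gstat |= (gid << CSR_GSTAT_GID_SHIFT) & CSR_GSTAT_GID;
           csr_write64(gstat, LOONGARCH_CSR_GSTAT);

           trace_kvm_enter(vcpu);          /* from the kvm_transition class */
           /* ... guest runs until it takes an exit ... */
           trace_kvm_exit(vcpu, KVM_TRACE_EXIT_IDLE);  /* printed as "IDLE" */
           trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU);
           trace_kvm_out(vcpu);
   }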

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
  2023-06-09  9:08 ` [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
@ 2023-06-13 12:57   ` bibo, mao
  0 siblings, 0 replies; 17+ messages in thread
From: bibo, mao @ 2023-06-13 12:57 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	Xi Ruoyao, tangyouling, linux-kernel, kvm

Reviewed-by: Bibo Mao <maobibo@loongson.cn>

Regards
Bibo, Mao

On 2023/6/9 17:08, Tianrui Zhao wrote:
> Implement LoongArch vcpu KVM_ENABLE_CAP ioctl interface.
> 
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   arch/loongarch/kvm/vcpu.c | 19 +++++++++++++++++++
>   1 file changed, 19 insertions(+)
> 
> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
> index b0cce413762d..da97b77da8eb 100644
> --- a/arch/loongarch/kvm/vcpu.c
> +++ b/arch/loongarch/kvm/vcpu.c
> @@ -186,6 +186,16 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
>   	return 0;
>   }
>   
> +static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
> +				     struct kvm_enable_cap *cap)
> +{
> +	/*
> +	 * FPU is enabled by default and no other capability is
> +	 * supported yet; caps such as LSX will be added later.
> +	 */
> +	return -EINVAL;
> +}
> +
>   long kvm_arch_vcpu_ioctl(struct file *filp,
>   			 unsigned int ioctl, unsigned long arg)
>   {
> @@ -209,6 +219,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>   			r = _kvm_get_reg(vcpu, &reg);
>   		break;
>   	}
> +	case KVM_ENABLE_CAP: {
> +		struct kvm_enable_cap cap;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&cap, argp, sizeof(cap)))
> +			break;
> +		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
> +		break;
> +	}
>   	default:
>   		r = -ENOIOCTLCMD;
>   		break;

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
  2023-06-09  9:08 Tianrui Zhao
@ 2023-06-09  9:08 ` Tianrui Zhao
  2023-06-13 12:57   ` bibo, mao
  0 siblings, 1 reply; 17+ messages in thread
From: Tianrui Zhao @ 2023-06-09  9:08 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, tangyouling

Implement LoongArch vcpu KVM_ENABLE_CAP ioctl interface.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index b0cce413762d..da97b77da8eb 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -186,6 +186,16 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	/*
+	 * FPU is enabled by default and no other capability is
+	 * supported yet; caps such as LSX will be added later.
+	 */
+	return -EINVAL;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -209,6 +219,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			r = _kvm_get_reg(vcpu, &reg);
 		break;
 	}
+	case KVM_ENABLE_CAP: {
+		struct kvm_enable_cap cap;
+
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			break;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -ENOIOCTLCMD;
 		break;
-- 
2.39.1
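
For completeness, a minimal userspace sketch of exercising this ioctl. The
vcpu_fd parameter is assumed to come from KVM_CREATE_VCPU; with this patch
any requested capability fails with EINVAL, since FPU is the only supported
capability and it is always enabled:

   #include <linux/kvm.h>
   #include <string.h>
   #include <sys/ioctl.h>

   static int sketch_enable_cap(int vcpu_fd)
   {
           struct kvm_enable_cap cap;

           memset(&cap, 0, sizeof(cap));
           cap.cap = 0;    /* illustrative: no cap number is accepted yet */
           /* Returns -1 with errno == EINVAL until more caps are wired up. */
           return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
   }

Note the split between error codes in the diff: unknown ioctl commands fall
through to -ENOIOCTLCMD, while KVM_ENABLE_CAP itself is recognised and the
unsupported capability is rejected with -EINVAL.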


^ permalink raw reply related	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2023-06-17  3:55 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-09  9:04 [PATCH v13 00/30] Add KVM LoongArch support Tianrui Zhao
2023-06-09  9:04 ` [PATCH v13 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
2023-06-09  9:04 ` [PATCH v13 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
2023-06-15  9:51   ` Huacai Chen
2023-06-17  3:05     ` zhaotianrui
2023-06-17  3:54       ` bibo, mao
2023-06-09  9:04 ` [PATCH v13 07/30] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
2023-06-09  9:04 ` [PATCH v13 08/30] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
2023-06-09  9:04 ` [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
2023-06-09  9:05 ` [PATCH v13 20/30] LoongArch: KVM: Implement handle csr exception Tianrui Zhao
2023-06-09  9:05 ` [PATCH v13 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Tianrui Zhao
2023-06-09  9:05 ` [PATCH v13 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
2023-06-15  9:27   ` Huacai Chen
2023-06-16  2:50     ` zhaotianrui
2023-06-09  9:46 ` [PATCH v13 00/30] Add KVM LoongArch support zhaotianrui
2023-06-09  9:08 Tianrui Zhao
2023-06-09  9:08 ` [PATCH v13 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
2023-06-13 12:57   ` bibo, mao
