linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 0/4] KVM: selftests: Add LoongArch support
@ 2023-11-30 11:18 Tianrui Zhao
  2023-11-30 11:18 ` [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch Tianrui Zhao
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Tianrui Zhao @ 2023-11-30 11:18 UTC (permalink / raw)
  To: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, maobibo, zhaotianrui

Add LoongArch support to KVM selftests. The following KVM test
cases pass:
	demand_paging_test
	dirty_log_perf_test
	dirty_log_test
	guest_print_test
	kvm_binary_stats_test
	kvm_create_max_vcpus
	kvm_page_table_test
	memslot_modification_stress_test
	memslot_perf_test
	set_memory_region_test

Changes for v5:
1. In the LoongArch KVM selftests, DEFAULT_GUEST_TEST_MEM should be
0x130000000, which differs from the default value in memstress.h.
So move the definition of DEFAULT_GUEST_TEST_MEM into the LoongArch
ucall.h, and wrap the definition in memstress.h in an '#ifndef'
guard.

Changes for v4:
1. Remove the 'Based-on' tag: the LoongArch KVM series has been
accepted into the mainline kernel, so this series can be applied
directly.

Changes for v3:
1. Improve implementation of LoongArch VM page walk.
2. Add exception handler for LoongArch.
3. Add dirty_log_test, dirty_log_perf_test, guest_print_test
test cases for LoongArch.
4. Add the __ASSEMBLER__ macro to distinguish assembly files from C files.
5. Move ucall_arch_do_ucall into the header file and make it
static inline to avoid function calls.
6. Change the DEFAULT_GUEST_TEST_MEM base addr for LoongArch.

Changes for v2:
1. Use ".balign 4096" to align the assembly code in exception.S
to 4K, instead of ".align 12".
2. LoongArch only supports 3- or 4-level page tables, so remove the
handlers for 2-level page tables.
3. Remove the DEFAULT_LOONGARCH_GUEST_STACK_VADDR_MIN and use the common
DEFAULT_GUEST_STACK_VADDR_MIN to allocate stack memory in guest.
4. Reorganize the test cases supported by LoongArch.
5. Fix some code comments.
6. Add kvm_binary_stats_test test case into LoongArch KVM selftests.

Changes for v1:
1. Add kvm selftests header files for LoongArch.
2. Add processor tests for LoongArch KVM.
3. Add ucall tests for LoongArch KVM.
4. Add LoongArch tests into makefile.

All test case results:
1..10
 timeout set to 120
 selftests: kvm: demand_paging_test
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 guest physical test memory: [0xfbfffc000, 0xfffffc000)
 Finished creating vCPUs and starting uffd threads
 Started all vCPUs
 All vCPU threads joined
 Total guest execution time: 0.200804700s
 Overall demand paging rate: 326366.862927 pgs/sec
ok 1 selftests: kvm: demand_paging_test
 timeout set to 120
 selftests: kvm: dirty_log_perf_test
 Test iterations: 2
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 guest physical test memory: [0xfbfffc000, 0xfffffc000)
 Random seed: 1
 Populate memory time: 0.201452560s
 Enabling dirty logging time: 0.000451670s
 
 Iteration 1 dirty memory time: 0.051582140s
 Iteration 1 get dirty log time: 0.000010510s
 Iteration 1 clear dirty log time: 0.000421730s
 Iteration 2 dirty memory time: 0.046593760s
 Iteration 2 get dirty log time: 0.000002110s
 Iteration 2 clear dirty log time: 0.000418020s
 Disabling dirty logging time: 0.002948490s
 Get dirty log over 2 iterations took 0.000012620s. (Avg 0.000006310s/iteration)
 Clear dirty log over 2 iterations took 0.000839750s. (Avg 0.000419875s/iteration)
ok 2 selftests: kvm: dirty_log_perf_test
 timeout set to 120
 selftests: kvm: dirty_log_test
 Test iterations: 32, interval: 10 (ms)
 Testing Log Mode 'dirty-log'
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 guest physical test memory offset: 0xfbfff0000
 Dirtied 453632 pages
 Total bits checked: dirty (436564), clear (1595145), track_next (70002)
 Testing Log Mode 'clear-log'
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 guest physical test memory offset: 0xfbfff0000
 Dirtied 425984 pages
 Total bits checked: dirty (414397), clear (1617312), track_next (68152)
 Testing Log Mode 'dirty-ring'
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 dirty ring count: 0x10000
 guest physical test memory offset: 0xfbfff0000
 vcpu stops because vcpu is kicked out...
 Notifying vcpu to continue
 vcpu continues now.
 Iteration 1 collected 3201 pages
 vcpu stops because dirty ring is full...
 vcpu continues now.
 vcpu stops because dirty ring is full...
 Notifying vcpu to continue
 Iteration 2 collected 65472 pages
 ......
 vcpu continues now.
 vcpu stops because vcpu is kicked out...
 vcpu continues now.
 vcpu stops because vcpu is kicked out...
 Notifying vcpu to continue
 vcpu continues now.
 Iteration 31 collected 12642 pages
 vcpu stops because dirty ring is full...
 vcpu continues now.
 Dirtied 7275520 pages
 Total bits checked: dirty (1165675), clear (866034), track_next (811358)
ok 3 selftests: kvm: dirty_log_test
 timeout set to 120
 selftests: kvm: guest_print_test
ok 4 selftests: kvm: guest_print_test
 timeout set to 120
 selftests: kvm: kvm_binary_stats_test
 TAP version 13
 1..4
 ok 1 vm0
 ok 2 vm1
 ok 3 vm2
 ok 4 vm3
 # Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0
ok 5 selftests: kvm: kvm_binary_stats_test
 timeout set to 120
 selftests: kvm: kvm_create_max_vcpus
 KVM_CAP_MAX_VCPU_ID: 256
 KVM_CAP_MAX_VCPUS: 256
 Testing creating 256 vCPUs, with IDs 0...255.
ok 6 selftests: kvm: kvm_create_max_vcpus
 timeout set to 120
 selftests: kvm: kvm_page_table_test
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 Testing memory backing src type: anonymous
 Testing memory backing src granularity: 0x4000
 Testing memory size(aligned): 0x40000000
 Guest physical test memory offset: 0xfbfffc000
 Host  virtual  test memory offset: 0x7fffb0860000
 Number of testing vCPUs: 1
 Started all vCPUs successfully
 KVM_CREATE_MAPPINGS: total execution time: 0.200919330s
 
 KVM_UPDATE_MAPPINGS: total execution time: 0.051182930s
 
 KVM_ADJUST_MAPPINGS: total execution time: 0.010083590s
 
ok 7 selftests: kvm: kvm_page_table_test
 timeout set to 120
 selftests: kvm: memslot_modification_stress_test
 Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
 guest physical test memory: [0xfbfffc000, 0xfffffc000)
 Finished creating vCPUs
 Started all vCPUs
 All vCPU threads joined
ok 8 selftests: kvm: memslot_modification_stress_test
 timeout set to 120
 selftests: kvm: memslot_perf_test
 Testing map performance with 1 runs, 5 seconds each
 Memslot count too high for this test, decrease the cap (max is 2053)
 
 Testing unmap performance with 1 runs, 5 seconds each
 Memslot count too high for this test, decrease the cap (max is 8197)
 
 Testing unmap chunked performance with 1 runs, 5 seconds each
 Memslot count too high for this test, decrease the cap (max is 8197)
 
 Testing move active area performance with 1 runs, 5 seconds each
 Test took 0.761678900s for slot setup + 5.000014460s all iterations
 Done 120167 iterations, avg 0.000041608s each
 Best runtime result was 0.000041608s per iteration (with 120167 iterations)
 
 Testing move inactive area performance with 1 runs, 5 seconds each
 Test took 0.771796550s for slot setup + 5.000018520s all iterations
 Done 136354 iterations, avg 0.000036669s each
 Best runtime result was 0.000036669s per iteration (with 136354 iterations)
 
 Testing RW performance with 1 runs, 5 seconds each
 Test took 0.763568840s for slot setup + 5.002233800s all iterations
 Done 649 iterations, avg 0.007707602s each
 Best runtime result was 0.007707602s per iteration (with 649 iterations)
 Best slot setup time for the whole test area was 0.761678900s
ok 9 selftests: kvm: memslot_perf_test
 timeout set to 120
 selftests: kvm: set_memory_region_test
 Allowed number of memory slots: 32767
 Adding slots 0..32766, each memory region with 2048K size
ok 10 selftests: kvm: set_memory_region_test

Tianrui Zhao (4):
  KVM: selftests: Add KVM selftests header files for LoongArch
  KVM: selftests: Add core KVM selftests support for LoongArch
  KVM: selftests: Add ucall test support for LoongArch
  KVM: selftests: Add test cases for LoongArch

 tools/testing/selftests/kvm/Makefile          |  15 +
 .../selftests/kvm/include/kvm_util_base.h     |   5 +
 .../kvm/include/loongarch/processor.h         | 133 +++++++
 .../selftests/kvm/include/loongarch/ucall.h   |  28 ++
 .../testing/selftests/kvm/include/memstress.h |   2 +
 .../selftests/kvm/lib/loongarch/exception.S   |  59 ++++
 .../selftests/kvm/lib/loongarch/processor.c   | 333 ++++++++++++++++++
 .../selftests/kvm/lib/loongarch/ucall.c       |  38 ++
 8 files changed, 613 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/loongarch/processor.h
 create mode 100644 tools/testing/selftests/kvm/include/loongarch/ucall.h
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/exception.S
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/processor.c
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/ucall.c

-- 
2.39.1



* [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-11-30 11:18 [PATCH v5 0/4] KVM: selftests: Add LoongArch support Tianrui Zhao
@ 2023-11-30 11:18 ` Tianrui Zhao
  2023-12-12  3:08   ` zhaotianrui
  2023-11-30 11:18 ` [PATCH v5 2/4] KVM: selftests: Add core KVM selftests support " Tianrui Zhao
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Tianrui Zhao @ 2023-11-30 11:18 UTC (permalink / raw)
  To: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, maobibo, zhaotianrui

Add KVM selftests header files for LoongArch, including processor.h
and kvm_util_base.h. These mainly contain the LoongArch CSR register
definitions and page table information. Also change the
DEFAULT_GUEST_TEST_MEM base address for LoongArch.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 .../selftests/kvm/include/kvm_util_base.h     |   5 +
 .../kvm/include/loongarch/processor.h         | 133 ++++++++++++++++++
 .../testing/selftests/kvm/include/memstress.h |   2 +
 3 files changed, 140 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/loongarch/processor.h

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index a18db6a7b3c..97f8b24741b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -218,6 +218,11 @@ extern enum vm_guest_mode vm_mode_default;
 #define MIN_PAGE_SHIFT			12U
 #define ptes_per_page(page_size)	((page_size) / 8)
 
+#elif defined(__loongarch__)
+#define VM_MODE_DEFAULT			VM_MODE_P36V47_16K
+#define MIN_PAGE_SHIFT			14U
+#define ptes_per_page(page_size)	((page_size) / 8)
+
 #endif
 
 #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
diff --git a/tools/testing/selftests/kvm/include/loongarch/processor.h b/tools/testing/selftests/kvm/include/loongarch/processor.h
new file mode 100644
index 00000000000..cea6b284131
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/loongarch/processor.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
+
+#define _PAGE_VALID_SHIFT	0
+#define _PAGE_DIRTY_SHIFT	1
+#define _PAGE_PLV_SHIFT		2  /* 2~3, two bits */
+#define _CACHE_SHIFT		4  /* 4~5, two bits */
+#define _PAGE_PRESENT_SHIFT	7
+#define _PAGE_WRITE_SHIFT	8
+
+#define PLV_KERN		0
+#define PLV_USER		3
+#define PLV_MASK		0x3
+
+#define _PAGE_VALID		(0x1UL << _PAGE_VALID_SHIFT)
+#define _PAGE_PRESENT		(0x1UL << _PAGE_PRESENT_SHIFT)
+#define _PAGE_WRITE		(0x1UL << _PAGE_WRITE_SHIFT)
+#define _PAGE_DIRTY		(0x1UL << _PAGE_DIRTY_SHIFT)
+#define _PAGE_USER		(PLV_USER << _PAGE_PLV_SHIFT)
+#define __READABLE		(_PAGE_VALID)
+#define __WRITEABLE		(_PAGE_DIRTY | _PAGE_WRITE)
+#define _CACHE_CC		(0x1UL << _CACHE_SHIFT) /* Coherent Cached */
+
+/* general registers */
+#define zero	$r0
+#define ra	$r1
+#define tp	$r2
+#define sp	$r3
+#define a0	$r4
+#define a1	$r5
+#define a2	$r6
+#define a3	$r7
+#define a4	$r8
+#define a5	$r9
+#define a6	$r10
+#define a7	$r11
+#define t0	$r12
+#define t1	$r13
+#define t2	$r14
+#define t3	$r15
+#define t4	$r16
+#define t5	$r17
+#define t6	$r18
+#define t7	$r19
+#define t8	$r20
+#define u0	$r21
+#define fp	$r22
+#define s0	$r23
+#define s1	$r24
+#define s2	$r25
+#define s3	$r26
+#define s4	$r27
+#define s5	$r28
+#define s6	$r29
+#define s7	$r30
+#define s8	$r31
+
+#define PS_4K				0x0000000c
+#define PS_8K				0x0000000d
+#define PS_16K				0x0000000e
+#define PS_DEFAULT_SIZE			PS_16K
+
+/* Basic CSR registers */
+#define LOONGARCH_CSR_CRMD		0x0 /* Current mode info */
+#define CSR_CRMD_PG_SHIFT		4
+#define CSR_CRMD_PG			(0x1UL << CSR_CRMD_PG_SHIFT)
+#define CSR_CRMD_IE_SHIFT		2
+#define CSR_CRMD_IE			(0x1UL << CSR_CRMD_IE_SHIFT)
+#define CSR_CRMD_PLV_SHIFT		0
+#define CSR_CRMD_PLV_WIDTH		2
+#define CSR_CRMD_PLV			(0x3UL << CSR_CRMD_PLV_SHIFT)
+#define PLV_MASK			0x3
+
+#define LOONGARCH_CSR_PRMD		0x1
+#define LOONGARCH_CSR_EUEN		0x2
+#define LOONGARCH_CSR_ECFG		0x4
+#define LOONGARCH_CSR_ESTAT		0x5 /* Exception status */
+#define LOONGARCH_CSR_ERA		0x6 /* ERA */
+#define LOONGARCH_CSR_BADV		0x7 /* Bad virtual address */
+#define LOONGARCH_CSR_EENTRY		0xc
+#define LOONGARCH_CSR_TLBIDX		0x10 /* TLB Index, EHINV, PageSize, NP */
+#define CSR_TLBIDX_PS_SHIFT		24
+#define CSR_TLBIDX_PS_WIDTH		6
+#define CSR_TLBIDX_PS			(0x3fUL << CSR_TLBIDX_PS_SHIFT)
+#define CSR_TLBIDX_SIZEM		0x3f000000
+#define CSR_TLBIDX_SIZE			CSR_TLBIDX_PS_SHIFT
+
+#define LOONGARCH_CSR_ASID		0x18 /* ASID */
+/* Page table base address when VA[VALEN-1] = 0 */
+#define LOONGARCH_CSR_PGDL		0x19
+/* Page table base address when VA[VALEN-1] = 1 */
+#define LOONGARCH_CSR_PGDH		0x1a
+/* Page table base */
+#define LOONGARCH_CSR_PGD		0x1b
+#define LOONGARCH_CSR_PWCTL0		0x1c
+#define LOONGARCH_CSR_PWCTL1		0x1d
+#define LOONGARCH_CSR_STLBPGSIZE	0x1e
+#define LOONGARCH_CSR_CPUID		0x20
+#define LOONGARCH_CSR_KS0		0x30
+#define LOONGARCH_CSR_KS1		0x31
+#define LOONGARCH_CSR_TMID		0x40
+#define LOONGARCH_CSR_TCFG		0x41
+#define LOONGARCH_CSR_TLBRENTRY		0x88 /* TLB refill exception entry */
+/* KSave for TLB refill exception */
+#define LOONGARCH_CSR_TLBRSAVE		0x8b
+#define LOONGARCH_CSR_TLBREHI		0x8e
+#define CSR_TLBREHI_PS_SHIFT		0
+#define CSR_TLBREHI_PS			(0x3fUL << CSR_TLBREHI_PS_SHIFT)
+
+#define DEFAULT_LOONARCH64_STACK_MIN		0x4000
+#define DEFAULT_LOONARCH64_PAGE_TABLE_MIN	0x4000
+#define EXREGS_GPRS				(32)
+
+#ifndef __ASSEMBLER__
+struct ex_regs {
+	unsigned long regs[EXREGS_GPRS];
+	unsigned long pc;
+	unsigned long estat;
+	unsigned long badv;
+};
+
+extern void handle_tlb_refill(void);
+extern void handle_exception(void);
+#endif
+
+#define PC_OFFSET_EXREGS		((EXREGS_GPRS + 0) * 8)
+#define ESTAT_OFFSET_EXREGS		((EXREGS_GPRS + 1) * 8)
+#define BADV_OFFSET_EXREGS		((EXREGS_GPRS + 2) * 8)
+#define EXREGS_SIZE			((EXREGS_GPRS + 3) * 8)
+
+#endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index ce4e603050e..5bcdaf2efab 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -13,7 +13,9 @@
 #include "kvm_util.h"
 
 /* Default guest test virtual memory offset */
+#ifndef DEFAULT_GUEST_TEST_MEM
 #define DEFAULT_GUEST_TEST_MEM		0xc0000000
+#endif
 
 #define DEFAULT_PER_VCPU_MEM_SIZE	(1 << 30) /* 1G */
 
-- 
2.39.1



* [PATCH v5 2/4] KVM: selftests: Add core KVM selftests support for LoongArch
  2023-11-30 11:18 [PATCH v5 0/4] KVM: selftests: Add LoongArch support Tianrui Zhao
  2023-11-30 11:18 ` [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch Tianrui Zhao
@ 2023-11-30 11:18 ` Tianrui Zhao
  2023-12-04  2:05   ` maobibo
  2023-11-30 11:18 ` [PATCH v5 3/4] KVM: selftests: Add ucall test " Tianrui Zhao
  2023-11-30 11:18 ` [PATCH v5 4/4] KVM: selftests: Add test cases " Tianrui Zhao
  3 siblings, 1 reply; 14+ messages in thread
From: Tianrui Zhao @ 2023-11-30 11:18 UTC (permalink / raw)
  To: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, maobibo, zhaotianrui

Add core KVM selftests support for LoongArch.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 .../selftests/kvm/lib/loongarch/exception.S   |  59 ++++
 .../selftests/kvm/lib/loongarch/processor.c   | 333 ++++++++++++++++++
 2 files changed, 392 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/exception.S
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/processor.c

diff --git a/tools/testing/selftests/kvm/lib/loongarch/exception.S b/tools/testing/selftests/kvm/lib/loongarch/exception.S
new file mode 100644
index 00000000000..88bfa505c6f
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/loongarch/exception.S
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include "processor.h"
+
+/* address of refill exception should be 4K aligned */
+.balign	4096
+.global handle_tlb_refill
+handle_tlb_refill:
+	csrwr	t0, LOONGARCH_CSR_TLBRSAVE
+	csrrd	t0, LOONGARCH_CSR_PGD
+	lddir	t0, t0, 3
+	lddir	t0, t0, 1
+	ldpte	t0, 0
+	ldpte	t0, 1
+	tlbfill
+	csrrd	t0, LOONGARCH_CSR_TLBRSAVE
+	ertn
+
+	/*
+	 * save and restore all gprs except base register,
+	 * and default value of base register is sp ($r3).
+	 */
+.macro save_gprs base
+	.irp n,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
+	st.d    $r\n, \base, 8 * \n
+	.endr
+.endm
+
+.macro restore_gprs base
+	.irp n,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
+	ld.d    $r\n, \base, 8 * \n
+	.endr
+.endm
+
+/* address of general exception should be 4K aligned */
+.balign	4096
+.global handle_exception
+handle_exception:
+	csrwr  sp, LOONGARCH_CSR_KS0
+	csrrd  sp, LOONGARCH_CSR_KS1
+	addi.d sp, sp, -EXREGS_SIZE
+
+	save_gprs sp
+	/* save sp register to stack */
+	csrrd  t0, LOONGARCH_CSR_KS0
+	st.d   t0, sp, 3 * 8
+
+	csrrd  t0, LOONGARCH_CSR_ERA
+	st.d   t0, sp, PC_OFFSET_EXREGS
+	csrrd  t0, LOONGARCH_CSR_ESTAT
+	st.d   t0, sp, ESTAT_OFFSET_EXREGS
+	csrrd  t0, LOONGARCH_CSR_BADV
+	st.d   t0, sp, BADV_OFFSET_EXREGS
+
+	or     a0, sp, zero
+	bl route_exception
+	restore_gprs sp
+	csrrd  sp, LOONGARCH_CSR_KS0
+	ertn
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
new file mode 100644
index 00000000000..82d8c1ec711
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -0,0 +1,333 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <assert.h>
+#include <linux/compiler.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+
+static vm_paddr_t invalid_pgtable[4];
+static uint64_t virt_pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
+{
+	unsigned int shift;
+	uint64_t mask;
+
+	shift = level * (vm->page_shift - 3) + vm->page_shift;
+	mask = (1UL << (vm->page_shift - 3)) - 1;
+	return (gva >> shift) & mask;
+}
+
+static uint64_t pte_addr(struct kvm_vm *vm, uint64_t entry)
+{
+	return entry &  ~((0x1UL << vm->page_shift) - 1);
+}
+
+static uint64_t ptrs_per_pte(struct kvm_vm *vm)
+{
+	return 1 << (vm->page_shift - 3);
+}
+
+static void virt_set_pgtable(struct kvm_vm *vm, vm_paddr_t table, vm_paddr_t child)
+{
+	uint64_t *ptep;
+	int i, ptrs_per_pte;
+
+	ptep = addr_gpa2hva(vm, table);
+	ptrs_per_pte = 1 << (vm->page_shift - 3);
+	for (i = 0; i < ptrs_per_pte; i++)
+		*(ptep + i) = child;
+}
+
+void virt_arch_pgd_alloc(struct kvm_vm *vm)
+{
+	int i;
+	vm_paddr_t child, table;
+
+	if (vm->pgd_created)
+		return;
+	child = table = 0;
+	for (i = 0; i < vm->pgtable_levels; i++) {
+		invalid_pgtable[i] = child;
+		table = vm_phy_page_alloc(vm, DEFAULT_LOONARCH64_PAGE_TABLE_MIN,
+				vm->memslots[MEM_REGION_PT]);
+	TEST_ASSERT(table, "Failed to allocate page table at level %d\n", i);
+		virt_set_pgtable(vm, table, child);
+		child = table;
+	}
+	vm->pgd = table;
+	vm->pgd_created = true;
+}
+
+static int virt_pte_none(uint64_t *ptep, int level)
+{
+	return *ptep == invalid_pgtable[level];
+}
+
+static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
+{
+	uint64_t *ptep;
+	vm_paddr_t child;
+	int level;
+
+	if (!vm->pgd_created)
+		goto unmapped_gva;
+
+	level = vm->pgtable_levels - 1;
+	child = vm->pgd;
+	while (level > 0) {
+		ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
+		if (virt_pte_none(ptep, level)) {
+			if (alloc) {
+				child = vm_alloc_page_table(vm);
+				virt_set_pgtable(vm, child, invalid_pgtable[level - 1]);
+				*ptep = child;
+			} else
+				goto unmapped_gva;
+
+		} else
+			child = pte_addr(vm, *ptep);
+		level--;
+	}
+
+	ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
+	return ptep;
+
+unmapped_gva:
+	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
+	exit(EXIT_FAILURE);
+}
+
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	uint64_t *ptep;
+
+	ptep = virt_populate_pte(vm, gva, 0);
+	TEST_ASSERT(*ptep != 0, "Virtual address vaddr: 0x%lx not mapped\n", gva);
+
+	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
+}
+
+void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+{
+	uint32_t prot_bits;
+	uint64_t *ptep;
+
+	TEST_ASSERT((vaddr % vm->page_size) == 0,
+			"Virtual address not on page boundary,\n"
+			"vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
+	TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
+			(vaddr >> vm->page_shift)),
+			"Invalid virtual address, vaddr: 0x%lx", vaddr);
+	TEST_ASSERT((paddr % vm->page_size) == 0,
+			"Physical address not on page boundary,\n"
+			"paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
+	TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
+			"Physical address beyond maximum supported,\n"
+			"paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+			paddr, vm->max_gfn, vm->page_size);
+
+	ptep = virt_populate_pte(vm, vaddr, 1);
+	prot_bits = _PAGE_PRESENT | __READABLE | __WRITEABLE | _CACHE_CC;
+	prot_bits |= _PAGE_USER;
+	*ptep = paddr | prot_bits;
+}
+
+static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)
+{
+	static const char * const type[] = { "pte", "pmd", "pud", "pgd"};
+	uint64_t pte, *ptep;
+
+	if (level < 0)
+		return;
+
+	for (pte = page; pte < page + ptrs_per_pte(vm) * 8; pte += 8) {
+		ptep = addr_gpa2hva(vm, pte);
+		if (virt_pte_none(ptep, level))
+			continue;
+		fprintf(stream, "%*s%s: %lx: %lx at %p\n",
+				indent, "", type[level], pte, *ptep, ptep);
+		pte_dump(stream, vm, indent + 1, pte_addr(vm, *ptep), level - 1);
+	}
+}
+
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+	int level;
+
+	if (!vm->pgd_created)
+		return;
+
+	level = vm->pgtable_levels - 1;
+	pte_dump(stream, vm, indent, vm->pgd, level);
+}
+
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+{
+}
+
+void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+
+	if (get_ucall(vcpu, &uc) != UCALL_UNHANDLED)
+		return;
+
+	TEST_FAIL("Unexpected exception (pc:0x%lx, estat:0x%lx, badv:0x%lx)",
+			uc.args[0], uc.args[1], uc.args[2]);
+}
+
+void route_exception(struct ex_regs *regs)
+{
+	unsigned long pc, estat, badv;
+
+	pc = regs->pc;
+	estat = regs->estat;
+	badv  = regs->badv;
+	ucall(UCALL_UNHANDLED, 3, pc, estat, badv);
+	while (1)
+		;
+}
+
+void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
+{
+	va_list ap;
+	struct kvm_regs regs;
+	int i;
+
+	TEST_ASSERT(num >= 1 && num <= 8, "Unsupported number of args,\n"
+		    "num: %u\n", num);
+
+	vcpu_regs_get(vcpu, &regs);
+	va_start(ap, num);
+	for (i = 0; i < num; i++)
+		regs.gpr[i + 4] = va_arg(ap, uint64_t);
+	va_end(ap);
+	vcpu_regs_set(vcpu, &regs);
+}
+
+static void loongarch_get_csr(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
+{
+	uint64_t csrid;
+
+	csrid = KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | 8 * id;
+	vcpu_get_reg(vcpu, csrid, addr);
+}
+
+static void loongarch_set_csr(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
+{
+	uint64_t csrid;
+
+	csrid = KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | 8 * id;
+	vcpu_set_reg(vcpu, csrid, val);
+}
+
+static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
+{
+	unsigned long val;
+	int width;
+	struct kvm_vm *vm = vcpu->vm;
+
+	switch (vm->mode) {
+	case VM_MODE_P48V48_16K:
+	case VM_MODE_P40V48_16K:
+	case VM_MODE_P36V48_16K:
+	case VM_MODE_P36V47_16K:
+		break;
+
+	default:
+		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
+	}
+
+	/* user mode and page enable mode */
+	val = PLV_USER | CSR_CRMD_PG;
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_CRMD, val);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_PRMD, val);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_EUEN, 1);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_ECFG, 0);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_TCFG, 0);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_ASID, 1);
+
+	width = vm->page_shift - 3;
+	val = 0;
+	switch (vm->pgtable_levels) {
+	case 4:
+		/* pud page shift and width */
+		val = (vm->page_shift + width * 2) << 20 | (width << 25);
+		/* fall through */
+	case 3:
+		/* pmd page shift and width */
+		val |= (vm->page_shift + width) << 10 | (width << 15);
+		/* pte page shift and width */
+		val |= vm->page_shift | width << 5;
+		break;
+	default:
+		TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->pgtable_levels);
+	}
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL0, val);
+
+	/* pgd page shift and width */
+	val = (vm->page_shift + width * (vm->pgtable_levels - 1)) | width << 6;
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL1, val);
+
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->pgd);
+
+	/*
+	 * The TLB refill exception runs in real (direct address) mode,
+	 * so its entry address must be a physical address.
+	 */
+	val = addr_gva2gpa(vm, (unsigned long)handle_tlb_refill);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBRENTRY, val);
+
+	/*
+	 * The general exception runs with paging enabled, so its entry
+	 * address must be a virtual address.
+	 */
+	val = (unsigned long)handle_exception;
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_EENTRY, val);
+
+	loongarch_get_csr(vcpu, LOONGARCH_CSR_TLBIDX, &val);
+	val &= ~CSR_TLBIDX_SIZEM;
+	val |= PS_DEFAULT_SIZE << CSR_TLBIDX_SIZE;
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBIDX, val);
+
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_STLBPGSIZE, PS_DEFAULT_SIZE);
+
+	/* LOONGARCH_CSR_KS1 is used for exception stack */
+	val = __vm_vaddr_alloc(vm, vm->page_size,
+			DEFAULT_LOONARCH64_STACK_MIN, MEM_REGION_DATA);
+	TEST_ASSERT(val != 0,  "No memory for exception stack");
+	val = val + vm->page_size;
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_KS1, val);
+
+	loongarch_get_csr(vcpu, LOONGARCH_CSR_TLBREHI, &val);
+	val &= ~CSR_TLBREHI_PS;
+	val |= PS_DEFAULT_SIZE << CSR_TLBREHI_PS_SHIFT;
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBREHI, val);
+
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_CPUID, vcpu->id);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_TMID,  vcpu->id);
+}
+
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+				  void *guest_code)
+{
+	size_t stack_size;
+	uint64_t stack_vaddr;
+	struct kvm_regs regs;
+	struct kvm_vcpu *vcpu;
+
+	vcpu = __vm_vcpu_add(vm, vcpu_id);
+	stack_size = vm->page_size;
+	stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
+			DEFAULT_LOONARCH64_STACK_MIN, MEM_REGION_DATA);
+	TEST_ASSERT(stack_vaddr != 0,  "No memory for vm stack");
+
+	loongarch_vcpu_setup(vcpu);
+	/* Setup guest general purpose registers */
+	vcpu_regs_get(vcpu, &regs);
+	regs.gpr[3] = stack_vaddr + stack_size;
+	regs.pc = (uint64_t)guest_code;
+	vcpu_regs_set(vcpu, &regs);
+
+	return vcpu;
+}
-- 
2.39.1



* [PATCH v5 3/4] KVM: selftests: Add ucall test support for LoongArch
  2023-11-30 11:18 [PATCH v5 0/4] KVM: selftests: Add LoongArch support Tianrui Zhao
  2023-11-30 11:18 ` [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch Tianrui Zhao
  2023-11-30 11:18 ` [PATCH v5 2/4] KVM: selftests: Add core KVM selftests support " Tianrui Zhao
@ 2023-11-30 11:18 ` Tianrui Zhao
  2023-12-04  2:05   ` maobibo
  2023-11-30 11:18 ` [PATCH v5 4/4] KVM: selftests: Add test cases " Tianrui Zhao
  3 siblings, 1 reply; 14+ messages in thread
From: Tianrui Zhao @ 2023-11-30 11:18 UTC (permalink / raw)
  To: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, maobibo, zhaotianrui

Add ucall test support for LoongArch. A ucall is a "hypercall to
userspace".

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 .../selftests/kvm/include/loongarch/ucall.h   | 28 ++++++++++++++
 .../selftests/kvm/lib/loongarch/ucall.c       | 38 +++++++++++++++++++
 2 files changed, 66 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/loongarch/ucall.h
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/ucall.c

diff --git a/tools/testing/selftests/kvm/include/loongarch/ucall.h b/tools/testing/selftests/kvm/include/loongarch/ucall.h
new file mode 100644
index 00000000000..e9033ea6fbf
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/loongarch/ucall.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_KVM_UCALL_H
+#define SELFTEST_KVM_UCALL_H
+
+#include "kvm_util_base.h"
+
+#define UCALL_EXIT_REASON       KVM_EXIT_MMIO
+
+/*
+ * The default application load address is 0x120000000, and
+ * DEFAULT_GUEST_TEST_MEM must be above it so that
+ * PER_VCPU_MEM_SIZE can be large enough; KVM selftest binaries
+ * are generally smaller than 256M.
+ */
+#define DEFAULT_GUEST_TEST_MEM	0x130000000
+
+/*
+ * ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
+ * VM), it must not be accessed from host code.
+ */
+extern vm_vaddr_t *ucall_exit_mmio_addr;
+
+static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+{
+	WRITE_ONCE(*ucall_exit_mmio_addr, uc);
+}
+
+#endif
diff --git a/tools/testing/selftests/kvm/lib/loongarch/ucall.c b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
new file mode 100644
index 00000000000..fc6cbb50573
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ucall support. A ucall is a "hypercall to userspace".
+ *
+ */
+#include "kvm_util.h"
+
+/*
+ * ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
+ * VM), it must not be accessed from host code.
+ */
+vm_vaddr_t *ucall_exit_mmio_addr;
+
+void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+{
+	vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
+
+	virt_map(vm, mmio_gva, mmio_gpa, 1);
+
+	vm->ucall_mmio_addr = mmio_gpa;
+
+	write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
+}
+
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+
+	if (run->exit_reason == KVM_EXIT_MMIO &&
+	    run->mmio.phys_addr == vcpu->vm->ucall_mmio_addr) {
+		TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(uint64_t),
+			    "Unexpected ucall exit mmio address access");
+
+		return (void *)(*((uint64_t *)run->mmio.data));
+	}
+
+	return NULL;
+}
-- 
2.39.1



* [PATCH v5 4/4] KVM: selftests: Add test cases for LoongArch
  2023-11-30 11:18 [PATCH v5 0/4] KVM: selftests: Add LoongArch support Tianrui Zhao
                   ` (2 preceding siblings ...)
  2023-11-30 11:18 ` [PATCH v5 3/4] KVM: selftests: Add ucall test " Tianrui Zhao
@ 2023-11-30 11:18 ` Tianrui Zhao
  2023-12-04  2:11   ` maobibo
  3 siblings, 1 reply; 14+ messages in thread
From: Tianrui Zhao @ 2023-11-30 11:18 UTC (permalink / raw)
  To: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, maobibo, zhaotianrui

The following common KVM test cases are supported on LoongArch:
	demand_paging_test
	dirty_log_perf_test
	dirty_log_test
	guest_print_test
	kvm_binary_stats_test
	kvm_create_max_vcpus
	kvm_page_table_test
	memslot_modification_stress_test
	memslot_perf_test
	set_memory_region_test
Other test cases are not supported on LoongArch. For example,
rseq_test is not supported because glibc does not support it.

Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 tools/testing/selftests/kvm/Makefile | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index a5963ab9215..9d099d48013 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -65,6 +65,10 @@ LIBKVM_s390x += lib/s390x/ucall.c
 LIBKVM_riscv += lib/riscv/processor.c
 LIBKVM_riscv += lib/riscv/ucall.c
 
+LIBKVM_loongarch += lib/loongarch/processor.c
+LIBKVM_loongarch += lib/loongarch/ucall.c
+LIBKVM_loongarch += lib/loongarch/exception.S
+
 # Non-compiled test targets
 TEST_PROGS_x86_64 += x86_64/nx_huge_pages_test.sh
 
@@ -202,6 +206,17 @@ TEST_GEN_PROGS_riscv += kvm_binary_stats_test
 
 SPLIT_TESTS += get-reg-list
 
+TEST_GEN_PROGS_loongarch += demand_paging_test
+TEST_GEN_PROGS_loongarch += dirty_log_perf_test
+TEST_GEN_PROGS_loongarch += dirty_log_test
+TEST_GEN_PROGS_loongarch += guest_print_test
+TEST_GEN_PROGS_loongarch += kvm_binary_stats_test
+TEST_GEN_PROGS_loongarch += kvm_create_max_vcpus
+TEST_GEN_PROGS_loongarch += kvm_page_table_test
+TEST_GEN_PROGS_loongarch += memslot_modification_stress_test
+TEST_GEN_PROGS_loongarch += memslot_perf_test
+TEST_GEN_PROGS_loongarch += set_memory_region_test
+
 TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR))
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR))
 TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR))
-- 
2.39.1



* Re: [PATCH v5 2/4] KVM: selftests: Add core KVM selftests support for LoongArch
  2023-11-30 11:18 ` [PATCH v5 2/4] KVM: selftests: Add core KVM selftests support " Tianrui Zhao
@ 2023-12-04  2:05   ` maobibo
  0 siblings, 0 replies; 14+ messages in thread
From: maobibo @ 2023-12-04  2:05 UTC (permalink / raw)
  To: Tianrui Zhao, Shuah Khan, Paolo Bonzini, linux-kernel, kvm,
	Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma



On 2023/11/30 at 7:18 PM, Tianrui Zhao wrote:
> Add core KVM selftests support for LoongArch.
> 
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   .../selftests/kvm/lib/loongarch/exception.S   |  59 ++++
>   .../selftests/kvm/lib/loongarch/processor.c   | 333 ++++++++++++++++++
>   2 files changed, 392 insertions(+)
>   create mode 100644 tools/testing/selftests/kvm/lib/loongarch/exception.S
>   create mode 100644 tools/testing/selftests/kvm/lib/loongarch/processor.c
> 
> diff --git a/tools/testing/selftests/kvm/lib/loongarch/exception.S b/tools/testing/selftests/kvm/lib/loongarch/exception.S
> new file mode 100644
> index 00000000000..88bfa505c6f
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/loongarch/exception.S
> @@ -0,0 +1,59 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include "processor.h"
> +
> +/* address of refill exception should be 4K aligned */
> +.balign	4096
> +.global handle_tlb_refill
> +handle_tlb_refill:
> +	csrwr	t0, LOONGARCH_CSR_TLBRSAVE
> +	csrrd	t0, LOONGARCH_CSR_PGD
> +	lddir	t0, t0, 3
> +	lddir	t0, t0, 1
> +	ldpte	t0, 0
> +	ldpte	t0, 1
> +	tlbfill
> +	csrrd	t0, LOONGARCH_CSR_TLBRSAVE
> +	ertn
> +
> +	/*
> +	 * Save and restore all GPRs except the base register;
> +	 * the default value of the base register is sp ($r3).
> +	 */
> +.macro save_gprs base
> +	.irp n,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
> +	st.d    $r\n, \base, 8 * \n
> +	.endr
> +.endm
> +
> +.macro restore_gprs base
> +	.irp n,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
> +	ld.d    $r\n, \base, 8 * \n
> +	.endr
> +.endm
> +
> +/* address of general exception should be 4K aligned */
> +.balign	4096
> +.global handle_exception
> +handle_exception:
> +	csrwr  sp, LOONGARCH_CSR_KS0
> +	csrrd  sp, LOONGARCH_CSR_KS1
> +	addi.d sp, sp, -EXREGS_SIZE
> +
> +	save_gprs sp
> +	/* save sp register to stack */
> +	csrrd  t0, LOONGARCH_CSR_KS0
> +	st.d   t0, sp, 3 * 8
> +
> +	csrrd  t0, LOONGARCH_CSR_ERA
> +	st.d   t0, sp, PC_OFFSET_EXREGS
> +	csrrd  t0, LOONGARCH_CSR_ESTAT
> +	st.d   t0, sp, ESTAT_OFFSET_EXREGS
> +	csrrd  t0, LOONGARCH_CSR_BADV
> +	st.d   t0, sp, BADV_OFFSET_EXREGS
> +
> +	or     a0, sp, zero
> +	bl route_exception
> +	restore_gprs sp
> +	csrrd  sp, LOONGARCH_CSR_KS0
> +	ertn
> diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
> new file mode 100644
> index 00000000000..82d8c1ec711
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
> @@ -0,0 +1,333 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <assert.h>
> +#include <linux/compiler.h>
> +
> +#include "kvm_util.h"
> +#include "processor.h"
> +
> +static vm_paddr_t invalid_pgtable[4];
> +static uint64_t virt_pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
> +{
> +	unsigned int shift;
> +	uint64_t mask;
> +
> +	shift = level * (vm->page_shift - 3) + vm->page_shift;
> +	mask = (1UL << (vm->page_shift - 3)) - 1;
> +	return (gva >> shift) & mask;
> +}
> +
> +static uint64_t pte_addr(struct kvm_vm *vm, uint64_t entry)
> +{
> +	return entry &  ~((0x1UL << vm->page_shift) - 1);
> +}
> +
> +static uint64_t ptrs_per_pte(struct kvm_vm *vm)
> +{
> +	return 1 << (vm->page_shift - 3);
> +}
> +
> +static void virt_set_pgtable(struct kvm_vm *vm, vm_paddr_t table, vm_paddr_t child)
> +{
> +	uint64_t *ptep;
> +	int i, ptrs_per_pte;
> +
> +	ptep = addr_gpa2hva(vm, table);
> +	ptrs_per_pte = 1 << (vm->page_shift - 3);
> +	for (i = 0; i < ptrs_per_pte; i++)
> +		*(ptep + i) = child;
> +}
> +
> +void virt_arch_pgd_alloc(struct kvm_vm *vm)
> +{
> +	int i;
> +	vm_paddr_t child, table;
> +
> +	if (vm->pgd_created)
> +		return;
> +	child = table = 0;
> +	for (i = 0; i < vm->pgtable_levels; i++) {
> +		invalid_pgtable[i] = child;
> +		table = vm_phy_page_alloc(vm, DEFAULT_LOONARCH64_PAGE_TABLE_MIN,
> +				vm->memslots[MEM_REGION_PT]);
> +		TEST_ASSERT(table, "Failed to allocate page table at level %d", i);
> +		virt_set_pgtable(vm, table, child);
> +		child = table;
> +	}
> +	vm->pgd = table;
> +	vm->pgd_created = true;
> +}
> +
> +static int virt_pte_none(uint64_t *ptep, int level)
> +{
> +	return *ptep == invalid_pgtable[level];
> +}
> +
> +static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
> +{
> +	uint64_t *ptep;
> +	vm_paddr_t child;
> +	int level;
> +
> +	if (!vm->pgd_created)
> +		goto unmapped_gva;
> +
> +	level = vm->pgtable_levels - 1;
> +	child = vm->pgd;
> +	while (level > 0) {
> +		ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
> +		if (virt_pte_none(ptep, level)) {
> +			if (alloc) {
> +				child = vm_alloc_page_table(vm);
> +				virt_set_pgtable(vm, child, invalid_pgtable[level - 1]);
> +				*ptep = child;
> +			} else
> +				goto unmapped_gva;
> +
> +		} else
> +			child = pte_addr(vm, *ptep);
> +		level--;
> +	}
> +
> +	ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
> +	return ptep;
> +
> +unmapped_gva:
> +	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
> +	exit(EXIT_FAILURE);
> +}
> +
> +vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> +{
> +	uint64_t *ptep;
> +
> +	ptep = virt_populate_pte(vm, gva, 0);
> +	TEST_ASSERT(*ptep != 0, "Virtual address vaddr: 0x%lx not mapped\n", gva);
> +
> +	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
> +}
> +
> +void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> +{
> +	uint32_t prot_bits;
> +	uint64_t *ptep;
> +
> +	TEST_ASSERT((vaddr % vm->page_size) == 0,
> +			"Virtual address not on page boundary,\n"
> +			"vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
> +	TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
> +			(vaddr >> vm->page_shift)),
> +			"Invalid virtual address, vaddr: 0x%lx", vaddr);
> +	TEST_ASSERT((paddr % vm->page_size) == 0,
> +			"Physical address not on page boundary,\n"
> +			"paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
> +	TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
> +			"Physical address beyond maximum supported,\n"
> +			"paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
> +			paddr, vm->max_gfn, vm->page_size);
> +
> +	ptep = virt_populate_pte(vm, vaddr, 1);
> +	prot_bits = _PAGE_PRESENT | __READABLE | __WRITEABLE | _CACHE_CC;
> +	prot_bits |= _PAGE_USER;
> +	*ptep = paddr | prot_bits;
> +}
> +
> +static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)
> +{
> +	static const char * const type[] = { "pte", "pmd", "pud", "pgd"};
> +	uint64_t pte, *ptep;
> +
> +	if (level < 0)
> +		return;
> +
> +	for (pte = page; pte < page + ptrs_per_pte(vm) * 8; pte += 8) {
> +		ptep = addr_gpa2hva(vm, pte);
> +		if (virt_pte_none(ptep, level))
> +			continue;
> +		fprintf(stream, "%*s%s: %lx: %lx at %p\n",
> +				indent, "", type[level], pte, *ptep, ptep);
> +		pte_dump(stream, vm, indent + 1, pte_addr(vm, *ptep), level - 1);
> +	}
> +}
> +
> +void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
> +{
> +	int level;
> +
> +	if (!vm->pgd_created)
> +		return;
> +
> +	level = vm->pgtable_levels - 1;
> +	pte_dump(stream, vm, indent, vm->pgd, level);
> +}
> +
> +void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
> +{
> +}
> +
> +void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
> +{
> +	struct ucall uc;
> +
> +	if (get_ucall(vcpu, &uc) != UCALL_UNHANDLED)
> +		return;
> +
> +	TEST_FAIL("Unexpected exception (pc:0x%lx, estat:0x%lx, badv:0x%lx)",
> +			uc.args[0], uc.args[1], uc.args[2]);
> +}
> +
> +void route_exception(struct ex_regs *regs)
> +{
> +	unsigned long pc, estat, badv;
> +
> +	pc = regs->pc;
> +	estat = regs->estat;
> +	badv  = regs->badv;
> +	ucall(UCALL_UNHANDLED, 3, pc, estat, badv);
> +	while (1)
> +		;
> +}
> +
> +void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
> +{
> +	va_list ap;
> +	struct kvm_regs regs;
> +	int i;
> +
> +	TEST_ASSERT(num >= 1 && num <= 8, "Unsupported number of args,\n"
> +		    "num: %u\n", num);
> +
> +	vcpu_regs_get(vcpu, &regs);
> +	va_start(ap, num);
> +	for (i = 0; i < num; i++)
> +		regs.gpr[i + 4] = va_arg(ap, uint64_t);
> +	va_end(ap);
> +	vcpu_regs_set(vcpu, &regs);
> +}
> +
> +static void loongarch_get_csr(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
> +{
> +	uint64_t csrid;
> +
> +	csrid = KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | 8 * id;
> +	vcpu_get_reg(vcpu, csrid, addr);
> +}
> +
> +static void loongarch_set_csr(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
> +{
> +	uint64_t csrid;
> +
> +	csrid = KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | 8 * id;
> +	vcpu_set_reg(vcpu, csrid, val);
> +}
> +
> +static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
> +{
> +	unsigned long val;
> +	int width;
> +	struct kvm_vm *vm = vcpu->vm;
> +
> +	switch (vm->mode) {
> +	case VM_MODE_P48V48_16K:
> +	case VM_MODE_P40V48_16K:
> +	case VM_MODE_P36V48_16K:
> +	case VM_MODE_P36V47_16K:
> +		break;
> +
> +	default:
> +		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
> +	}
> +
> +	/* user mode and page enable mode */
> +	val = PLV_USER | CSR_CRMD_PG;
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_CRMD, val);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_PRMD, val);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_EUEN, 1);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_ECFG, 0);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_TCFG, 0);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_ASID, 1);
> +
> +	width = vm->page_shift - 3;
> +	val = 0;
> +	switch (vm->pgtable_levels) {
> +	case 4:
> +		/* pud page shift and width */
> +		val = (vm->page_shift + width * 2) << 20 | (width << 25);
> +		/* fall through */
> +	case 3:
> +		/* pmd page shift and width */
> +		val |= (vm->page_shift + width) << 10 | (width << 15);
> +		/* pte page shift and width */
> +		val |= vm->page_shift | width << 5;
> +		break;
> +	default:
> +		TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->pgtable_levels);
> +	}
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL0, val);
> +
> +	/* pgd page shift and width */
> +	val = (vm->page_shift + width * (vm->pgtable_levels - 1)) | width << 6;
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL1, val);
> +
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->pgd);
> +
> +	/*
> +	 * The TLB refill exception runs in real (direct) address mode, so
> +	 * the entry address must be a physical address.
> +	 */
> +	val = addr_gva2gpa(vm, (unsigned long)handle_tlb_refill);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBRENTRY, val);
> +
> +	/*
> +	 * The general exception runs with paging enabled, so the entry
> +	 * address must be a virtual address.
> +	 */
> +	val = (unsigned long)handle_exception;
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_EENTRY, val);
> +
> +	loongarch_get_csr(vcpu, LOONGARCH_CSR_TLBIDX, &val);
> +	val &= ~CSR_TLBIDX_SIZEM;
> +	val |= PS_DEFAULT_SIZE << CSR_TLBIDX_SIZE;
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBIDX, val);
> +
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_STLBPGSIZE, PS_DEFAULT_SIZE);
> +
> +	/* LOONGARCH_CSR_KS1 is used for exception stack */
> +	val = __vm_vaddr_alloc(vm, vm->page_size,
> +			DEFAULT_LOONARCH64_STACK_MIN, MEM_REGION_DATA);
> +	TEST_ASSERT(val != 0,  "No memory for exception stack");
> +	val = val + vm->page_size;
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_KS1, val);
> +
> +	loongarch_get_csr(vcpu, LOONGARCH_CSR_TLBREHI, &val);
> +	val &= ~CSR_TLBREHI_PS;
> +	val |= PS_DEFAULT_SIZE << CSR_TLBREHI_PS_SHIFT;
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBREHI, val);
> +
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_CPUID, vcpu->id);
> +	loongarch_set_csr(vcpu, LOONGARCH_CSR_TMID,  vcpu->id);
> +}
> +
> +struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
> +				  void *guest_code)
> +{
> +	size_t stack_size;
> +	uint64_t stack_vaddr;
> +	struct kvm_regs regs;
> +	struct kvm_vcpu *vcpu;
> +
> +	vcpu = __vm_vcpu_add(vm, vcpu_id);
> +	stack_size = vm->page_size;
> +	stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
> +			DEFAULT_LOONARCH64_STACK_MIN, MEM_REGION_DATA);
> +	TEST_ASSERT(stack_vaddr != 0,  "No memory for vm stack");
> +
> +	loongarch_vcpu_setup(vcpu);
> +	/* Setup guest general purpose registers */
> +	vcpu_regs_get(vcpu, &regs);
> +	regs.gpr[3] = stack_vaddr + stack_size;
> +	regs.pc = (uint64_t)guest_code;
> +	vcpu_regs_set(vcpu, &regs);
> +
> +	return vcpu;
> +}
> 
Reviewed-by: Bibo Mao <maobibo@loongson.cn>



* Re: [PATCH v5 3/4] KVM: selftests: Add ucall test support for LoongArch
  2023-11-30 11:18 ` [PATCH v5 3/4] KVM: selftests: Add ucall test " Tianrui Zhao
@ 2023-12-04  2:05   ` maobibo
  0 siblings, 0 replies; 14+ messages in thread
From: maobibo @ 2023-12-04  2:05 UTC (permalink / raw)
  To: Tianrui Zhao, Shuah Khan, Paolo Bonzini, linux-kernel, kvm,
	Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma



On 2023/11/30 at 7:18 PM, Tianrui Zhao wrote:
> Add ucall test support for LoongArch. A ucall is a "hypercall to
> userspace".
> 
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   .../selftests/kvm/include/loongarch/ucall.h   | 28 ++++++++++++++
>   .../selftests/kvm/lib/loongarch/ucall.c       | 38 +++++++++++++++++++
>   2 files changed, 66 insertions(+)
>   create mode 100644 tools/testing/selftests/kvm/include/loongarch/ucall.h
>   create mode 100644 tools/testing/selftests/kvm/lib/loongarch/ucall.c
> 
> diff --git a/tools/testing/selftests/kvm/include/loongarch/ucall.h b/tools/testing/selftests/kvm/include/loongarch/ucall.h
> new file mode 100644
> index 00000000000..e9033ea6fbf
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/loongarch/ucall.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_KVM_UCALL_H
> +#define SELFTEST_KVM_UCALL_H
> +
> +#include "kvm_util_base.h"
> +
> +#define UCALL_EXIT_REASON       KVM_EXIT_MMIO
> +
> +/*
> + * Default base address for application loading is 0x120000000,
> + * DEFAULT_GUEST_TEST_MEM should be larger than the application load
> + * address, so that PER_VCPU_MEM_SIZE can be large enough; the KVM
> + * selftests application is smaller than 256M in general.
> + */
> +#define DEFAULT_GUEST_TEST_MEM	0x130000000
> +
> +/*
> + * ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
> + * VM), it must not be accessed from host code.
> + */
> +extern vm_vaddr_t *ucall_exit_mmio_addr;
> +
> +static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
> +{
> +	WRITE_ONCE(*ucall_exit_mmio_addr, uc);
> +}
> +
> +#endif
> diff --git a/tools/testing/selftests/kvm/lib/loongarch/ucall.c b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
> new file mode 100644
> index 00000000000..fc6cbb50573
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
> @@ -0,0 +1,38 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ucall support. A ucall is a "hypercall to userspace".
> + *
> + */
> +#include "kvm_util.h"
> +
> +/*
> + * ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
> + * VM), it must not be accessed from host code.
> + */
> +vm_vaddr_t *ucall_exit_mmio_addr;
> +
> +void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
> +{
> +	vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
> +
> +	virt_map(vm, mmio_gva, mmio_gpa, 1);
> +
> +	vm->ucall_mmio_addr = mmio_gpa;
> +
> +	write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
> +}
> +
> +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_run *run = vcpu->run;
> +
> +	if (run->exit_reason == KVM_EXIT_MMIO &&
> +	    run->mmio.phys_addr == vcpu->vm->ucall_mmio_addr) {
> +		TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(uint64_t),
> +			    "Unexpected ucall exit mmio address access");
> +
> +		return (void *)(*((uint64_t *)run->mmio.data));
> +	}
> +
> +	return NULL;
> +}
> 
Reviewed-by: Bibo Mao <maobibo@loongson.cn>



* Re: [PATCH v5 4/4] KVM: selftests: Add test cases for LoongArch
  2023-11-30 11:18 ` [PATCH v5 4/4] KVM: selftests: Add test cases " Tianrui Zhao
@ 2023-12-04  2:11   ` maobibo
  0 siblings, 0 replies; 14+ messages in thread
From: maobibo @ 2023-12-04  2:11 UTC (permalink / raw)
  To: Tianrui Zhao, Shuah Khan, Paolo Bonzini, linux-kernel, kvm,
	Sean Christopherson
  Cc: Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma



On 2023/11/30 at 7:18 PM, Tianrui Zhao wrote:
> The following common KVM test cases are supported on LoongArch:
> 	demand_paging_test
> 	dirty_log_perf_test
> 	dirty_log_test
> 	guest_print_test
> 	kvm_binary_stats_test
> 	kvm_create_max_vcpus
> 	kvm_page_table_test
> 	memslot_modification_stress_test
> 	memslot_perf_test
> 	set_memory_region_test
> Other test cases are not supported on LoongArch. For example,
> rseq_test is not supported because glibc does not support it.
> 
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   tools/testing/selftests/kvm/Makefile | 15 +++++++++++++++
>   1 file changed, 15 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index a5963ab9215..9d099d48013 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -65,6 +65,10 @@ LIBKVM_s390x += lib/s390x/ucall.c
>   LIBKVM_riscv += lib/riscv/processor.c
>   LIBKVM_riscv += lib/riscv/ucall.c
>   
> +LIBKVM_loongarch += lib/loongarch/processor.c
> +LIBKVM_loongarch += lib/loongarch/ucall.c
> +LIBKVM_loongarch += lib/loongarch/exception.S
> +
>   # Non-compiled test targets
>   TEST_PROGS_x86_64 += x86_64/nx_huge_pages_test.sh
>   
> @@ -202,6 +206,17 @@ TEST_GEN_PROGS_riscv += kvm_binary_stats_test
>   
>   SPLIT_TESTS += get-reg-list
>   
> +TEST_GEN_PROGS_loongarch += demand_paging_test
> +TEST_GEN_PROGS_loongarch += dirty_log_perf_test
> +TEST_GEN_PROGS_loongarch += dirty_log_test
> +TEST_GEN_PROGS_loongarch += guest_print_test
> +TEST_GEN_PROGS_loongarch += kvm_binary_stats_test
> +TEST_GEN_PROGS_loongarch += kvm_create_max_vcpus
> +TEST_GEN_PROGS_loongarch += kvm_page_table_test
> +TEST_GEN_PROGS_loongarch += memslot_modification_stress_test
> +TEST_GEN_PROGS_loongarch += memslot_perf_test
> +TEST_GEN_PROGS_loongarch += set_memory_region_test
rseq_test is not supported by the LoongArch kernel, and the get-reg-list
interface is not supported by KVM yet; arch-specific test cases
will be added later as well.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> +
>   TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR))
>   TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR))
>   TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR))
> 



* Re: [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-11-30 11:18 ` [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch Tianrui Zhao
@ 2023-12-12  3:08   ` zhaotianrui
  2023-12-12 17:18     ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: zhaotianrui @ 2023-12-12  3:08 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Vishal Annapurve,
	Huacai Chen, WANG Xuerui, loongarch, Peter Xu, Vipin Sharma,
	maobibo

Hi, Sean:

I want to change the definition of DEFAULT_GUEST_TEST_MEM in the common
file "memstress.h", like this:

  /* Default guest test virtual memory offset */
+#ifndef DEFAULT_GUEST_TEST_MEM
  #define DEFAULT_GUEST_TEST_MEM		0xc0000000
+#endif

As this address needs to be redefined in the LoongArch headers.
So, do you have any suggestions?

Thanks
Tianrui Zhao

On 2023/11/30 at 7:18 PM, Tianrui Zhao wrote:
> Add KVM selftests header files for LoongArch, including processor.h
> and kvm_util_base.h. Those mainly contain LoongArch CSR register defines
> and page table information. And change DEFAULT_GUEST_TEST_MEM base addr
> for LoongArch.
>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   .../selftests/kvm/include/kvm_util_base.h     |   5 +
>   .../kvm/include/loongarch/processor.h         | 133 ++++++++++++++++++
>   .../testing/selftests/kvm/include/memstress.h |   2 +
>   3 files changed, 140 insertions(+)
>   create mode 100644 tools/testing/selftests/kvm/include/loongarch/processor.h
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index a18db6a7b3c..97f8b24741b 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -218,6 +218,11 @@ extern enum vm_guest_mode vm_mode_default;
>   #define MIN_PAGE_SHIFT			12U
>   #define ptes_per_page(page_size)	((page_size) / 8)
>   
> +#elif defined(__loongarch__)
> +#define VM_MODE_DEFAULT			VM_MODE_P36V47_16K
> +#define MIN_PAGE_SHIFT			14U
> +#define ptes_per_page(page_size)	((page_size) / 8)
> +
>   #endif
>   
>   #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
> diff --git a/tools/testing/selftests/kvm/include/loongarch/processor.h b/tools/testing/selftests/kvm/include/loongarch/processor.h
> new file mode 100644
> index 00000000000..cea6b284131
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/loongarch/processor.h
> @@ -0,0 +1,133 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#ifndef SELFTEST_KVM_PROCESSOR_H
> +#define SELFTEST_KVM_PROCESSOR_H
> +
> +#define _PAGE_VALID_SHIFT	0
> +#define _PAGE_DIRTY_SHIFT	1
> +#define _PAGE_PLV_SHIFT		2  /* 2~3, two bits */
> +#define _CACHE_SHIFT		4  /* 4~5, two bits */
> +#define _PAGE_PRESENT_SHIFT	7
> +#define _PAGE_WRITE_SHIFT	8
> +
> +#define PLV_KERN		0
> +#define PLV_USER		3
> +#define PLV_MASK		0x3
> +
> +#define _PAGE_VALID		(0x1UL << _PAGE_VALID_SHIFT)
> +#define _PAGE_PRESENT		(0x1UL << _PAGE_PRESENT_SHIFT)
> +#define _PAGE_WRITE		(0x1UL << _PAGE_WRITE_SHIFT)
> +#define _PAGE_DIRTY		(0x1UL << _PAGE_DIRTY_SHIFT)
> +#define _PAGE_USER		(PLV_USER << _PAGE_PLV_SHIFT)
> +#define __READABLE		(_PAGE_VALID)
> +#define __WRITEABLE		(_PAGE_DIRTY | _PAGE_WRITE)
> +#define _CACHE_CC		(0x1UL << _CACHE_SHIFT) /* Coherent Cached */
> +
> +/* general registers */
> +#define zero	$r0
> +#define ra	$r1
> +#define tp	$r2
> +#define sp	$r3
> +#define a0	$r4
> +#define a1	$r5
> +#define a2	$r6
> +#define a3	$r7
> +#define a4	$r8
> +#define a5	$r9
> +#define a6	$r10
> +#define a7	$r11
> +#define t0	$r12
> +#define t1	$r13
> +#define t2	$r14
> +#define t3	$r15
> +#define t4	$r16
> +#define t5	$r17
> +#define t6	$r18
> +#define t7	$r19
> +#define t8	$r20
> +#define u0	$r21
> +#define fp	$r22
> +#define s0	$r23
> +#define s1	$r24
> +#define s2	$r25
> +#define s3	$r26
> +#define s4	$r27
> +#define s5	$r28
> +#define s6	$r29
> +#define s7	$r30
> +#define s8	$r31
> +
> +#define PS_4K				0x0000000c
> +#define PS_8K				0x0000000d
> +#define PS_16K				0x0000000e
> +#define PS_DEFAULT_SIZE			PS_16K
> +
> +/* Basic CSR registers */
> +#define LOONGARCH_CSR_CRMD		0x0 /* Current mode info */
> +#define CSR_CRMD_PG_SHIFT		4
> +#define CSR_CRMD_PG			(0x1UL << CSR_CRMD_PG_SHIFT)
> +#define CSR_CRMD_IE_SHIFT		2
> +#define CSR_CRMD_IE			(0x1UL << CSR_CRMD_IE_SHIFT)
> +#define CSR_CRMD_PLV_SHIFT		0
> +#define CSR_CRMD_PLV_WIDTH		2
> +#define CSR_CRMD_PLV			(0x3UL << CSR_CRMD_PLV_SHIFT)
> +#define PLV_MASK			0x3
> +
> +#define LOONGARCH_CSR_PRMD		0x1
> +#define LOONGARCH_CSR_EUEN		0x2
> +#define LOONGARCH_CSR_ECFG		0x4
> +#define LOONGARCH_CSR_ESTAT		0x5 /* Exception status */
> +#define LOONGARCH_CSR_ERA		0x6 /* ERA */
> +#define LOONGARCH_CSR_BADV		0x7 /* Bad virtual address */
> +#define LOONGARCH_CSR_EENTRY		0xc
> +#define LOONGARCH_CSR_TLBIDX		0x10 /* TLB Index, EHINV, PageSize, NP */
> +#define CSR_TLBIDX_PS_SHIFT		24
> +#define CSR_TLBIDX_PS_WIDTH		6
> +#define CSR_TLBIDX_PS			(0x3fUL << CSR_TLBIDX_PS_SHIFT)
> +#define CSR_TLBIDX_SIZEM		0x3f000000
> +#define CSR_TLBIDX_SIZE			CSR_TLBIDX_PS_SHIFT
> +
> +#define LOONGARCH_CSR_ASID		0x18 /* ASID */
> +/* Page table base address when VA[VALEN-1] = 0 */
> +#define LOONGARCH_CSR_PGDL		0x19
> +/* Page table base address when VA[VALEN-1] = 1 */
> +#define LOONGARCH_CSR_PGDH		0x1a
> +/* Page table base */
> +#define LOONGARCH_CSR_PGD		0x1b
> +#define LOONGARCH_CSR_PWCTL0		0x1c
> +#define LOONGARCH_CSR_PWCTL1		0x1d
> +#define LOONGARCH_CSR_STLBPGSIZE	0x1e
> +#define LOONGARCH_CSR_CPUID		0x20
> +#define LOONGARCH_CSR_KS0		0x30
> +#define LOONGARCH_CSR_KS1		0x31
> +#define LOONGARCH_CSR_TMID		0x40
> +#define LOONGARCH_CSR_TCFG		0x41
> +#define LOONGARCH_CSR_TLBRENTRY		0x88 /* TLB refill exception entry */
> +/* KSave for TLB refill exception */
> +#define LOONGARCH_CSR_TLBRSAVE		0x8b
> +#define LOONGARCH_CSR_TLBREHI		0x8e
> +#define CSR_TLBREHI_PS_SHIFT		0
> +#define CSR_TLBREHI_PS			(0x3fUL << CSR_TLBREHI_PS_SHIFT)
> +
> +#define DEFAULT_LOONARCH64_STACK_MIN		0x4000
> +#define DEFAULT_LOONARCH64_PAGE_TABLE_MIN	0x4000
> +#define EXREGS_GPRS				(32)
> +
> +#ifndef __ASSEMBLER__
> +struct ex_regs {
> +	unsigned long regs[EXREGS_GPRS];
> +	unsigned long pc;
> +	unsigned long estat;
> +	unsigned long badv;
> +};
> +
> +extern void handle_tlb_refill(void);
> +extern void handle_exception(void);
> +#endif
> +
> +#define PC_OFFSET_EXREGS		((EXREGS_GPRS + 0) * 8)
> +#define ESTAT_OFFSET_EXREGS		((EXREGS_GPRS + 1) * 8)
> +#define BADV_OFFSET_EXREGS		((EXREGS_GPRS + 2) * 8)
> +#define EXREGS_SIZE			((EXREGS_GPRS + 3) * 8)
> +
> +#endif /* SELFTEST_KVM_PROCESSOR_H */
> diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
> index ce4e603050e..5bcdaf2efab 100644
> --- a/tools/testing/selftests/kvm/include/memstress.h
> +++ b/tools/testing/selftests/kvm/include/memstress.h
> @@ -13,7 +13,9 @@
>   #include "kvm_util.h"
>   
>   /* Default guest test virtual memory offset */
> +#ifndef DEFAULT_GUEST_TEST_MEM
>   #define DEFAULT_GUEST_TEST_MEM		0xc0000000
> +#endif
>   
>   #define DEFAULT_PER_VCPU_MEM_SIZE	(1 << 30) /* 1G */
>   



* Re: [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-12-12  3:08   ` zhaotianrui
@ 2023-12-12 17:18     ` Sean Christopherson
  2023-12-13  7:15       ` zhaotianrui
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2023-12-12 17:18 UTC (permalink / raw)
  To: zhaotianrui
  Cc: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Vishal Annapurve,
	Huacai Chen, WANG Xuerui, loongarch, Peter Xu, Vipin Sharma,
	maobibo

On Tue, Dec 12, 2023, zhaotianrui wrote:
> Hi, Sean:
> 
> I want to change the definition of  DEFAULT_GUEST_TEST_MEM in the common
> file "memstress.h", like this:
> 
>  /* Default guest test virtual memory offset */
> +#ifndef DEFAULT_GUEST_TEST_MEM
>  #define DEFAULT_GUEST_TEST_MEM		0xc0000000
> +#endif
> 
> As this address should be re-defined in LoongArch headers.

Why?  E.g. is 0xc0000000 unconditionally reserved, not guaranteed to be valid,
something else?

> So, do you have any suggestion?

Hmm, I think ideally kvm_util_base.h would define a range of memory that can be
used by tests for arbitrary data.  Multiple tests use 0xc0000000, which is not
entirely arbitrary, i.e. it doesn't _need_ to be 0xc0000000, but 0xc0000000 is
convenient because it's 32-bit addressable and doesn't overlap reserved areas in
other architectures.


* Re: Re: [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-12-12 17:18     ` Sean Christopherson
@ 2023-12-13  7:15       ` zhaotianrui
  2023-12-13  7:42         ` maobibo
  0 siblings, 1 reply; 14+ messages in thread
From: zhaotianrui @ 2023-12-13  7:15 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Vishal Annapurve,
	Huacai Chen, WANG Xuerui, loongarch, Peter Xu, Vipin Sharma,
	maobibo



On 2023/12/13 at 1:18 AM, Sean Christopherson wrote:
> On Tue, Dec 12, 2023, zhaotianrui wrote:
>> Hi, Sean:
>>
>> I want to change the definition of  DEFAULT_GUEST_TEST_MEM in the common
>> file "memstress.h", like this:
>>
>>   /* Default guest test virtual memory offset */
>> +#ifndef DEFAULT_GUEST_TEST_MEM
>>   #define DEFAULT_GUEST_TEST_MEM		0xc0000000
>> +#endif
>>
>> As this address should be re-defined in LoongArch headers.
> 
> Why?  E.g. is 0xc0000000 unconditionally reserved, not guaranteed to be valid,
> something else?
> 
>> So, do you have any suggestion?
> 
> Hmm, I think ideally kvm_util_base.h would define a range of memory that can be
> used by tests for arbitrary data.  Multiple tests use 0xc0000000, which is not
> entirely arbitrary, i.e. it doesn't _need_ to be 0xc0000000, but 0xc0000000 is
> convenient because it's 32-bit addressable and doesn't overlap reserved areas in
> other architectures.
> 
Thanks for your explanation. LoongArch wants to define
DEFAULT_GUEST_TEST_MEM as 0x130000000. The default base address for
application loading is 0x120000000, and DEFAULT_GUEST_TEST_MEM should be
larger than the application load address so that PER_VCPU_MEM_SIZE can be
large enough; the KVM selftests application is smaller than 256M in general.

Thanks
Tianrui Zhao


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-12-13  7:15       ` zhaotianrui
@ 2023-12-13  7:42         ` maobibo
  2023-12-13 23:56           ` Sean Christopherson
  0 siblings, 1 reply; 14+ messages in thread
From: maobibo @ 2023-12-13  7:42 UTC (permalink / raw)
  To: zhaotianrui, Sean Christopherson
  Cc: Shuah Khan, Paolo Bonzini, linux-kernel, kvm, Vishal Annapurve,
	Huacai Chen, WANG Xuerui, loongarch, Peter Xu, Vipin Sharma,
	huangpei



On 2023/12/13 3:15 PM, zhaotianrui wrote:
> 
> 
> On 2023/12/13 1:18 AM, Sean Christopherson wrote:
>> On Tue, Dec 12, 2023, zhaotianrui wrote:
>>> Hi, Sean:
>>>
>>> I want to change the definition of  DEFAULT_GUEST_TEST_MEM in the common
>>> file "memstress.h", like this:
>>>
>>>   /* Default guest test virtual memory offset */
>>> +#ifndef DEFAULT_GUEST_TEST_MEM
>>>   #define DEFAULT_GUEST_TEST_MEM        0xc0000000
>>> +#endif
>>>
>>> As this address should be re-defined in LoongArch headers.
>>
>> Why?  E.g. is 0xc0000000 unconditionally reserved, not guaranteed to 
>> be valid,
>> something else?
>>
>>> So, do you have any suggestion?
>>
>> Hmm, I think ideally kvm_util_base.h would define a range of memory 
>> that can be
>> used by tests for arbitrary data.  Multiple tests use 0xc0000000, 
>> which is not
>> entirely arbitrary, i.e. it doesn't _need_ to be 0xc0000000, but 
>> 0xc0000000 is
>> convenient because it's 32-bit addressable and doesn't overlap 
>> reserved areas in
>> other architectures.
In general, the text entry address of a user application on x86/arm64 
Linux is 0x200000; however, on LoongArch systems the text entry address 
is unusual, at 0x120000000.

When DEFAULT_GUEST_TEST_MEM is defined as 0xc0000000, there is a 
limitation on guest memory size: it cannot exceed 0x120000000 - 
0xc0000000 = 1.5G bytes, else there will be a conflict. There is no 
such issue on x86/arm64, since 0xc0000000 is above the text entry 
address 0x200000.

The LoongArch link script is actually unusual; it brings out 
compatibility issues, e.g. for DPDK and the KVM selftests, when user 
applications want a fixed virtual address space.

So here DEFAULT_GUEST_TEST_MEM is defined separately as 0x130000000; 
maybe 0x140000000 is better, since it is 1G super-page aligned for a 
4K page size.

Regards
Bibo Mao

>>
> Thanks for your explanation, and LoongArch want to define 
> DEFAULT_GUEST_TEST_MEM to 0x130000000. As default base address for 
> application loading is 0x120000000, DEFAULT_GUEST_TEST_MEM should be 
> larger than app loading address, so that PER_VCPU_MEM_SIZE can be large 
> enough, and kvm selftests app size is smaller than 256M in generic.
> 
> Thanks
> Tianrui Zhao


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-12-13  7:42         ` maobibo
@ 2023-12-13 23:56           ` Sean Christopherson
  2023-12-14  2:20             ` maobibo
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2023-12-13 23:56 UTC (permalink / raw)
  To: maobibo
  Cc: zhaotianrui, Shuah Khan, Paolo Bonzini, linux-kernel, kvm,
	Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, huangpei

On Wed, Dec 13, 2023, maobibo wrote:
> 
> On 2023/12/13 3:15 PM, zhaotianrui wrote:
> > 
> > On 2023/12/13 1:18 AM, Sean Christopherson wrote:
> > > On Tue, Dec 12, 2023, zhaotianrui wrote:
> > > > Hi, Sean:
> > > > 
> > > > I want to change the definition of  DEFAULT_GUEST_TEST_MEM in the common
> > > > file "memstress.h", like this:
> > > > 
> > > >   /* Default guest test virtual memory offset */
> > > > +#ifndef DEFAULT_GUEST_TEST_MEM
> > > >   #define DEFAULT_GUEST_TEST_MEM        0xc0000000
> > > > +#endif
> > > > 
> > > > As this address should be re-defined in LoongArch headers.
> > > 
> > > Why?  E.g. is 0xc0000000 unconditionally reserved, not guaranteed to
> > > be valid,
> > > something else?
> > > 
> > > > So, do you have any suggestion?
> > > 
> > > Hmm, I think ideally kvm_util_base.h would define a range of memory that
> > > can be used by tests for arbitrary data.  Multiple tests use 0xc0000000,
> > > which is not entirely arbitrary, i.e. it doesn't _need_ to be 0xc0000000,
> > > but 0xc0000000 is convenient because it's 32-bit addressable and doesn't
> > > overlap reserved areas in other architectures.
> In general text entry address of user application on x86/arm64 Linux
> is 0x200000, however on LoongArch system text entry address is strange, its
> value 0x120000000.
> 
> When DEFAULT_GUEST_TEST_MEM is defined as 0xc0000000, there is limitation
> for guest memory size, it cannot exceed 0x120000000 - 0xc0000000 = 1.5G
> bytes, else there will be conflict. However there is no such issue on
> x86/arm64, since 0xc0000000 is above text entry address 0x200000.

Ugh, I spent a good 30 minutes trying to figure out how any of this works on x86
before I realized DEFAULT_GUEST_TEST_MEM is used for the guest _virtual_ address
space.

I was thinking we were talking about guest _physical_ address, hence my comments
about it being 32-bit addressable and not overlapping reserved areas.  E.g. on x86,
anything remotely resembling a real system has regular memory, a.k.a. DRAM, split
between low memory (below the 32-bit boundary, i.e. below 4GiB) and high memory
(from 4GiB to the max legal physical address).  Addresses above "top of lower
usable DRAM" (TOLUD) are reserved (again, in a "real" system) for things like
PCI, local APIC, I/O APIC, and the _architecturally_ defined RESET vector.

I couldn't figure out how x86 worked, because KVM creates a KVM-internal memslot
at address 0xfee00000.  And then I realized the test creates memslots at completely
different GPAs, and DEFAULT_GUEST_TEST_MEM is used only as super arbitrary
guest virtual address.

*sigh*

Anyways...

> The LoongArch link scripts actually is strange, it brings out some
> compatible issues such dpdk/kvm selftest when user applications
> want fixed virtual address space.

Can you elaborate on compatibility issues?  I don't see the connection between
DPDK and KVM selftests.

> So here DEFAULT_GUEST_TEST_MEM is defined as 0x130000000 separately, maybe
> 0x140000000 is better since it is 1G super-page aligned for 4K page size.

I would strongly prefer we carve out a virtual address range that *all* tests
can safely use for test-specific code and data.  E.g. if/when we add userspace
support to selftests, I like the idea of having dedicated address spaces for
kernel vs. user[*].

Maybe we can march in that general direction and define the test's virtual address
range to be in kernel space, i.e. the high half.  I assume/hope that would play
nice with all architectures' entry points?

[*] https://lore.kernel.org/all/20231102155111.28821-1-guang.zeng@intel.com

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch
  2023-12-13 23:56           ` Sean Christopherson
@ 2023-12-14  2:20             ` maobibo
  0 siblings, 0 replies; 14+ messages in thread
From: maobibo @ 2023-12-14  2:20 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: zhaotianrui, Shuah Khan, Paolo Bonzini, linux-kernel, kvm,
	Vishal Annapurve, Huacai Chen, WANG Xuerui, loongarch, Peter Xu,
	Vipin Sharma, huangpei



On 2023/12/14 7:56 AM, Sean Christopherson wrote:
> On Wed, Dec 13, 2023, maobibo wrote:
>>
> On 2023/12/13 3:15 PM, zhaotianrui wrote:
>>>
>>> On 2023/12/13 1:18 AM, Sean Christopherson wrote:
>>>> On Tue, Dec 12, 2023, zhaotianrui wrote:
>>>>> Hi, Sean:
>>>>>
>>>>> I want to change the definition of  DEFAULT_GUEST_TEST_MEM in the common
>>>>> file "memstress.h", like this:
>>>>>
>>>>>    /* Default guest test virtual memory offset */
>>>>> +#ifndef DEFAULT_GUEST_TEST_MEM
>>>>>    #define DEFAULT_GUEST_TEST_MEM        0xc0000000
>>>>> +#endif
>>>>>
>>>>> As this address should be re-defined in LoongArch headers.
>>>>
>>>> Why?  E.g. is 0xc0000000 unconditionally reserved, not guaranteed to
>>>> be valid,
>>>> something else?
>>>>
>>>>> So, do you have any suggestion?
>>>>
>>>> Hmm, I think ideally kvm_util_base.h would define a range of memory that
>>>> can be used by tests for arbitrary data.  Multiple tests use 0xc0000000,
>>>> which is not entirely arbitrary, i.e. it doesn't _need_ to be 0xc0000000,
>>>> but 0xc0000000 is convenient because it's 32-bit addressable and doesn't
>>>> overlap reserved areas in other architectures.
>> In general text entry address of user application on x86/arm64 Linux
>> is 0x200000, however on LoongArch system text entry address is strange, its
>> value 0x120000000.
>>
>> When DEFAULT_GUEST_TEST_MEM is defined as 0xc0000000, there is limitation
>> for guest memory size, it cannot exceed 0x120000000 - 0xc0000000 = 1.5G
>> bytes, else there will be conflict. However there is no such issue on
>> x86/arm64, since 0xc0000000 is above text entry address 0x200000.
> 
> Ugh, I spent a good 30 minutes trying to figure out how any of this works on x86
> before I realized DEFAULT_GUEST_TEST_MEM is used for the guest _virtual_ address
> space.
> 
> I was thinking we were talking about guest _physical_ address, hence my comments
> about it being 32-bit addressable and not overlapping reserved areas.  E.g. on x86,
> anything remotely resembling a real system has regular memory, a.k.a. DRAM, split
> between low memory (below the 32-bit boundary, i.e. below 4GiB) and high memory
> (from 4GiB to the max legal physical address).  Addresses above "top of lower
> usable DRAM" (TOLUD) are reserved (again, in a "real" system) for things like
> PCI, local APIC, I/O APIC, and the _architecturally_ defined RESET vector.
> 
> I couldn't figure out how x86 worked, because KVM creates a KVM-internal memslot
> at address 0xfee00000.  And then I realized the test creates memslots at completely
> different GPAs, and DEFAULT_GUEST_TEST_MEM is used only as super arbitrary
> guest virtual address.
The framework and design of the KVM selftests are very good and 
intuitive, and it is very easy to write unit test cases for KVM :-)

> 
> *sigh*
> 
> Anyways...
> 
>> The LoongArch link scripts actually is strange, it brings out some
>> compatible issues such dpdk/kvm selftest when user applications
>> want fixed virtual address space.
> 
> Can you elaborate on compatibility issues?  I don't see the connection between
> DPDK and KVM selftests.
No, there is no connection between DPDK and the KVM selftests. I mean 
that some applications which use a fixed VA address have the same 
issue, although this kind of usage is fine on x86/ARM. DPDK also uses 
a fixed IOVA address (0xC0000000) when it is combined with an IOMMU, 
and there is a similar conflict on LoongArch machines.

> 
>> So here DEFAULT_GUEST_TEST_MEM is defined as 0x130000000 separately, maybe
>> 0x140000000 is better since it is 1G super-page aligned for 4K page size.
> 
> I would strongly prefer we carve out a virtual address range that *all* tests
> can safely use for test-specific code and data.  E.g. if/when we add userspace
> support to selftests, I like the idea of having dedicated address spaces for
> kernel vs. user[*].
> 
> Maybe we can march in that general direction and define the test's virtual address
> range to be in kernel space, i.e. the high half.  I assume/hope that would play
> nice with all architectures' entry points?
Yes, that would solve the issue; a virtual address range in kernel 
space can be used. Also, both unprivileged and privileged instructions 
can be tested with ZengGuang's patch.

And is this patchset eligible to merge if the common file 
selftests/kvm/include/memstress.h is kept unchanged? It has been 
pending for a while, and the LoongArch KVM selftests pass with a guest 
memory size below 1.5G. We can add kernel/user mode support once 
ZengGuang's patch is merged.

Regards
Bibo Mao
> 
> [*] https://lore.kernel.org/all/20231102155111.28821-1-guang.zeng@intel.com
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2023-12-14  2:21 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-30 11:18 [PATCH v5 0/4] KVM: selftests: Add LoongArch support Tianrui Zhao
2023-11-30 11:18 ` [PATCH v5 1/4] KVM: selftests: Add KVM selftests header files for LoongArch Tianrui Zhao
2023-12-12  3:08   ` zhaotianrui
2023-12-12 17:18     ` Sean Christopherson
2023-12-13  7:15       ` zhaotianrui
2023-12-13  7:42         ` maobibo
2023-12-13 23:56           ` Sean Christopherson
2023-12-14  2:20             ` maobibo
2023-11-30 11:18 ` [PATCH v5 2/4] KVM: selftests: Add core KVM selftests support " Tianrui Zhao
2023-12-04  2:05   ` maobibo
2023-11-30 11:18 ` [PATCH v5 3/4] KVM: selftests: Add ucall test " Tianrui Zhao
2023-12-04  2:05   ` maobibo
2023-11-30 11:18 ` [PATCH v5 4/4] KVM: selftests: Add test cases " Tianrui Zhao
2023-12-04  2:11   ` maobibo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).