kvm.vger.kernel.org archive mirror
* [PATCH v1 00/18] KVM selftests code consolidation and cleanup
@ 2022-10-24 11:34 Wei Wang
  2022-10-24 11:34 ` [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm Wei Wang
                   ` (18 more replies)
  0 siblings, 19 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

This patch series improves the kvm selftests with better code
consolidation, using helper functions to perform vcpu and thread
related operations.

In general, several aspects are improved:
1) The current users allocate an array of pointers to the vcpus that
   are added to a vm, plus an array of vcpu threads. This isn't
   necessary, as kvm_vm already maintains a list of the added vcpus.
   This series changes the list of vcpus in the kvm_vm struct to an
   array of vcpu pointers for users to work with and removes each
   user's own allocation of such vcpu arrays. It also adds the vcpu
   thread to the kvm_vcpu struct, so that users don't need to
   explicitly allocate a thread array to manage vcpu threads on their
   own.
2) Change the users to use the helper functions provided by this
   series, with the following enhancements:
   - Many users working with pthread_create/join forget to check the
     return value for errors. The helper functions handle those checks
     internally, so users don't need to handle them by themselves;
   - The vcpu threads created via the helper functions are named in
     "vcpu-##id" format. Naming the threads facilitates debugging,
     performance tuning, runtime pinning etc.;
   - The helper functions named with "vm_vcpu_threads_" iterate over
     all the vcpus that have been added to the vm, so users don't need
     an explicit loop to go through the added vcpus by themselves.
3) kvm_vcpu is used as the interface parameter to the vcpu thread's
   start routine, and the user specific data becomes the private data
   in kvm_vcpu. This simplifies the user specific data structures, as
   kvm_vcpu already includes the info required by the thread; for
   example, in patch 13, the cpu_idx field from "struct vcpu_thread"
   is a duplicate of vcpu->id. A usage sketch follows this list.
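
For illustration, a typical user of these helpers could be reduced to
the minimal sketch below (a hypothetical test, not taken from this
series; guest code and error handling are omitted):

  static void *vcpu_worker(void *data)
  {
          struct kvm_vcpu *vcpu = data;

          vcpu_run(vcpu);
          return NULL;
  }

  /* Create the VM and its vcpus; no caller-side vcpu array is needed. */
  vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
  /* Spawn one "vcpu-##id" named thread per added vcpu. */
  vm_vcpu_threads_create(vm, vcpu_worker, 0);
  /* ... test body ... */
  /* Join all vcpu threads; errors are asserted on internally. */
  vm_vcpu_threads_join(vm);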

The changes have been tested on an SPR server. Patches 15 and 16 haven't
been tested due to the current lack of an ARM environment.

Wei Wang (18):
  KVM: selftests/kvm_util: use array of pointers to maintain vcpus in
    kvm_vm
  KVM: selftests/kvm_util: use vm->vcpus[] when create vm with vcpus
  KVM: selftests/kvm_util: helper functions for vcpus and threads
  KVM: selftests/kvm_page_table_test: vcpu related code consolidation
  KVM: selftests/hardware_disable_test: code consolidation and cleanup
  KVM: selftests/dirty_log_test: vcpu related code consolidation
  KVM: selftests/max_guest_memory_test: vcpu related code consolidation
  KVM: selftests/set_memory_region_test: vcpu related code consolidation
  KVM: selftests/steal_time: vcpu related code consolidation and cleanup
  KVM: selftests/tsc_scaling_sync: vcpu related code consolidation
  KVM: selftest/xapic_ipi_test: vcpu related code consolidation
  KVM: selftests/rseq_test: name the migration thread and some cleanup
  KVM: selftests/perf_test_util: vcpu related code consolidation
  KVM: selftest/memslot_perf_test: vcpu related code consolidation
  KVM: selftests/vgic_init: vcpu related code consolidation
  KVM: selftest/arch_timer: vcpu related code consolidation
  KVM: selftests: remove the *vcpu[] input from __vm_create_with_vcpus
  KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS

 .../selftests/kvm/aarch64/arch_timer.c        |  42 ++--
 .../testing/selftests/kvm/aarch64/vgic_init.c |  35 ++-
 .../selftests/kvm/access_tracking_perf_test.c |  18 +-
 .../selftests/kvm/demand_paging_test.c        |   9 +-
 .../selftests/kvm/dirty_log_perf_test.c       |  11 +-
 tools/testing/selftests/kvm/dirty_log_test.c  |  16 +-
 .../selftests/kvm/hardware_disable_test.c     |  56 ++---
 .../testing/selftests/kvm/include/kvm_util.h  |  24 ++
 .../selftests/kvm/include/kvm_util_base.h     |  12 +-
 .../selftests/kvm/include/perf_test_util.h    |   9 +-
 .../selftests/kvm/kvm_create_max_vcpus.c      |   7 +
 .../selftests/kvm/kvm_page_table_test.c       |  16 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 217 +++++++++++++++---
 .../selftests/kvm/lib/perf_test_util.c        |  68 ++----
 .../selftests/kvm/lib/x86_64/perf_test_util.c |  11 +-
 tools/testing/selftests/kvm/lib/x86_64/vmx.c  |   2 +-
 .../selftests/kvm/max_guest_memory_test.c     |  53 ++---
 .../kvm/memslot_modification_stress_test.c    |   9 +-
 .../testing/selftests/kvm/memslot_perf_test.c | 137 +++++------
 tools/testing/selftests/kvm/rseq_test.c       |  10 +-
 .../selftests/kvm/set_memory_region_test.c    |  16 +-
 tools/testing/selftests/kvm/steal_time.c      |  15 +-
 .../selftests/kvm/x86_64/tsc_scaling_sync.c   |  25 +-
 .../selftests/kvm/x86_64/xapic_ipi_test.c     |  54 ++---
 24 files changed, 476 insertions(+), 396 deletions(-)

-- 
2.27.0



* [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-26 23:47   ` Sean Christopherson
  2022-10-24 11:34 ` [PATCH v1 02/18] KVM: selftests/kvm_util: use vm->vcpus[] when create vm with vcpus Wei Wang
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Each vcpu has an id associated with it, so referencing a vcpu by
indexing into an array with "vcpu->id" is intrinsically faster and
easier than walking a list of vcpus as in the current implementation.
Change the vcpu list to an array of vcpu pointers. Users then don't
need to allocate such a vcpu array on their own, and instead, they can
reuse the one maintained in kvm_vm.
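
For illustration, with the new vm_iterate_over_vcpus() macro a test can
walk the added vcpus without keeping its own array (hypothetical
snippet, not part of this patch; "test_arg" is a placeholder):

  struct kvm_vcpu *vcpu;
  int i;

  vm_iterate_over_vcpus(vm, vcpu, i)
          vcpu_args_set(vcpu, 1, test_arg);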

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  4 +++
 .../selftests/kvm/include/kvm_util_base.h     |  3 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 34 ++++++-------------
 tools/testing/selftests/kvm/lib/x86_64/vmx.c  |  2 +-
 4 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index c9286811a4cb..5d5c8968fb06 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -10,4 +10,8 @@
 #include "kvm_util_base.h"
 #include "ucall_common.h"
 
+#define vm_iterate_over_vcpus(vm, vcpu, i)				\
+	for (i = 0, vcpu = vm->vcpus[0];				\
+		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
+
 #endif /* SELFTEST_KVM_UTIL_H */
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index e42a09cd24a0..c90a9609b853 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -45,7 +45,6 @@ struct userspace_mem_region {
 };
 
 struct kvm_vcpu {
-	struct list_head list;
 	uint32_t id;
 	int fd;
 	struct kvm_vm *vm;
@@ -75,7 +74,6 @@ struct kvm_vm {
 	unsigned int pa_bits;
 	unsigned int va_bits;
 	uint64_t max_gfn;
-	struct list_head vcpus;
 	struct userspace_mem_regions regions;
 	struct sparsebit *vpages_valid;
 	struct sparsebit *vpages_mapped;
@@ -92,6 +90,7 @@ struct kvm_vm {
 	int stats_fd;
 	struct kvm_stats_header stats_header;
 	struct kvm_stats_desc *stats_desc;
+	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 };
 
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f1cb1627161f..941f6c3ea9dc 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -195,7 +195,6 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
-	INIT_LIST_HEAD(&vm->vcpus);
 	vm->regions.gpa_tree = RB_ROOT;
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
@@ -534,6 +533,10 @@ __weak void vcpu_arch_free(struct kvm_vcpu *vcpu)
 static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 {
 	int ret;
+	uint32_t vcpu_id = vcpu->id;
+
+	TEST_ASSERT(!!vm->vcpus[vcpu_id], "vCPU%d wasn't added\n", vcpu_id);
+	vm->vcpus[vcpu_id] = NULL;
 
 	if (vcpu->dirty_gfns) {
 		ret = munmap(vcpu->dirty_gfns, vm->dirty_ring_size);
@@ -547,18 +550,16 @@ static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	ret = close(vcpu->fd);
 	TEST_ASSERT(!ret,  __KVM_SYSCALL_ERROR("close()", ret));
 
-	list_del(&vcpu->list);
-
 	vcpu_arch_free(vcpu);
 	free(vcpu);
 }
 
 void kvm_vm_release(struct kvm_vm *vmp)
 {
-	struct kvm_vcpu *vcpu, *tmp;
-	int ret;
+	struct kvm_vcpu *vcpu;
+	int i, ret;
 
-	list_for_each_entry_safe(vcpu, tmp, &vmp->vcpus, list)
+	vm_iterate_over_vcpus(vmp, vcpu, i)
 		vm_vcpu_rm(vmp, vcpu);
 
 	ret = close(vmp->fd);
@@ -1085,18 +1086,6 @@ static int vcpu_mmap_sz(void)
 	return ret;
 }
 
-static bool vcpu_exists(struct kvm_vm *vm, uint32_t vcpu_id)
-{
-	struct kvm_vcpu *vcpu;
-
-	list_for_each_entry(vcpu, &vm->vcpus, list) {
-		if (vcpu->id == vcpu_id)
-			return true;
-	}
-
-	return false;
-}
-
 /*
  * Adds a virtual CPU to the VM specified by vm with the ID given by vcpu_id.
  * No additional vCPU setup is done.  Returns the vCPU.
@@ -1106,7 +1095,7 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 	struct kvm_vcpu *vcpu;
 
 	/* Confirm a vcpu with the specified id doesn't already exist. */
-	TEST_ASSERT(!vcpu_exists(vm, vcpu_id), "vCPU%d already exists\n", vcpu_id);
+	TEST_ASSERT(!vm->vcpus[vcpu_id], "vCPU%d already exists\n", vcpu_id);
 
 	/* Allocate and initialize new vcpu structure. */
 	vcpu = calloc(1, sizeof(*vcpu));
@@ -1125,8 +1114,7 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 	TEST_ASSERT(vcpu->run != MAP_FAILED,
 		    __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
 
-	/* Add to linked-list of VCPUs. */
-	list_add(&vcpu->list, &vm->vcpus);
+	vm->vcpus[vcpu_id] = vcpu;
 
 	return vcpu;
 }
@@ -1684,7 +1672,7 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing)
  */
 void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	int ctr;
+	int i, ctr;
 	struct userspace_mem_region *region;
 	struct kvm_vcpu *vcpu;
 
@@ -1712,7 +1700,7 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	}
 	fprintf(stream, "%*sVCPUs:\n", indent, "");
 
-	list_for_each_entry(vcpu, &vm->vcpus, list)
+	vm_iterate_over_vcpus(vm, vcpu, i)
 		vcpu_dump(stream, vcpu, indent + 2);
 }
 
diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index d21049c38fc5..77812dd03647 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -549,7 +549,7 @@ bool kvm_vm_has_ept(struct kvm_vm *vm)
 	struct kvm_vcpu *vcpu;
 	uint64_t ctrl;
 
-	vcpu = list_first_entry(&vm->vcpus, struct kvm_vcpu, list);
+	vcpu = vm->vcpus[0];
 	TEST_ASSERT(vcpu, "Cannot determine EPT support without vCPUs.\n");
 
 	ctrl = vcpu_get_msr(vcpu, MSR_IA32_VMX_TRUE_PROCBASED_CTLS) >> 32;
-- 
2.27.0



* [PATCH v1 02/18] KVM: selftests/kvm_util: use vm->vcpus[] when create vm with vcpus
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
  2022-10-24 11:34 ` [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads Wei Wang
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

The kvm_vm struct has been changed to maintain vcpus via an array of
vcpu pointers, so there is no need to require users to allocate and
pass in another vcpu array when creating a vm with vcpus.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 941f6c3ea9dc..1f69f5ca8356 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -371,12 +371,10 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus
 	struct kvm_vm *vm;
 	int i;
 
-	TEST_ASSERT(!nr_vcpus || vcpus, "Must provide vCPU array");
-
 	vm = __vm_create(mode, nr_vcpus, extra_mem_pages);
 
 	for (i = 0; i < nr_vcpus; ++i)
-		vcpus[i] = vm_vcpu_add(vm, i, guest_code);
+		vm->vcpus[i] = vm_vcpu_add(vm, i, guest_code);
 
 	return vm;
 }
@@ -385,13 +383,12 @@ struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
 					 uint64_t extra_mem_pages,
 					 void *guest_code)
 {
-	struct kvm_vcpu *vcpus[1];
 	struct kvm_vm *vm;
 
 	vm = __vm_create_with_vcpus(VM_MODE_DEFAULT, 1, extra_mem_pages,
-				    guest_code, vcpus);
+				    guest_code, NULL);
 
-	*vcpu = vcpus[0];
+	*vcpu = vm->vcpus[0];
 	return vm;
 }
 
-- 
2.27.0



* [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
  2022-10-24 11:34 ` [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm Wei Wang
  2022-10-24 11:34 ` [PATCH v1 02/18] KVM: selftests/kvm_util: use vm->vcpus[] when create vm with vcpus Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-27  0:09   ` Sean Christopherson
  2022-10-24 11:34 ` [PATCH v1 04/18] KVM: selftests/kvm_page_table_test: vcpu related code consolidation Wei Wang
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Add a vcpu thread field to the kvm_vcpu struct, so that each user
doesn't need to define an array of such threads on their own. A
private_data pointer is added and optionally used to hold user
specific data; the cast to the user's data type is performed in the
user's vcpu thread start_routine.

A couple of helper functions are added to support vcpu related
operations:

pthread_create_with_name is provided to create general threads with a
user specified name.

vcpu_thread_create is provided to create a vcpu thread with its name
in "vcpu-##id" format, and vm_vcpu_threads_create is provided to
create vcpu threads for all the vcpus that have been created for a vm.
Naming the threads facilitates debugging, performance tuning, runtime
pinning etc. An example reported from "top" is shown below; with the
vcpu threads named, the per-vcpu info becomes more noticeable:
PID  USER PR  NI VIRT    RES  SHR  S  %CPU  %MEM TIME+   COMMAND
4464 root 20  0  4248684 4.0g 1628 R  99.9  26.2 0:50.97 dirty_log_perf_
4467 root 20  0  4248684 4.0g 1628 R  99.9  26.2 0:50.93 vcpu0
4469 root 20  0  4248684 4.0g 1628 R  99.9  26.2 0:50.93 vcpu2
4470 root 20  0  4248684 4.0g 1628 R  99.9  26.2 0:50.94 vcpu3
4468 root 20  0  4248684 4.0g 1628 R  99.7  26.2 0:50.93 vcpu1

vm_vcpu_threads_join is provided to join all the vcpu threads.

vm_vcpu_threads_private_data_alloc is provided to allocate the memory
used for user specific private data for each vcpu that has been
created for the vm.
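
As an illustration, the helpers could be combined as in the sketch
below (a hypothetical test, not part of this patch; base_gpa and
per_vcpu_bytes are placeholders): the per-vcpu data is allocated up
front, filled in by the main thread, and then retrieved from
vcpu->private_data in the thread's start routine.

  struct vcpu_test_data {
          uint64_t start_gpa;     /* example per-vcpu field */
  };

  static void *vcpu_worker(void *data)
  {
          struct kvm_vcpu *vcpu = data;
          struct vcpu_test_data *td = vcpu->private_data;

          vcpu_args_set(vcpu, 1, td->start_gpa);
          vcpu_run(vcpu);
          return NULL;
  }

  /* Allocate and fill the per-vcpu data before starting the threads. */
  vm_vcpu_threads_private_data_alloc(vm, sizeof(struct vcpu_test_data));
  vm_iterate_over_vcpus(vm, vcpu, i) {
          struct vcpu_test_data *td = vcpu->private_data;

          td->start_gpa = base_gpa + i * per_vcpu_bytes;
  }
  vm_vcpu_threads_create(vm, vcpu_worker, 0);
  vm_vcpu_threads_join(vm);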

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  20 ++
 .../selftests/kvm/include/kvm_util_base.h     |   2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 172 ++++++++++++++++++
 3 files changed, 194 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 5d5c8968fb06..036ed05e72e6 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -6,6 +6,7 @@
  */
 #ifndef SELFTEST_KVM_UTIL_H
 #define SELFTEST_KVM_UTIL_H
+#include <pthread.h>
 
 #include "kvm_util_base.h"
 #include "ucall_common.h"
@@ -14,4 +15,23 @@
 	for (i = 0, vcpu = vm->vcpus[0];				\
 		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
 
+void __pthread_create_with_name(pthread_t *thread, const pthread_attr_t *attr,
+			void *(*start_routine)(void *), void *arg, char *name);
+
+void pthread_create_with_name(pthread_t *thread,
+			void *(*start_routine)(void *), void *arg, char *name);
+
+void __vcpu_thread_create(struct kvm_vcpu *vcpu, const pthread_attr_t *attr,
+		   void *(*start_routine)(void *), uint32_t private_data_size);
+
+void vcpu_thread_create(struct kvm_vcpu *vcpu, void *(*start_routine)(void *),
+			uint32_t private_data_size);
+
+void vm_vcpu_threads_create(struct kvm_vm *vm,
+		void *(*start_routine)(void *), uint32_t private_data_size);
+
+void vm_vcpu_threads_join(struct kvm_vm *vm);
+
+void vm_vcpu_threads_private_data_alloc(struct kvm_vm *vm, uint32_t data_size);
+
 #endif /* SELFTEST_KVM_UTIL_H */
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index c90a9609b853..d0d6aaec0098 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -55,6 +55,8 @@ struct kvm_vcpu {
 	struct kvm_dirty_gfn *dirty_gfns;
 	uint32_t fetch_index;
 	uint32_t dirty_gfns_count;
+	pthread_t thread;
+	void *private_data;
 };
 
 struct userspace_mem_regions {
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1f69f5ca8356..ba3e774087fb 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2006,3 +2006,175 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
 		break;
 	}
 }
+
+/*
+ * Create a named thread with user's attribute
+ *
+ * Input Args:
+ *   attr - the attribute of the thread to create
+ *   start_routine - the routine to run in the thread context
+ *   arg - the argument passed to start_routine
+ *   name - the name of the thread
+ *
+ * Output Args:
+ *   thread - the thread to be created
+ *
+ * Create a thread with a user specified name.
+ */
+void __pthread_create_with_name(pthread_t *thread, const pthread_attr_t *attr,
+			void *(*start_routine)(void *), void *arg, char *name)
+{
+	int r;
+
+	r = pthread_create(thread, NULL, start_routine, arg);
+	TEST_ASSERT(!r, "thread(%s) creation failed, r = %d", name, r);
+	r = pthread_setname_np(*thread, name);
+	TEST_ASSERT(!r, "thread(%s) setting name failed, r = %d", name, r);
+}
+
+/*
+ * Create a named thread with the default thread attribute
+ *
+ * Input Args:
+ *   start_routine - the routine to run in the thread context
+ *   arg - the argument passed to start_routine
+ *   name - the name of the thread
+ *
+ * Output Args:
+ *   thread - the thread to be created
+ *
+ * Create a thread with a user specified name and default thread attribute.
+ */
+void pthread_create_with_name(pthread_t *thread,
+			void *(*start_routine)(void *), void *arg, char *name)
+{
+	__pthread_create_with_name(thread, NULL, start_routine, arg, name);
+}
+
+/*
+ * Create a vcpu thread with user's attribute
+ *
+ * Input Args:
+ *   vcpu - the vcpu for which the thread is created
+ *   attr - the attribute of the vcpu thread
+ *   start_routine - the routine to run in the thread context
+ *   private_data_size - the size of the user's per-vcpu private_data
+ *
+ * Output Args:
+ *   None
+ *
+ * Create a vcpu thread with user provided attribute and the name in
+ * "vcpu-##id" format.
+ */
+void __vcpu_thread_create(struct kvm_vcpu *vcpu, const pthread_attr_t *attr,
+		   void *(*start_routine)(void *), uint32_t private_data_size)
+{
+	char vcpu_name[16];
+
+	if (private_data_size) {
+		vcpu->private_data = calloc(1, private_data_size);
+		TEST_ASSERT(vcpu->private_data, "%s: failed", __func__);
+	}
+
+	sprintf(vcpu_name, "vcpu-%d", vcpu->id);
+	__pthread_create_with_name(&vcpu->thread, attr,
+				   start_routine, (void *)vcpu, vcpu_name);
+}
+
+/*
+ * Create a vcpu thread with the default thread attribute
+ *
+ * Input Args:
+ *   vcpu - the vcpu for which the thread is created
+ *   start_routine - the routine to run in the thread context
+ *   private_data_size - the size of the user's per-vcpu private_data
+ *
+ * Output Args:
+ *   None
+ *
+ * Create a vcpu thread with the default thread attribute and the name in
+ * "vcpu-##id" format, and allocate memory to be used as the vcpu thread's
+ * private data if private_data_size isn't 0.
+ */
+void vcpu_thread_create(struct kvm_vcpu *vcpu, void *(*start_routine)(void *),
+			uint32_t private_data_size)
+{
+	__vcpu_thread_create(vcpu, NULL, start_routine, private_data_size);
+}
+
+/*
+ * Create vcpu threads for all the vcpus that have been created for a VM
+ *
+ * Input Args:
+ *   vm - the VM for which the vcpu threads are created
+ *   start_routine - the routine to run in the thread context
+ *   private_data_size - the size of the user's per-vcpu private_data
+ *
+ * Output Args:
+ *   None
+ *
+ * Create vcpu threads for all the vcpus that have been created for the VM,
+ * and the thread name in "vcpu-##id" format. Allocate memory to each vcpu
+ * thread to be used for its private data if private_data_size isn't 0.
+ */
+void vm_vcpu_threads_create(struct kvm_vm *vm,
+		void *(*start_routine)(void *), uint32_t private_data_size)
+{
+	struct kvm_vcpu *vcpu;
+	uint32_t i;
+
+	vm_iterate_over_vcpus(vm, vcpu, i)
+		vcpu_thread_create(vcpu, start_routine, private_data_size);
+
+}
+
+/*
+ * Join the VM's vcpu threads
+ *
+ * Input Args:
+ *   vm - the VM for which its vcpu threads should join
+ *
+ * Output Args:
+ *   None
+ *
+ * Iterate over all the vcpus and join the threads.
+ */
+void vm_vcpu_threads_join(struct kvm_vm *vm)
+{
+	struct kvm_vcpu *vcpu;
+	void *one_failure;
+	unsigned long failures = 0;
+	int r, i;
+
+	vm_iterate_over_vcpus(vm, vcpu, i) {
+		r = pthread_join(vcpu->thread, &one_failure);
+		TEST_ASSERT(r == 0, "failed to join vcpu %d thread", i);
+		failures += (unsigned long)one_failure;
+	}
+
+	TEST_ASSERT(!failures, "%s: failed", __func__);
+}
+
+/*
+ * Allocate memory used for private data of the vm's vcpus
+ *
+ * Input Args:
+ *   vm - the VM for which its vcpus will be assigned the allocated memory
+ *   data_size - the size of the memory to allocate
+ *
+ * Output Args:
+ *   None
+ *
+ * Allocate memory to be used for private data of each vcpu that has been
+ * created for vm.
+ */
+void vm_vcpu_threads_private_data_alloc(struct kvm_vm *vm, uint32_t data_size)
+{
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	vm_iterate_over_vcpus(vm, vcpu, i) {
+		vcpu->private_data = calloc(1, data_size);
+		TEST_ASSERT(vcpu->private_data, "%s: failed", __func__);
+	}
+}
-- 
2.27.0



* [PATCH v1 04/18] KVM: selftests/kvm_page_table_test: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (2 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup Wei Wang
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

kvm_vm has been changed to use an array of vcpu pointers (i.e.
*vcpus[]) and the vcpu thread is now included in kvm_vcpu. Remove the
unnecessary allocation of the vcpu pointer and vcpu thread arrays, and
use the helper functions to create and join the vcpu threads.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../testing/selftests/kvm/kvm_page_table_test.c  | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index f42c6ac6d71d..4c3df48d80fc 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -14,7 +14,6 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
-#include <pthread.h>
 #include <semaphore.h>
 
 #include "test_util.h"
@@ -55,7 +54,6 @@ struct test_args {
 	uint64_t large_num_pages;
 	uint64_t host_pages_per_lpage;
 	enum vm_mem_backing_src_type src_type;
-	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 };
 
 /*
@@ -255,7 +253,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
 	/* Create a VM with enough guest pages */
 	guest_num_pages = test_mem_size / guest_page_size;
 	vm = __vm_create_with_vcpus(mode, nr_vcpus, guest_num_pages,
-				    guest_code, test_args.vcpus);
+				    guest_code, NULL);
 
 	/* Align down GPA of the testing memslot */
 	if (!p->phys_offset)
@@ -343,7 +341,6 @@ static void vcpus_complete_new_stage(enum test_stage stage)
 
 static void run_test(enum vm_guest_mode mode, void *arg)
 {
-	pthread_t *vcpu_threads;
 	struct kvm_vm *vm;
 	struct timespec start;
 	struct timespec ts_diff;
@@ -352,15 +349,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	/* Create VM with vCPUs and make some pre-initialization */
 	vm = pre_init_before_test(mode, arg);
 
-	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
-	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
-
 	host_quit = false;
 	*current_stage = KVM_BEFORE_MAPPINGS;
 
-	for (i = 0; i < nr_vcpus; i++)
-		pthread_create(&vcpu_threads[i], NULL, vcpu_worker,
-			       test_args.vcpus[i]);
+	vm_vcpu_threads_create(vm, vcpu_worker, 0);
 
 	vcpus_complete_new_stage(*current_stage);
 	pr_info("Started all vCPUs successfully\n");
@@ -407,8 +399,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		TEST_ASSERT(ret == 0, "Error in sem_post");
 	}
 
-	for (i = 0; i < nr_vcpus; i++)
-		pthread_join(vcpu_threads[i], NULL);
+	vm_vcpu_threads_join(vm);
 
 	ret = sem_destroy(&test_stage_updated);
 	TEST_ASSERT(ret == 0, "Error in sem_destroy");
@@ -416,7 +407,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	ret = sem_destroy(&test_stage_completed);
 	TEST_ASSERT(ret == 0, "Error in sem_destroy");
 
-	free(vcpu_threads);
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
-- 
2.27.0



* [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (3 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 04/18] KVM: selftests/kvm_page_table_test: vcpu related code consolidation Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-27  0:16   ` Sean Christopherson
  2022-10-24 11:34 ` [PATCH v1 06/18] KVM: selftests/dirty_log_test: vcpu related code consolidation Wei Wang
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary definition of the threads[] array, and use the
helper functions to create and join threads.

Also move the setting of the thread affinity into __vcpu_thread_create
via a pthread attribute. This avoids an explicit step to set the
affinity after thread creation.
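
Distilled from the change below, the affinity is now applied at thread
creation time via the attribute (sketch only; error checks omitted):

  pthread_attr_t attr;

  pthread_attr_init(&attr);
  pthread_attr_setaffinity_np(&attr, sizeof(cpu_set_t), &cpu_set);
  __vcpu_thread_create(vcpu, &attr, run_vcpu, 0);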

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/hardware_disable_test.c     | 56 +++++--------------
 1 file changed, 15 insertions(+), 41 deletions(-)

diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c b/tools/testing/selftests/kvm/hardware_disable_test.c
index f5d59b9934f1..c212d34a6714 100644
--- a/tools/testing/selftests/kvm/hardware_disable_test.c
+++ b/tools/testing/selftests/kvm/hardware_disable_test.c
@@ -8,7 +8,6 @@
 #define _GNU_SOURCE
 
 #include <fcntl.h>
-#include <pthread.h>
 #include <semaphore.h>
 #include <stdint.h>
 #include <stdlib.h>
@@ -59,64 +58,39 @@ static void *sleeping_thread(void *arg)
 	pthread_exit(NULL);
 }
 
-static inline void check_create_thread(pthread_t *thread, pthread_attr_t *attr,
-				       void *(*f)(void *), void *arg)
-{
-	int r;
-
-	r = pthread_create(thread, attr, f, arg);
-	TEST_ASSERT(r == 0, "%s: failed to create thread", __func__);
-}
-
-static inline void check_set_affinity(pthread_t thread, cpu_set_t *cpu_set)
-{
-	int r;
-
-	r = pthread_setaffinity_np(thread, sizeof(cpu_set_t), cpu_set);
-	TEST_ASSERT(r == 0, "%s: failed set affinity", __func__);
-}
-
-static inline void check_join(pthread_t thread, void **retval)
-{
-	int r;
-
-	r = pthread_join(thread, retval);
-	TEST_ASSERT(r == 0, "%s: failed to join thread", __func__);
-}
-
 static void run_test(uint32_t run)
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	cpu_set_t cpu_set;
-	pthread_t threads[VCPU_NUM];
 	pthread_t throw_away;
-	void *b;
+	pthread_attr_t attr;
 	uint32_t i, j;
+	int r;
 
 	CPU_ZERO(&cpu_set);
 	for (i = 0; i < VCPU_NUM; i++)
 		CPU_SET(i, &cpu_set);
+	r = pthread_attr_init(&attr);
+	TEST_ASSERT(!r, "%s: failed to init thread attr, r = %d", __func__, r);
+	r = pthread_attr_setaffinity_np(&attr, sizeof(cpu_set_t), &cpu_set);
+	TEST_ASSERT(!r, "%s: failed to set affinity, r = %d", __func__, r);
 
-	vm = vm_create(VCPU_NUM);
+	vm = vm_create_with_vcpus(VCPU_NUM, guest_code, NULL);
 
 	pr_debug("%s: [%d] start vcpus\n", __func__, run);
-	for (i = 0; i < VCPU_NUM; ++i) {
-		vcpu = vm_vcpu_add(vm, i, guest_code);
+	vm_iterate_over_vcpus(vm, vcpu, i) {
+		__vcpu_thread_create(vcpu, &attr, run_vcpu, 0);
 
-		check_create_thread(&threads[i], NULL, run_vcpu, vcpu);
-		check_set_affinity(threads[i], &cpu_set);
-
-		for (j = 0; j < SLEEPING_THREAD_NUM; ++j) {
-			check_create_thread(&throw_away, NULL, sleeping_thread,
-					    (void *)NULL);
-			check_set_affinity(throw_away, &cpu_set);
-		}
+		for (j = 0; j < SLEEPING_THREAD_NUM; ++j)
+			__pthread_create_with_name(&throw_away, &attr,
+						sleeping_thread, (void *)NULL,
+						"sleeping-thread");
 	}
 	pr_debug("%s: [%d] all threads launched\n", __func__, run);
 	sem_post(sem);
-	for (i = 0; i < VCPU_NUM; ++i)
-		check_join(threads[i], &b);
+
+	vm_vcpu_threads_join(vm);
 	/* Should not be reached */
 	TEST_ASSERT(false, "%s: [%d] child escaped the ninja\n", __func__, run);
 }
-- 
2.27.0



* [PATCH v1 06/18] KVM: selftests/dirty_log_test: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (4 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 07/18] KVM: selftests/max_guest_memory_test: " Wei Wang
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the globally defined vcpu thread, and reuse the one from
kvm_vcpu. Also, use the helper functions to create and join the vcpu
thread; they implement the error checks (i.e. TEST_ASSERT) internally.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index b5234d6efbe1..9177b8ca004d 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -9,7 +9,6 @@
 
 #include <stdio.h>
 #include <stdlib.h>
-#include <pthread.h>
 #include <semaphore.h>
 #include <sys/types.h>
 #include <signal.h>
@@ -188,12 +187,11 @@ enum log_mode_t {
 static enum log_mode_t host_log_mode_option = LOG_MODE_ALL;
 /* Logging mode for current run */
 static enum log_mode_t host_log_mode;
-static pthread_t vcpu_thread;
 static uint32_t test_dirty_ring_count = TEST_DIRTY_RING_COUNT;
 
-static void vcpu_kick(void)
+static void vcpu_kick(struct kvm_vcpu *vcpu)
 {
-	pthread_kill(vcpu_thread, SIG_IPI);
+	pthread_kill(vcpu->thread, SIG_IPI);
 }
 
 /*
@@ -315,10 +313,10 @@ static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
 	return count;
 }
 
-static void dirty_ring_wait_vcpu(void)
+static void dirty_ring_wait_vcpu(struct kvm_vcpu *vcpu)
 {
 	/* This makes sure that hardware PML cache flushed */
-	vcpu_kick();
+	vcpu_kick(vcpu);
 	sem_wait_until(&sem_vcpu_stop);
 }
 
@@ -336,7 +334,7 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
 	uint32_t count = 0, cleared;
 	bool continued_vcpu = false;
 
-	dirty_ring_wait_vcpu();
+	dirty_ring_wait_vcpu(vcpu);
 
 	if (!dirty_ring_vcpu_ring_full) {
 		/*
@@ -772,7 +770,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	host_clear_count = 0;
 	host_track_next_count = 0;
 
-	pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu);
+	vcpu_thread_create(vcpu, vcpu_worker, 0);
 
 	while (iteration < p->iterations) {
 		/* Give the vcpu thread some time to dirty some pages */
@@ -805,7 +803,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	/* Tell the vcpu thread to quit */
 	host_quit = true;
 	log_mode_before_vcpu_join();
-	pthread_join(vcpu_thread, NULL);
+	vm_vcpu_threads_join(vm);
 
 	pr_info("Total bits checked: dirty (%"PRIu64"), clear (%"PRIu64"), "
 		"track_next (%"PRIu64")\n", host_dirty_count, host_clear_count,
-- 
2.27.0



* [PATCH v1 07/18] KVM: selftests/max_guest_memory_test: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (5 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 06/18] KVM: selftests/dirty_log_test: vcpu related code consolidation Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 08/18] KVM: selftests/set_memory_region_test: " Wei Wang
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary allocation of the vcpu and thread arrays, and
use the helper functions to create and join the vcpu threads.

As the vcpu thread's start routine (i.e. vcpu_worker) uses kvm_vcpu as
its interface, change vcpu_info to be the vcpu thread's private data
so that it is passed to the thread's start routine.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/max_guest_memory_test.c     | 53 +++++++------------
 1 file changed, 20 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
index 9a6e4f3ad6b5..2d9c83e36e65 100644
--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
+++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
@@ -3,7 +3,6 @@
 
 #include <stdio.h>
 #include <stdlib.h>
-#include <pthread.h>
 #include <semaphore.h>
 #include <sys/types.h>
 #include <signal.h>
@@ -27,8 +26,7 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 	GUEST_DONE();
 }
 
-struct vcpu_info {
-	struct kvm_vcpu *vcpu;
+struct vcpu_thread_data {
 	uint64_t start_gpa;
 	uint64_t end_gpa;
 };
@@ -59,13 +57,15 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
 
 static void *vcpu_worker(void *data)
 {
-	struct vcpu_info *info = data;
-	struct kvm_vcpu *vcpu = info->vcpu;
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
+	struct vcpu_thread_data *thread_data =
+		(struct vcpu_thread_data *)vcpu->private_data;
 	struct kvm_vm *vm = vcpu->vm;
 	struct kvm_sregs sregs;
 	struct kvm_regs regs;
 
-	vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
+	vcpu_args_set(vcpu, 3, thread_data->start_gpa,
+		      thread_data->end_gpa, vm->page_size);
 
 	/* Snapshot regs before the first run. */
 	vcpu_regs_get(vcpu, &regs);
@@ -88,31 +88,24 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-static pthread_t *spawn_workers(struct kvm_vm *vm, struct kvm_vcpu **vcpus,
-				uint64_t start_gpa, uint64_t end_gpa)
+static void vm_vcpu_threads_data_init(struct kvm_vm *vm,
+				     uint64_t start_gpa, uint64_t end_gpa)
 {
-	struct vcpu_info *info;
+	struct kvm_vcpu *vcpu;
+	struct vcpu_thread_data *thread_data;
 	uint64_t gpa, nr_bytes;
-	pthread_t *threads;
 	int i;
 
-	threads = malloc(nr_vcpus * sizeof(*threads));
-	TEST_ASSERT(threads, "Failed to allocate vCPU threads");
-
-	info = malloc(nr_vcpus * sizeof(*info));
-	TEST_ASSERT(info, "Failed to allocate vCPU gpa ranges");
-
 	nr_bytes = ((end_gpa - start_gpa) / nr_vcpus) &
 			~((uint64_t)vm->page_size - 1);
 	TEST_ASSERT(nr_bytes, "C'mon, no way you have %d CPUs", nr_vcpus);
 
-	for (i = 0, gpa = start_gpa; i < nr_vcpus; i++, gpa += nr_bytes) {
-		info[i].vcpu = vcpus[i];
-		info[i].start_gpa = gpa;
-		info[i].end_gpa = gpa + nr_bytes;
-		pthread_create(&threads[i], NULL, vcpu_worker, &info[i]);
+	vm_iterate_over_vcpus(vm, vcpu, i) {
+		thread_data = (struct vcpu_thread_data *)vcpu->private_data;
+		gpa = start_gpa + i * nr_bytes;
+		thread_data->start_gpa =  gpa;
+		thread_data->end_gpa = gpa + nr_bytes;
 	}
-	return threads;
 }
 
 static void rendezvous_with_vcpus(struct timespec *time, const char *name)
@@ -170,8 +163,6 @@ int main(int argc, char *argv[])
 	uint64_t max_gpa, gpa, slot_size, max_mem, i;
 	int max_slots, slot, opt, fd;
 	bool hugepages = false;
-	struct kvm_vcpu **vcpus;
-	pthread_t *threads;
 	struct kvm_vm *vm;
 	void *mem;
 
@@ -214,10 +205,7 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	vcpus = malloc(nr_vcpus * sizeof(*vcpus));
-	TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
-
-	vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
+	vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
 
 	max_gpa = vm->max_gfn << vm->page_shift;
 	TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb ");
@@ -254,10 +242,10 @@ int main(int argc, char *argv[])
 	}
 
 	atomic_set(&rendezvous, nr_vcpus + 1);
-	threads = spawn_workers(vm, vcpus, start_gpa, gpa);
 
-	free(vcpus);
-	vcpus = NULL;
+	vm_vcpu_threads_create(vm, vcpu_worker,
+			       sizeof(struct vcpu_thread_data));
+	vm_vcpu_threads_data_init(vm, start_gpa, gpa);
 
 	pr_info("Running with %lugb of guest memory and %u vCPUs\n",
 		(gpa - start_gpa) / size_1gb, nr_vcpus);
@@ -287,8 +275,7 @@ int main(int argc, char *argv[])
 	munmap(mem, slot_size / 2);
 
 	/* Sanity check that the vCPUs actually ran. */
-	for (i = 0; i < nr_vcpus; i++)
-		pthread_join(threads[i], NULL);
+	vm_vcpu_threads_join(vm);
 
 	/*
 	 * Deliberately exit without deleting the remaining memslots or closing
-- 
2.27.0



* [PATCH v1 08/18] KVM: selftests/set_memory_region_test: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (6 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 07/18] KVM: selftests/max_guest_memory_test: " Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 09/18] KVM: selftests/steal_time: vcpu related code consolidation and cleanup Wei Wang
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary vcpu_thread definition and drop it from the
related functions' parameters, as it can be referenced from the
kvm_vcpu struct. Also use the helper functions to create and join the
vcpu thread.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/set_memory_region_test.c       | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 0d55f508d595..d233668957da 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -1,7 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #define _GNU_SOURCE /* for program_invocation_short_name */
 #include <fcntl.h>
-#include <pthread.h>
 #include <sched.h>
 #include <semaphore.h>
 #include <signal.h>
@@ -108,8 +107,7 @@ static void wait_for_vcpu(void)
 	usleep(100000);
 }
 
-static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, pthread_t *vcpu_thread,
-			       void *guest_code)
+static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, void *guest_code)
 {
 	struct kvm_vm *vm;
 	uint64_t *hva;
@@ -134,7 +132,7 @@ static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, pthread_t *vcpu_thread,
 	hva = addr_gpa2hva(vm, MEM_REGION_GPA);
 	memset(hva, 0, 2 * 4096);
 
-	pthread_create(vcpu_thread, NULL, vcpu_worker, *vcpu);
+	vm_vcpu_threads_create(vm, vcpu_worker, 0);
 
 	/* Ensure the guest thread is spun up. */
 	wait_for_vcpu();
@@ -175,12 +173,11 @@ static void guest_code_move_memory_region(void)
 
 static void test_move_memory_region(void)
 {
-	pthread_t vcpu_thread;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	uint64_t *hva;
 
-	vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_move_memory_region);
+	vm = spawn_vm(&vcpu, guest_code_move_memory_region);
 
 	hva = addr_gpa2hva(vm, MEM_REGION_GPA);
 
@@ -211,7 +208,7 @@ static void test_move_memory_region(void)
 	/* Defered sync from when the memslot was misaligned (above). */
 	wait_for_vcpu();
 
-	pthread_join(vcpu_thread, NULL);
+	vm_vcpu_threads_join(vm);
 
 	kvm_vm_free(vm);
 }
@@ -254,13 +251,12 @@ static void guest_code_delete_memory_region(void)
 
 static void test_delete_memory_region(void)
 {
-	pthread_t vcpu_thread;
 	struct kvm_vcpu *vcpu;
 	struct kvm_regs regs;
 	struct kvm_run *run;
 	struct kvm_vm *vm;
 
-	vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_delete_memory_region);
+	vm = spawn_vm(&vcpu, guest_code_delete_memory_region);
 
 	/* Delete the memory region, the guest should not die. */
 	vm_mem_region_delete(vm, MEM_REGION_SLOT);
@@ -282,7 +278,7 @@ static void test_delete_memory_region(void)
 	 */
 	vm_mem_region_delete(vm, 0);
 
-	pthread_join(vcpu_thread, NULL);
+	vm_vcpu_threads_join(vm);
 
 	run = vcpu->run;
 
-- 
2.27.0



* [PATCH v1 09/18] KVM: selftests/steal_time: vcpu related code consolidation and cleanup
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (7 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 08/18] KVM: selftests/set_memory_region_test: " Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-27  0:17   ` Sean Christopherson
  2022-10-24 11:34 ` [PATCH v1 10/18] KVM: selftests/tsc_scaling_sync: vcpu related code consolidation Wei Wang
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary definition of the array of vcpu pointers and
reuse the one from the kvm_vm struct (i.e. vm->vcpus[]). Use the
helper function to create the time stealing thread with a name.

Also add a check of the pthread_join return value.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 tools/testing/selftests/kvm/steal_time.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index db8967f1a17b..857ed2c073fc 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -8,7 +8,6 @@
 #include <stdio.h>
 #include <time.h>
 #include <sched.h>
-#include <pthread.h>
 #include <linux/kernel.h>
 #include <asm/kvm.h>
 #include <asm/kvm_para.h>
@@ -241,7 +240,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
 
 int main(int ac, char **av)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
+	struct kvm_vcpu **vcpus;
 	struct kvm_vm *vm;
 	pthread_attr_t attr;
 	pthread_t thread;
@@ -250,7 +249,7 @@ int main(int ac, char **av)
 	long stolen_time;
 	long run_delay;
 	bool verbose;
-	int i;
+	int i, r;
 
 	verbose = ac > 1 && (!strncmp(av[1], "-v", 3) || !strncmp(av[1], "--verbose", 10));
 
@@ -262,7 +261,8 @@ int main(int ac, char **av)
 	pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
 
 	/* Create a VM and an identity mapped memslot for the steal time structure */
-	vm = vm_create_with_vcpus(NR_VCPUS, guest_code, vcpus);
+	vm = vm_create_with_vcpus(NR_VCPUS, guest_code, NULL);
+	vcpus = vm->vcpus;
 	gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE * NR_VCPUS);
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, 1, gpages, 0);
 	virt_map(vm, ST_GPA_BASE, ST_GPA_BASE, gpages);
@@ -290,11 +290,14 @@ int main(int ac, char **av)
 
 		/* Steal time from the VCPU. The steal time thread has the same CPU affinity as the VCPUs. */
 		run_delay = get_run_delay();
-		pthread_create(&thread, &attr, do_steal_time, NULL);
+		__pthread_create_with_name(&thread, &attr,
+					   do_steal_time, NULL, "steal-time");
 		do
 			sched_yield();
 		while (get_run_delay() - run_delay < MIN_RUN_DELAY_NS);
-		pthread_join(thread, NULL);
+		r = pthread_join(thread, NULL);
+		TEST_ASSERT(!r, "failed to join the time stealing thread");
+
 		run_delay = get_run_delay() - run_delay;
 		TEST_ASSERT(run_delay >= MIN_RUN_DELAY_NS,
 			    "Expected run_delay >= %ld, got %ld",
-- 
2.27.0



* [PATCH v1 10/18] KVM: selftests/tsc_scaling_sync: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (8 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 09/18] KVM: selftests/steal_time: vcpu related code consolidation and cleanup Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 11/18] KVM: selftest/xapic_ipi_test: " Wei Wang
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary definition of the vcpu_threads[] array, as the
vcpu thread has been included in the kvm_vcpu struct. Use the helper
functions to create and join the vcpu threads.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/x86_64/tsc_scaling_sync.c   | 25 ++++---------------
 1 file changed, 5 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c b/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
index 47139aab7408..34a8beef42b6 100644
--- a/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
+++ b/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
@@ -15,7 +15,6 @@
 #include <time.h>
 #include <sched.h>
 #include <signal.h>
-#include <pthread.h>
 
 #define NR_TEST_VCPUS 20
 
@@ -44,18 +43,15 @@ static void guest_code(void)
 }
 
 
-static void *run_vcpu(void *_cpu_nr)
+static void *run_vcpu(void *data)
 {
-	unsigned long vcpu_id = (unsigned long)_cpu_nr;
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
 	unsigned long failures = 0;
 	static bool first_cpu_done;
-	struct kvm_vcpu *vcpu;
 
 	/* The kernel is fine, but vm_vcpu_add() needs locking */
 	pthread_spin_lock(&create_lock);
 
-	vcpu = vm_vcpu_add(vm, vcpu_id, guest_code);
-
 	if (!first_cpu_done) {
 		first_cpu_done = true;
 		vcpu_set_msr(vcpu, MSR_IA32_TSC, TEST_TSC_OFFSET);
@@ -95,23 +91,12 @@ int main(int argc, char *argv[])
 {
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_VM_TSC_CONTROL));
 
-	vm = vm_create(NR_TEST_VCPUS);
+	vm = vm_create_with_vcpus(NR_TEST_VCPUS, guest_code, NULL);
 	vm_ioctl(vm, KVM_SET_TSC_KHZ, (void *) TEST_TSC_KHZ);
 
 	pthread_spin_init(&create_lock, PTHREAD_PROCESS_PRIVATE);
-	pthread_t cpu_threads[NR_TEST_VCPUS];
-	unsigned long cpu;
-	for (cpu = 0; cpu < NR_TEST_VCPUS; cpu++)
-		pthread_create(&cpu_threads[cpu], NULL, run_vcpu, (void *)cpu);
-
-	unsigned long failures = 0;
-	for (cpu = 0; cpu < NR_TEST_VCPUS; cpu++) {
-		void *this_cpu_failures;
-		pthread_join(cpu_threads[cpu], &this_cpu_failures);
-		failures += (unsigned long)this_cpu_failures;
-	}
-
-	TEST_ASSERT(!failures, "TSC sync failed");
+	vm_vcpu_threads_create(vm, run_vcpu, 0);
+	vm_vcpu_threads_join(vm);
 	pthread_spin_destroy(&create_lock);
 	kvm_vm_free(vm);
 	return 0;
-- 
2.27.0



* [PATCH v1 11/18] KVM: selftest/xapic_ipi_test: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (9 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 10/18] KVM: selftests/tsc_scaling_sync: vcpu related code consolidation Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 12/18] KVM: selftests/rseq_test: name the migration thread and some cleanup Wei Wang
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary definition of the threads[] array, and change
thread_params to be the vcpu's private data. Use the helper function
to create the threads.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/x86_64/xapic_ipi_test.c     | 54 +++++++++----------
 1 file changed, 24 insertions(+), 30 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
index 3d272d7f961e..cc2630429067 100644
--- a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
@@ -22,7 +22,6 @@
 
 #define _GNU_SOURCE /* for program_invocation_short_name */
 #include <getopt.h>
-#include <pthread.h>
 #include <inttypes.h>
 #include <string.h>
 #include <time.h>
@@ -76,7 +75,6 @@ struct test_data_page {
 
 struct thread_params {
 	struct test_data_page *data;
-	struct kvm_vcpu *vcpu;
 	uint64_t *pipis_rcvd; /* host address of ipis_rcvd global */
 };
 
@@ -193,8 +191,9 @@ static void sender_guest_code(struct test_data_page *data)
 
 static void *vcpu_thread(void *arg)
 {
-	struct thread_params *params = (struct thread_params *)arg;
-	struct kvm_vcpu *vcpu = params->vcpu;
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
+	struct thread_params *params =
+		(struct thread_params *)vcpu->private_data;
 	struct ucall uc;
 	int old;
 	int r;
@@ -233,17 +232,17 @@ static void *vcpu_thread(void *arg)
 	return NULL;
 }
 
-static void cancel_join_vcpu_thread(pthread_t thread, struct kvm_vcpu *vcpu)
+static void cancel_join_vcpu_thread(struct kvm_vcpu *vcpu)
 {
 	void *retval;
 	int r;
 
-	r = pthread_cancel(thread);
+	r = pthread_cancel(vcpu->thread);
 	TEST_ASSERT(r == 0,
 		    "pthread_cancel on vcpu_id=%d failed with errno=%d",
 		    vcpu->id, r);
 
-	r = pthread_join(thread, &retval);
+	r = pthread_join(vcpu->thread, &retval);
 	TEST_ASSERT(r == 0,
 		    "pthread_join on vcpu_id=%d failed with errno=%d",
 		    vcpu->id, r);
@@ -393,17 +392,16 @@ void get_cmdline_args(int argc, char *argv[], int *run_secs,
 
 int main(int argc, char *argv[])
 {
-	int r;
-	int wait_secs;
+	int i, wait_secs;
 	const int max_halter_wait = 10;
 	int run_secs = 0;
 	int delay_usecs = 0;
 	struct test_data_page *data;
 	vm_vaddr_t test_data_page_vaddr;
 	bool migrate = false;
-	pthread_t threads[2];
-	struct thread_params params[2];
+	struct thread_params *params;
 	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
 	uint64_t *pipis_rcvd;
 
 	get_cmdline_args(argc, argv, &run_secs, &migrate, &delay_usecs);
@@ -412,33 +410,31 @@ int main(int argc, char *argv[])
 	if (delay_usecs <= 0)
 		delay_usecs = DEFAULT_DELAY_USECS;
 
-	vm = vm_create_with_one_vcpu(&params[0].vcpu, halter_guest_code);
+	vm = vm_create_with_one_vcpu(&vcpu, halter_guest_code);
 
 	vm_init_descriptor_tables(vm);
-	vcpu_init_descriptor_tables(params[0].vcpu);
+	vcpu_init_descriptor_tables(vcpu);
 	vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
 
 	virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
 
-	params[1].vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
+	vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
+	vm_vcpu_threads_private_data_alloc(vm, sizeof(struct thread_params));
 
 	test_data_page_vaddr = vm_vaddr_alloc_page(vm);
 	data = addr_gva2hva(vm, test_data_page_vaddr);
 	memset(data, 0, sizeof(*data));
-	params[0].data = data;
-	params[1].data = data;
-
-	vcpu_args_set(params[0].vcpu, 1, test_data_page_vaddr);
-	vcpu_args_set(params[1].vcpu, 1, test_data_page_vaddr);
-
 	pipis_rcvd = (uint64_t *)addr_gva2hva(vm, (uint64_t)&ipis_rcvd);
-	params[0].pipis_rcvd = pipis_rcvd;
-	params[1].pipis_rcvd = pipis_rcvd;
+
+	vm_iterate_over_vcpus(vm, vcpu, i) {
+		params = (struct thread_params *)vcpu->private_data;
+		params->data = data;
+		params->pipis_rcvd = pipis_rcvd;
+		vcpu_args_set(vcpu, 1, test_data_page_vaddr);
+	}
 
 	/* Start halter vCPU thread and wait for it to execute first HLT. */
-	r = pthread_create(&threads[0], NULL, vcpu_thread, &params[0]);
-	TEST_ASSERT(r == 0,
-		    "pthread_create halter failed errno=%d", errno);
+	vcpu_thread_create(vm->vcpus[0], vcpu_thread, 0);
 	fprintf(stderr, "Halter vCPU thread started\n");
 
 	wait_secs = 0;
@@ -455,9 +451,7 @@ int main(int argc, char *argv[])
 		"Halter vCPU thread reported its APIC ID: %u after %d seconds.\n",
 		data->halter_apic_id, wait_secs);
 
-	r = pthread_create(&threads[1], NULL, vcpu_thread, &params[1]);
-	TEST_ASSERT(r == 0, "pthread_create sender failed errno=%d", errno);
-
+	vcpu_thread_create(vm->vcpus[1], vcpu_thread, 0);
 	fprintf(stderr,
 		"IPI sender vCPU thread started. Letting vCPUs run for %d seconds.\n",
 		run_secs);
@@ -470,8 +464,8 @@ int main(int argc, char *argv[])
 	/*
 	 * Cancel threads and wait for them to stop.
 	 */
-	cancel_join_vcpu_thread(threads[0], params[0].vcpu);
-	cancel_join_vcpu_thread(threads[1], params[1].vcpu);
+	cancel_join_vcpu_thread(vm->vcpus[0]);
+	cancel_join_vcpu_thread(vm->vcpus[1]);
 
 	fprintf(stderr,
 		"Test successful after running for %d seconds.\n"
-- 
2.27.0



* [PATCH v1 12/18] KVM: selftests/rseq_test: name the migration thread and some cleanup
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (10 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 11/18] KVM: selftest/xapic_ipi_test: " Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-27  0:18   ` Sean Christopherson
  2022-10-24 11:34 ` [PATCH v1 13/18] KVM: selftests/perf_test_util: vcpu related code consolidation Wei Wang
                   ` (6 subsequent siblings)
  18 siblings, 1 reply; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Use the helper function to create the migration thread with a name.
Change the global definition of migration_thread to a local variable,
as it's not referenced anywhere outside main(). Also, check the return
value from pthread_join and assert on errors.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 tools/testing/selftests/kvm/rseq_test.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
index 6f88da7e60be..c124f00ca4fe 100644
--- a/tools/testing/selftests/kvm/rseq_test.c
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -2,7 +2,6 @@
 #define _GNU_SOURCE /* for program_invocation_short_name */
 #include <errno.h>
 #include <fcntl.h>
-#include <pthread.h>
 #include <sched.h>
 #include <stdio.h>
 #include <stdlib.h>
@@ -28,7 +27,6 @@
  */
 #define NR_TASK_MIGRATIONS 100000
 
-static pthread_t migration_thread;
 static cpu_set_t possible_mask;
 static int min_cpu, max_cpu;
 static bool done;
@@ -204,6 +202,7 @@ int main(int argc, char *argv[])
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	u32 cpu, rseq_cpu;
+	pthread_t migration_thread;
 
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
@@ -226,8 +225,8 @@ int main(int argc, char *argv[])
 	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
 	ucall_init(vm, NULL);
 
-	pthread_create(&migration_thread, NULL, migration_worker,
-		       (void *)(unsigned long)syscall(SYS_gettid));
+	pthread_create_with_name(&migration_thread, migration_worker,
+		 (void *)(unsigned long)syscall(SYS_gettid), "mig-thread");
 
 	for (i = 0; !done; i++) {
 		vcpu_run(vcpu);
@@ -272,7 +271,8 @@ int main(int argc, char *argv[])
 	TEST_ASSERT(i > (NR_TASK_MIGRATIONS / 2),
 		    "Only performed %d KVM_RUNs, task stalled too much?\n", i);
 
-	pthread_join(migration_thread, NULL);
+	r = pthread_join(migration_thread, NULL);
+	TEST_ASSERT(r == 0, "failed to join the migration thread");
 
 	kvm_vm_free(vm);
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v1 13/18] KVM: selftests/perf_test_util: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (11 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 12/18] KVM: selftests/rseq_test: name the migration thread and some cleanup Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 14/18] KVM: selftest/memslot_perf_test: " Wei Wang
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Perform vcpu related code consolidation in lib/perf_test_util.c and its
users. To be more precise:

For the lib, perf_test_util.c:
- remove the globally defined *vcpus[] array, as it is a duplicate of
  vm->vcpus[i], and accordingly, remove the "*vcpus[]" input parameters
  from the related APIs (e.g. perf_test_setup_vcpus);
- remove the globally defined vcpu_thread array, as the vcpu thread has
  been included into the kvm_vcpu struct, and simplify the implementation
  in perf_test_start_vcpu_threads by using the related helper functions;
- remove the redundant fields in "struct vcpu_thread" (e.g. vcpu_idx), as
  they are already part of the vcpu struct. Also rename it to "struct
  vcpu_thread_data" and change it to the vcpu thread's private_data, which
  is passed to the vcpu threads' start_routine (i.e. vcpu_thread_main).
- remove perf_test_join_vcpu_threads as we have a helper function to join
  the vcpu threads (i.e. vm_vcpu_threads_join), and put it in
  perf_test_destroy_vm so that users don't need to call threads_join
  and destroy_vm separately.
- change vcpu_fn (each user's vcpu thread callback routine) to use
  "struct kvm_vcpu" as the interface, as it is easier to get the related
  info from the vcpu (e.g. vcpu->id);

For the users, access_tracking_perf_test.c, demand_paging_test.c,
dirty_log_perf_test.c, and memslot_modification_stress_test.c:
- change the input parameters of the functions (e.g. vcpu_thread_main)
  to use "struct kvm_vcpu" as an interface to match the change in the lib;

Finally, keep the lib and user changes in one patch so that the interface
and its users are updated together and the series builds cleanly at every
step.
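
With that, a typical perf test's flow after this patch looks roughly like
the sketch below (simplified from the updated callers; error handling and
setup details omitted):

  vm = perf_test_create_vm(mode, nr_vcpus, vcpu_memory_bytes, 1,
                           backing_src, partition_vcpu_memory_access);
  /* Creates one named thread per vcpu and waits until all are running. */
  perf_test_start_vcpu_threads(vm, vcpu_worker);
  /* ... run the test iterations; vcpu_worker() now gets a struct kvm_vcpu *
     and reaches its state via vcpu->id and vcpu->private_data ... */
  /* Joins the vcpu threads, then frees the VM. */
  perf_test_destroy_vm(vm);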

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/access_tracking_perf_test.c | 18 +++--
 .../selftests/kvm/demand_paging_test.c        |  9 +--
 .../selftests/kvm/dirty_log_perf_test.c       | 11 ++-
 .../selftests/kvm/include/perf_test_util.h    |  9 ++-
 .../selftests/kvm/lib/perf_test_util.c        | 68 +++++++------------
 .../selftests/kvm/lib/x86_64/perf_test_util.c | 11 +--
 .../kvm/memslot_modification_stress_test.c    |  9 +--
 7 files changed, 53 insertions(+), 82 deletions(-)

diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 76c583a07ea2..878b9189774c 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -125,10 +125,10 @@ static void mark_page_idle(int page_idle_fd, uint64_t pfn)
 		    "Set page_idle bits for PFN 0x%" PRIx64, pfn);
 }
 
-static void mark_vcpu_memory_idle(struct kvm_vm *vm,
-				  struct perf_test_vcpu_args *vcpu_args)
+static void mark_vcpu_memory_idle(struct kvm_vm *vm, int vcpu_idx)
 {
-	int vcpu_idx = vcpu_args->vcpu_idx;
+	struct perf_test_vcpu_args *vcpu_args =
+				&perf_test_args.vcpu_args[vcpu_idx];
 	uint64_t base_gva = vcpu_args->gva;
 	uint64_t pages = vcpu_args->pages;
 	uint64_t page;
@@ -220,11 +220,10 @@ static bool spin_wait_for_next_iteration(int *current_iteration)
 	return true;
 }
 
-static void vcpu_thread_main(struct perf_test_vcpu_args *vcpu_args)
+static void vcpu_thread_main(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *vcpu = vcpu_args->vcpu;
-	struct kvm_vm *vm = perf_test_args.vm;
-	int vcpu_idx = vcpu_args->vcpu_idx;
+	struct kvm_vm *vm = vcpu->vm;
+	int vcpu_idx = vcpu->id;
 	int current_iteration = 0;
 
 	while (spin_wait_for_next_iteration(&current_iteration)) {
@@ -234,7 +233,7 @@ static void vcpu_thread_main(struct perf_test_vcpu_args *vcpu_args)
 			assert_ucall(vcpu, UCALL_SYNC);
 			break;
 		case ITERATION_MARK_IDLE:
-			mark_vcpu_memory_idle(vm, vcpu_args);
+			mark_vcpu_memory_idle(vm, vcpu_idx);
 			break;
 		};
 
@@ -306,7 +305,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vm = perf_test_create_vm(mode, nr_vcpus, params->vcpu_memory_bytes, 1,
 				 params->backing_src, !overlap_memory_access);
 
-	perf_test_start_vcpu_threads(nr_vcpus, vcpu_thread_main);
+	perf_test_start_vcpu_threads(vm, vcpu_thread_main);
 
 	pr_info("\n");
 	access_memory(vm, nr_vcpus, ACCESS_WRITE, "Populating memory");
@@ -324,7 +323,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	/* Set done to signal the vCPU threads to exit */
 	done = true;
 
-	perf_test_join_vcpu_threads(nr_vcpus);
 	perf_test_destroy_vm(vm);
 }
 
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 779ae54f89c4..7b8aaf3a5d57 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -14,7 +14,6 @@
 #include <stdlib.h>
 #include <time.h>
 #include <poll.h>
-#include <pthread.h>
 #include <linux/userfaultfd.h>
 #include <sys/syscall.h>
 
@@ -42,10 +41,9 @@ static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
 static size_t demand_paging_size;
 static char *guest_data_prototype;
 
-static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
+static void vcpu_worker(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *vcpu = vcpu_args->vcpu;
-	int vcpu_idx = vcpu_args->vcpu_idx;
+	int vcpu_idx = vcpu->id;
 	struct kvm_run *run = vcpu->run;
 	struct timespec start;
 	struct timespec ts_diff;
@@ -336,10 +334,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Finished creating vCPUs and starting uffd threads\n");
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
-	perf_test_start_vcpu_threads(nr_vcpus, vcpu_worker);
+	perf_test_start_vcpu_threads(vm, vcpu_worker);
 	pr_info("Started all vCPUs\n");
 
-	perf_test_join_vcpu_threads(nr_vcpus);
 	ts_diff = timespec_elapsed(start);
 	pr_info("All vCPU threads joined\n");
 
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index f99e39a672d3..808d3d768c82 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -11,7 +11,6 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
-#include <pthread.h>
 #include <linux/bitmap.h>
 
 #include "kvm_util.h"
@@ -67,10 +66,11 @@ static bool host_quit;
 static int iteration;
 static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
 
-static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
+static void vcpu_worker(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *vcpu = vcpu_args->vcpu;
-	int vcpu_idx = vcpu_args->vcpu_idx;
+	int vcpu_idx = vcpu->id;
+	struct perf_test_vcpu_args *vcpu_args =
+				&perf_test_args.vcpu_args[vcpu_idx];
 	uint64_t pages_count = 0;
 	struct kvm_run *run;
 	struct timespec start;
@@ -248,7 +248,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	for (i = 0; i < nr_vcpus; i++)
 		vcpu_last_completed_iteration[i] = -1;
 
-	perf_test_start_vcpu_threads(nr_vcpus, vcpu_worker);
+	perf_test_start_vcpu_threads(vm, vcpu_worker);
 
 	/* Allow the vCPUs to populate memory */
 	pr_debug("Starting iteration %d - Populating\n", iteration);
@@ -329,7 +329,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	 * wait for them to exit.
 	 */
 	host_quit = true;
-	perf_test_join_vcpu_threads(nr_vcpus);
 
 	avg = timespec_div(get_dirty_log_total, p->iterations);
 	pr_info("Get dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index eaa88df0555a..43816756c1da 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -24,8 +24,7 @@ struct perf_test_vcpu_args {
 	uint64_t gva;
 	uint64_t pages;
 
-	/* Only used by the host userspace part of the vCPU thread */
-	struct kvm_vcpu *vcpu;
+	/* For guest to check if data is corrupted */
 	int vcpu_idx;
 };
 
@@ -53,11 +52,11 @@ void perf_test_destroy_vm(struct kvm_vm *vm);
 
 void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract);
 
-void perf_test_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct perf_test_vcpu_args *));
-void perf_test_join_vcpu_threads(int vcpus);
+void perf_test_start_vcpu_threads(struct kvm_vm *vm,
+				  void (*vcpu_fn)(struct kvm_vcpu *vcpu));
 void perf_test_guest_code(uint32_t vcpu_id);
 
 uint64_t perf_test_nested_pages(int nr_vcpus);
-void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[]);
+void perf_test_setup_nested(struct kvm_vm *vm);
 
 #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9618b37c66f7..94c0f496c9c1 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -16,28 +16,17 @@ struct perf_test_args perf_test_args;
  */
 static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
 
-struct vcpu_thread {
-	/* The index of the vCPU. */
-	int vcpu_idx;
-
-	/* The pthread backing the vCPU. */
-	pthread_t thread;
-
+struct vcpu_thread_data {
 	/* Set to true once the vCPU thread is up and running. */
 	bool running;
 };
 
-/* The vCPU threads involved in this test. */
-static struct vcpu_thread vcpu_threads[KVM_MAX_VCPUS];
-
 /* The function run by each vCPU thread, as provided by the test. */
-static void (*vcpu_thread_fn)(struct perf_test_vcpu_args *);
+static void (*vcpu_thread_fn)(struct kvm_vcpu *);
 
 /* Set to true once all vCPU threads are up and running. */
 static bool all_vcpu_threads_running;
 
-static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
-
 /*
  * Continuously write to the first 8 bytes of each page in the
  * specified region.
@@ -71,7 +60,6 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 }
 
 void perf_test_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
-			   struct kvm_vcpu *vcpus[],
 			   uint64_t vcpu_memory_bytes,
 			   bool partition_vcpu_memory_access)
 {
@@ -82,7 +70,6 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
 	for (i = 0; i < nr_vcpus; i++) {
 		vcpu_args = &pta->vcpu_args[i];
 
-		vcpu_args->vcpu = vcpus[i];
 		vcpu_args->vcpu_idx = i;
 
 		if (partition_vcpu_memory_access) {
@@ -98,7 +85,7 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
 			vcpu_args->gpa = pta->gpa;
 		}
 
-		vcpu_args_set(vcpus[i], 1, i);
+		vcpu_args_set(vm->vcpus[i], 1, i);
 
 		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
 			 i, vcpu_args->gpa, vcpu_args->gpa +
@@ -153,7 +140,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	 * effect as KVM allows aliasing HVAs in meslots.
 	 */
 	vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
-				    perf_test_guest_code, vcpus);
+				    perf_test_guest_code, NULL);
 
 	pta->vm = vm;
 
@@ -201,12 +188,12 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	/* Do mapping for the demand paging memory slot */
 	virt_map(vm, guest_test_virt_mem, pta->gpa, guest_num_pages);
 
-	perf_test_setup_vcpus(vm, nr_vcpus, vcpus, vcpu_memory_bytes,
+	perf_test_setup_vcpus(vm, nr_vcpus, vcpu_memory_bytes,
 			      partition_vcpu_memory_access);
 
 	if (pta->nested) {
 		pr_info("Configuring vCPUs to run in L2 (nested).\n");
-		perf_test_setup_nested(vm, nr_vcpus, vcpus);
+		perf_test_setup_nested(vm);
 	}
 
 	ucall_init(vm, NULL);
@@ -219,6 +206,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 
 void perf_test_destroy_vm(struct kvm_vm *vm)
 {
+	vm_vcpu_threads_join(vm);
+	pr_info("All vCPU threads joined\n");
+
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
@@ -234,7 +224,7 @@ uint64_t __weak perf_test_nested_pages(int nr_vcpus)
 	return 0;
 }
 
-void __weak perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu **vcpus)
+void __weak perf_test_setup_nested(struct kvm_vm *vm)
 {
 	pr_info("%s() not support on this architecture, skipping.\n", __func__);
 	exit(KSFT_SKIP);
@@ -242,9 +232,11 @@ void __weak perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_v
 
 static void *vcpu_thread_main(void *data)
 {
-	struct vcpu_thread *vcpu = data;
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
+	struct vcpu_thread_data *thread_data =
+		(struct vcpu_thread_data *)vcpu->private_data;
 
-	WRITE_ONCE(vcpu->running, true);
+	WRITE_ONCE(thread_data->running, true);
 
 	/*
 	 * Wait for all vCPU threads to be up and running before calling the test-
@@ -255,40 +247,30 @@ static void *vcpu_thread_main(void *data)
 	while (!READ_ONCE(all_vcpu_threads_running))
 		;
 
-	vcpu_thread_fn(&perf_test_args.vcpu_args[vcpu->vcpu_idx]);
+	vcpu_thread_fn(vcpu);
 
 	return NULL;
 }
 
-void perf_test_start_vcpu_threads(int nr_vcpus,
-				  void (*vcpu_fn)(struct perf_test_vcpu_args *))
+void perf_test_start_vcpu_threads(struct kvm_vm *vm,
+				  void (*vcpu_fn)(struct kvm_vcpu *))
 {
 	int i;
+	struct kvm_vcpu *vcpu;
+	struct vcpu_thread_data *thread_data;
 
 	vcpu_thread_fn = vcpu_fn;
 	WRITE_ONCE(all_vcpu_threads_running, false);
 
-	for (i = 0; i < nr_vcpus; i++) {
-		struct vcpu_thread *vcpu = &vcpu_threads[i];
-
-		vcpu->vcpu_idx = i;
-		WRITE_ONCE(vcpu->running, false);
+	/* thread_data->running already false-initialized on allocation */
+	vm_vcpu_threads_create(vm, vcpu_thread_main,
+				sizeof(struct vcpu_thread_data));
 
-		pthread_create(&vcpu->thread, NULL, vcpu_thread_main, vcpu);
-	}
-
-	for (i = 0; i < nr_vcpus; i++) {
-		while (!READ_ONCE(vcpu_threads[i].running))
+	vm_iterate_over_vcpus(vm, vcpu, i) {
+		thread_data = (struct vcpu_thread_data *)vcpu->private_data;
+		while (!READ_ONCE(thread_data->running))
 			;
 	}
 
 	WRITE_ONCE(all_vcpu_threads_running, true);
 }
-
-void perf_test_join_vcpu_threads(int nr_vcpus)
-{
-	int i;
-
-	for (i = 0; i < nr_vcpus; i++)
-		pthread_join(vcpu_threads[i].thread, NULL);
-}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c
index 0f344a7c89c4..8c4c87df5b8d 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c
@@ -77,16 +77,17 @@ void perf_test_setup_ept(struct vmx_pages *vmx, struct kvm_vm *vm)
 	nested_identity_map_1g(vmx, vm, start, end - start);
 }
 
-void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[])
+void perf_test_setup_nested(struct kvm_vm *vm)
 {
 	struct vmx_pages *vmx, *vmx0 = NULL;
 	struct kvm_regs regs;
 	vm_vaddr_t vmx_gva;
 	int vcpu_id;
+	struct kvm_vcpu *vcpu;
 
 	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
 
-	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
+	vm_iterate_over_vcpus(vm, vcpu, vcpu_id) {
 		vmx = vcpu_alloc_vmx(vm, &vmx_gva);
 
 		if (vcpu_id == 0) {
@@ -103,9 +104,9 @@ void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vc
 		 * Override the vCPU to run perf_test_l1_guest_code() which will
 		 * bounce it into L2 before calling perf_test_guest_code().
 		 */
-		vcpu_regs_get(vcpus[vcpu_id], &regs);
+		vcpu_regs_get(vcpu, &regs);
 		regs.rip = (unsigned long) perf_test_l1_guest_code;
-		vcpu_regs_set(vcpus[vcpu_id], &regs);
-		vcpu_args_set(vcpus[vcpu_id], 2, vmx_gva, vcpu_id);
+		vcpu_regs_set(vcpu, &regs);
+		vcpu_args_set(vcpu, 2, vmx_gva, vcpu->id);
 	}
 }
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index bb1d17a1171b..d41d2b989a91 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -16,7 +16,6 @@
 #include <asm/unistd.h>
 #include <time.h>
 #include <poll.h>
-#include <pthread.h>
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
 #include <linux/userfaultfd.h>
@@ -36,9 +35,8 @@ static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
 
 static bool run_vcpus = true;
 
-static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
+static void vcpu_worker(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *vcpu = vcpu_args->vcpu;
 	struct kvm_run *run;
 	int ret;
 
@@ -103,7 +101,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	pr_info("Finished creating vCPUs\n");
 
-	perf_test_start_vcpu_threads(nr_vcpus, vcpu_worker);
+	perf_test_start_vcpu_threads(vm, vcpu_worker);
 
 	pr_info("Started all vCPUs\n");
 
@@ -112,9 +110,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	run_vcpus = false;
 
-	perf_test_join_vcpu_threads(nr_vcpus);
-	pr_info("All vCPU threads joined\n");
-
 	perf_test_destroy_vm(vm);
 }
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v1 14/18] KVM: selftest/memslot_perf_test: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (12 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 13/18] KVM: selftests/perf_test_util: vcpu related code consolidation Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 15/18] KVM: selftests/vgic_init: " Wei Wang
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the vcpu and vcpu_thread fields from the vm_data struct and reuse
the ones in the kvm_vm struct. Rename vm_data to vcpu_thread_data and make
it the vcpu thread's private data. Also use the helper functions to
create and join the vcpu thread.
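
The resulting setup/teardown flow is roughly (sketch, simplified from
prepare_vm()/launch_vm()/test_execute() below):

  vm = __vm_create_with_one_vcpu(&vcpu, mempages, guest_code);
  vm_vcpu_threads_private_data_alloc(vm, sizeof(struct vcpu_thread_data));
  data = (struct vcpu_thread_data *)vcpu->private_data;
  /* ... fill data->nslots, data->hva_slots, etc. and add memslots ... */
  vm_vcpu_threads_create(vm, vcpu_worker, 0);   /* launch_vm() */
  /* ... run the test loop ... */
  vm_vcpu_threads_join(vm);                     /* replaces wait_guest_exit() */
  vcpu_thread_data_free(data);
  kvm_vm_free(vm);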

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../testing/selftests/kvm/memslot_perf_test.c | 137 +++++++++---------
 1 file changed, 71 insertions(+), 66 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 44995446d942..bf37d1bb8d50 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -6,7 +6,6 @@
  *
  * Basic guest setup / host vCPU thread code lifted from set_memory_region_test.
  */
-#include <pthread.h>
 #include <sched.h>
 #include <semaphore.h>
 #include <stdatomic.h>
@@ -86,10 +85,7 @@ static_assert(MEM_TEST_MOVE_SIZE <= MEM_TEST_SIZE,
 #define MEM_TEST_VAL_1 0x1122334455667788
 #define MEM_TEST_VAL_2 0x99AABBCCDDEEFF00
 
-struct vm_data {
-	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
-	pthread_t vcpu_thread;
+struct vcpu_thread_data {
 	uint32_t nslots;
 	uint64_t npages;
 	uint64_t pages_per_slot;
@@ -126,7 +122,7 @@ static bool verbose;
 			pr_info(__VA_ARGS__);	\
 	} while (0)
 
-static void check_mmio_access(struct vm_data *data, struct kvm_run *run)
+static void check_mmio_access(struct vcpu_thread_data *data, struct kvm_run *run)
 {
 	TEST_ASSERT(data->mmio_ok, "Unexpected mmio exit");
 	TEST_ASSERT(run->mmio.is_write, "Unexpected mmio read");
@@ -140,8 +136,9 @@ static void check_mmio_access(struct vm_data *data, struct kvm_run *run)
 
 static void *vcpu_worker(void *__data)
 {
-	struct vm_data *data = __data;
-	struct kvm_vcpu *vcpu = data->vcpu;
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)__data;
+	struct vcpu_thread_data *data =
+		(struct vcpu_thread_data *)vcpu->private_data;
 	struct kvm_run *run = vcpu->run;
 	struct ucall uc;
 
@@ -187,7 +184,8 @@ static void wait_for_vcpu(void)
 		    "sem_timedwait() failed: %d\n", errno);
 }
 
-static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages)
+static void *vm_gpa2hva(struct vcpu_thread_data *data,
+			  uint64_t gpa, uint64_t *rempages)
 {
 	uint64_t gpage, pgoffs;
 	uint32_t slot, slotoffs;
@@ -220,31 +218,19 @@ static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages)
 	return (uint8_t *)base + slotoffs * 4096 + pgoffs;
 }
 
-static uint64_t vm_slot2gpa(struct vm_data *data, uint32_t slot)
+static uint64_t vm_slot2gpa(struct vcpu_thread_data *data, uint32_t slot)
 {
 	TEST_ASSERT(slot < data->nslots, "Too high slot number");
 
 	return MEM_GPA + slot * data->pages_per_slot * 4096;
 }
 
-static struct vm_data *alloc_vm(void)
-{
-	struct vm_data *data;
-
-	data = malloc(sizeof(*data));
-	TEST_ASSERT(data, "malloc(vmdata) failed");
-
-	data->vm = NULL;
-	data->vcpu = NULL;
-	data->hva_slots = NULL;
-
-	return data;
-}
-
-static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
-		       void *guest_code, uint64_t mempages,
-		       struct timespec *slot_runtime)
+static bool prepare_vm(struct kvm_vm **out_vm, int nslots, uint64_t *maxslots,
+	void *guest_code, uint64_t mempages,  struct timespec *slot_runtime)
 {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vcpu_thread_data *data;
 	uint32_t max_mem_slots;
 	uint64_t rempages;
 	uint64_t guest_addr;
@@ -263,6 +249,11 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 
 	TEST_ASSERT(mempages > 1,
 		    "Can't test without any memory");
+	vm = __vm_create_with_one_vcpu(&vcpu, mempages, guest_code);
+	*out_vm = vm;
+	vm_vcpu_threads_private_data_alloc(vm,
+					sizeof(struct vcpu_thread_data));
+	data = (struct vcpu_thread_data *)vcpu->private_data;
 
 	data->npages = mempages;
 	data->nslots = max_mem_slots - 1;
@@ -276,8 +267,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	data->hva_slots = malloc(sizeof(*data->hva_slots) * data->nslots);
 	TEST_ASSERT(data->hva_slots, "malloc() fail");
 
-	data->vm = __vm_create_with_one_vcpu(&data->vcpu, mempages, guest_code);
-	ucall_init(data->vm, NULL);
+	ucall_init(vm, NULL);
 
 	pr_info_v("Adding slots 1..%i, each slot with %"PRIu64" pages + %"PRIu64" extra pages last\n",
 		max_mem_slots - 1, data->pages_per_slot, rempages);
@@ -290,7 +280,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 		if (slot == max_mem_slots - 1)
 			npages += rempages;
 
-		vm_userspace_mem_region_add(data->vm, VM_MEM_SRC_ANONYMOUS,
+		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
 					    guest_addr, slot, npages,
 					    0);
 		guest_addr += npages * 4096;
@@ -305,18 +295,18 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 		if (slot == max_mem_slots - 2)
 			npages += rempages;
 
-		gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr,
+		gpa = vm_phy_pages_alloc(vm, npages, guest_addr,
 					 slot + 1);
 		TEST_ASSERT(gpa == guest_addr,
 			    "vm_phy_pages_alloc() failed\n");
 
-		data->hva_slots[slot] = addr_gpa2hva(data->vm, guest_addr);
+		data->hva_slots[slot] = addr_gpa2hva(vm, guest_addr);
 		memset(data->hva_slots[slot], 0, npages * 4096);
 
 		guest_addr += npages * 4096;
 	}
 
-	virt_map(data->vm, MEM_GPA, MEM_GPA, mempages);
+	virt_map(vm, MEM_GPA, MEM_GPA, mempages);
 
 	sync = (typeof(sync))vm_gpa2hva(data, MEM_SYNC_GPA, NULL);
 	atomic_init(&sync->start_flag, false);
@@ -328,26 +318,22 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	return true;
 }
 
-static void launch_vm(struct vm_data *data)
+static void launch_vm(struct kvm_vm *vm)
 {
 	pr_info_v("Launching the test VM\n");
 
-	pthread_create(&data->vcpu_thread, NULL, vcpu_worker, data);
+	vm_vcpu_threads_create(vm, vcpu_worker, 0);
 
 	/* Ensure the guest thread is spun up. */
 	wait_for_vcpu();
 }
 
-static void free_vm(struct vm_data *data)
+static void vcpu_thread_data_free(struct vcpu_thread_data *data)
 {
-	kvm_vm_free(data->vm);
-	free(data->hva_slots);
-	free(data);
-}
-
-static void wait_guest_exit(struct vm_data *data)
-{
-	pthread_join(data->vcpu_thread, NULL);
+	if (data) {
+		free(data->hva_slots);
+		free(data);
+	}
 }
 
 static void let_guest_run(struct sync_area *sync)
@@ -535,7 +521,7 @@ static void guest_code_test_memslot_rw(void)
 	GUEST_DONE();
 }
 
-static bool test_memslot_move_prepare(struct vm_data *data,
+static bool test_memslot_move_prepare(struct vcpu_thread_data *data,
 				      struct sync_area *sync,
 				      uint64_t *maxslots, bool isactive)
 {
@@ -565,31 +551,33 @@ static bool test_memslot_move_prepare(struct vm_data *data,
 	return true;
 }
 
-static bool test_memslot_move_prepare_active(struct vm_data *data,
+static bool test_memslot_move_prepare_active(struct vcpu_thread_data *data,
 					     struct sync_area *sync,
 					     uint64_t *maxslots)
 {
 	return test_memslot_move_prepare(data, sync, maxslots, true);
 }
 
-static bool test_memslot_move_prepare_inactive(struct vm_data *data,
+static bool test_memslot_move_prepare_inactive(struct vcpu_thread_data *data,
 					       struct sync_area *sync,
 					       uint64_t *maxslots)
 {
 	return test_memslot_move_prepare(data, sync, maxslots, false);
 }
 
-static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync)
+static void test_memslot_move_loop(struct kvm_vcpu *vcpu, struct sync_area *sync)
 {
 	uint64_t movesrcgpa;
+	struct vcpu_thread_data *data =
+			(struct vcpu_thread_data *)vcpu->private_data;
 
 	movesrcgpa = vm_slot2gpa(data, data->nslots - 1);
-	vm_mem_region_move(data->vm, data->nslots - 1 + 1,
+	vm_mem_region_move(vcpu->vm, data->nslots - 1 + 1,
 			   MEM_TEST_MOVE_GPA_DEST);
-	vm_mem_region_move(data->vm, data->nslots - 1 + 1, movesrcgpa);
+	vm_mem_region_move(vcpu->vm, data->nslots - 1 + 1, movesrcgpa);
 }
 
-static void test_memslot_do_unmap(struct vm_data *data,
+static void test_memslot_do_unmap(struct vcpu_thread_data *data,
 				  uint64_t offsp, uint64_t count)
 {
 	uint64_t gpa, ctr;
@@ -613,7 +601,7 @@ static void test_memslot_do_unmap(struct vm_data *data,
 		    "madvise(MADV_DONTNEED) should exactly cover all of the requested area");
 }
 
-static void test_memslot_map_unmap_check(struct vm_data *data,
+static void test_memslot_map_unmap_check(struct vcpu_thread_data *data,
 					 uint64_t offsp, uint64_t valexp)
 {
 	uint64_t gpa;
@@ -630,8 +618,11 @@ static void test_memslot_map_unmap_check(struct vm_data *data,
 	*val = 0;
 }
 
-static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync)
+static void test_memslot_map_loop(struct kvm_vcpu *vcpu,
+				  struct sync_area *sync)
 {
+	struct vcpu_thread_data *data =
+				(struct vcpu_thread_data *)vcpu->private_data;
 	/*
 	 * Unmap the second half of the test area while guest writes to (maps)
 	 * the first half.
@@ -670,7 +661,7 @@ static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync)
 				     MEM_TEST_VAL_2);
 }
 
-static void test_memslot_unmap_loop_common(struct vm_data *data,
+static void test_memslot_unmap_loop_common(struct vcpu_thread_data *data,
 					   struct sync_area *sync,
 					   uint64_t chunk)
 {
@@ -697,21 +688,30 @@ static void test_memslot_unmap_loop_common(struct vm_data *data,
 		test_memslot_do_unmap(data, ctr, chunk);
 }
 
-static void test_memslot_unmap_loop(struct vm_data *data,
+static void test_memslot_unmap_loop(struct kvm_vcpu *vcpu,
 				    struct sync_area *sync)
 {
+	struct vcpu_thread_data *data =
+			(struct vcpu_thread_data *)vcpu->private_data;
+
 	test_memslot_unmap_loop_common(data, sync, 1);
 }
 
-static void test_memslot_unmap_loop_chunked(struct vm_data *data,
+static void test_memslot_unmap_loop_chunked(struct kvm_vcpu *vcpu,
 					    struct sync_area *sync)
 {
+	struct vcpu_thread_data *data =
+			(struct vcpu_thread_data *)vcpu->private_data;
+
 	test_memslot_unmap_loop_common(data, sync, MEM_TEST_UNMAP_CHUNK_PAGES);
 }
 
-static void test_memslot_rw_loop(struct vm_data *data, struct sync_area *sync)
+static void test_memslot_rw_loop(struct kvm_vcpu *vcpu,
+				 struct sync_area *sync)
 {
 	uint64_t gptr;
+	struct vcpu_thread_data *data =
+			(struct vcpu_thread_data *)vcpu->private_data;
 
 	for (gptr = MEM_TEST_GPA + 4096 / 2;
 	     gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += 4096)
@@ -737,9 +737,9 @@ struct test_data {
 	const char *name;
 	uint64_t mem_size;
 	void (*guest_code)(void);
-	bool (*prepare)(struct vm_data *data, struct sync_area *sync,
+	bool (*prepare)(struct vcpu_thread_data *data, struct sync_area *sync,
 			uint64_t *maxslots);
-	void (*loop)(struct vm_data *data, struct sync_area *sync);
+	void (*loop)(struct kvm_vcpu *vcpu, struct sync_area *sync);
 };
 
 static bool test_execute(int nslots, uint64_t *maxslots,
@@ -750,18 +750,22 @@ static bool test_execute(int nslots, uint64_t *maxslots,
 			 struct timespec *guest_runtime)
 {
 	uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
-	struct vm_data *data;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vcpu_thread_data *data = NULL;
 	struct sync_area *sync;
 	struct timespec tstart;
 	bool ret = true;
 
-	data = alloc_vm();
-	if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
+	if (!prepare_vm(&vm, nslots, maxslots, tdata->guest_code,
 			mem_size, slot_runtime)) {
 		ret = false;
 		goto exit_free;
 	}
 
+	vcpu = vm->vcpus[0];
+	data = (struct vcpu_thread_data *)vcpu->private_data;
+
 	sync = (typeof(sync))vm_gpa2hva(data, MEM_SYNC_GPA, NULL);
 
 	if (tdata->prepare &&
@@ -770,7 +774,7 @@ static bool test_execute(int nslots, uint64_t *maxslots,
 		goto exit_free;
 	}
 
-	launch_vm(data);
+	launch_vm(vm);
 
 	clock_gettime(CLOCK_MONOTONIC, &tstart);
 	let_guest_run(sync);
@@ -780,16 +784,17 @@ static bool test_execute(int nslots, uint64_t *maxslots,
 		if (guest_runtime->tv_sec >= maxtime)
 			break;
 
-		tdata->loop(data, sync);
+		tdata->loop(vcpu, sync);
 
 		(*nloops)++;
 	}
 
 	make_guest_exit(sync);
-	wait_guest_exit(data);
+	vm_vcpu_threads_join(vm);
 
 exit_free:
-	free_vm(data);
+	vcpu_thread_data_free(data);
+	kvm_vm_free(vm);
 
 	return ret;
 }
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v1 15/18] KVM: selftests/vgic_init: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (13 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 14/18] KVM: selftest/memslot_perf_test: " Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 16/18] KVM: selftest/arch_timer: " Wei Wang
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the unnecessary definition of the local *vcpus[] array and reuse the
one from kvm_vm. Simplify the related function interfaces by removing the
*vcpus[] parameter.
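
For example, after this change a caller reaches the vcpus through the VM
itself (sketch):

  v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS);
  ...
  ret = run_vcpu(v.vm->vcpus[3]);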

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../testing/selftests/kvm/aarch64/vgic_init.c | 35 +++++++++----------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vgic_init.c b/tools/testing/selftests/kvm/aarch64/vgic_init.c
index 9c131d977a1b..e24130a49581 100644
--- a/tools/testing/selftests/kvm/aarch64/vgic_init.c
+++ b/tools/testing/selftests/kvm/aarch64/vgic_init.c
@@ -74,13 +74,12 @@ static int run_vcpu(struct kvm_vcpu *vcpu)
 }
 
 static struct vm_gic vm_gic_create_with_vcpus(uint32_t gic_dev_type,
-					      uint32_t nr_vcpus,
-					      struct kvm_vcpu *vcpus[])
+					      uint32_t nr_vcpus)
 {
 	struct vm_gic v;
 
 	v.gic_dev_type = gic_dev_type;
-	v.vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
+	v.vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
 	v.gic_fd = kvm_create_device(v.vm, gic_dev_type);
 
 	return v;
@@ -324,14 +323,15 @@ static void subtest_v3_redist_regions(struct vm_gic *v)
  */
 static void test_vgic_then_vcpus(uint32_t gic_dev_type)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
+	struct kvm_vcpu **vcpus;
 	struct vm_gic v;
 	int ret, i;
 
-	v = vm_gic_create_with_vcpus(gic_dev_type, 1, vcpus);
+	v = vm_gic_create_with_vcpus(gic_dev_type, 1);
 
 	subtest_dist_rdist(&v);
 
+	vcpus = v.vm->vcpus;
 	/* Add the rest of the VCPUs */
 	for (i = 1; i < NR_VCPUS; ++i)
 		vcpus[i] = vm_vcpu_add(v.vm, i, guest_code);
@@ -345,15 +345,14 @@ static void test_vgic_then_vcpus(uint32_t gic_dev_type)
 /* All the VCPUs are created before the VGIC KVM device gets initialized */
 static void test_vcpus_then_vgic(uint32_t gic_dev_type)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
 	struct vm_gic v;
 	int ret;
 
-	v = vm_gic_create_with_vcpus(gic_dev_type, NR_VCPUS, vcpus);
+	v = vm_gic_create_with_vcpus(gic_dev_type, NR_VCPUS);
 
 	subtest_dist_rdist(&v);
 
-	ret = run_vcpu(vcpus[3]);
+	ret = run_vcpu(v.vm->vcpus[3]);
 	TEST_ASSERT(ret == -EINVAL, "dist/rdist overlap detected on 1st vcpu run");
 
 	vm_gic_destroy(&v);
@@ -361,24 +360,25 @@ static void test_vcpus_then_vgic(uint32_t gic_dev_type)
 
 static void test_v3_new_redist_regions(void)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
+	struct kvm_vcpu **vcpus;
 	void *dummy = NULL;
 	struct vm_gic v;
 	uint64_t addr;
 	int ret;
 
-	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS, vcpus);
+	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS);
 	subtest_v3_redist_regions(&v);
 	kvm_device_attr_set(v.gic_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
 			    KVM_DEV_ARM_VGIC_CTRL_INIT, NULL);
 
+	vcpus = v.vm->vcpus;
 	ret = run_vcpu(vcpus[3]);
 	TEST_ASSERT(ret == -ENXIO, "running without sufficient number of rdists");
 	vm_gic_destroy(&v);
 
 	/* step2 */
 
-	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS, vcpus);
+	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS);
 	subtest_v3_redist_regions(&v);
 
 	addr = REDIST_REGION_ATTR_ADDR(1, 0x280000, 0, 2);
@@ -392,7 +392,7 @@ static void test_v3_new_redist_regions(void)
 
 	/* step 3 */
 
-	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS, vcpus);
+	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS);
 	subtest_v3_redist_regions(&v);
 
 	ret = __kvm_device_attr_set(v.gic_fd, KVM_DEV_ARM_VGIC_GRP_ADDR,
@@ -559,12 +559,12 @@ static void test_v3_last_bit_single_rdist(void)
 /* Uses the legacy REDIST region API. */
 static void test_v3_redist_ipa_range_check_at_vcpu_run(void)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
+	struct kvm_vcpu **vcpus;
 	struct vm_gic v;
 	int ret, i;
 	uint64_t addr;
 
-	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, 1, vcpus);
+	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, 1);
 
 	/* Set space for 3 redists, we have 1 vcpu, so this succeeds. */
 	addr = max_phys_size - (3 * 2 * 0x10000);
@@ -575,6 +575,7 @@ static void test_v3_redist_ipa_range_check_at_vcpu_run(void)
 	kvm_device_attr_set(v.gic_fd, KVM_DEV_ARM_VGIC_GRP_ADDR,
 			    KVM_VGIC_V3_ADDR_TYPE_DIST, &addr);
 
+	vcpus = v.vm->vcpus;
 	/* Add the rest of the VCPUs */
 	for (i = 1; i < NR_VCPUS; ++i)
 		vcpus[i] = vm_vcpu_add(v.vm, i, guest_code);
@@ -592,12 +593,11 @@ static void test_v3_redist_ipa_range_check_at_vcpu_run(void)
 
 static void test_v3_its_region(void)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
 	struct vm_gic v;
 	uint64_t addr;
 	int its_fd, ret;
 
-	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS, vcpus);
+	v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS);
 	its_fd = kvm_create_device(v.vm, KVM_DEV_TYPE_ARM_VGIC_ITS);
 
 	addr = 0x401000;
@@ -637,12 +637,11 @@ static void test_v3_its_region(void)
  */
 int test_kvm_device(uint32_t gic_dev_type)
 {
-	struct kvm_vcpu *vcpus[NR_VCPUS];
 	struct vm_gic v;
 	uint32_t other;
 	int ret;
 
-	v.vm = vm_create_with_vcpus(NR_VCPUS, guest_code, vcpus);
+	v.vm = vm_create_with_vcpus(NR_VCPUS, guest_code, NULL);
 
 	/* try to create a non existing KVM device */
 	ret = __kvm_test_create_device(v.vm, 0);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v1 16/18] KVM: selftest/arch_timer: vcpu related code consolidation
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (14 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 15/18] KVM: selftests/vgic_init: " Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 17/18] KVM: selftests: remove the *vcpu[] input from __vm_create_with_vcpus Wei Wang
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

Remove the globally defined vcpu and pthread arrays, and reuse the ones
from kvm_vm and kvm_vcpu. Also use the helper functions to create named
vcpu threads.
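
For example, the per-vcpu pthread is now reachable through the vcpu struct
and the thread create/join loops collapse into the helpers (sketch taken
from this patch):

  vm_vcpu_threads_create(vm, test_vcpu_run, 0);
  ...
  ret = pthread_setaffinity_np(vcpu->thread, sizeof(cpuset), &cpuset);
  ...
  vm_vcpu_threads_join(vm);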

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 .../selftests/kvm/aarch64/arch_timer.c        | 42 +++++++------------
 1 file changed, 16 insertions(+), 26 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
index 574eb73f0e90..7c1057e8fca7 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
@@ -23,7 +23,6 @@
 #define _GNU_SOURCE
 
 #include <stdlib.h>
-#include <pthread.h>
 #include <linux/kvm.h>
 #include <linux/sizes.h>
 #include <linux/bitmap.h>
@@ -76,8 +75,6 @@ struct test_vcpu_shared_data {
 	uint64_t xcnt;
 };
 
-static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
-static pthread_t pt_vcpu_run[KVM_MAX_VCPUS];
 static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS];
 
 static int vtimer_irq, ptimer_irq;
@@ -212,7 +209,7 @@ static void guest_code(void)
 
 static void *test_vcpu_run(void *arg)
 {
-	unsigned int vcpu_idx = (unsigned long)arg;
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)arg;
+	unsigned int vcpu_idx = vcpu->id;
 	struct ucall uc;
-	struct kvm_vcpu *vcpu = vcpus[vcpu_idx];
 	struct kvm_vm *vm = vcpu->vm;
@@ -263,18 +261,19 @@ static uint32_t test_get_pcpu(void)
 	return pcpu;
 }
 
-static int test_migrate_vcpu(unsigned int vcpu_idx)
+static int test_migrate_vcpu(struct kvm_vcpu *vcpu)
 {
 	int ret;
 	cpu_set_t cpuset;
 	uint32_t new_pcpu = test_get_pcpu();
+	uint32_t vcpu_idx = vcpu->id;
 
 	CPU_ZERO(&cpuset);
 	CPU_SET(new_pcpu, &cpuset);
 
 	pr_debug("Migrating vCPU: %u to pCPU: %u\n", vcpu_idx, new_pcpu);
 
-	ret = pthread_setaffinity_np(pt_vcpu_run[vcpu_idx],
+	ret = pthread_setaffinity_np(vcpu->thread,
 				     sizeof(cpuset), &cpuset);
 
 	/* Allow the error where the vCPU thread is already finished */
@@ -287,6 +286,7 @@ static int test_migrate_vcpu(unsigned int vcpu_idx)
 
 static void *test_vcpu_migration(void *arg)
 {
+	struct kvm_vm *vm = (struct kvm_vm *)arg;
 	unsigned int i, n_done;
 	bool vcpu_done;
 
@@ -303,7 +303,7 @@ static void *test_vcpu_migration(void *arg)
 				continue;
 			}
 
-			test_migrate_vcpu(i);
+			test_migrate_vcpu(vm->vcpus[i]);
 		}
 	} while (test_args.nr_vcpus != n_done);
 
@@ -314,31 +314,21 @@ static void test_run(struct kvm_vm *vm)
 {
 	pthread_t pt_vcpu_migration;
 	unsigned int i;
-	int ret;
 
 	pthread_mutex_init(&vcpu_done_map_lock, NULL);
 	vcpu_done_map = bitmap_zalloc(test_args.nr_vcpus);
 	TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n");
 
-	for (i = 0; i < (unsigned long)test_args.nr_vcpus; i++) {
-		ret = pthread_create(&pt_vcpu_run[i], NULL, test_vcpu_run,
-				     (void *)(unsigned long)i);
-		TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i);
-	}
+	vm_vcpu_threads_create(vm, test_vcpu_run, 0);
 
 	/* Spawn a thread to control the vCPU migrations */
 	if (test_args.migration_freq_ms) {
 		srand(time(NULL));
-
-		ret = pthread_create(&pt_vcpu_migration, NULL,
-					test_vcpu_migration, NULL);
-		TEST_ASSERT(!ret, "Failed to create the migration pthread\n");
+		pthread_create_with_name(&pt_vcpu_migration,
+				 test_vcpu_migration, vm, "control-thread");
 	}
 
-
-	for (i = 0; i < test_args.nr_vcpus; i++)
-		pthread_join(pt_vcpu_run[i], NULL);
-
+	vm_vcpu_threads_join(vm);
 	if (test_args.migration_freq_ms)
 		pthread_join(pt_vcpu_migration, NULL);
 
@@ -364,16 +354,16 @@ static int gic_fd;
 static struct kvm_vm *test_vm_create(void)
 {
 	struct kvm_vm *vm;
-	unsigned int i;
-	int nr_vcpus = test_args.nr_vcpus;
+	struct kvm_vcpu *vcpu;
+	int i, nr_vcpus = test_args.nr_vcpus;
 
-	vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
+	vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
 
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
-	for (i = 0; i < nr_vcpus; i++)
-		vcpu_init_descriptor_tables(vcpus[i]);
+	vm_iterate_over_vcpus(vm, vcpu, i)
+		vcpu_init_descriptor_tables(vcpu);
 
 	ucall_init(vm, NULL);
 	test_init_timer_irq(vm);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v1 17/18] KVM: selftests: remove the *vcpu[] input from __vm_create_with_vcpus
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (15 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 16/18] KVM: selftest/arch_timer: " Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-24 11:34 ` [PATCH v1 18/18] KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS Wei Wang
  2022-10-26 21:22 ` [PATCH v1 00/18] KVM selftests code consolidation and cleanup David Matlack
  18 siblings, 0 replies; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

kvm_vm now includes an array of pointers (i.e. *vcpus[]) to the added
vcpus, so there is no need for users to supply their own *vcpus[] array.
Remove the *vcpus[] parameter from __vm_create_with_vcpus and the related
callers.
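
For example (sketch of a typical caller before and after this change):

  /* before */
  struct kvm_vcpu *vcpus[NR_VCPUS];

  vm = vm_create_with_vcpus(NR_VCPUS, guest_code, vcpus);
  run_vcpu(vcpus[0]);

  /* after: the added vcpus are reachable via vm->vcpus[] */
  vm = vm_create_with_vcpus(NR_VCPUS, guest_code);
  run_vcpu(vm->vcpus[0]);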

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 tools/testing/selftests/kvm/aarch64/arch_timer.c      | 2 +-
 tools/testing/selftests/kvm/aarch64/vgic_init.c       | 4 ++--
 tools/testing/selftests/kvm/hardware_disable_test.c   | 2 +-
 tools/testing/selftests/kvm/include/kvm_util_base.h   | 7 +++----
 tools/testing/selftests/kvm/kvm_page_table_test.c     | 2 +-
 tools/testing/selftests/kvm/lib/kvm_util.c            | 4 ++--
 tools/testing/selftests/kvm/lib/perf_test_util.c      | 2 +-
 tools/testing/selftests/kvm/max_guest_memory_test.c   | 2 +-
 tools/testing/selftests/kvm/steal_time.c              | 2 +-
 tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c | 2 +-
 10 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
index 7c1057e8fca7..1373e41ef365 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
@@ -357,7 +357,7 @@ static struct kvm_vm *test_vm_create(void)
 	struct kvm_vcpu *vcpu;
 	int i, nr_vcpus = test_args.nr_vcpus;
 
-	vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
+	vm = vm_create_with_vcpus(nr_vcpus, guest_code);
 
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
diff --git a/tools/testing/selftests/kvm/aarch64/vgic_init.c b/tools/testing/selftests/kvm/aarch64/vgic_init.c
index e24130a49581..b5defd94dd2e 100644
--- a/tools/testing/selftests/kvm/aarch64/vgic_init.c
+++ b/tools/testing/selftests/kvm/aarch64/vgic_init.c
@@ -79,7 +79,7 @@ static struct vm_gic vm_gic_create_with_vcpus(uint32_t gic_dev_type,
 	struct vm_gic v;
 
 	v.gic_dev_type = gic_dev_type;
-	v.vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
+	v.vm = vm_create_with_vcpus(nr_vcpus, guest_code);
 	v.gic_fd = kvm_create_device(v.vm, gic_dev_type);
 
 	return v;
@@ -641,7 +641,7 @@ int test_kvm_device(uint32_t gic_dev_type)
 	uint32_t other;
 	int ret;
 
-	v.vm = vm_create_with_vcpus(NR_VCPUS, guest_code, NULL);
+	v.vm = vm_create_with_vcpus(NR_VCPUS, guest_code);
 
 	/* try to create a non existing KVM device */
 	ret = __kvm_test_create_device(v.vm, 0);
diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c b/tools/testing/selftests/kvm/hardware_disable_test.c
index c212d34a6714..f16e07485380 100644
--- a/tools/testing/selftests/kvm/hardware_disable_test.c
+++ b/tools/testing/selftests/kvm/hardware_disable_test.c
@@ -76,7 +76,7 @@ static void run_test(uint32_t run)
 	r = pthread_attr_setaffinity_np(&attr, sizeof(cpu_set_t), &cpu_set);
 	TEST_ASSERT(!r, "%s: failed to set affinity, r = %d", __func__, r);
 
-	vm = vm_create_with_vcpus(VCPU_NUM, guest_code, NULL);
+	vm = vm_create_with_vcpus(VCPU_NUM, guest_code);
 
 	pr_debug("%s: [%d] start vcpus\n", __func__, run);
 	vm_iterate_over_vcpus(vm, vcpu, i) {
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index d0d6aaec0098..5a5b7210cf7c 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -663,14 +663,13 @@ static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
 
 struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
 				      uint64_t extra_mem_pages,
-				      void *guest_code, struct kvm_vcpu *vcpus[]);
+				      void *guest_code);
 
 static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
-						  void *guest_code,
-						  struct kvm_vcpu *vcpus[])
+						  void *guest_code)
 {
 	return __vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, 0,
-				      guest_code, vcpus);
+				      guest_code);
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index 4c3df48d80fc..1a9dd189c225 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -253,7 +253,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
 	/* Create a VM with enough guest pages */
 	guest_num_pages = test_mem_size / guest_page_size;
 	vm = __vm_create_with_vcpus(mode, nr_vcpus, guest_num_pages,
-				    guest_code, NULL);
+				    guest_code);
 
 	/* Align down GPA of the testing memslot */
 	if (!p->phys_offset)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ba3e774087fb..69dad4fa9ca1 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -366,7 +366,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
  */
 struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
 				      uint64_t extra_mem_pages,
-				      void *guest_code, struct kvm_vcpu *vcpus[])
+				      void *guest_code)
 {
 	struct kvm_vm *vm;
 	int i;
@@ -386,7 +386,7 @@ struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
 	struct kvm_vm *vm;
 
 	vm = __vm_create_with_vcpus(VM_MODE_DEFAULT, 1, extra_mem_pages,
-				    guest_code, NULL);
+				    guest_code);
 
 	*vcpu = vm->vcpus[0];
 	return vm;
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 94c0f496c9c1..3103c9f40e76 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -140,7 +140,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	 * effect as KVM allows aliasing HVAs in meslots.
 	 */
 	vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
-				    perf_test_guest_code, NULL);
+				    perf_test_guest_code);
 
 	pta->vm = vm;
 
diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
index 2d9c83e36e65..7480730caeeb 100644
--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
+++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
@@ -205,7 +205,7 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	vm = vm_create_with_vcpus(nr_vcpus, guest_code, NULL);
+	vm = vm_create_with_vcpus(nr_vcpus, guest_code);
 
 	max_gpa = vm->max_gfn << vm->page_shift;
 	TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb ");
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 857ed2c073fc..530b08e64846 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -261,7 +261,7 @@ int main(int ac, char **av)
 	pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
 
 	/* Create a VM and an identity mapped memslot for the steal time structure */
-	vm = vm_create_with_vcpus(NR_VCPUS, guest_code, NULL);
+	vm = vm_create_with_vcpus(NR_VCPUS, guest_code);
 	vcpus = vm->vcpus;
 	gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE * NR_VCPUS);
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, 1, gpages, 0);
diff --git a/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c b/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
index 34a8beef42b6..3c050ffe5edb 100644
--- a/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
+++ b/tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
@@ -91,7 +91,7 @@ int main(int argc, char *argv[])
 {
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_VM_TSC_CONTROL));
 
-	vm = vm_create_with_vcpus(NR_TEST_VCPUS, guest_code, NULL);
+	vm = vm_create_with_vcpus(NR_TEST_VCPUS, guest_code);
 	vm_ioctl(vm, KVM_SET_TSC_KHZ, (void *) TEST_TSC_KHZ);
 
 	pthread_spin_init(&create_lock, PTHREAD_PROCESS_PRIVATE);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v1 18/18] KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (16 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 17/18] KVM: selftests: remove the *vcpu[] input from __vm_create_with_vcpus Wei Wang
@ 2022-10-24 11:34 ` Wei Wang
  2022-10-27  0:22   ` Sean Christopherson
  2022-10-26 21:22 ` [PATCH v1 00/18] KVM selftests code consolidation and cleanup David Matlack
  18 siblings, 1 reply; 43+ messages in thread
From: Wei Wang @ 2022-10-24 11:34 UTC (permalink / raw)
  To: seanjc, pbonzini
  Cc: dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel, Wei Wang

If the maximum vcpu number reported by KVM is larger than the one supported
by the userspace selftests (KVM_MAX_VCPUS), cap the number used by the test
at KVM_MAX_VCPUS.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 tools/testing/selftests/kvm/kvm_create_max_vcpus.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
index 31b3cb24b9a7..bbdb371e21ed 100644
--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
@@ -50,6 +50,13 @@ int main(int argc, char *argv[])
 
 	pr_info("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id);
 	pr_info("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus);
+	pr_info("selftests KVM_MAX_VCPUS: %d\n", KVM_MAX_VCPUS);
+
+	if (kvm_max_vcpu_id > KVM_MAX_VCPUS)
+		kvm_max_vcpu_id = KVM_MAX_VCPUS;
+
+	if (kvm_max_vcpus > KVM_MAX_VCPUS)
+		kvm_max_vcpus = KVM_MAX_VCPUS;
 
 	/*
 	 * Check that we're allowed to open nr_fds_wanted file descriptors and
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
                   ` (17 preceding siblings ...)
  2022-10-24 11:34 ` [PATCH v1 18/18] KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS Wei Wang
@ 2022-10-26 21:22 ` David Matlack
  2022-10-27 12:18   ` Wang, Wei W
  18 siblings, 1 reply; 43+ messages in thread
From: David Matlack @ 2022-10-26 21:22 UTC (permalink / raw)
  To: Wei Wang; +Cc: seanjc, pbonzini, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022 at 07:34:27PM +0800, Wei Wang wrote:
> This patch series intends to improve kvm selftests with better code
> consolidation using the helper functions to perform vcpu and thread
> related operations.
> 
> In general, several aspects are improved:
> 1) The current users allocate an array of vcpu pointers to the vcpus that
>    are added to a vm, and an array of vcpu threads. This isn't necessary
>    as kvm_vm already maintains a list of added vcpus. This series changes
>    the list of vcpus in the kvm_vm struct to a vcpu array for users to
>    work with and removes each user's own allocation of such vcpu arrays.
>    Aslo add the vcpu thread to the kvm_vcpu struct, so that users don't
>    need to explicitly allocate a thread array to manage vcpu threads on
>    their own.
> 2) Change the users to use the helper functions provided by this series
>    with the following enhancements:
>    - Many users working with pthread_create/join forgot to check if
>      error on returning. The helper functions have handled thoses inside,
>      so users don't need to handle them by themselves;
>    - The vcpu threads created via the helper functions are named in
>      "vcpu-##id" format. Naming the threads facilitates debugging,
>      performance tuning, runtime pining etc;
>    - helper functions named with "vm_vcpu_threads_" iterates over all the
>      vcpus that have been added to the vm. Users don't need a explicit
>      loop to go through the added cpus by themselves.
> 3) kvm_vcpu is used as the interface parameter to the vcpu thread's
>    start routine, and the user specific data is made to be the private
>    data in kvm_vcpu. This can simplify the user specific data structures,
>    as kvm_vcpu has already included the required info for the thread, for
>    example, in patch 13, the cpu_idx field from "struct vcpu_thread"
>    is a duplicate of vcpu->id.

I haven't dug too much into the actual code yet, but I have some high
level feedback based on a quick look through the series:

 - Use the format "KVM: selftests: <Description>" for the shortlog.

 - Make the shortlog more specific. "vcpu related code consolidation" is
   vague.

 - Do not introduce bugs and then fix them in subsequent commits.  This
   breaks bisection. For example, kvm_page_table_test is broken at "KVM:
   selftests/kvm_util: use vm->vcpus[] when create vm with vcpus" and
   then fixed by "KVM: selftests/kvm_page_table_test: vcpu related code
   consolidation".

 - Try to limit each patch to one logical change. This is somewhat more
   art than science, but the basic idea is to avoid changing too much at
   once so that the code is easier to review and bisect. For example,
   "KVM: selftests/perf_test_util: vcpu related code consolidation" has
   a list of 6 different changes being made in the commit description.
   This is a sure sign this commit should be broken up. The same applies
   to many of the other patches. This will also make it easier to come
   up with more specific shortlogs.

> 
> The changes have been tested on an SPR server. Patch 15 and 16 haven't
> been tested due to the lack of an ARM environment currently.
> 
> Wei Wang (18):
>   KVM: selftests/kvm_util: use array of pointers to maintain vcpus in
>     kvm_vm
>   KVM: selftests/kvm_util: use vm->vcpus[] when create vm with vcpus
>   KVM: selftests/kvm_util: helper functions for vcpus and threads
>   KVM: selftests/kvm_page_table_test: vcpu related code consolidation
>   KVM: selftests/hardware_disable_test: code consolidation and cleanup
>   KVM: selftests/dirty_log_test: vcpu related code consolidation
>   KVM: selftests/max_guest_memory_test: vcpu related code consolidation
>   KVM: selftests/set_memory_region_test: vcpu related code consolidation
>   KVM: selftests/steal_time: vcpu related code consolidation and cleanup
>   KVM: selftests/tsc_scaling_sync: vcpu related code consolidation
>   KVM: selftest/xapic_ipi_test: vcpu related code consolidation
>   KVM: selftests/rseq_test: name the migration thread and some cleanup
>   KVM: selftests/perf_test_util: vcpu related code consolidation
>   KVM: selftest/memslot_perf_test: vcpu related code consolidation
>   KVM: selftests/vgic_init: vcpu related code consolidation
>   KVM: selftest/arch_timer: vcpu related code consolidation
>   KVM: selftests: remove the *vcpu[] input from __vm_create_with_vcpus
>   KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS
> 
>  .../selftests/kvm/aarch64/arch_timer.c        |  42 ++--
>  .../testing/selftests/kvm/aarch64/vgic_init.c |  35 ++-
>  .../selftests/kvm/access_tracking_perf_test.c |  18 +-
>  .../selftests/kvm/demand_paging_test.c        |   9 +-
>  .../selftests/kvm/dirty_log_perf_test.c       |  11 +-
>  tools/testing/selftests/kvm/dirty_log_test.c  |  16 +-
>  .../selftests/kvm/hardware_disable_test.c     |  56 ++---
>  .../testing/selftests/kvm/include/kvm_util.h  |  24 ++
>  .../selftests/kvm/include/kvm_util_base.h     |  12 +-
>  .../selftests/kvm/include/perf_test_util.h    |   9 +-
>  .../selftests/kvm/kvm_create_max_vcpus.c      |   7 +
>  .../selftests/kvm/kvm_page_table_test.c       |  16 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 217 +++++++++++++++---
>  .../selftests/kvm/lib/perf_test_util.c        |  68 ++----
>  .../selftests/kvm/lib/x86_64/perf_test_util.c |  11 +-
>  tools/testing/selftests/kvm/lib/x86_64/vmx.c  |   2 +-
>  .../selftests/kvm/max_guest_memory_test.c     |  53 ++---
>  .../kvm/memslot_modification_stress_test.c    |   9 +-
>  .../testing/selftests/kvm/memslot_perf_test.c | 137 +++++------
>  tools/testing/selftests/kvm/rseq_test.c       |  10 +-
>  .../selftests/kvm/set_memory_region_test.c    |  16 +-
>  tools/testing/selftests/kvm/steal_time.c      |  15 +-
>  .../selftests/kvm/x86_64/tsc_scaling_sync.c   |  25 +-
>  .../selftests/kvm/x86_64/xapic_ipi_test.c     |  54 ++---
>  24 files changed, 476 insertions(+), 396 deletions(-)
> 
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
  2022-10-24 11:34 ` [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm Wei Wang
@ 2022-10-26 23:47   ` Sean Christopherson
  2022-10-27 12:28     ` Wang, Wei W
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-26 23:47 UTC (permalink / raw)
  To: Wei Wang
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022, Wei Wang wrote:
> Each vcpu has an id associated with it and is intrinsically faster
> and easier to be referenced by indexing into an array with "vcpu->id",
> compared to using a list of vcpus in the current implementation. Change
> the vcpu list to an array of vcpu pointers. Users then don't need to
> allocate such a vcpu array on their own, and instead, they can reuse
> the one maintained in kvm_vm.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  .../testing/selftests/kvm/include/kvm_util.h  |  4 +++
>  .../selftests/kvm/include/kvm_util_base.h     |  3 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 34 ++++++-------------
>  tools/testing/selftests/kvm/lib/x86_64/vmx.c  |  2 +-
>  4 files changed, 17 insertions(+), 26 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index c9286811a4cb..5d5c8968fb06 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -10,4 +10,8 @@
>  #include "kvm_util_base.h"
>  #include "ucall_common.h"
>  
> +#define vm_iterate_over_vcpus(vm, vcpu, i)				\

vm_for_each_vcpu() is more aligned with existing KVM terminology.

> +	for (i = 0, vcpu = vm->vcpus[0];				\
> +		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])

I hate pointer arithmetic more than most people, but in this case it avoids the
need to pass in 'i', which in turn cuts down on boilerplate and churn.
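
E.g. something like this (completely untested sketch) drops the 'i' param by
walking the array with a pointer:

#define vm_for_each_vcpu(vm, vcpu)					\
	for (struct kvm_vcpu **__vcpup = (vm)->vcpus;			\
	     __vcpup < &(vm)->vcpus[KVM_MAX_VCPUS] && (vcpu = *__vcpup);\
	     __vcpup++)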

>  #endif /* SELFTEST_KVM_UTIL_H */
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index e42a09cd24a0..c90a9609b853 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -45,7 +45,6 @@ struct userspace_mem_region {
>  };
>  
>  struct kvm_vcpu {
> -	struct list_head list;
>  	uint32_t id;
>  	int fd;
>  	struct kvm_vm *vm;
> @@ -75,7 +74,6 @@ struct kvm_vm {
>  	unsigned int pa_bits;
>  	unsigned int va_bits;
>  	uint64_t max_gfn;
> -	struct list_head vcpus;
>  	struct userspace_mem_regions regions;
>  	struct sparsebit *vpages_valid;
>  	struct sparsebit *vpages_mapped;
> @@ -92,6 +90,7 @@ struct kvm_vm {
>  	int stats_fd;
>  	struct kvm_stats_header stats_header;
>  	struct kvm_stats_desc *stats_desc;
> +	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];

We can dynamically allocate the array without too much trouble, though I'm not
sure it's worth shaving the few KiB of memory.  For __vm_create(), the number of
vCPUs is known when the VM is created.  For vm_create_barebones(), we could do
the simple thing of allocating KVM_MAX_VCPU.

> @@ -534,6 +533,10 @@ __weak void vcpu_arch_free(struct kvm_vcpu *vcpu)
>  static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
>  {
>  	int ret;
> +	uint32_t vcpu_id = vcpu->id;
> +
> +	TEST_ASSERT(!!vm->vcpus[vcpu_id], "vCPU%d wasn't added\n", vcpu_id);

This is unnecessary; there's one caller and it's iterating over the array of vCPUs.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads
  2022-10-24 11:34 ` [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads Wei Wang
@ 2022-10-27  0:09   ` Sean Christopherson
  2022-10-27 14:02     ` Wang, Wei W
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27  0:09 UTC (permalink / raw)
  To: Wei Wang
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022, Wei Wang wrote:
> @@ -14,4 +15,23 @@
>  	for (i = 0, vcpu = vm->vcpus[0];				\
>  		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
>  
> +void __pthread_create_with_name(pthread_t *thread, const pthread_attr_t *attr,

Can these return pthread_t instead of taking them as a param and have a "void"
return?  I'm pretty sure pthread_t is an integer type in most implementations,
i.e. can be cheaply copied by value.

> +			void *(*start_routine)(void *), void *arg, char *name);

Add a typedef for the payload, both to make it harder to screw up, and to make the
code more readable.  Does pthread really not provide one already?
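
E.g. something along these lines (rough sketch, the typedef name is just a
placeholder):

typedef void *(*thread_fn_t)(void *);

pthread_t __pthread_create_with_name(const pthread_attr_t *attr,
				     thread_fn_t start_routine, void *arg,
				     const char *name);
pthread_t pthread_create_with_name(thread_fn_t start_routine, void *arg,
				   const char *name);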

> +void pthread_create_with_name(pthread_t *thread,
> +			void *(*start_routine)(void *), void *arg, char *name);

Align params, e.g.

void pthread_create_with_name(pthread_t *thread, void *(*start_routine)(void *),
			      void *arg, char *name);

Poking out past the 80 char soft limit is much preferable to arbitrary indentation.
Please fix this in all patches.
  
>  struct userspace_mem_regions {
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 1f69f5ca8356..ba3e774087fb 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -2006,3 +2006,175 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
>  		break;
>  	}
>  }
> +
> +/*
> + * Create a named thread with user's attribute
> + *
> + * Input Args:
> + *   attr - the attribute of the thread to create
> + *   start_routine - the routine to run in the thread context
> + *   arg - the argument passed to start_routine
> + *   name - the name of the thread
> + *
> + * Output Args:
> + *   thread - the thread to be created
> + *
> + * Create a thread with a user specified name.
> + */

Please don't add function comments, we're opportunistically removing the existing
boilerplate ones as we go.  Most of the comments, like this one, add very little
value as it's pretty obvious what the function does and what the params are.

> +void __pthread_create_with_name(pthread_t *thread, const pthread_attr_t *attr,
> +			void *(*start_routine)(void *), void *arg, char *name)
> +{
> +	int r;
> +
> +	r = pthread_create(thread, NULL, start_routine, arg);
> +	TEST_ASSERT(!r, "thread(%s) creation failed, r = %d", name, r);

Assuming 'r' is an errno, pretty print its name with strerror().
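
E.g. (sketch, pthread_create() returns the error number directly):

	TEST_ASSERT(!r, "thread(%s) creation failed: %s", name, strerror(r));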

> +	r = pthread_setname_np(*thread, name);
> +	TEST_ASSERT(!r, "thread(%s) setting name failed, r = %d", name, r);

Same here.

> +}

...

> +void vm_vcpu_threads_create(struct kvm_vm *vm,
> +		void *(*start_routine)(void *), uint32_t private_data_size)

I vote (very strongly) to not deal with allocating private data.  The private data
isn't strictly related to threads, and the vast majority of callers don't need
private data, i.e. the param is dead weight in most cases.

And unless I'm missing something, it's trivial to move to a separate helper,
though honestly even that seems like overkill.

Wait, looking further, it already is a separate helper...  Forcing a bunch of
callers to specify '0' just to eliminate one function call in a handful of cases
is not a good tradeoff.

> +void vm_vcpu_threads_private_data_alloc(struct kvm_vm *vm, uint32_t data_size)

As above, this isn't strictly related to threads, e.g. vm_alloc_vcpu_private_data()

> +{
> +	struct kvm_vcpu *vcpu;
> +	int i;
> +
> +	vm_iterate_over_vcpus(vm, vcpu, i) {
> +		vcpu->private_data = calloc(1, data_size);
> +		TEST_ASSERT(vcpu->private_data, "%s: failed", __func__);
> +	}
> +}
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup
  2022-10-24 11:34 ` [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup Wei Wang
@ 2022-10-27  0:16   ` Sean Christopherson
  2022-10-27 14:14     ` Wang, Wei W
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27  0:16 UTC (permalink / raw)
  To: Wei Wang
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022, Wei Wang wrote:
> Remove the unnecessary definition of the threads[] array, and use the
> helper functions to create and join threads.
> 
> Also move setting of the thread affinity to __vcpu_thread_create using
> attribute. This avoids an explicit step to set it after thread
> creation.

As David called out, please do this in a separate patch (one logical change per
patch).

> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  .../selftests/kvm/hardware_disable_test.c     | 56 +++++--------------
>  1 file changed, 15 insertions(+), 41 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c b/tools/testing/selftests/kvm/hardware_disable_test.c
> index f5d59b9934f1..c212d34a6714 100644
> --- a/tools/testing/selftests/kvm/hardware_disable_test.c
> +++ b/tools/testing/selftests/kvm/hardware_disable_test.c
> @@ -8,7 +8,6 @@
>  #define _GNU_SOURCE
>  
>  #include <fcntl.h>
> -#include <pthread.h>
>  #include <semaphore.h>
>  #include <stdint.h>
>  #include <stdlib.h>
> @@ -59,64 +58,39 @@ static void *sleeping_thread(void *arg)
>  	pthread_exit(NULL);
>  }
>  
> -static inline void check_create_thread(pthread_t *thread, pthread_attr_t *attr,
> -				       void *(*f)(void *), void *arg)
> -{
> -	int r;
> -
> -	r = pthread_create(thread, attr, f, arg);
> -	TEST_ASSERT(r == 0, "%s: failed to create thread", __func__);
> -}
> -
> -static inline void check_set_affinity(pthread_t thread, cpu_set_t *cpu_set)
> -{
> -	int r;
> -
> -	r = pthread_setaffinity_np(thread, sizeof(cpu_set_t), cpu_set);
> -	TEST_ASSERT(r == 0, "%s: failed set affinity", __func__);
> -}
> -
> -static inline void check_join(pthread_t thread, void **retval)
> -{
> -	int r;
> -
> -	r = pthread_join(thread, retval);
> -	TEST_ASSERT(r == 0, "%s: failed to join thread", __func__);
> -}
> -
>  static void run_test(uint32_t run)
>  {
>  	struct kvm_vcpu *vcpu;
>  	struct kvm_vm *vm;
>  	cpu_set_t cpu_set;
> -	pthread_t threads[VCPU_NUM];
>  	pthread_t throw_away;
> -	void *b;
> +	pthread_attr_t attr;
>  	uint32_t i, j;
> +	int r;
>  
>  	CPU_ZERO(&cpu_set);
>  	for (i = 0; i < VCPU_NUM; i++)
>  		CPU_SET(i, &cpu_set);

Uh, what is this test doing?  I assume the intent is to avoid spamming all pCPUs
in the system, but I don't get the benefit of doing so.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 09/18] KVM: selftests/steal_time: vcpu related code consolidation and cleanup
  2022-10-24 11:34 ` [PATCH v1 09/18] KVM: selftests/steal_time: vcpu related code consolidation and cleanup Wei Wang
@ 2022-10-27  0:17   ` Sean Christopherson
  0 siblings, 0 replies; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27  0:17 UTC (permalink / raw)
  To: Wei Wang
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022, Wei Wang wrote:
> Remove the unnecessary definition of array of the vcpu pointers and
> re-use the one from the kvm_vm struct (i.e. vm->vcpus[]). Use the helper
> function to create the time stealing thread with name.

One thing per patch.

> Also add a check of the pthread_join return value.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  tools/testing/selftests/kvm/steal_time.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
> index db8967f1a17b..857ed2c073fc 100644
> --- a/tools/testing/selftests/kvm/steal_time.c
> +++ b/tools/testing/selftests/kvm/steal_time.c
> @@ -8,7 +8,6 @@
>  #include <stdio.h>
>  #include <time.h>
>  #include <sched.h>
> -#include <pthread.h>
>  #include <linux/kernel.h>
>  #include <asm/kvm.h>
>  #include <asm/kvm_para.h>
> @@ -241,7 +240,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
>  
>  int main(int ac, char **av)
>  {
> -	struct kvm_vcpu *vcpus[NR_VCPUS];
> +	struct kvm_vcpu **vcpus;
>  	struct kvm_vm *vm;
>  	pthread_attr_t attr;
>  	pthread_t thread;
> @@ -250,7 +249,7 @@ int main(int ac, char **av)
>  	long stolen_time;
>  	long run_delay;
>  	bool verbose;
> -	int i;
> +	int i, r;
>  
>  	verbose = ac > 1 && (!strncmp(av[1], "-v", 3) || !strncmp(av[1], "--verbose", 10));
>  
> @@ -262,7 +261,8 @@ int main(int ac, char **av)
>  	pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
>  
>  	/* Create a VM and an identity mapped memslot for the steal time structure */
> -	vm = vm_create_with_vcpus(NR_VCPUS, guest_code, vcpus);
> +	vm = vm_create_with_vcpus(NR_VCPUS, guest_code, NULL);
> +	vcpus = vm->vcpus;

Just use vm->vcpus directly and drop the local variable, it's not that much more
churn and this looks quite odd.
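
I.e. something like (sketch):

	for (i = 0; i < NR_VCPUS; i++)
		run_vcpu(vm->vcpus[i]);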

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 12/18] KVM: selftests/rseq_test: name the migration thread and some cleanup
  2022-10-24 11:34 ` [PATCH v1 12/18] KVM: selftests/rseq_test: name the migration thread and some cleanup Wei Wang
@ 2022-10-27  0:18   ` Sean Christopherson
  0 siblings, 0 replies; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27  0:18 UTC (permalink / raw)
  To: Wei Wang
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022, Wei Wang wrote:
> @@ -272,7 +271,8 @@ int main(int argc, char *argv[])
>  	TEST_ASSERT(i > (NR_TASK_MIGRATIONS / 2),
>  		    "Only performed %d KVM_RUNs, task stalled too much?\n", i);
>  
> -	pthread_join(migration_thread, NULL);
> +	r = pthread_join(migration_thread, NULL);
> +	TEST_ASSERT(r == 0, "failed to join the migration thread");

!r is the preferred style.
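
I.e.

	r = pthread_join(migration_thread, NULL);
	TEST_ASSERT(!r, "failed to join the migration thread");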

>  
>  	kvm_vm_free(vm);
>  
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 18/18] KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS
  2022-10-24 11:34 ` [PATCH v1 18/18] KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS Wei Wang
@ 2022-10-27  0:22   ` Sean Christopherson
  0 siblings, 0 replies; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27  0:22 UTC (permalink / raw)
  To: Wei Wang
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Mon, Oct 24, 2022, Wei Wang wrote:
> If the KVM side max vcpu number is larger than the one supported by the
> userspace selftests, adjust the max number.

No, this defeats the purpose of the test.  "create max vCPUs" means "create the
maximum number allowed by KVM", not "create the arbitrary max supported by selftests".

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-26 21:22 ` [PATCH v1 00/18] KVM selftests code consolidation and cleanup David Matlack
@ 2022-10-27 12:18   ` Wang, Wei W
  2022-10-27 15:44     ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: Wang, Wei W @ 2022-10-27 12:18 UTC (permalink / raw)
  To: David Matlack
  Cc: Christopherson,,
	Sean, pbonzini, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thursday, October 27, 2022 5:23 AM, David Matlack:
> On Mon, Oct 24, 2022 at 07:34:27PM +0800, Wei Wang wrote:
> > This patch series intends to improve kvm selftests with better code
> > consolidation using the helper functions to perform vcpu and thread
> > related operations.
> >
> > In general, several aspects are improved:
> > 1) The current users allocate an array of vcpu pointers to the vcpus that
> >    are added to a vm, and an array of vcpu threads. This isn't necessary
> >    as kvm_vm already maintains a list of added vcpus. This series changes
> >    the list of vcpus in the kvm_vm struct to a vcpu array for users to
> >    work with and removes each user's own allocation of such vcpu arrays.
> >    Aslo add the vcpu thread to the kvm_vcpu struct, so that users don't
> >    need to explicitly allocate a thread array to manage vcpu threads on
> >    their own.
> > 2) Change the users to use the helper functions provided by this series
> >    with the following enhancements:
> >    - Many users working with pthread_create/join forgot to check if
> >      error on returning. The helper functions have handled thoses inside,
> >      so users don't need to handle them by themselves;
> >    - The vcpu threads created via the helper functions are named in
> >      "vcpu-##id" format. Naming the threads facilitates debugging,
> >      performance tuning, runtime pining etc;
> >    - helper functions named with "vm_vcpu_threads_" iterates over all the
> >      vcpus that have been added to the vm. Users don't need a explicit
> >      loop to go through the added cpus by themselves.
> > 3) kvm_vcpu is used as the interface parameter to the vcpu thread's
> >    start routine, and the user specific data is made to be the private
> >    data in kvm_vcpu. This can simplify the user specific data structures,
> >    as kvm_vcpu has already included the required info for the thread, for
> >    example, in patch 13, the cpu_idx field from "struct vcpu_thread"
> >    is a duplicate of vcpu->id.
> 
> I haven't dug too much into the actual code yet, but I have some high level
> feedback based on a quick look through the series:
> 
>  - Use the format "KVM: selftests: <Decsription>" for the shortlog.

I know it's not common to see so far, but I'm curious: is this the required format?
I didn't find where it's documented. If it's indeed a requirement, probably we
also need to enhance checkpatch.pl to detect this.

If it's not required, I think it is more obvious to have /sub_field in the title,
e.g. selftests/hardware_disable_test, to outline which specific part of
selftests the patch is changing (the selftests are growing larger, with many
tests that are independent of each other).

> 
>  - Make the shortlog more specific. "vcpu related code consolidation" is
>    vague.
> 
>  - Do not introduce bugs and then fix them in subsequent commits.  This
>    breaks bisection. For example, kvm_page_table_test is broken at "KVM:
>    selftests/kvm_util: use vm->vcpus[] when create vm with vcpus" and
>    then fixed by "KVM: selftests/kvm_page_table_test: vcpu related code
>    consolidation".
> 
>  - Try to limit each patch to one logical change. This is somewhat more
>    art than science, but the basic idea is to avoid changing too much at
>    once so that the code is easier to review and bisect. For example,
>    "KVM: selftests/perf_test_util: vcpu related code consolidation" has
>    a list of 6 different changes being made in the commit description.
>    This is a sure sign this commit should be broken up. The same applies
>    to many of the other patches. This will also make it easier to come
>    up with more specific shortlogs.

OK, will re-organize the patches.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
  2022-10-26 23:47   ` Sean Christopherson
@ 2022-10-27 12:28     ` Wang, Wei W
  2022-10-27 15:27       ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: Wang, Wei W @ 2022-10-27 12:28 UTC (permalink / raw)
  To: Christopherson,, Sean
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thursday, October 27, 2022 7:48 AM, Sean Christopherson wrote:
> > +	for (i = 0, vcpu = vm->vcpus[0];				\
> > +		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
> 
> I hate pointer arithmetic more than most people, but in this case it avoids the
> need to pass in 'i', which in turn cuts down on boilerplate and churn.

Hmm, indeed, this can be improved, how about this one:

+#define vm_iterate_over_vcpus(vm, vcpu)                         \
+       for (vcpu = vm->vcpus[0]; vcpu; vcpu = vm->vcpus[vcpu->id + 1]) \


> 
> >  #endif /* SELFTEST_KVM_UTIL_H */
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > index e42a09cd24a0..c90a9609b853 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > @@ -45,7 +45,6 @@ struct userspace_mem_region {  };
> >
> >  struct kvm_vcpu {
> > -	struct list_head list;
> >  	uint32_t id;
> >  	int fd;
> >  	struct kvm_vm *vm;
> > @@ -75,7 +74,6 @@ struct kvm_vm {
> >  	unsigned int pa_bits;
> >  	unsigned int va_bits;
> >  	uint64_t max_gfn;
> > -	struct list_head vcpus;
> >  	struct userspace_mem_regions regions;
> >  	struct sparsebit *vpages_valid;
> >  	struct sparsebit *vpages_mapped;
> > @@ -92,6 +90,7 @@ struct kvm_vm {
> >  	int stats_fd;
> >  	struct kvm_stats_header stats_header;
> >  	struct kvm_stats_desc *stats_desc;
> > +	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
> 
> We can dynamically allocate the array without too much trouble, though I'm not
> sure it's worth shaving the few KiB of memory.  For __vm_create(), the number
> of vCPUs is known when the VM is created.  For vm_create_barebones(), we
> could do the simple thing of allocating KVM_MAX_VCPU.

The issue with dynamic allocation is that some users start with __vm_create(nr_vcpus), and later
could add more cpus with vm_vcpu_add (e.g. x86_64/xapic_ipi_test.c). To support this we may
need to re-allocate the array for later vm_vcpu_add(), and also need to add nr_vcpus to indicate
the size.
It's userspace memory, and not a problem to use a bit larger virtual memory (the memory isn't really
allocated until we have that many vcpus touching the array entries), I think.

> 
> > @@ -534,6 +533,10 @@ __weak void vcpu_arch_free(struct kvm_vcpu *vcpu)
> > static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vcpu *vcpu)  {
> >  	int ret;
> > +	uint32_t vcpu_id = vcpu->id;
> > +
> > +	TEST_ASSERT(!!vm->vcpus[vcpu_id], "vCPU%d wasn't added\n", vcpu_id);
> 
> This is unecessary, there's one caller and it's iterating over the array of vCPUs.

That's right, thanks.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads
  2022-10-27  0:09   ` Sean Christopherson
@ 2022-10-27 14:02     ` Wang, Wei W
  2022-10-27 14:54       ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: Wang, Wei W @ 2022-10-27 14:02 UTC (permalink / raw)
  To: Christopherson,, Sean
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thursday, October 27, 2022 8:10 AM, Sean Christopherson wrote:
> On Mon, Oct 24, 2022, Wei Wang wrote:
> > @@ -14,4 +15,23 @@
> >  	for (i = 0, vcpu = vm->vcpus[0];				\
> >  		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
> >
> > +void __pthread_create_with_name(pthread_t *thread, const
> > +pthread_attr_t *attr,
> 
> Can these return pthread_t instead of taking them as a param and have a "void"
> return?  I'm pretty sure pthread_t is an integer type in most implementations,
> i.e. can be cheaply copied by value.

Yes, sounds good.

> 
> > +			void *(*start_routine)(void *), void *arg, char *name);
> 
> Add a typedef for the payload, both to make it harder to screw up, and to make
> the code more readable.  Does pthread really not provide one already?

You meant a typedef for start_routine? I searched throughout pthread.h, and didn't find one.
Maybe we could create one here.

> > +void vm_vcpu_threads_create(struct kvm_vm *vm,
> > +		void *(*start_routine)(void *), uint32_t private_data_size)
> 
> I vote (very strongly) to not deal with allocating private data.  The private data
> isn't strictly related to threads, and the vast majority of callers don't need private
> data, i.e. the param is dead weight in most cases.
> 
> And unless I'm missing something, it's trivial to move to a separate helper,
> though honestly even that seems like overkill.
> 
> Wait, looking further, it already is a separate helper...  Forcing a bunch of
> callers to specify '0' just to eliminate one function call in a handful of cases is not
> a good tradeoff.

The intention was to do the allocation within one vm_for_each_vcpu()
iteration when possible. Just a micro-optimization, but no problem, we can keep
them separate if that looks better (simpler).

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup
  2022-10-27  0:16   ` Sean Christopherson
@ 2022-10-27 14:14     ` Wang, Wei W
  2022-10-27 18:03       ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: Wang, Wei W @ 2022-10-27 14:14 UTC (permalink / raw)
  To: Christopherson,, Sean
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm,
	linux-kernel, ikalvarado

On Thursday, October 27, 2022 8:16 AM, Sean Christopherson wrote:
> > diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c
> >  static void run_test(uint32_t run)
> >  {
> >  	struct kvm_vcpu *vcpu;
> >  	struct kvm_vm *vm;
> >  	cpu_set_t cpu_set;
> > -	pthread_t threads[VCPU_NUM];
> >  	pthread_t throw_away;
> > -	void *b;
> > +	pthread_attr_t attr;
> >  	uint32_t i, j;
> > +	int r;
> >
> >  	CPU_ZERO(&cpu_set);
> >  	for (i = 0; i < VCPU_NUM; i++)
> >  		CPU_SET(i, &cpu_set);
> 
> Uh, what is this test doing?  I assume the intent is to avoid spamming all
> pCPUs in the system, but I don't get the benefit of doing so.

IIUIC, it is to test whether the race condition between the 2 paths:
#1 kvm_arch_hardware_disable->drop_user_return_notifiers() and
#2 fire_user_return_notifiers->kvm_on_user_return
has been solved by disabling interrupts in kvm_on_user_return.

To stress the test, it creates a bunch of threads (continuously making syscalls
to trigger #2 above) to be scheduled on the same pCPU that runs a vCPU, and
then the VM is killed, which triggers #1 above.
The test forks 512 times, hoping there is a chance for #1 and #2 above to happen
at the same time without hitting an issue.

+ Ignacio to confirm if possible.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads
  2022-10-27 14:02     ` Wang, Wei W
@ 2022-10-27 14:54       ` Sean Christopherson
  0 siblings, 0 replies; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27 14:54 UTC (permalink / raw)
  To: Wang, Wei W
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 8:10 AM, Sean Christopherson wrote:
> > > +void vm_vcpu_threads_create(struct kvm_vm *vm,
> > > +		void *(*start_routine)(void *), uint32_t private_data_size)
> > 
> > I vote (very strongly) to not deal with allocating private data.  The private data
> > isn't strictly related to threads, and the vast majority of callers don't need private
> > data, i.e. the param is dead weight in most cases.
> > 
> > And unless I'm missing something, it's trivial to move to a separate helper,
> > though honestly even that seems like overkill.
> > 
> > Wait, looking further, it already is a separate helper...  Forcing a bunch of
> > callers to specify '0' just to eliminate one function call in a handful of cases is not
> > a good tradeoff.
> 
> The intention was to do the allocation within one vm_for_each_vcpu()
> iteration when possible. Just a micro-optimization, but no problem, we can keep
> them separate if that looks better (simpler).

Keep them separate, that level of optimization is not something that's ever going
to be noticeable.

I don't want to say that performance is an afterthought for KVM selftests, but in
common code it's definitely way down the list of priorities because even the most
naive implementation for things like configuring vCPUs is going to have a runtime
measured in milliseconds.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
  2022-10-27 12:28     ` Wang, Wei W
@ 2022-10-27 15:27       ` Sean Christopherson
  2022-10-28  2:13         ` Wang, Wei W
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27 15:27 UTC (permalink / raw)
  To: Wang, Wei W
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 7:48 AM, Sean Christopherson wrote:
> > > +	for (i = 0, vcpu = vm->vcpus[0];				\
> > > +		vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])
> > 
> > I hate pointer arithmetic more than most people, but in this case it avoids the
> > need to pass in 'i', which in turn cuts down on boilerplate and churn.
> 
> Hmm, indeed, this can be improved, how about this one:
> 
> +#define vm_iterate_over_vcpus(vm, vcpu)                         \
> +       for (vcpu = vm->vcpus[0]; vcpu; vcpu = vm->vcpus[vcpu->id + 1]) \

Needs to be bounded by the size of the array.
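
E.g. (untested sketch):

#define vm_iterate_over_vcpus(vm, vcpu)					\
	for (vcpu = vm->vcpus[0]; vcpu;					\
	     vcpu = (vcpu->id + 1 < KVM_MAX_VCPUS) ?			\
		    vm->vcpus[vcpu->id + 1] : NULL)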
 
> > >  #endif /* SELFTEST_KVM_UTIL_H */
> > > diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > index e42a09cd24a0..c90a9609b853 100644
> > > --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > > @@ -45,7 +45,6 @@ struct userspace_mem_region {  };
> > >
> > >  struct kvm_vcpu {
> > > -	struct list_head list;
> > >  	uint32_t id;
> > >  	int fd;
> > >  	struct kvm_vm *vm;
> > > @@ -75,7 +74,6 @@ struct kvm_vm {
> > >  	unsigned int pa_bits;
> > >  	unsigned int va_bits;
> > >  	uint64_t max_gfn;
> > > -	struct list_head vcpus;
> > >  	struct userspace_mem_regions regions;
> > >  	struct sparsebit *vpages_valid;
> > >  	struct sparsebit *vpages_mapped;
> > > @@ -92,6 +90,7 @@ struct kvm_vm {
> > >  	int stats_fd;
> > >  	struct kvm_stats_header stats_header;
> > >  	struct kvm_stats_desc *stats_desc;
> > > +	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
> > 
> > We can dynamically allocate the array without too much trouble, though I'm not
> > sure it's worth shaving the few KiB of memory.  For __vm_create(), the number
> > of vCPUs is known when the VM is created.  For vm_create_barebones(), we
> > could do the simple thing of allocating KVM_MAX_VCPU.
> 
> The issue with dynamic allocation is that some users start with
> __vm_create(nr_vcpus), and later could add more cpus with vm_vcpu_add (e.g.
> x86_64/xapic_ipi_test.c). To support this we may need to re-allocate the
> array for later vm_vcpu_add(), and also need to add nr_vcpus to indicate the
> size.

Hrm, right, the number of runnable CPUs isn't a hard upper bound.  Ideally it
would be, as the number of pages required for guest memory will fail to account
for the "extra" vcpus.  E.g. that test should really do vm_create(2) and then
manually add each vCPU.

> It's userspace memory, and not a problem to use a bit larger virtual memory
> (memory are not really allocated until we have that many vcpus to touch the
> array entries), I think.

Yeah, just allocate the max for now, though the array still needs to be dynamically
allocated based on the actual maximum number of vCPUs.  Oh, duh, we can do the
easy thing and just bump KVM_MAX_VCPUS to 1024 to match KVM.  And then assert that
kvm_check_cap(KVM_CAP_MAX_VCPUS) == KVM_MAX_VCPUS in kvm_create_max_vcpus.c.
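
E.g. (sketch, assuming kvm_check_cap() is usable there):

	TEST_ASSERT(kvm_check_cap(KVM_CAP_MAX_VCPUS) == KVM_MAX_VCPUS,
		    "KVM and selftests disagree on the max number of vCPUs");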

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-27 12:18   ` Wang, Wei W
@ 2022-10-27 15:44     ` Sean Christopherson
  2022-10-27 16:24       ` David Matlack
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27 15:44 UTC (permalink / raw)
  To: Wang, Wei W
  Cc: David Matlack, pbonzini, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 5:23 AM, David Matlack:
> > I haven't dug too much into the actual code yet, but I have some high level
> > feedback based on a quick look through the series:
> > 
> >  - Use the format "KVM: selftests: <Decsription>" for the shortlog.
> 
> I know it's not common to see so far, but curious is this the required format?

It's definitely the preferred format.

> I didn't find where it's documented.

Heh, for all shortlog scopes, the "documentation" is `git log --pretty=oneline` :-)

> If it's indeed a requirement, probably we also need to enhance checkpatch.pl
> to detect this.

I like the idea in theory, but that'd be a daunting task to set up, and quite the
maintenance nightmare.  There are probably thousands of file => scope mappings
throughout the kernel, with any number of exceptions and arbitrary rules.

> If it's not required, I think it is more obvious to have /sub_field in the title,
> e.g. selftests/hardware_disable_test, to outline which specific part of
> selftests the patch is changing. (the selftests are growing larger with many
> usages independent of each other).

I agree that "KVM: selftests:" is a rather large umbrella, but it's not obvious
that "KVM: selfetest/<test>" is unequivocally better, e.g. if someone is making a
change to x86_64/vmx_exception_with_invalid_guest_state.c, the scope will be

  KVM: selftests/x86_64/vmx_exception_with_invalid_guest_state:

or 

  KVM: selftests/vmx_exception_with_invalid_guest_state:

which doesn't leave a whole lot of room for an actual shortlog.

When reviewing selftests patches, I do find myself pausing sometimes to see exactly
what file/test is being modified, but in almost all cases a quick glance at the
diffstat provides the answer.  And on the flip side, having all selftests patches
exactly match "KVM: selftests:" makes it super easy to identify selftest changes,
i.e. it's mostly a wash overall in terms of efficiency (for me at least).

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-27 15:44     ` Sean Christopherson
@ 2022-10-27 16:24       ` David Matlack
  2022-10-27 18:27         ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: David Matlack @ 2022-10-27 16:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wang, Wei W, pbonzini, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thu, Oct 27, 2022 at 8:44 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Oct 27, 2022, Wang, Wei W wrote:
> > On Thursday, October 27, 2022 5:23 AM, David Matlack:
> > > I haven't dug too much into the actual code yet, but I have some high level
> > > feedback based on a quick look through the series:
> > >
> > >  - Use the format "KVM: selftests: <Decsription>" for the shortlog.
> >
> > I know it's not common to see so far, but curious is this the required format?
>
> It's definitely the preferred format.
>
> > I didn't find where it's documented.
>
> Heh, for all shortlog scopes, the "documentation" is `git log --pretty=oneline` :-)
>
> > If it's indeed a requirement, probably we also need to enhance checkpatch.pl
> > to detect this.
>
> I like the idea in theory, but that'd be a daunting task to set up, and quite the
> maintenance nightmare.  There are probably thousands of file => scope mappings
> throughout the kernel, with any number of exceptions and arbitrary rules.

I was thinking about proposing this in checkpatch.pl, or in some
KVM-specific check script. It seems like the following rule: If a
commit only modifies files in tools/testing/selftests/kvm/*, then
require the shortlog to match the regex "KVM: selftests: .*". That would
handle the vast majority of cases without affecting other subsystems.

Sean are you more concerned that if we start validating shortlogs in
checkpatch.pl then eventually it will get too out of hand? (i.e. not
so concerned with this specific case, but the general problem?)

Either way, we should at least document the preferred KVM shortlog
styles in Documentation/virtual/kvm/.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup
  2022-10-27 14:14     ` Wang, Wei W
@ 2022-10-27 18:03       ` Sean Christopherson
  2022-10-28  2:16         ` Wang, Wei W
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27 18:03 UTC (permalink / raw)
  To: Wang, Wei W
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm,
	linux-kernel, ikalvarado

On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 8:16 AM, Sean Christopherson wrote:
> > > diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c
> > >  static void run_test(uint32_t run)
> > >  {
> > >  	struct kvm_vcpu *vcpu;
> > >  	struct kvm_vm *vm;
> > >  	cpu_set_t cpu_set;
> > > -	pthread_t threads[VCPU_NUM];
> > >  	pthread_t throw_away;
> > > -	void *b;
> > > +	pthread_attr_t attr;
> > >  	uint32_t i, j;
> > > +	int r;
> > >
> > >  	CPU_ZERO(&cpu_set);
> > >  	for (i = 0; i < VCPU_NUM; i++)
> > >  		CPU_SET(i, &cpu_set);
> > 
> > Uh, what is this test doing?  I assume the intent is to avoid spamming all
> > pCPUs in the system, but I don't get the benefit of doing so.
> 
> IIUIC, it is to test if the condition race between the 2 paths:
> #1 kvm_arch_hardware_disable->drop_user_return_notifiers() and
> #2 fire_user_return_notifiers->kvm_on_user_return
> has been solved by disabling interrupts in kvm_on_user_return.
> 
> To stress the tests, it creates a bunch of threads (continuously making syscalls
> to trigger #2 above) to be scheduled on the same pCPU that runs a vCPU, and
> then VM is killed, which triggers #1 above. 
> They fork to test 512 times hoping there is chance #1 and #2 above can happen
> at the same time without an issue.

But why does it matter what pCPU a vCPU is running on?  Wouldn't the probability
of triggering a race between kvm_on_user_return() and hardware_disable() be
_higher_ if there are more pCPUs returning to userspace?

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-27 16:24       ` David Matlack
@ 2022-10-27 18:27         ` Sean Christopherson
  2022-10-28 12:41           ` Andrew Jones
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-27 18:27 UTC (permalink / raw)
  To: David Matlack
  Cc: Wang, Wei W, pbonzini, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thu, Oct 27, 2022, David Matlack wrote:
> On Thu, Oct 27, 2022 at 8:44 AM Sean Christopherson <seanjc@google.com> wrote:
> > I like the idea in theory, but that'd be a daunting task to set up, and quite the
> > maintenance nightmare.  There are probably thousands of file => scope mappings
> > throughout the kernel, with any number of exceptions and arbitrary rules.
> 
> I was thinking about proposing this in checkpatch.pl, or in some
> KVM-specific check script. It seems like the following rule: If a
> commit only modifies files in tools/testing/selftests/kvm/*, then
> requires the shortlog match the regex "KVM: selftests: .*". That would
> handle the vast majority of cases without affecting other subsystems.
> 
> Sean are you more concerned that if we start validating shortlogs in
> checkpatch.pl then eventually it will get too out of hand? (i.e. not
> so concerned with this specific case, but the general problem?)

Ya, the general problem.  Hardcoding anything KVM specific in checkpatch.pl isn't
going to fly.  The checkpatch maintainers most definitely don't want to take on
the burden of maintaining subsystem rules.  Letting one subsystem add custom rules
effectively opens the flood gates to all subsystems adding custom rules.  And from
a KVM perspective, I don't want to have to get an Acked-by from a checkpatch
maintainer just to tweak a KVM rule.

The only somewhat feasible approach I can think of would be to provide a generic
"language" for shortlog scope rules, and have checkpatch look for a well-known
file in relevant directories, e.g. add arch/x86/kvm/SCOPES or whatever.  But even
that is a non-trivial problem to solve, as it means coming up with a "language"
that has a reasonable chance of working for many subsystems without generating too
many false positives.

It's definitely doable, and likely not actually a maintenance nightmare (I wrote
that thinking of modifying a common rules file).  But it's still fairly daunting
as getting buy-in on something that affects the kernel at large tends to be easier
said than done.  Then again, I'm probably being pessimistic due to my sub-par
regex+scripting skills :-)

^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
  2022-10-27 15:27       ` Sean Christopherson
@ 2022-10-28  2:13         ` Wang, Wei W
  0 siblings, 0 replies; 43+ messages in thread
From: Wang, Wei W @ 2022-10-28  2:13 UTC (permalink / raw)
  To: Christopherson,, Sean
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm, linux-kernel

On Thursday, October 27, 2022 11:27 PM, Sean Christopherson wrote:
> Yeah, just allocate the max for now, though the array still needs to be
> dynamically allocated based on the actual maximum number of vCPUs.  Oh, duh,
> we can do the easy thing and just bump KVM_MAX_VCPUS to 1024 to match
> KVM.  And then assert that
> kvm_check_cap(KVM_CAP_MAX_VCPUS) == KVM_CAP_MAX_VCPUS in
> kvm_create_max_vcpus.c.

Right. I thought about the same thing, we should update KVM_MAX_VCPUS anyway.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* RE: [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup
  2022-10-27 18:03       ` Sean Christopherson
@ 2022-10-28  2:16         ` Wang, Wei W
  0 siblings, 0 replies; 43+ messages in thread
From: Wang, Wei W @ 2022-10-28  2:16 UTC (permalink / raw)
  To: Christopherson,, Sean
  Cc: pbonzini, dmatlack, vipinsh, ajones, eric.auger, kvm,
	linux-kernel, ikalvarado

On Friday, October 28, 2022 2:03 AM, Sean Christopherson wrote:
> But why does it matter what pCPU a vCPU is running on?  Wouldn't the
> probability of triggering a race between kvm_on_user_return() and
> hardware_disable() be _higher_ if there are more pCPUs returning to userspace?

I think the point there is that the vcpus and those syscall threads need to be on the
same pCPUs. Linux by default has its own load balancing for threads to run on. If the
vcpus and syscall threads are scattered on different pCPUs, kvm_on_user_return
would be less likely to be triggered when the syscall threads return to userspace.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-27 18:27         ` Sean Christopherson
@ 2022-10-28 12:41           ` Andrew Jones
  2022-10-28 15:49             ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: Andrew Jones @ 2022-10-28 12:41 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: David Matlack, Wang, Wei W, pbonzini, vipinsh, ajones,
	eric.auger, kvm, linux-kernel

On Thu, Oct 27, 2022 at 06:27:39PM +0000, Sean Christopherson wrote:
> On Thu, Oct 27, 2022, David Matlack wrote:
> > On Thu, Oct 27, 2022 at 8:44 AM Sean Christopherson <seanjc@google.com> wrote:
> > > I like the idea in theory, but that'd be a daunting task to set up, and quite the
> > > maintenance nightmare.  There are probably thousands of file => scope mappings
> > > throughout the kernel, with any number of exceptions and arbitrary rules.
> > 
> > I was thinking about proposing this in checkpatch.pl, or in some
> > KVM-specific check script. It seems like the following rule: If a
> > commit only modifies files in tools/testing/selftests/kvm/*, then
> > requires the shortlog match the regex "KVM: selftests: .*". That would
> > handle the vast majority of cases without affecting other subsystems.
> > 
> > Sean are you more concerned that if we start validating shortlogs in
> > checkpatch.pl then eventually it will get too out of hand? (i.e. not
> > so concerned with this specific case, but the general problem?)
> 
> Ya, the general problem.  Hardcoding anything KVM specific in checkpatch.pl isn't
> going to fly.  The checkpatch maintainers most definitely don't want to take on
> the burden of maintaining subsystem rules.  Letting one subsystem add custom rules
> effectively opens the flood gates to all subsystems adding custom rules.  And from
> a KVM perspective, I don't want to have to get an Acked-by from a checkpatch
> maintiainer just to tweak a KVM rule.
> 
> The only somewhat feasible approach I can think of would be to provide a generic
> "language" for shortlog scope rules, and have checkpatch look for a well-known
> file in relevant directories, e.g. add arch/x86/kvm/SCOPES or whatever.  But even
> that is a non-trivial problem to solve, as it means coming up with a "language"
> that has a reasonable chance of working for many subsystems without generating too
> many false positives.
> 
> It's definitely doable, and likely not actually a maintenance nightmare (I wrote
> that thinking of modifying a common rules file).  But it's still fairly daunting
> as getting buy-in on something that affects the kernel at large tends to be easier
> said then done.  Then again, I'm probably being pessimistic due to my sub-par
> regex+scripting skills :-)

How about adding support for checkpatch extension plugins? If we could add
a plugin script, e.g. tools/testing/selftests/kvm/.checkpatch, and modify
checkpatch to run .checkpatch scripts in the patched files' directories
(and recursively in the parent directories) when found, then we'd get
custom checkpatch behaviors. The scripts wouldn't even have to be written
in Perl (but I say that a bit sadly, because I like Perl).

Thanks,
drew

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-28 12:41           ` Andrew Jones
@ 2022-10-28 15:49             ` Sean Christopherson
  2022-11-07 18:11               ` David Matlack
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-10-28 15:49 UTC (permalink / raw)
  To: Andrew Jones
  Cc: David Matlack, Wang, Wei W, pbonzini, vipinsh, ajones,
	eric.auger, kvm, linux-kernel

On Fri, Oct 28, 2022, Andrew Jones wrote:
> On Thu, Oct 27, 2022 at 06:27:39PM +0000, Sean Christopherson wrote:
> > On Thu, Oct 27, 2022, David Matlack wrote:
> > > On Thu, Oct 27, 2022 at 8:44 AM Sean Christopherson <seanjc@google.com> wrote:
> > > > I like the idea in theory, but that'd be a daunting task to set up, and quite the
> > > > maintenance nightmare.  There are probably thousands of file => scope mappings
> > > > throughout the kernel, with any number of exceptions and arbitrary rules.
> > > 
> > > I was thinking about proposing this in checkpatch.pl, or in some
> > > KVM-specific check script. It seems like the following rule: If a
> > > commit only modifies files in tools/testing/selftests/kvm/*, then
> > > requires the shortlog match the regex "KVM: selftests: .*". That would
> > > handle the vast majority of cases without affecting other subsystems.
> > > 
> > > Sean are you more concerned that if we start validating shortlogs in
> > > checkpatch.pl then eventually it will get too out of hand? (i.e. not
> > > so concerned with this specific case, but the general problem?)
> > 
> > Ya, the general problem.  Hardcoding anything KVM specific in checkpatch.pl isn't
> > going to fly.  The checkpatch maintainers most definitely don't want to take on
> > the burden of maintaining subsystem rules.  Letting one subsystem add custom rules
> > effectively opens the flood gates to all subsystems adding custom rules.  And from
> > a KVM perspective, I don't want to have to get an Acked-by from a checkpatch
> > maintiainer just to tweak a KVM rule.
> > 
> > The only somewhat feasible approach I can think of would be to provide a generic
> > "language" for shortlog scope rules, and have checkpatch look for a well-known
> > file in relevant directories, e.g. add arch/x86/kvm/SCOPES or whatever.  But even
> > that is a non-trivial problem to solve, as it means coming up with a "language"
> > that has a reasonable chance of working for many subsystems without generating too
> > many false positives.
> > 
> > It's definitely doable, and likely not actually a maintenance nightmare (I wrote
> > that thinking of modifying a common rules file).  But it's still fairly daunting
> > as getting buy-in on something that affects the kernel at large tends to be easier
> > said then done.  Then again, I'm probably being pessimistic due to my sub-par
> > regex+scripting skills :-)
> 
> How about adding support for checkpatch extension plugins? If we could add
> a plugin script, e.g. tools/testing/selftests/kvm/.checkpatch, and modify
> checkpatch to run .checkpatch scripts in the patched files' directories
> (and recursively in the parent directories) when found, then we'd get
> custom checkpatch behaviors. The scripts wouldn't even have to be written
> in Perl (but I say that a bit sadly, because I like Perl).

That will work for simple cases, but patches that touch files in multiple directories
will be messy.  E.g. a patch that touches virt/kvm/ and arch/x86/kvm/ will have
two separate custom rules enforcing two different scopes.

Recursively executing plugins will also be problematic, e.g. except for KVM, arch/x86/
is maintained by the tip-tree folks, and the tip-tree is quite opinionated on all
sorts of things, whereas KVM tends to be a bit more relaxed.

Enforcing scope through plugins would also lead to some amount of duplicate code
throughout subsystems.

Anyways, if someone wants to pursue this, these ideas and the "requirement" should
be run by the checkpatch maintainers.  They have far more experience and authority
in this area, and I suspect we aren't the first people to want checkpatch to get
involved in enforcing shortlog scope.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-10-28 15:49             ` Sean Christopherson
@ 2022-11-07 18:11               ` David Matlack
  2022-11-07 18:19                 ` Sean Christopherson
  0 siblings, 1 reply; 43+ messages in thread
From: David Matlack @ 2022-11-07 18:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Andrew Jones, Wang, Wei W, pbonzini, vipinsh, ajones, eric.auger,
	kvm, linux-kernel

On Fri, Oct 28, 2022 at 8:49 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Oct 28, 2022, Andrew Jones wrote:
> > On Thu, Oct 27, 2022 at 06:27:39PM +0000, Sean Christopherson wrote:
> > > On Thu, Oct 27, 2022, David Matlack wrote:
> > > > On Thu, Oct 27, 2022 at 8:44 AM Sean Christopherson <seanjc@google.com> wrote:
> > > > > I like the idea in theory, but that'd be a daunting task to set up, and quite the
> > > > > maintenance nightmare.  There are probably thousands of file => scope mappings
> > > > > throughout the kernel, with any number of exceptions and arbitrary rules.
> > > >
> > > > I was thinking about proposing this in checkpatch.pl, or in some
> > > > KVM-specific check script. It seems like the following rule: If a
> > > > commit only modifies files in tools/testing/selftests/kvm/*, then
> > > > requires the shortlog match the regex "KVM: selftests: .*". That would
> > > > handle the vast majority of cases without affecting other subsystems.
> > > >
> > > > Sean are you more concerned that if we start validating shortlogs in
> > > > checkpatch.pl then eventually it will get too out of hand? (i.e. not
> > > > so concerned with this specific case, but the general problem?)
> > >
> > > Ya, the general problem.  Hardcoding anything KVM specific in checkpatch.pl isn't
> > > going to fly.  The checkpatch maintainers most definitely don't want to take on
> > > the burden of maintaining subsystem rules.  Letting one subsystem add custom rules
> > > effectively opens the flood gates to all subsystems adding custom rules.  And from
> > > a KVM perspective, I don't want to have to get an Acked-by from a checkpatch
> > > maintiainer just to tweak a KVM rule.
> > >
> > > The only somewhat feasible approach I can think of would be to provide a generic
> > > "language" for shortlog scope rules, and have checkpatch look for a well-known
> > > file in relevant directories, e.g. add arch/x86/kvm/SCOPES or whatever.  But even
> > > that is a non-trivial problem to solve, as it means coming up with a "language"
> > > that has a reasonable chance of working for many subsystems without generating too
> > > many false positives.
> > >
> > > It's definitely doable, and likely not actually a maintenance nightmare (I wrote
> > > that thinking of modifying a common rules file).  But it's still fairly daunting
> > > as getting buy-in on something that affects the kernel at large tends to be easier
> > > said then done.  Then again, I'm probably being pessimistic due to my sub-par
> > > regex+scripting skills :-)
> >
> > How about adding support for checkpatch extension plugins? If we could add
> > a plugin script, e.g. tools/testing/selftests/kvm/.checkpatch, and modify
> > checkpatch to run .checkpatch scripts in the patched files' directories
> > (and recursively in the parent directories) when found, then we'd get
> > custom checkpatch behaviors. The scripts wouldn't even have to be written
> > in Perl (but I say that a bit sadly, because I like Perl).
>
> That will work for simple cases, but patches that touch files in multiple directories
> will be messy.  E.g. a patch that touches virt/kvm/ and arch/x86/kvm/ will have
> two separate custom rules enforcing two different scopes.
>
> Recursively executing plugins will also be problematic, e.g. except for KVM, arch/x86/
> is maintained by the tip-tree folks, and the tip-tree is quite opinionated on all
> sorts of things, whereas KVM tends to be a bit more relaxed.
>
> Enforcing scope through plugins would also lead to some amount of duplicate code
> throughout subsystems.
>
> Anyways, if someone wants to pursue this, these ideas and the "requirement" should
> be run by the checkpatch maintainers.  They have far more experience and authority
> in this area, and I suspect we aren't the first people to want checkpatch to get
> involved in enforcing shortlog scope.

Documenting would at least be an improvement over what we have today
since it would eliminate the need to re-explain the preferred rules
every time. We can just point to the documentation when reviewing
patches.

`git log --pretty=oneline` is not a great way to document shortlog
scopes because it does not explain the rules (e.g. when to use "KVM:
x86: " vs "KVM: x86/mmu: "), does not explain why things the way they
are, and is inconsistent since we don't always catch every patch that
goes by with a non-preferred shortlog scope.
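
A minimal, purely illustrative sketch of the rule David describes above,
written as a standalone Python script rather than as checkpatch.pl code
(checkpatch itself is Perl); the script, the format-patch parsing and the
exact regexes are assumptions made up for the example, not anything that
exists in the tree:

  #!/usr/bin/env python3
  # Hypothetical check: warn when a patch that only touches
  # tools/testing/selftests/kvm/ does not use the "KVM: selftests: "
  # shortlog prefix.  Reads one git format-patch message on stdin.
  import re
  import sys

  SELFTESTS_DIR = "tools/testing/selftests/kvm/"
  REQUIRED_PREFIX = re.compile(r"KVM: selftests: .+")

  def check_patch(text):
      warnings = []

      # Shortlog: the Subject line with any leading "[PATCH ...]" stripped.
      m = re.search(r"^Subject:\s*(?:\[[^\]]*\]\s*)?(.+)$", text, re.MULTILINE)
      shortlog = m.group(1).strip() if m else ""

      # Paths touched by the diff ("diff --git a/<path> b/<path>").
      paths = re.findall(r"^diff --git a/(\S+) b/\S+$", text, re.MULTILINE)

      if paths and all(p.startswith(SELFTESTS_DIR) for p in paths):
          if not REQUIRED_PREFIX.match(shortlog):
              warnings.append('shortlog "%s" should use the '
                              '"KVM: selftests: " prefix for '
                              'selftests-only changes' % shortlog)
      return warnings

  if __name__ == "__main__":
      for warning in check_patch(sys.stdin.read()):
          print("WARNING: " + warning)

The same skeleton could in principle read its expected prefix from a
per-directory file along the lines of the SCOPES idea, or live in a
directory as the kind of .checkpatch plugin Andrew suggests, but as Sean
notes above, any of this would have to be run by the checkpatch
maintainers first.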

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-11-07 18:11               ` David Matlack
@ 2022-11-07 18:19                 ` Sean Christopherson
  2022-11-09 19:05                   ` David Matlack
  0 siblings, 1 reply; 43+ messages in thread
From: Sean Christopherson @ 2022-11-07 18:19 UTC (permalink / raw)
  To: David Matlack
  Cc: Andrew Jones, Wang, Wei W, pbonzini, vipinsh, ajones, eric.auger,
	kvm, linux-kernel

On Mon, Nov 07, 2022, David Matlack wrote:
> On Fri, Oct 28, 2022 at 8:49 AM Sean Christopherson <seanjc@google.com> wrote:
> > Anyways, if someone wants to pursue this, these ideas and the "requirement" should
> > be run by the checkpatch maintainers.  They have far more experience and authority
> > in this area, and I suspect we aren't the first people to want checkpatch to get
> > involved in enforcing shortlog scope.
> 
> Documenting would at least be an improvement over what we have today
> since it would eliminate the need to re-explain the preferred rules
> every time. We can just point to the documentation when reviewing
> patches.

Agreed.  And there are many other things I want to formalize for KVM x86, e.g.
testing expectations, health requirements for the various branches, what each
branch is used for etc...

If you want to send a patch for the shortlogs thing, maybe create

  Documentation/process/maintainer-kvm-x86.rst

and link it into Documentation/process/maintainer-handbooks.rst?

> `git log --pretty=oneline` is not a great way to document shortlog
> scopes because it does not explain the rules (e.g. when to use "KVM:
> x86: " vs "KVM: x86/mmu: "), does not explain why things the way they
> are, and is inconsistent since we don't always catch every patch that
> goes by with a non-preferred shortlog scope.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v1 00/18] KVM selftests code consolidation and cleanup
  2022-11-07 18:19                 ` Sean Christopherson
@ 2022-11-09 19:05                   ` David Matlack
  0 siblings, 0 replies; 43+ messages in thread
From: David Matlack @ 2022-11-09 19:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Andrew Jones, Wang, Wei W, pbonzini, vipinsh, ajones, eric.auger,
	kvm, linux-kernel

On Mon, Nov 7, 2022 at 10:19 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Nov 07, 2022, David Matlack wrote:
> > On Fri, Oct 28, 2022 at 8:49 AM Sean Christopherson <seanjc@google.com> wrote:
> > > Anyways, if someone wants to pursue this, these ideas and the "requirement" should
> > > be run by the checkpatch maintainers.  They have far more experience and authority
> > > in this area, and I suspect we aren't the first people to want checkpatch to get
> > > involved in enforcing shortlog scope.
> >
> > Documenting would at least be an improvement over what we have today
> > since it would eliminate the need to re-explain the preferred rules
> > every time. We can just point to the documentation when reviewing
> > patches.
>
> Agreed.  And there are many other things I want to formalize for KVM x86, e.g.
> testing expectations, health requirements for the various branches, what each
> branch is used for etc...
>
> If you want to send a patch for the shortlogs thing, maybe create
>
>   Documentation/process/maintainer-kvm-x86.rst
>
> and link it into Documentation/process/maintainer-handbooks.rst?

Can do. I'll try to take a look later this week or next week.

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2022-11-09 19:05 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-24 11:34 [PATCH v1 00/18] KVM selftests code consolidation and cleanup Wei Wang
2022-10-24 11:34 ` [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm Wei Wang
2022-10-26 23:47   ` Sean Christopherson
2022-10-27 12:28     ` Wang, Wei W
2022-10-27 15:27       ` Sean Christopherson
2022-10-28  2:13         ` Wang, Wei W
2022-10-24 11:34 ` [PATCH v1 02/18] KVM: selftests/kvm_util: use vm->vcpus[] when create vm with vcpus Wei Wang
2022-10-24 11:34 ` [PATCH v1 03/18] KVM: selftests/kvm_util: helper functions for vcpus and threads Wei Wang
2022-10-27  0:09   ` Sean Christopherson
2022-10-27 14:02     ` Wang, Wei W
2022-10-27 14:54       ` Sean Christopherson
2022-10-24 11:34 ` [PATCH v1 04/18] KVM: selftests/kvm_page_table_test: vcpu related code consolidation Wei Wang
2022-10-24 11:34 ` [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup Wei Wang
2022-10-27  0:16   ` Sean Christopherson
2022-10-27 14:14     ` Wang, Wei W
2022-10-27 18:03       ` Sean Christopherson
2022-10-28  2:16         ` Wang, Wei W
2022-10-24 11:34 ` [PATCH v1 06/18] KVM: selftests/dirty_log_test: vcpu related code consolidation Wei Wang
2022-10-24 11:34 ` [PATCH v1 07/18] KVM: selftests/max_guest_memory_test: " Wei Wang
2022-10-24 11:34 ` [PATCH v1 08/18] KVM: selftests/set_memory_region_test: " Wei Wang
2022-10-24 11:34 ` [PATCH v1 09/18] KVM: selftests/steal_time: vcpu related code consolidation and cleanup Wei Wang
2022-10-27  0:17   ` Sean Christopherson
2022-10-24 11:34 ` [PATCH v1 10/18] KVM: selftests/tsc_scaling_sync: vcpu related code consolidation Wei Wang
2022-10-24 11:34 ` [PATCH v1 11/18] KVM: selftest/xapic_ipi_test: " Wei Wang
2022-10-24 11:34 ` [PATCH v1 12/18] KVM: selftests/rseq_test: name the migration thread and some cleanup Wei Wang
2022-10-27  0:18   ` Sean Christopherson
2022-10-24 11:34 ` [PATCH v1 13/18] KVM: selftests/perf_test_util: vcpu related code consolidation Wei Wang
2022-10-24 11:34 ` [PATCH v1 14/18] KVM: selftest/memslot_perf_test: " Wei Wang
2022-10-24 11:34 ` [PATCH v1 15/18] KVM: selftests/vgic_init: " Wei Wang
2022-10-24 11:34 ` [PATCH v1 16/18] KVM: selftest/arch_timer: " Wei Wang
2022-10-24 11:34 ` [PATCH v1 17/18] KVM: selftests: remove the *vcpu[] input from __vm_create_with_vcpus Wei Wang
2022-10-24 11:34 ` [PATCH v1 18/18] KVM: selftests/kvm_create_max_vcpus: check KVM_MAX_VCPUS Wei Wang
2022-10-27  0:22   ` Sean Christopherson
2022-10-26 21:22 ` [PATCH v1 00/18] KVM selftests code consolidation and cleanup David Matlack
2022-10-27 12:18   ` Wang, Wei W
2022-10-27 15:44     ` Sean Christopherson
2022-10-27 16:24       ` David Matlack
2022-10-27 18:27         ` Sean Christopherson
2022-10-28 12:41           ` Andrew Jones
2022-10-28 15:49             ` Sean Christopherson
2022-11-07 18:11               ` David Matlack
2022-11-07 18:19                 ` Sean Christopherson
2022-11-09 19:05                   ` David Matlack

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).