linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test
@ 2022-05-03  6:40 Zeng Guang
  2022-05-13  1:03 ` Sean Christopherson
  0 siblings, 1 reply; 2+ messages in thread
From: Zeng Guang @ 2022-05-03  6:40 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, kvm, Dave Hansen, Tony Luck,
	Kan Liang, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Kim Phillips, Jarkko Sakkinen, Jethro Beekman,
	Kai Huang
  Cc: x86, linux-kernel, Robert Hu, Gao Chao, Zeng Guang

Add basic test coverage for the KVM_CAP_MAX_VCPU_ID capability.

This capability can be enabled only before vCPU creation and can be set
only once. If the assigned vCPU ID is beyond the KVM_CAP_MAX_VCPU_ID
value, vCPU creation will fail.

Signed-off-by: Zeng Guang <guang.zeng@intel.com>
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../kvm/x86_64/max_vcpuid_cap_test.c          | 59 +++++++++++++++++++
 3 files changed, 61 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/max_vcpuid_cap_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index d1e8f5237469..b860dcfee920 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -22,6 +22,7 @@
 /x86_64/hyperv_cpuid
 /x86_64/hyperv_features
 /x86_64/hyperv_svm_test
+/x86_64/max_vcpuid_cap_test
 /x86_64/mmio_warning_test
 /x86_64/mmu_role_test
 /x86_64/platform_info_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 21c2dbd21a81..e92dc78de4d0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -86,6 +86,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
+TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/x86_64/max_vcpuid_cap_test.c b/tools/testing/selftests/kvm/x86_64/max_vcpuid_cap_test.c
new file mode 100644
index 000000000000..2e3d1d236ef0
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/max_vcpuid_cap_test.c
@@ -0,0 +1,59 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * maximum APIC ID capability tests
+ *
+ * Copyright (C) 2022, Intel, Inc.
+ *
+ * Tests for getting/setting maximum APIC ID capability
+ */
+
+#include "kvm_util.h"
+#include "../lib/kvm_util_internal.h"
+
+#define MAX_VCPU_ID	2
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap = { 0 };
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+
+	/* Get KVM_CAP_MAX_VCPU_ID cap supported in KVM */
+	ret = vm_check_cap(vm, KVM_CAP_MAX_VCPU_ID);
+
+	/* Check failure if set KVM_CAP_MAX_VCPU_ID beyond KVM cap */
+	cap.cap = KVM_CAP_MAX_VCPU_ID;
+	cap.args[0] = ret + 1;
+	ret = ioctl(vm->fd, KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret < 0,
+		    "Unexpected success to enable KVM_CAP_MAX_VCPU_ID "
+		    "beyond KVM cap!\n");
+
+	/* Check success if set KVM_CAP_MAX_VCPU_ID */
+	cap.args[0] = MAX_VCPU_ID;
+	ret = ioctl(vm->fd, KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret == 0,
+		    "Unexpected failure to enable KVM_CAP_MAX_VCPU_ID!\n");
+
+	/* Check success if set KVM_CAP_MAX_VCPU_ID same value */
+	ret = ioctl(vm->fd, KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret == 0,
+		    "Unexpected failure to set KVM_CAP_MAX_VCPU_ID same value\n");
+
+	/* Check failure if set KVM_CAP_MAX_VCPU_ID different value again */
+	cap.args[0] = MAX_VCPU_ID + 1;
+	ret = ioctl(vm->fd, KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret < 0,
+		    "Unexpected success to enable KVM_CAP_MAX_VCPU_ID with "
+		    "different value again\n");
+
+	/* Check failure if creating a vCPU with an ID beyond KVM_CAP_MAX_VCPU_ID cap */
+	ret = ioctl(vm->fd, KVM_CREATE_VCPU, MAX_VCPU_ID);
+	TEST_ASSERT(ret < 0,
+		    "Unexpected success in creating a vCPU with VCPU ID out of range\n");
+
+	kvm_vm_free(vm);
+	return 0;
+}
-- 
2.27.0



* Re: [PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test
  2022-05-03  6:40 [PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test Zeng Guang
@ 2022-05-13  1:03 ` Sean Christopherson
  0 siblings, 0 replies; 2+ messages in thread
From: Sean Christopherson @ 2022-05-13  1:03 UTC (permalink / raw)
  To: Zeng Guang
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, Dave Hansen, Tony Luck, Kan Liang,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Kim Phillips, Jarkko Sakkinen, Jethro Beekman, Kai Huang, x86,
	linux-kernel, Robert Hu, Gao Chao

On Tue, May 03, 2022, Zeng Guang wrote:
> +int main(int argc, char *argv[])
> +{
> +	struct kvm_vm *vm;
> +	struct kvm_enable_cap cap = { 0 };
> +	int ret;
> +
> +	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> +
> +	/* Get KVM_CAP_MAX_VCPU_ID cap supported in KVM */
> +	ret = vm_check_cap(vm, KVM_CAP_MAX_VCPU_ID);

Because it's trivial to do so, and will avoid a hardcoded "max", what about looping
over all possible values?  And then some arbitrary number beyond the max?

	max_nr_vcpus = vm_check_cap(vm, KVM_CAP_MAX_VCPU_ID);
	TEST_ASSERT(max_nr_vcpus > ???, ...)

	for (i = 0; i < max_nr_vcpus; i++) {
		vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);

		vm_set_max_nr_vcpus(vm, 0);
		vm_create_invalid_vcpu(vm, 0);

		vm_set_max_nr_vcpus(vm, i);
		vm_set_max_nr_vcpus(vm, i);
		vm_create_invalid_vcpu(vm, i);
		vm_create_invalid_vcpu(vm, i + 1);

		vm_set_invalid_max_nr_vcpus(vm, 0);
		vm_set_invalid_max_nr_vcpus(vm, i + 1);
		vm_set_invalid_max_nr_vcpus(vm, i - 1);
		vm_set_invalid_max_nr_vcpus(vm, max_nr_vcpus);

		close(vm->fd);
	}

	for ( ; i < max_nr_vcpus + 100; i++) {
		vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);

		vm_set_invalid_max_nr_vcpus(vm, i);

		close(vm->fd);
	}
> +
> +	/* Check failure if set KVM_CAP_MAX_VCPU_ID beyond KVM cap */
> +	cap.cap = KVM_CAP_MAX_VCPU_ID;
> +	cap.args[0] = ret + 1;
> +	ret = ioctl(vm->fd, KVM_ENABLE_CAP, &cap);

A helper or two to set the cap would be, uh, helpful :-)  See above for ideas.

> +	TEST_ASSERT(ret < 0,
> +		    "Unexpected success to enable KVM_CAP_MAX_VCPU_ID"
> +		    "beyond KVM cap!\n");

Please don't wrap quoted strings.  Shorten the string and/or let the line run long.
For the string/message, prioritize information that the user _can't_ get from looking
at the code, and info that is highly relevant to the expectations.  E.g. print the
return value, the errno, and the allegedly bad value.

It's definitely helpful to provide context too (KVM Unit Tests drive me bonkers for
their terse messages), but for cases like this, it's redundant.  "Unexpected success"
is redundant because the "ret < 0" conveys that failure was expected, and hopefully
most people will intuit that the test was trying "to enable" KVM_CAP_MAX_VCPU_ID.
If not, a quick glance at the code (file and line provided) will give them that info.

E.g. assuming this ends up in a helper, something like

	TEST_ASSERT(ret == -1 && errno == EINVAL,
		    "KVM_CAP_MAX_VCPU_ID bug, max ID = %d, ret = %d, errno = %d (%s)",
		    max_id, ret, errno, strerror(errno));

IMO it's worth checking errno to reduce the probability of false pass, e.g. if the
ioctl() is rejected for some other reason due to a test bug.

> +
> +	/* Check success if set KVM_CAP_MAX_VCPU_ID */
> +	cap.args[0] = MAX_VCPU_ID;
> +	ret = ioctl(vm->fd, KVM_ENABLE_CAP, &cap);
> +	TEST_ASSERT(ret == 0,
> +		    "Unexpected failure to enable KVM_CAP_MAX_VCPU_ID!\n");

