* [PATCH MANUALSEL 5.15 5/9] KVM: selftests: Make sure kvm_create_max_vcpus test won't hit RLIMIT_NOFILE
[not found] <20211213141944.352249-1-sashal@kernel.org>
@ 2021-12-13 14:19 ` Sasha Levin
2021-12-13 14:27 ` Paolo Bonzini
2021-12-13 14:19 ` [PATCH MANUALSEL 5.15 7/9] KVM: selftests: Avoid KVM_SET_CPUID2 after KVM_RUN in hyperv_features test Sasha Levin
1 sibling, 1 reply; 4+ messages in thread
From: Sasha Levin @ 2021-12-13 14:19 UTC (permalink / raw)
To: linux-kernel, stable
Cc: Vitaly Kuznetsov, Sean Christopherson, Paolo Bonzini,
Sasha Levin, shuah, kvm, linux-kselftest
From: Vitaly Kuznetsov <vkuznets@redhat.com>
[ Upstream commit 908fa88e420f30dde6d80f092795a18ec72ca6d3 ]
With the elevated 'KVM_CAP_MAX_VCPUS' value, the kvm_create_max_vcpus test
may hit RLIMIT_NOFILE limits:
# ./kvm_create_max_vcpus
KVM_CAP_MAX_VCPU_ID: 4096
KVM_CAP_MAX_VCPUS: 1024
Testing creating 1024 vCPUs, with IDs 0...1023.
/dev/kvm not available (errno: 24), skipping test
Adjust the RLIMIT_NOFILE limits to make sure KVM_CAP_MAX_VCPUS fds can be
opened. Note, raising the hard limit ('rlim_max') requires the
CAP_SYS_RESOURCE capability, which is generally not needed to run kvm
selftests (but without raising the limit the test is doomed to fail anyway).
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20211123135953.667434-1-vkuznets@redhat.com>
[Skip the test if the hard limit can be raised. - Paolo]
Reviewed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
.../selftests/kvm/kvm_create_max_vcpus.c | 30 +++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
index 0299cd81b8ba2..aa3795cd7bd3d 100644
--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
@@ -12,6 +12,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
+#include <sys/resource.h>
#include "test_util.h"
@@ -40,10 +41,39 @@ int main(int argc, char *argv[])
{
int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID);
int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
+ /*
+ * Number of file descriptors required: KVM_CAP_MAX_VCPUS for vCPU fds +
+ * an arbitrary number for everything else.
+ */
+ int nr_fds_wanted = kvm_max_vcpus + 100;
+ struct rlimit rl;
pr_info("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id);
pr_info("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus);
+ /*
+ * Check that we're allowed to open nr_fds_wanted file descriptors and
+ * try raising the limits if needed.
+ */
+ TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!");
+
+ if (rl.rlim_cur < nr_fds_wanted) {
+ rl.rlim_cur = nr_fds_wanted;
+ if (rl.rlim_max < nr_fds_wanted) {
+ int old_rlim_max = rl.rlim_max;
+ rl.rlim_max = nr_fds_wanted;
+
+ int r = setrlimit(RLIMIT_NOFILE, &rl);
+ if (r < 0) {
+ printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n",
+ old_rlim_max, nr_fds_wanted);
+ exit(KSFT_SKIP);
+ }
+ } else {
+ TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!");
+ }
+ }
+
/*
* Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID.
* Userspace is supposed to use KVM_CAP_MAX_VCPUS as the maximum ID
--
2.33.0
* [PATCH MANUALSEL 5.15 7/9] KVM: selftests: Avoid KVM_SET_CPUID2 after KVM_RUN in hyperv_features test
[not found] <20211213141944.352249-1-sashal@kernel.org>
2021-12-13 14:19 ` [PATCH MANUALSEL 5.15 5/9] KVM: selftests: Make sure kvm_create_max_vcpus test won't hit RLIMIT_NOFILE Sasha Levin
@ 2021-12-13 14:19 ` Sasha Levin
2021-12-13 14:28 ` Paolo Bonzini
1 sibling, 1 reply; 4+ messages in thread
From: Sasha Levin @ 2021-12-13 14:19 UTC (permalink / raw)
To: linux-kernel, stable
Cc: Vitaly Kuznetsov, Paolo Bonzini, Sasha Levin, shuah, seanjc,
ricarkol, maz, kvm, linux-kselftest
From: Vitaly Kuznetsov <vkuznets@redhat.com>
[ Upstream commit 6c1186430a808f97e2052bd5d9eff12c5d5defb0 ]
hyperv_features's sole purpose is to test access to various Hyper-V MSRs
and hypercalls with different CPUID data. As KVM_SET_CPUID2 after KVM_RUN
is deprecated and soon to be forbidden, avoid it by re-creating the test VM
for each sub-test.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20211122175818.608220-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
.../selftests/kvm/x86_64/hyperv_features.c | 140 +++++++++---------
1 file changed, 71 insertions(+), 69 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 91d88aaa98992..672915ce73d8f 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -165,10 +165,10 @@ static void hv_set_cpuid(struct kvm_vm *vm, struct kvm_cpuid2 *cpuid,
vcpu_set_cpuid(vm, VCPU_ID, cpuid);
}
-static void guest_test_msrs_access(struct kvm_vm *vm, struct msr_data *msr,
- struct kvm_cpuid2 *best)
+static void guest_test_msrs_access(void)
{
struct kvm_run *run;
+ struct kvm_vm *vm;
struct ucall uc;
int stage = 0, r;
struct kvm_cpuid_entry2 feat = {
@@ -180,11 +180,34 @@ static void guest_test_msrs_access(struct kvm_vm *vm, struct msr_data *msr,
struct kvm_cpuid_entry2 dbg = {
.function = HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES
};
- struct kvm_enable_cap cap = {0};
-
- run = vcpu_state(vm, VCPU_ID);
+ struct kvm_cpuid2 *best;
+ vm_vaddr_t msr_gva;
+ struct kvm_enable_cap cap = {
+ .cap = KVM_CAP_HYPERV_ENFORCE_CPUID,
+ .args = {1}
+ };
+ struct msr_data *msr;
while (true) {
+ vm = vm_create_default(VCPU_ID, 0, guest_msr);
+
+ msr_gva = vm_vaddr_alloc_page(vm);
+ memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize());
+ msr = addr_gva2hva(vm, msr_gva);
+
+ vcpu_args_set(vm, VCPU_ID, 1, msr_gva);
+ vcpu_enable_cap(vm, VCPU_ID, &cap);
+
+ vcpu_set_hv_cpuid(vm, VCPU_ID);
+
+ best = kvm_get_supported_hv_cpuid();
+
+ vm_init_descriptor_tables(vm);
+ vcpu_init_descriptor_tables(vm, VCPU_ID);
+ vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);
+
+ run = vcpu_state(vm, VCPU_ID);
+
switch (stage) {
case 0:
/*
@@ -315,6 +338,7 @@ static void guest_test_msrs_access(struct kvm_vm *vm, struct msr_data *msr,
* capability enabled and guest visible CPUID bit unset.
*/
cap.cap = KVM_CAP_HYPERV_SYNIC2;
+ cap.args[0] = 0;
vcpu_enable_cap(vm, VCPU_ID, &cap);
break;
case 22:
@@ -461,9 +485,9 @@ static void guest_test_msrs_access(struct kvm_vm *vm, struct msr_data *msr,
switch (get_ucall(vm, VCPU_ID, &uc)) {
case UCALL_SYNC:
- TEST_ASSERT(uc.args[1] == stage,
- "Unexpected stage: %ld (%d expected)\n",
- uc.args[1], stage);
+ TEST_ASSERT(uc.args[1] == 0,
+ "Unexpected stage: %ld (0 expected)\n",
+ uc.args[1]);
break;
case UCALL_ABORT:
TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
@@ -474,13 +498,14 @@ static void guest_test_msrs_access(struct kvm_vm *vm, struct msr_data *msr,
}
stage++;
+ kvm_vm_free(vm);
}
}
-static void guest_test_hcalls_access(struct kvm_vm *vm, struct hcall_data *hcall,
- void *input, void *output, struct kvm_cpuid2 *best)
+static void guest_test_hcalls_access(void)
{
struct kvm_run *run;
+ struct kvm_vm *vm;
struct ucall uc;
int stage = 0, r;
struct kvm_cpuid_entry2 feat = {
@@ -493,10 +518,38 @@ static void guest_test_hcalls_access(struct kvm_vm *vm, struct hcall_data *hcall
struct kvm_cpuid_entry2 dbg = {
.function = HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES
};
-
- run = vcpu_state(vm, VCPU_ID);
+ struct kvm_enable_cap cap = {
+ .cap = KVM_CAP_HYPERV_ENFORCE_CPUID,
+ .args = {1}
+ };
+ vm_vaddr_t hcall_page, hcall_params;
+ struct hcall_data *hcall;
+ struct kvm_cpuid2 *best;
while (true) {
+ vm = vm_create_default(VCPU_ID, 0, guest_hcall);
+
+ vm_init_descriptor_tables(vm);
+ vcpu_init_descriptor_tables(vm, VCPU_ID);
+ vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
+
+ /* Hypercall input/output */
+ hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ hcall = addr_gva2hva(vm, hcall_page);
+ memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
+
+ hcall_params = vm_vaddr_alloc_page(vm);
+ memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize());
+
+ vcpu_args_set(vm, VCPU_ID, 2, addr_gva2gpa(vm, hcall_page), hcall_params);
+ vcpu_enable_cap(vm, VCPU_ID, &cap);
+
+ vcpu_set_hv_cpuid(vm, VCPU_ID);
+
+ best = kvm_get_supported_hv_cpuid();
+
+ run = vcpu_state(vm, VCPU_ID);
+
switch (stage) {
case 0:
hcall->control = 0xdeadbeef;
@@ -606,9 +659,9 @@ static void guest_test_hcalls_access(struct kvm_vm *vm, struct hcall_data *hcall
switch (get_ucall(vm, VCPU_ID, &uc)) {
case UCALL_SYNC:
- TEST_ASSERT(uc.args[1] == stage,
- "Unexpected stage: %ld (%d expected)\n",
- uc.args[1], stage);
+ TEST_ASSERT(uc.args[1] == 0,
+ "Unexpected stage: %ld (0 expected)\n",
+ uc.args[1]);
break;
case UCALL_ABORT:
TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
@@ -619,66 +672,15 @@ static void guest_test_hcalls_access(struct kvm_vm *vm, struct hcall_data *hcall
}
stage++;
+ kvm_vm_free(vm);
}
}
int main(void)
{
- struct kvm_cpuid2 *best;
- struct kvm_vm *vm;
- vm_vaddr_t msr_gva, hcall_page, hcall_params;
- struct kvm_enable_cap cap = {
- .cap = KVM_CAP_HYPERV_ENFORCE_CPUID,
- .args = {1}
- };
-
- /* Test MSRs */
- vm = vm_create_default(VCPU_ID, 0, guest_msr);
-
- msr_gva = vm_vaddr_alloc_page(vm);
- memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize());
- vcpu_args_set(vm, VCPU_ID, 1, msr_gva);
- vcpu_enable_cap(vm, VCPU_ID, &cap);
-
- vcpu_set_hv_cpuid(vm, VCPU_ID);
-
- best = kvm_get_supported_hv_cpuid();
-
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vm, VCPU_ID);
- vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);
-
pr_info("Testing access to Hyper-V specific MSRs\n");
- guest_test_msrs_access(vm, addr_gva2hva(vm, msr_gva),
- best);
- kvm_vm_free(vm);
-
- /* Test hypercalls */
- vm = vm_create_default(VCPU_ID, 0, guest_hcall);
-
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vm, VCPU_ID);
- vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
-
- /* Hypercall input/output */
- hcall_page = vm_vaddr_alloc_pages(vm, 2);
- memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
-
- hcall_params = vm_vaddr_alloc_page(vm);
- memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize());
-
- vcpu_args_set(vm, VCPU_ID, 2, addr_gva2gpa(vm, hcall_page), hcall_params);
- vcpu_enable_cap(vm, VCPU_ID, &cap);
-
- vcpu_set_hv_cpuid(vm, VCPU_ID);
-
- best = kvm_get_supported_hv_cpuid();
+ guest_test_msrs_access();
pr_info("Testing access to Hyper-V hypercalls\n");
- guest_test_hcalls_access(vm, addr_gva2hva(vm, hcall_params),
- addr_gva2hva(vm, hcall_page),
- addr_gva2hva(vm, hcall_page) + getpagesize(),
- best);
-
- kvm_vm_free(vm);
+ guest_test_hcalls_access();
}
--
2.33.0
* Re: [PATCH MANUALSEL 5.15 5/9] KVM: selftests: Make sure kvm_create_max_vcpus test won't hit RLIMIT_NOFILE
2021-12-13 14:19 ` [PATCH MANUALSEL 5.15 5/9] KVM: selftests: Make sure kvm_create_max_vcpus test won't hit RLIMIT_NOFILE Sasha Levin
@ 2021-12-13 14:27 ` Paolo Bonzini
0 siblings, 0 replies; 4+ messages in thread
From: Paolo Bonzini @ 2021-12-13 14:27 UTC (permalink / raw)
To: Sasha Levin, linux-kernel, stable
Cc: Vitaly Kuznetsov, Sean Christopherson, shuah, kvm, linux-kselftest
On 12/13/21 15:19, Sasha Levin wrote:
> From: Vitaly Kuznetsov <vkuznets@redhat.com>
>
> [ Upstream commit 908fa88e420f30dde6d80f092795a18ec72ca6d3 ]
>
> [... full patch quoted, trimmed ...]
>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
* Re: [PATCH MANUALSEL 5.15 7/9] KVM: selftests: Avoid KVM_SET_CPUID2 after KVM_RUN in hyperv_features test
2021-12-13 14:19 ` [PATCH MANUALSEL 5.15 7/9] KVM: selftests: Avoid KVM_SET_CPUID2 after KVM_RUN in hyperv_features test Sasha Levin
@ 2021-12-13 14:28 ` Paolo Bonzini
0 siblings, 0 replies; 4+ messages in thread
From: Paolo Bonzini @ 2021-12-13 14:28 UTC (permalink / raw)
To: Sasha Levin, linux-kernel, stable
Cc: Vitaly Kuznetsov, shuah, seanjc, ricarkol, maz, kvm, linux-kselftest
On 12/13/21 15:19, Sasha Levin wrote:
> From: Vitaly Kuznetsov <vkuznets@redhat.com>
>
> [ Upstream commit 6c1186430a808f97e2052bd5d9eff12c5d5defb0 ]
>
> [... full patch quoted, trimmed ...]
>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>