linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] KVM: x86: allow for more CPUID entries
@ 2020-10-01 13:05 Vitaly Kuznetsov
  2020-10-01 13:05 ` [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries Vitaly Kuznetsov
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Vitaly Kuznetsov @ 2020-10-01 13:05 UTC (permalink / raw)
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

Changes since RFC:
- "KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries"
  added to allow running kvm_check_cpuid() before vcpu->arch.cpuid_entries/
  vcpu->arch.cpuid_nent are changed [Sean Christopherson]
- Shorten local variable names in kvm_vcpu_ioctl_set_cpuid[,2]
  [Sean Christopherson]
- Drop unneeded 'out' labels from kvm_vcpu_ioctl_set_cpuid[,2]
  and return directly whenever possible [Sean Christopherson]

Original description:

With QEMU and newer AMD CPUs (namely Epyc 'Rome'), the current limit of
KVM_MAX_CPUID_ENTRIES (80) is reported to be hit; it was last raised,
from 40, in 2010. We could, of course, just bump it a little to fix the
immediate issue, but the report made me wonder why we pre-allocate the
vcpu->arch.cpuid_entries array at all instead of sizing it dynamically.
This RFC is intended to feed my curiosity.

Very mildly tested with selftests/kvm-unit-tests and nothing seems to
break. I don't have access to the system where the original issue was
reported, but the chances that this fixes it are very good IMO, as the
second patch alone was reported to be sufficient.
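
For illustration, a hypothetical minimal userspace sketch that trips the
old limit (error handling omitted; a real VMM would fill the entries
from KVM_GET_SUPPORTED_CPUID):

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int main(void)
        {
                int kvm = open("/dev/kvm", O_RDWR);
                int vm = ioctl(kvm, KVM_CREATE_VM, 0);
                int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
                int nent = 100; /* > KVM_MAX_CPUID_ENTRIES (80) */
                struct kvm_cpuid2 *cpuid;

                /* struct kvm_cpuid2 ends in a flexible array of entries */
                cpuid = calloc(1, sizeof(*cpuid) +
                                  nent * sizeof(struct kvm_cpuid_entry2));
                cpuid->nent = nent;
                /* entries left zeroed for brevity */
                if (ioctl(vcpu, KVM_SET_CPUID2, cpuid) < 0)
                        perror("KVM_SET_CPUID2"); /* E2BIG with the old limit */
                free(cpuid);
                return 0;
        }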

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Vitaly Kuznetsov (3):
  KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries
  KVM: x86: allocate vcpu->arch.cpuid_entries dynamically
  KVM: x86: bump KVM_MAX_CPUID_ENTRIES

 arch/x86/include/asm/kvm_host.h |   4 +-
 arch/x86/kvm/cpuid.c            | 123 +++++++++++++++++++-------------
 arch/x86/kvm/x86.c              |   1 +
 3 files changed, 75 insertions(+), 53 deletions(-)

-- 
2.25.4



* [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries
  2020-10-01 13:05 [PATCH 0/3] KVM: x86: allow for more CPUID entries Vitaly Kuznetsov
@ 2020-10-01 13:05 ` Vitaly Kuznetsov
  2020-10-05 10:10   ` Maxim Levitsky
  2020-10-01 13:05 ` [PATCH 2/3] KVM: x86: allocate vcpu->arch.cpuid_entries dynamically Vitaly Kuznetsov
  2020-10-01 13:05 ` [PATCH 3/3] KVM: x86: bump KVM_MAX_CPUID_ENTRIES Vitaly Kuznetsov
  2 siblings, 1 reply; 9+ messages in thread
From: Vitaly Kuznetsov @ 2020-10-01 13:05 UTC (permalink / raw)
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

As a preparatory step to allocating vcpu->arch.cpuid_entries dynamically,
make kvm_check_cpuid() work with an arbitrary 'struct kvm_cpuid_entry2'
array.

Currently, when kvm_check_cpuid() fails we reset vcpu->arch.cpuid_nent to
0 and this is kind of weird, i.e. one would expect CPUIDs to remain
unchanged when KVM_SET_CPUID[2] call fails.

No functional change intended. It would've been possible to move the
updated kvm_check_cpuid() up in kvm_vcpu_ioctl_set_cpuid2() and check the
supplied input before we start updating vcpu->arch.cpuid_entries/nent, but
we can't do the same in kvm_vcpu_ioctl_set_cpuid() as we'd have to copy
the 'struct kvm_cpuid_entry' entries first. The change will be made when
the vcpu->arch.cpuid_entries[] array becomes allocated dynamically.
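
For context, the check-before-commit ordering this enables ends up
looking roughly like this (a sketch of the patch 2 version of
kvm_vcpu_ioctl_set_cpuid2()):

        e2 = vmemdup_user(entries, array_size(sizeof(*e2), cpuid->nent));
        if (IS_ERR(e2))
                return PTR_ERR(e2);

        r = kvm_check_cpuid(e2, cpuid->nent);   /* validate the copy first */
        if (r) {
                kvfree(e2);
                return r;
        }

        kvfree(vcpu->arch.cpuid_entries);       /* only now touch the vCPU */
        vcpu->arch.cpuid_entries = e2;
        vcpu->arch.cpuid_nent = cpuid->nent;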

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/kvm/cpuid.c | 38 +++++++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 37c3668a774f..529348ddedc1 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -54,7 +54,24 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 
 #define F feature_bit
 
-static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
+static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
+	struct kvm_cpuid_entry2 *entries, int nent, u32 function, u32 index)
+{
+	struct kvm_cpuid_entry2 *e;
+	int i;
+
+	for (i = 0; i < nent; i++) {
+		e = &entries[i];
+
+		if (e->function == function && (e->index == index ||
+		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
+			return e;
+	}
+
+	return NULL;
+}
+
+static int kvm_check_cpuid(struct kvm_cpuid_entry2 *entries, int nent)
 {
 	struct kvm_cpuid_entry2 *best;
 
@@ -62,7 +79,7 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
 	 * The existing code assumes virtual address is 48-bit or 57-bit in the
 	 * canonical address checks; exit if it is ever changed.
 	 */
-	best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
+	best = cpuid_entry2_find(entries, nent, 0x80000008, 0);
 	if (best) {
 		int vaddr_bits = (best->eax & 0xff00) >> 8;
 
@@ -220,7 +237,7 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
 		vcpu->arch.cpuid_entries[i].padding[2] = 0;
 	}
 	vcpu->arch.cpuid_nent = cpuid->nent;
-	r = kvm_check_cpuid(vcpu);
+	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
 	if (r) {
 		vcpu->arch.cpuid_nent = 0;
 		kvfree(cpuid_entries);
@@ -250,7 +267,7 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
 			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
 		goto out;
 	vcpu->arch.cpuid_nent = cpuid->nent;
-	r = kvm_check_cpuid(vcpu);
+	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
 	if (r) {
 		vcpu->arch.cpuid_nent = 0;
 		goto out;
@@ -940,17 +957,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
 					      u32 function, u32 index)
 {
-	struct kvm_cpuid_entry2 *e;
-	int i;
-
-	for (i = 0; i < vcpu->arch.cpuid_nent; ++i) {
-		e = &vcpu->arch.cpuid_entries[i];
-
-		if (e->function == function && (e->index == index ||
-		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
-			return e;
-	}
-	return NULL;
+	return cpuid_entry2_find(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent,
+				 function, index);
 }
 EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
 
-- 
2.25.4



* [PATCH 2/3] KVM: x86: allocate vcpu->arch.cpuid_entries dynamically
  2020-10-01 13:05 [PATCH 0/3] KVM: x86: allow for more CPUID entries Vitaly Kuznetsov
  2020-10-01 13:05 ` [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries Vitaly Kuznetsov
@ 2020-10-01 13:05 ` Vitaly Kuznetsov
  2020-10-05 10:11   ` Maxim Levitsky
  2020-10-01 13:05 ` [PATCH 3/3] KVM: x86: bump KVM_MAX_CPUID_ENTRIES Vitaly Kuznetsov
  2 siblings, 1 reply; 9+ messages in thread
From: Vitaly Kuznetsov @ 2020-10-01 13:05 UTC (permalink / raw)
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

The current limit for guest CPUID leaves (KVM_MAX_CPUID_ENTRIES, 80)
is reported to be insufficient, but before we bump it let's switch to
allocating the vcpu->arch.cpuid_entries[] array dynamically. Currently,
'struct kvm_cpuid_entry2' is 40 bytes, so vcpu->arch.cpuid_entries is
3200 bytes, which accounts for 1/4 of the whole 'struct kvm_vcpu_arch',
yet having it pre-allocated (for all vCPUs, which we also pre-allocate)
gives us no real benefit.
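
For reference, the 40 bytes follow directly from the UAPI layout in
include/uapi/linux/kvm.h:

        struct kvm_cpuid_entry2 {
                __u32 function;
                __u32 index;
                __u32 flags;
                __u32 eax;
                __u32 ebx;
                __u32 ecx;
                __u32 edx;
                __u32 padding[3];
        };      /* 10 * 4 = 40 bytes; 80 entries = 3200 bytes */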

Another plus of the dynamic allocation is that we now do the
kvm_check_cpuid() check before we assign anything to
vcpu->arch.cpuid_nent/cpuid_entries, so no changes are made if the check
fails.

Opportunistically remove unneeded 'out' labels from
kvm_vcpu_ioctl_set_cpuid()/kvm_vcpu_ioctl_set_cpuid2() and return
directly whenever possible.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/cpuid.c            | 89 +++++++++++++++++++--------------
 arch/x86/kvm/x86.c              |  1 +
 3 files changed, 53 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d0f77235da92..7d259e21ea04 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -637,7 +637,7 @@ struct kvm_vcpu_arch {
 	int halt_request; /* real mode on Intel only */
 
 	int cpuid_nent;
-	struct kvm_cpuid_entry2 cpuid_entries[KVM_MAX_CPUID_ENTRIES];
+	struct kvm_cpuid_entry2 *cpuid_entries;
 
 	int maxphyaddr;
 	int max_tdp_level;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 529348ddedc1..3fe20c4da0a6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -210,46 +210,53 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
 			     struct kvm_cpuid_entry __user *entries)
 {
 	int r, i;
-	struct kvm_cpuid_entry *cpuid_entries = NULL;
+	struct kvm_cpuid_entry *e = NULL;
+	struct kvm_cpuid_entry2 *e2 = NULL;
 
-	r = -E2BIG;
 	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
-		goto out;
+		return -E2BIG;
+
 	if (cpuid->nent) {
-		cpuid_entries = vmemdup_user(entries,
-					     array_size(sizeof(struct kvm_cpuid_entry),
-							cpuid->nent));
-		if (IS_ERR(cpuid_entries)) {
-			r = PTR_ERR(cpuid_entries);
-			goto out;
+		e = vmemdup_user(entries, array_size(sizeof(*e), cpuid->nent));
+		if (IS_ERR(e))
+			return PTR_ERR(e);
+
+		e2 = kvmalloc_array(cpuid->nent, sizeof(*e2), GFP_KERNEL_ACCOUNT);
+		if (!e2) {
+			r = -ENOMEM;
+			goto out_free_cpuid;
 		}
 	}
 	for (i = 0; i < cpuid->nent; i++) {
-		vcpu->arch.cpuid_entries[i].function = cpuid_entries[i].function;
-		vcpu->arch.cpuid_entries[i].eax = cpuid_entries[i].eax;
-		vcpu->arch.cpuid_entries[i].ebx = cpuid_entries[i].ebx;
-		vcpu->arch.cpuid_entries[i].ecx = cpuid_entries[i].ecx;
-		vcpu->arch.cpuid_entries[i].edx = cpuid_entries[i].edx;
-		vcpu->arch.cpuid_entries[i].index = 0;
-		vcpu->arch.cpuid_entries[i].flags = 0;
-		vcpu->arch.cpuid_entries[i].padding[0] = 0;
-		vcpu->arch.cpuid_entries[i].padding[1] = 0;
-		vcpu->arch.cpuid_entries[i].padding[2] = 0;
+		e2[i].function = e[i].function;
+		e2[i].eax = e[i].eax;
+		e2[i].ebx = e[i].ebx;
+		e2[i].ecx = e[i].ecx;
+		e2[i].edx = e[i].edx;
+		e2[i].index = 0;
+		e2[i].flags = 0;
+		e2[i].padding[0] = 0;
+		e2[i].padding[1] = 0;
+		e2[i].padding[2] = 0;
 	}
-	vcpu->arch.cpuid_nent = cpuid->nent;
-	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
+
+	r = kvm_check_cpuid(e2, cpuid->nent);
 	if (r) {
-		vcpu->arch.cpuid_nent = 0;
-		kvfree(cpuid_entries);
-		goto out;
+		kvfree(e2);
+		goto out_free_cpuid;
 	}
 
+	kvfree(vcpu->arch.cpuid_entries);
+	vcpu->arch.cpuid_entries = e2;
+	vcpu->arch.cpuid_nent = cpuid->nent;
+
 	cpuid_fix_nx_cap(vcpu);
 	kvm_update_cpuid_runtime(vcpu);
 	kvm_vcpu_after_set_cpuid(vcpu);
 
-	kvfree(cpuid_entries);
-out:
+out_free_cpuid:
+	kvfree(e);
+
 	return r;
 }
 
@@ -257,26 +264,32 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
 			      struct kvm_cpuid2 *cpuid,
 			      struct kvm_cpuid_entry2 __user *entries)
 {
+	struct kvm_cpuid_entry2 *e2 = NULL;
 	int r;
 
-	r = -E2BIG;
 	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
-		goto out;
-	r = -EFAULT;
-	if (copy_from_user(&vcpu->arch.cpuid_entries, entries,
-			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
-		goto out;
-	vcpu->arch.cpuid_nent = cpuid->nent;
-	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
+		return -E2BIG;
+
+	if (cpuid->nent) {
+		e2 = vmemdup_user(entries, array_size(sizeof(*e2), cpuid->nent));
+		if (IS_ERR(e2))
+			return PTR_ERR(e2);
+	}
+
+	r = kvm_check_cpuid(e2, cpuid->nent);
 	if (r) {
-		vcpu->arch.cpuid_nent = 0;
-		goto out;
+		kvfree(e2);
+		return r;
 	}
 
+	kvfree(vcpu->arch.cpuid_entries);
+	vcpu->arch.cpuid_entries = e2;
+	vcpu->arch.cpuid_nent = cpuid->nent;
+
 	kvm_update_cpuid_runtime(vcpu);
 	kvm_vcpu_after_set_cpuid(vcpu);
-out:
-	return r;
+
+	return 0;
 }
 
 int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c4015a43cc8a..f8ed1bde18af 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9877,6 +9877,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	kvm_mmu_destroy(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	free_page((unsigned long)vcpu->arch.pio_data);
+	kvfree(vcpu->arch.cpuid_entries);
 	if (!lapic_in_kernel(vcpu))
 		static_key_slow_dec(&kvm_no_apic_vcpu);
 }
-- 
2.25.4



* [PATCH 3/3] KVM: x86: bump KVM_MAX_CPUID_ENTRIES
  2020-10-01 13:05 [PATCH 0/3] KVM: x86: allow for more CPUID entries Vitaly Kuznetsov
  2020-10-01 13:05 ` [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries Vitaly Kuznetsov
  2020-10-01 13:05 ` [PATCH 2/3] KVM: x86: allocate vcpu->arch.cpuid_entries dynamically Vitaly Kuznetsov
@ 2020-10-01 13:05 ` Vitaly Kuznetsov
  2020-10-05 10:10   ` Maxim Levitsky
  2 siblings, 1 reply; 9+ messages in thread
From: Vitaly Kuznetsov @ 2020-10-01 13:05 UTC (permalink / raw)
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

As vcpu->arch.cpuid_entries is now allocated dynamically, the only
remaining use for KVM_MAX_CPUID_ENTRIES is to check KVM_SET_CPUID/
KVM_SET_CPUID2 input for sanity. Since it was reported that the
current limit (80) is insufficient for some CPUs, bump
KVM_MAX_CPUID_ENTRIES and use an arbitrary value '256' as the new
limit.
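
With the entries array dynamic, the define only bounds what userspace
may pass in, i.e. (from the previous patch):

        if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
                return -E2BIG;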

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7d259e21ea04..f6d6df64e63a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -133,7 +133,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 #define KVM_NUM_MMU_PAGES (1 << KVM_MMU_HASH_SHIFT)
 #define KVM_MIN_FREE_MMU_PAGES 5
 #define KVM_REFILL_PAGES 25
-#define KVM_MAX_CPUID_ENTRIES 80
+#define KVM_MAX_CPUID_ENTRIES 256
 #define KVM_NR_FIXED_MTRR_REGION 88
 #define KVM_NR_VAR_MTRR 8
 
-- 
2.25.4



* Re: [PATCH 3/3] KVM: x86: bump KVM_MAX_CPUID_ENTRIES
  2020-10-01 13:05 ` [PATCH 3/3] KVM: x86: bump KVM_MAX_CPUID_ENTRIES Vitaly Kuznetsov
@ 2020-10-05 10:10   ` Maxim Levitsky
  0 siblings, 0 replies; 9+ messages in thread
From: Maxim Levitsky @ 2020-10-05 10:10 UTC (permalink / raw)
  To: Vitaly Kuznetsov, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

On Thu, 2020-10-01 at 15:05 +0200, Vitaly Kuznetsov wrote:
> As vcpu->arch.cpuid_entries is now allocated dynamically, the only
> remaining use for KVM_MAX_CPUID_ENTRIES is to check KVM_SET_CPUID/
> KVM_SET_CPUID2 input for sanity. Since it was reported that the
> current limit (80) is insufficient for some CPUs, bump
> KVM_MAX_CPUID_ENTRIES and use an arbitrary value '256' as the new
> limit.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 7d259e21ea04..f6d6df64e63a 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -133,7 +133,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
>  #define KVM_NUM_MMU_PAGES (1 << KVM_MMU_HASH_SHIFT)
>  #define KVM_MIN_FREE_MMU_PAGES 5
>  #define KVM_REFILL_PAGES 25
> -#define KVM_MAX_CPUID_ENTRIES 80
> +#define KVM_MAX_CPUID_ENTRIES 256
>  #define KVM_NR_FIXED_MTRR_REGION 88
>  #define KVM_NR_VAR_MTRR 8
>  

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries
  2020-10-01 13:05 ` [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries Vitaly Kuznetsov
@ 2020-10-05 10:10   ` Maxim Levitsky
  2020-10-05 11:51     ` Vitaly Kuznetsov
  0 siblings, 1 reply; 9+ messages in thread
From: Maxim Levitsky @ 2020-10-05 10:10 UTC (permalink / raw)
  To: Vitaly Kuznetsov, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

On Thu, 2020-10-01 at 15:05 +0200, Vitaly Kuznetsov wrote:
> As a preparatory step to allocating vcpu->arch.cpuid_entries dynamically,
> make kvm_check_cpuid() work with an arbitrary 'struct kvm_cpuid_entry2'
> array.
> 
> Currently, when kvm_check_cpuid() fails we reset vcpu->arch.cpuid_nent to
> 0 and this is kind of weird, i.e. one would expect CPUIDs to remain
> unchanged when KVM_SET_CPUID[2] call fails.
Since this specific patch doesn't fix this, maybe move this chunk to the
following patches, or to the cover letter?

> 
> No functional change intended. It would've been possible to move the
> updated kvm_check_cpuid() up in kvm_vcpu_ioctl_set_cpuid2() and check the
> supplied input before we start updating vcpu->arch.cpuid_entries/nent, but
> we can't do the same in kvm_vcpu_ioctl_set_cpuid() as we'd have to copy
> the 'struct kvm_cpuid_entry' entries first. The change will be made when
> the vcpu->arch.cpuid_entries[] array becomes allocated dynamically.
> 
> Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/kvm/cpuid.c | 38 +++++++++++++++++++++++---------------
>  1 file changed, 23 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 37c3668a774f..529348ddedc1 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -54,7 +54,24 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>  
>  #define F feature_bit
>  
> -static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
> +static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
> +	struct kvm_cpuid_entry2 *entries, int nent, u32 function, u32 index)
> +{
> +	struct kvm_cpuid_entry2 *e;
> +	int i;
> +
> +	for (i = 0; i < nent; i++) {
> +		e = &entries[i];
> +
> +		if (e->function == function && (e->index == index ||
> +		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
> +			return e;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int kvm_check_cpuid(struct kvm_cpuid_entry2 *entries, int nent)
>  {
>  	struct kvm_cpuid_entry2 *best;
>  
> @@ -62,7 +79,7 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
>  	 * The existing code assumes virtual address is 48-bit or 57-bit in the
>  	 * canonical address checks; exit if it is ever changed.
>  	 */
> -	best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
> +	best = cpuid_entry2_find(entries, nent, 0x80000008, 0);
>  	if (best) {
>  		int vaddr_bits = (best->eax & 0xff00) >> 8;
>  
> @@ -220,7 +237,7 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
>  		vcpu->arch.cpuid_entries[i].padding[2] = 0;
>  	}
>  	vcpu->arch.cpuid_nent = cpuid->nent;
> -	r = kvm_check_cpuid(vcpu);
> +	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
>  	if (r) {
>  		vcpu->arch.cpuid_nent = 0;
>  		kvfree(cpuid_entries);
> @@ -250,7 +267,7 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
>  			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
>  		goto out;
>  	vcpu->arch.cpuid_nent = cpuid->nent;
> -	r = kvm_check_cpuid(vcpu);
> +	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
>  	if (r) {
>  		vcpu->arch.cpuid_nent = 0;
>  		goto out;
> @@ -940,17 +957,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>  struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
>  					      u32 function, u32 index)
>  {
> -	struct kvm_cpuid_entry2 *e;
> -	int i;
> -
> -	for (i = 0; i < vcpu->arch.cpuid_nent; ++i) {
> -		e = &vcpu->arch.cpuid_entries[i];
> -
> -		if (e->function == function && (e->index == index ||
> -		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
> -			return e;
> -	}
> -	return NULL;
> +	return cpuid_entry2_find(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent,
> +				 function, index);
>  }
>  EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
>  

Other than the minor note on the commit message, this looks fine, so
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH 2/3] KVM: x86: allocate vcpu->arch.cpuid_entries dynamically
  2020-10-01 13:05 ` [PATCH 2/3] KVM: x86: allocate vcpu->arch.cpuid_entries dynamically Vitaly Kuznetsov
@ 2020-10-05 10:11   ` Maxim Levitsky
  0 siblings, 0 replies; 9+ messages in thread
From: Maxim Levitsky @ 2020-10-05 10:11 UTC (permalink / raw)
  To: Vitaly Kuznetsov, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

On Thu, 2020-10-01 at 15:05 +0200, Vitaly Kuznetsov wrote:
> The current limit for guest CPUID leaves (KVM_MAX_CPUID_ENTRIES, 80)
> is reported to be insufficient, but before we bump it let's switch to
> allocating the vcpu->arch.cpuid_entries[] array dynamically. Currently,
> 'struct kvm_cpuid_entry2' is 40 bytes, so vcpu->arch.cpuid_entries is
> 3200 bytes, which accounts for 1/4 of the whole 'struct kvm_vcpu_arch',
> yet having it pre-allocated (for all vCPUs, which we also pre-allocate)
> gives us no real benefit.
> 
> Another plus of the dynamic allocation is that we now do the
> kvm_check_cpuid() check before we assign anything to
> vcpu->arch.cpuid_nent/cpuid_entries, so no changes are made if the check
> fails.
> 
> Opportunistically remove unneeded 'out' labels from
> kvm_vcpu_ioctl_set_cpuid()/kvm_vcpu_ioctl_set_cpuid2() and return
> directly whenever possible.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 +-
>  arch/x86/kvm/cpuid.c            | 89 +++++++++++++++++++--------------
>  arch/x86/kvm/x86.c              |  1 +
>  3 files changed, 53 insertions(+), 39 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d0f77235da92..7d259e21ea04 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -637,7 +637,7 @@ struct kvm_vcpu_arch {
>  	int halt_request; /* real mode on Intel only */
>  
>  	int cpuid_nent;
> -	struct kvm_cpuid_entry2 cpuid_entries[KVM_MAX_CPUID_ENTRIES];
> +	struct kvm_cpuid_entry2 *cpuid_entries;
>  
>  	int maxphyaddr;
>  	int max_tdp_level;
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 529348ddedc1..3fe20c4da0a6 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -210,46 +210,53 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
>  			     struct kvm_cpuid_entry __user *entries)
>  {
>  	int r, i;
> -	struct kvm_cpuid_entry *cpuid_entries = NULL;
> +	struct kvm_cpuid_entry *e = NULL;
> +	struct kvm_cpuid_entry2 *e2 = NULL;
>  
> -	r = -E2BIG;
>  	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
> -		goto out;
> +		return -E2BIG;
> +
>  	if (cpuid->nent) {
> -		cpuid_entries = vmemdup_user(entries,
> -					     array_size(sizeof(struct kvm_cpuid_entry),
> -							cpuid->nent));
> -		if (IS_ERR(cpuid_entries)) {
> -			r = PTR_ERR(cpuid_entries);
> -			goto out;
> +		e = vmemdup_user(entries, array_size(sizeof(*e), cpuid->nent));
> +		if (IS_ERR(e))
> +			return PTR_ERR(e);
> +
> +		e2 = kvmalloc_array(cpuid->nent, sizeof(*e2), GFP_KERNEL_ACCOUNT);
> +		if (!e2) {
> +			r = -ENOMEM;
> +			goto out_free_cpuid;
>  		}
>  	}
>  	for (i = 0; i < cpuid->nent; i++) {
> -		vcpu->arch.cpuid_entries[i].function = cpuid_entries[i].function;
> -		vcpu->arch.cpuid_entries[i].eax = cpuid_entries[i].eax;
> -		vcpu->arch.cpuid_entries[i].ebx = cpuid_entries[i].ebx;
> -		vcpu->arch.cpuid_entries[i].ecx = cpuid_entries[i].ecx;
> -		vcpu->arch.cpuid_entries[i].edx = cpuid_entries[i].edx;
> -		vcpu->arch.cpuid_entries[i].index = 0;
> -		vcpu->arch.cpuid_entries[i].flags = 0;
> -		vcpu->arch.cpuid_entries[i].padding[0] = 0;
> -		vcpu->arch.cpuid_entries[i].padding[1] = 0;
> -		vcpu->arch.cpuid_entries[i].padding[2] = 0;
> +		e2[i].function = e[i].function;
> +		e2[i].eax = e[i].eax;
> +		e2[i].ebx = e[i].ebx;
> +		e2[i].ecx = e[i].ecx;
> +		e2[i].edx = e[i].edx;
> +		e2[i].index = 0;
> +		e2[i].flags = 0;
> +		e2[i].padding[0] = 0;
> +		e2[i].padding[1] = 0;
> +		e2[i].padding[2] = 0;
>  	}
> -	vcpu->arch.cpuid_nent = cpuid->nent;
> -	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
> +
> +	r = kvm_check_cpuid(e2, cpuid->nent);
>  	if (r) {
> -		vcpu->arch.cpuid_nent = 0;
> -		kvfree(cpuid_entries);
> -		goto out;
> +		kvfree(e2);
> +		goto out_free_cpuid;
>  	}
>  
> +	kvfree(vcpu->arch.cpuid_entries);
> +	vcpu->arch.cpuid_entries = e2;
> +	vcpu->arch.cpuid_nent = cpuid->nent;
> +
>  	cpuid_fix_nx_cap(vcpu);
>  	kvm_update_cpuid_runtime(vcpu);
>  	kvm_vcpu_after_set_cpuid(vcpu);
>  
> -	kvfree(cpuid_entries);
> -out:
> +out_free_cpuid:
> +	kvfree(e);
> +
>  	return r;
>  }
>  
> @@ -257,26 +264,32 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
>  			      struct kvm_cpuid2 *cpuid,
>  			      struct kvm_cpuid_entry2 __user *entries)
>  {
> +	struct kvm_cpuid_entry2 *e2 = NULL;
>  	int r;
>  
> -	r = -E2BIG;
>  	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
> -		goto out;
> -	r = -EFAULT;
> -	if (copy_from_user(&vcpu->arch.cpuid_entries, entries,
> -			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
> -		goto out;
> -	vcpu->arch.cpuid_nent = cpuid->nent;
> -	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
> +		return -E2BIG;
> +
> +	if (cpuid->nent) {
> +		e2 = vmemdup_user(entries, array_size(sizeof(*e2), cpuid->nent));
> +		if (IS_ERR(e2))
> +			return PTR_ERR(e2);
> +	}
> +
> +	r = kvm_check_cpuid(e2, cpuid->nent);
>  	if (r) {
> -		vcpu->arch.cpuid_nent = 0;
> -		goto out;
> +		kvfree(e2);
> +		return r;
>  	}
>  
> +	kvfree(vcpu->arch.cpuid_entries);
> +	vcpu->arch.cpuid_entries = e2;
> +	vcpu->arch.cpuid_nent = cpuid->nent;
> +
>  	kvm_update_cpuid_runtime(vcpu);
>  	kvm_vcpu_after_set_cpuid(vcpu);
> -out:
> -	return r;
> +
> +	return 0;
>  }
>  
>  int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c4015a43cc8a..f8ed1bde18af 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9877,6 +9877,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>  	kvm_mmu_destroy(vcpu);
>  	srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  	free_page((unsigned long)vcpu->arch.pio_data);
> +	kvfree(vcpu->arch.cpuid_entries);
>  	if (!lapic_in_kernel(vcpu))
>  		static_key_slow_dec(&kvm_no_apic_vcpu);
>  }

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries
  2020-10-05 10:10   ` Maxim Levitsky
@ 2020-10-05 11:51     ` Vitaly Kuznetsov
  2020-10-05 13:06       ` Maxim Levitsky
  0 siblings, 1 reply; 9+ messages in thread
From: Vitaly Kuznetsov @ 2020-10-05 11:51 UTC (permalink / raw)
  To: Maxim Levitsky, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

Maxim Levitsky <mlevitsk@redhat.com> writes:

> On Thu, 2020-10-01 at 15:05 +0200, Vitaly Kuznetsov wrote:
>> As a preparatory step to allocating vcpu->arch.cpuid_entries dynamically,
>> make kvm_check_cpuid() work with an arbitrary 'struct kvm_cpuid_entry2'
>> array.
>> 
>> Currently, when kvm_check_cpuid() fails we reset vcpu->arch.cpuid_nent to
>> 0 and this is kind of weird, i.e. one would expect CPUIDs to remain
>> unchanged when KVM_SET_CPUID[2] call fails.
> Since this specific patch doesn't fix this, maybe move this chunk to the
> following patches, or to the cover letter?

Basically, this kind of pairs with what's after 'No functional change
intended' below: we admit the problem but don't fix it because we can't
(yet), and then in PATCH2 we do two things at once. It would be great to
separate these two changes, but that doesn't seem to be possible without
unneeded code churn.

That said, I'm completely fine with dropping this chunk from the commit
message if it sounds inappropriate here.

>
>> 
>> No functional change intended. It would've been possible to move the
>> updated kvm_check_cpuid() up in kvm_vcpu_ioctl_set_cpuid2() and check the
>> supplied input before we start updating vcpu->arch.cpuid_entries/nent, but
>> we can't do the same in kvm_vcpu_ioctl_set_cpuid() as we'd have to copy
>> the 'struct kvm_cpuid_entry' entries first. The change will be made when
>> the vcpu->arch.cpuid_entries[] array becomes allocated dynamically.
>> 
>> Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  arch/x86/kvm/cpuid.c | 38 +++++++++++++++++++++++---------------
>>  1 file changed, 23 insertions(+), 15 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>> index 37c3668a774f..529348ddedc1 100644
>> --- a/arch/x86/kvm/cpuid.c
>> +++ b/arch/x86/kvm/cpuid.c
>> @@ -54,7 +54,24 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>>  
>>  #define F feature_bit
>>  
>> -static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
>> +static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
>> +	struct kvm_cpuid_entry2 *entries, int nent, u32 function, u32 index)
>> +{
>> +	struct kvm_cpuid_entry2 *e;
>> +	int i;
>> +
>> +	for (i = 0; i < nent; i++) {
>> +		e = &entries[i];
>> +
>> +		if (e->function == function && (e->index == index ||
>> +		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
>> +			return e;
>> +	}
>> +
>> +	return NULL;
>> +}
>> +
>> +static int kvm_check_cpuid(struct kvm_cpuid_entry2 *entries, int nent)
>>  {
>>  	struct kvm_cpuid_entry2 *best;
>>  
>> @@ -62,7 +79,7 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
>>  	 * The existing code assumes virtual address is 48-bit or 57-bit in the
>>  	 * canonical address checks; exit if it is ever changed.
>>  	 */
>> -	best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
>> +	best = cpuid_entry2_find(entries, nent, 0x80000008, 0);
>>  	if (best) {
>>  		int vaddr_bits = (best->eax & 0xff00) >> 8;
>>  
>> @@ -220,7 +237,7 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
>>  		vcpu->arch.cpuid_entries[i].padding[2] = 0;
>>  	}
>>  	vcpu->arch.cpuid_nent = cpuid->nent;
>> -	r = kvm_check_cpuid(vcpu);
>> +	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
>>  	if (r) {
>>  		vcpu->arch.cpuid_nent = 0;
>>  		kvfree(cpuid_entries);
>> @@ -250,7 +267,7 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
>>  			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
>>  		goto out;
>>  	vcpu->arch.cpuid_nent = cpuid->nent;
>> -	r = kvm_check_cpuid(vcpu);
>> +	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
>>  	if (r) {
>>  		vcpu->arch.cpuid_nent = 0;
>>  		goto out;
>> @@ -940,17 +957,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
>>  struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
>>  					      u32 function, u32 index)
>>  {
>> -	struct kvm_cpuid_entry2 *e;
>> -	int i;
>> -
>> -	for (i = 0; i < vcpu->arch.cpuid_nent; ++i) {
>> -		e = &vcpu->arch.cpuid_entries[i];
>> -
>> -		if (e->function == function && (e->index == index ||
>> -		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
>> -			return e;
>> -	}
>> -	return NULL;
>> +	return cpuid_entry2_find(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent,
>> +				 function, index);
>>  }
>>  EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
>>  
>
> Other than the minor note on the commit message, this looks fine, so
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
>

Thanks!

-- 
Vitaly



* Re: [PATCH 1/3] KVM: x86: disconnect kvm_check_cpuid() from vcpu->arch.cpuid_entries
  2020-10-05 11:51     ` Vitaly Kuznetsov
@ 2020-10-05 13:06       ` Maxim Levitsky
  0 siblings, 0 replies; 9+ messages in thread
From: Maxim Levitsky @ 2020-10-05 13:06 UTC (permalink / raw)
  To: Vitaly Kuznetsov, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson,
	Dr . David Alan Gilbert, Wei Huang, linux-kernel

On Mon, 2020-10-05 at 13:51 +0200, Vitaly Kuznetsov wrote:
> Maxim Levitsky <mlevitsk@redhat.com> writes:
> 
> > On Thu, 2020-10-01 at 15:05 +0200, Vitaly Kuznetsov wrote:
> > > As a preparatory step to allocating vcpu->arch.cpuid_entries dynamically,
> > > make kvm_check_cpuid() work with an arbitrary 'struct kvm_cpuid_entry2'
> > > array.
> > > 
> > > Currently, when kvm_check_cpuid() fails we reset vcpu->arch.cpuid_nent to
> > > 0 and this is kind of weird, i.e. one would expect CPUIDs to remain
> > > unchanged when KVM_SET_CPUID[2] call fails.
> > Since this specific patch doesn't fix this, maybe move this chunk to the
> > following patches, or to the cover letter?
> 
> Basically, this kind of pairs with what's after 'No functional change
> intended' below: we admit the problem but don't fix it because we can't
> (yet), and then in PATCH2 we do two things at once. It would be great to
> separate these two changes, but that doesn't seem to be possible without
> unneeded code churn.
> 
> That said, I'm completely fine with dropping this chunk from the commit
> message if it sounds inappropriate here.
It just threw me a bit off course while trying to understand what the patch does.

Best regards,
	Maxim Levitsky

> 
> > > No functional change intended. It would've been possible to move the
> > > updated kvm_check_cpuid() up in kvm_vcpu_ioctl_set_cpuid2() and check the
> > > supplied input before we start updating vcpu->arch.cpuid_entries/nent, but
> > > we can't do the same in kvm_vcpu_ioctl_set_cpuid() as we'd have to copy
> > > the 'struct kvm_cpuid_entry' entries first. The change will be made when
> > > the vcpu->arch.cpuid_entries[] array becomes allocated dynamically.
> > > 
> > > Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> > > ---
> > >  arch/x86/kvm/cpuid.c | 38 +++++++++++++++++++++++---------------
> > >  1 file changed, 23 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > > index 37c3668a774f..529348ddedc1 100644
> > > --- a/arch/x86/kvm/cpuid.c
> > > +++ b/arch/x86/kvm/cpuid.c
> > > @@ -54,7 +54,24 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
> > >  
> > >  #define F feature_bit
> > >  
> > > -static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
> > > +static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
> > > +	struct kvm_cpuid_entry2 *entries, int nent, u32 function, u32 index)
> > > +{
> > > +	struct kvm_cpuid_entry2 *e;
> > > +	int i;
> > > +
> > > +	for (i = 0; i < nent; i++) {
> > > +		e = &entries[i];
> > > +
> > > +		if (e->function == function && (e->index == index ||
> > > +		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
> > > +			return e;
> > > +	}
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > > +static int kvm_check_cpuid(struct kvm_cpuid_entry2 *entries, int nent)
> > >  {
> > >  	struct kvm_cpuid_entry2 *best;
> > >  
> > > @@ -62,7 +79,7 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu)
> > >  	 * The existing code assumes virtual address is 48-bit or 57-bit in the
> > >  	 * canonical address checks; exit if it is ever changed.
> > >  	 */
> > > -	best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
> > > +	best = cpuid_entry2_find(entries, nent, 0x80000008, 0);
> > >  	if (best) {
> > >  		int vaddr_bits = (best->eax & 0xff00) >> 8;
> > >  
> > > @@ -220,7 +237,7 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
> > >  		vcpu->arch.cpuid_entries[i].padding[2] = 0;
> > >  	}
> > >  	vcpu->arch.cpuid_nent = cpuid->nent;
> > > -	r = kvm_check_cpuid(vcpu);
> > > +	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
> > >  	if (r) {
> > >  		vcpu->arch.cpuid_nent = 0;
> > >  		kvfree(cpuid_entries);
> > > @@ -250,7 +267,7 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
> > >  			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
> > >  		goto out;
> > >  	vcpu->arch.cpuid_nent = cpuid->nent;
> > > -	r = kvm_check_cpuid(vcpu);
> > > +	r = kvm_check_cpuid(vcpu->arch.cpuid_entries, cpuid->nent);
> > >  	if (r) {
> > >  		vcpu->arch.cpuid_nent = 0;
> > >  		goto out;
> > > @@ -940,17 +957,8 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
> > >  struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
> > >  					      u32 function, u32 index)
> > >  {
> > > -	struct kvm_cpuid_entry2 *e;
> > > -	int i;
> > > -
> > > -	for (i = 0; i < vcpu->arch.cpuid_nent; ++i) {
> > > -		e = &vcpu->arch.cpuid_entries[i];
> > > -
> > > -		if (e->function == function && (e->index == index ||
> > > -		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
> > > -			return e;
> > > -	}
> > > -	return NULL;
> > > +	return cpuid_entry2_find(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent,
> > > +				 function, index);
> > >  }
> > >  EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
> > >  
> > 
> > Other than the minor note on the commit message, this looks fine, so
> > Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
> > 
> 
> Thanks!
> 



