* [PATCH v2 0/6] x86/kvm: add boot parameters for max vcpu configs
@ 2021-09-03 13:08 Juergen Gross
  2021-09-03 13:08 ` [PATCH v2 4/6] kvm: use kvfree() in kvm_arch_free_vm() Juergen Gross
  2021-09-03 13:08 ` [PATCH v2 5/6] kvm: allocate vcpu pointer array separately Juergen Gross
  0 siblings, 2 replies; 8+ messages in thread
From: Juergen Gross @ 2021-09-03 13:08 UTC (permalink / raw)
  To: kvm, x86, linux-kernel, linux-doc, linux-arm-kernel
  Cc: Juergen Gross, Wanpeng Li, ehabkost, Will Deacon, Jonathan Corbet,
	maz, Joerg Roedel, H. Peter Anvin, kvmarm, Catalin Marinas,
	Ingo Molnar, Borislav Petkov, Sean Christopherson, Paolo Bonzini,
	Vitaly Kuznetsov, Thomas Gleixner, Jim Mattson

In order to support even huge numbers of vcpus per guest with a single
kernel, some arrays should be sized dynamically. The easiest way to do
that is to add boot parameters for the maximum number of vcpus and to
calculate the maximum vcpu-id from that, using either the host topology
or a topology hint via another boot parameter.

This patch series does that for x86. The same scheme can easily be
adapted to other architectures, but I don't want to do that in the
first iteration.

In the long term I'd suggest a per-guest setting of the two parameters,
allowing some memory to be spared for smaller guests. OTOH this would
require new ioctl()s and respective qemu modifications, so I have left
those out for now.

I've tested that the series doesn't break normal guest operation and
that the new parameters are effective on x86. For Arm64 I did a compile
test only.
Changes in V2:
- removed old patch 1, as already applied
- patch 1 (old patch 2) only for reference, as the patch is already in
  the kvm tree
- switched patch 2 (old patch 3) to calculate the vcpu-id
- added patch 4

Juergen Gross (6):
  x86/kvm: remove non-x86 stuff from arch/x86/kvm/ioapic.h
  x86/kvm: add boot parameter for adding vcpu-id bits
  x86/kvm: introduce per cpu vcpu masks
  kvm: use kvfree() in kvm_arch_free_vm()
  kvm: allocate vcpu pointer array separately
  x86/kvm: add boot parameter for setting max number of vcpus per guest

 .../admin-guide/kernel-parameters.txt |  25 ++++++
 arch/arm64/include/asm/kvm_host.h     |   1 -
 arch/arm64/kvm/arm.c                  |  23 ++++--
 arch/x86/include/asm/kvm_host.h       |  26 +++++--
 arch/x86/kvm/hyperv.c                 |  25 ++++--
 arch/x86/kvm/ioapic.c                 |  12 ++-
 arch/x86/kvm/ioapic.h                 |   8 +-
 arch/x86/kvm/irq_comm.c               |   9 ++-
 arch/x86/kvm/x86.c                    |  78 ++++++++++++++++++-
 include/linux/kvm_host.h              |  26 ++++++-
 10 files changed, 198 insertions(+), 35 deletions(-)

-- 
2.26.2

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
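The cover letter's sizing scheme — pick a maximum vcpu count at boot, then derive the maximum vcpu-id from it plus extra topology bits — can be sketched in plain C. This is an illustrative userspace sketch only; the helper names (`bits_for`, `max_vcpu_id`) and the exact derivation are assumptions, not code from the series:

```c
/* Hypothetical helpers, not from the series: given the requested
 * maximum number of vcpus and extra topology bits (e.g. bits that
 * encode cores/threads in the APIC ID), compute the vcpu-id space.
 */
static inline unsigned int bits_for(unsigned int n)
{
	unsigned int bits = 0;

	/* smallest number of bits able to represent ids 0..n-1 */
	while ((1u << bits) < n)
		bits++;
	return bits;
}

static inline unsigned int max_vcpu_id(unsigned int max_vcpus,
				       unsigned int topo_extra_bits)
{
	/* vcpu-ids become sparse when the topology adds extra bits,
	 * so the id space can be larger than the vcpu count */
	return 1u << (bits_for(max_vcpus) + topo_extra_bits);
}
```

With this shape, a boot-time `max_vcpus=1024` plus two topology bits would yield a 4096-entry vcpu-id space, which is why the series sizes the id-indexed arrays from the derived value rather than from the vcpu count.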
* [PATCH v2 4/6] kvm: use kvfree() in kvm_arch_free_vm()
  2021-09-03 13:08 [PATCH v2 0/6] x86/kvm: add boot parameters for max vcpu configs Juergen Gross
@ 2021-09-03 13:08 ` Juergen Gross
  2021-09-28 16:48   ` Paolo Bonzini
  2021-09-03 13:08 ` [PATCH v2 5/6] kvm: allocate vcpu pointer array separately Juergen Gross
  1 sibling, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2021-09-03 13:08 UTC (permalink / raw)
  To: kvm, x86, linux-arm-kernel, linux-kernel
  Cc: Juergen Gross, Wanpeng Li, ehabkost, Will Deacon, maz,
	Joerg Roedel, Sean Christopherson, H. Peter Anvin, kvmarm,
	Ingo Molnar, Catalin Marinas, Borislav Petkov, Paolo Bonzini,
	Vitaly Kuznetsov, Thomas Gleixner, Jim Mattson

By switching from kfree() to kvfree() in kvm_arch_free_vm() Arm64 can
use the common variant. This can be accomplished by adding another
macro __KVM_HAVE_ARCH_VM_FREE, which will be used only by x86 for now.

Further simplification can be achieved by adding __kvm_arch_free_vm()
doing the common part.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/arm64/include/asm/kvm_host.h | 1 -
 arch/arm64/kvm/arm.c              | 8 --------
 arch/x86/include/asm/kvm_host.h   | 2 ++
 arch/x86/kvm/x86.c                | 2 +-
 include/linux/kvm_host.h          | 9 ++++++++-
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..39601fb87e69 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -767,7 +767,6 @@ int kvm_set_ipa_limit(void);
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
-void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 0ca72f5cda41..38fff5963d9f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -299,14 +299,6 @@ struct kvm *kvm_arch_alloc_vm(void)
 	return vzalloc(sizeof(struct kvm));
 }
 
-void kvm_arch_free_vm(struct kvm *kvm)
-{
-	if (!has_vhe())
-		kfree(kvm);
-	else
-		vfree(kvm);
-}
-
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a809a9e4fa5c..f16fadfc030a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1521,6 +1521,8 @@ static inline struct kvm *kvm_arch_alloc_vm(void)
 {
 	return __vmalloc(kvm_x86_ops.vm_size, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 }
+
+#define __KVM_HAVE_ARCH_VM_FREE
 void kvm_arch_free_vm(struct kvm *kvm);
 
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd19b72a5733..cc552763f0e4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11129,7 +11129,7 @@ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
 void kvm_arch_free_vm(struct kvm *kvm)
 {
 	kfree(to_kvm_hv(kvm)->hv_pa_pg);
-	vfree(kvm);
+	__kvm_arch_free_vm(kvm);
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae7735b490b4..d75e9c2a00b1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1044,10 +1044,17 @@ static inline struct kvm *kvm_arch_alloc_vm(void)
 {
 	return kzalloc(sizeof(struct kvm), GFP_KERNEL);
 }
+#endif
+
+static inline void __kvm_arch_free_vm(struct kvm *kvm)
+{
+	kvfree(kvm);
+}
 
+#ifndef __KVM_HAVE_ARCH_VM_FREE
 static inline void kvm_arch_free_vm(struct kvm *kvm)
 {
-	kfree(kvm);
+	__kvm_arch_free_vm(kvm);
 }
 #endif
-- 
2.26.2
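The reason the switch to kvfree() lets arm64 drop its has_vhe() branch on the free side is that kvfree() accepts memory obtained from either kmalloc() or vmalloc() and dispatches internally (the kernel checks the address with is_vmalloc_addr()). A userspace analogy of that pattern — purely illustrative, with a stored tag standing in for the address check:

```c
#include <stdlib.h>

/* Two "allocators" (both backed by malloc here) whose blocks can be
 * handed to one common free routine, mirroring how kvfree() spares
 * callers like kvm_arch_free_vm() an allocator-specific branch. */
enum alloc_kind { KIND_SMALL, KIND_LARGE };

struct header { enum alloc_kind kind; };

static void *alloc_any(size_t size, enum alloc_kind kind)
{
	struct header *h = calloc(1, sizeof(*h) + size);

	if (!h)
		return NULL;
	h->kind = kind;	/* remember which allocator was used */
	return h + 1;	/* caller sees only the payload */
}

static enum alloc_kind kind_of(void *p)
{
	return ((struct header *)p - 1)->kind;
}

static void free_any(void *p)
{
	if (!p)
		return;	/* like kvfree(), NULL is a no-op */
	/* a real kvfree() dispatches on the address itself; the tag
	 * here is just a userspace stand-in for that decision */
	free((struct header *)p - 1);
}
```

The design point is that the free path no longer needs to re-derive how the object was allocated, which is exactly what removes the duplicated `if (!has_vhe())` logic from the arm64 code above.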
* Re: [PATCH v2 4/6] kvm: use kvfree() in kvm_arch_free_vm()
  2021-09-03 13:08 ` [PATCH v2 4/6] kvm: use kvfree() in kvm_arch_free_vm() Juergen Gross
@ 2021-09-28 16:48   ` Paolo Bonzini
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2021-09-28 16:48 UTC (permalink / raw)
  To: Juergen Gross, kvm, x86, linux-arm-kernel, linux-kernel
  Cc: Wanpeng Li, ehabkost, Will Deacon, maz, Joerg Roedel,
	H. Peter Anvin, kvmarm, Ingo Molnar, Catalin Marinas,
	Borislav Petkov, Vitaly Kuznetsov, Thomas Gleixner, Jim Mattson

On 03/09/21 15:08, Juergen Gross wrote:
> By switching from kfree() to kvfree() in kvm_arch_free_vm() Arm64 can
> use the common variant. This can be accomplished by adding another
> macro __KVM_HAVE_ARCH_VM_FREE, which will be used only by x86 for now.
> 
> Further simplification can be achieved by adding __kvm_arch_free_vm()
> doing the common part.
> 
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Queued this one already, thanks.

Paolo
* [PATCH v2 5/6] kvm: allocate vcpu pointer array separately
  2021-09-03 13:08 [PATCH v2 0/6] x86/kvm: add boot parameters for max vcpu configs Juergen Gross
  2021-09-03 13:08 ` [PATCH v2 4/6] kvm: use kvfree() in kvm_arch_free_vm() Juergen Gross
@ 2021-09-03 13:08 ` Juergen Gross
  2021-09-03 14:41   ` Marc Zyngier
  1 sibling, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2021-09-03 13:08 UTC (permalink / raw)
  To: kvm, x86, linux-arm-kernel, linux-kernel
  Cc: Juergen Gross, Wanpeng Li, ehabkost, Will Deacon, maz,
	Joerg Roedel, Sean Christopherson, H. Peter Anvin, kvmarm,
	Ingo Molnar, Catalin Marinas, Borislav Petkov, Paolo Bonzini,
	Vitaly Kuznetsov, Thomas Gleixner, Jim Mattson

Prepare support of very large vcpu numbers per guest by moving the
vcpu pointer array out of struct kvm.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- rebase to new kvm_arch_free_vm() implementation
---
 arch/arm64/kvm/arm.c            | 21 +++++++++++++++++++--
 arch/x86/include/asm/kvm_host.h |  5 +----
 arch/x86/kvm/x86.c              | 18 ++++++++++++++++++
 include/linux/kvm_host.h        | 17 +++++++++++++++--
 4 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 38fff5963d9f..8bb5caeba007 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -293,10 +293,27 @@ long kvm_arch_dev_ioctl(struct file *filp,
 
 struct kvm *kvm_arch_alloc_vm(void)
 {
+	struct kvm *kvm;
+
+	if (!has_vhe())
+		kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
+	else
+		kvm = vzalloc(sizeof(struct kvm));
+
+	if (!kvm)
+		return NULL;
+
 	if (!has_vhe())
-		return kzalloc(sizeof(struct kvm), GFP_KERNEL);
+		kvm->vcpus = kcalloc(KVM_MAX_VCPUS, sizeof(void *), GFP_KERNEL);
+	else
+		kvm->vcpus = vzalloc(KVM_MAX_VCPUS * sizeof(void *));
+
+	if (!kvm->vcpus) {
+		kvm_arch_free_vm(kvm);
+		kvm = NULL;
+	}
 
-	return vzalloc(sizeof(struct kvm));
+	return kvm;
 }
 
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f16fadfc030a..6c28d0800208 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1517,10 +1517,7 @@ static inline void kvm_ops_static_call_update(void)
 }
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
-static inline struct kvm *kvm_arch_alloc_vm(void)
-{
-	return __vmalloc(kvm_x86_ops.vm_size, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
-}
+struct kvm *kvm_arch_alloc_vm(void);
 
 #define __KVM_HAVE_ARCH_VM_FREE
 void kvm_arch_free_vm(struct kvm *kvm);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc552763f0e4..ff142b6dd00c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11126,6 +11126,24 @@ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
 	static_call(kvm_x86_sched_in)(vcpu, cpu);
 }
 
+struct kvm *kvm_arch_alloc_vm(void)
+{
+	struct kvm *kvm;
+
+	kvm = __vmalloc(kvm_x86_ops.vm_size, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	if (!kvm)
+		return NULL;
+
+	kvm->vcpus = __vmalloc(KVM_MAX_VCPUS * sizeof(void *),
+			       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	if (!kvm->vcpus) {
+		vfree(kvm);
+		kvm = NULL;
+	}
+
+	return kvm;
+}
+
 void kvm_arch_free_vm(struct kvm *kvm)
 {
 	kfree(to_kvm_hv(kvm)->hv_pa_pg);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d75e9c2a00b1..9e2a5f1c6f54 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -536,7 +536,7 @@ struct kvm {
 	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
-	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+	struct kvm_vcpu **vcpus;
 
 	/*
 	 * created_vcpus is protected by kvm->lock, and is incremented
@@ -1042,12 +1042,25 @@ void kvm_arch_pre_destroy_vm(struct kvm *kvm);
 */
 static inline struct kvm *kvm_arch_alloc_vm(void)
 {
-	return kzalloc(sizeof(struct kvm), GFP_KERNEL);
+	struct kvm *kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
+
+	if (!kvm)
+		return NULL;
+
+	kvm->vcpus = kcalloc(KVM_MAX_VCPUS, sizeof(void *), GFP_KERNEL);
+	if (!kvm->vcpus) {
+		kfree(kvm);
+		kvm = NULL;
+	}
+
+	return kvm;
 }
 #endif
 
 static inline void __kvm_arch_free_vm(struct kvm *kvm)
 {
+	if (kvm)
+		kvfree(kvm->vcpus);
 	kvfree(kvm);
 }
-- 
2.26.2
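All three kvm_arch_alloc_vm() variants in the patch above follow the same shape: allocate the container, then the separately-sized vcpu pointer array, and unwind on failure through a free helper that tolerates a NULL array. That pattern reduces to the following userspace sketch (illustrative names, not kernel code):

```c
#include <stdlib.h>

#define MAX_VCPUS 16	/* stand-in for KVM_MAX_VCPUS */

struct vm {
	void **vcpus;	/* separately allocated pointer array */
};

static void vm_free(struct vm *vm)
{
	if (!vm)
		return;
	free(vm->vcpus);	/* safe even if the array alloc failed (NULL) */
	free(vm);
}

static struct vm *vm_alloc(void)
{
	struct vm *vm = calloc(1, sizeof(*vm));

	if (!vm)
		return NULL;

	vm->vcpus = calloc(MAX_VCPUS, sizeof(void *));
	if (!vm->vcpus) {
		vm_free(vm);	/* mirrors calling the common free on the error path */
		return NULL;
	}
	return vm;
}
```

Routing the error path through the same free helper as the normal teardown is what lets the patch reuse `kvm_arch_free_vm()`/`__kvm_arch_free_vm()` instead of open-coding the unwind in each allocator.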
* Re: [PATCH v2 5/6] kvm: allocate vcpu pointer array separately
  2021-09-03 13:08 ` [PATCH v2 5/6] kvm: allocate vcpu pointer array separately Juergen Gross
@ 2021-09-03 14:41   ` Marc Zyngier
  2021-09-06  4:33     ` Juergen Gross
  0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2021-09-03 14:41 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Wanpeng Li, ehabkost, kvm, Catalin Marinas, Joerg Roedel, x86,
	H. Peter Anvin, linux-kernel, kvmarm, Will Deacon, Ingo Molnar,
	Sean Christopherson, Borislav Petkov, Paolo Bonzini,
	Vitaly Kuznetsov, Thomas Gleixner, linux-arm-kernel, Jim Mattson

On Fri, 03 Sep 2021 14:08:06 +0100,
Juergen Gross <jgross@suse.com> wrote:
> 
> Prepare support of very large vcpu numbers per guest by moving the
> vcpu pointer array out of struct kvm.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> [...]
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d75e9c2a00b1..9e2a5f1c6f54 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -536,7 +536,7 @@ struct kvm {
>  	struct mutex slots_arch_lock;
>  	struct mm_struct *mm; /* userspace tied to this vm */
>  	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
> -	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
> +	struct kvm_vcpu **vcpus;

At this stage, I really wonder why we are not using an xarray instead.

I wrote this [1] a while ago, and nothing caught fire. It was also a
net deletion of code...

	M.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/vcpu-xarray

-- 
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 5/6] kvm: allocate vcpu pointer array separately
  2021-09-03 14:41   ` Marc Zyngier
@ 2021-09-06  4:33     ` Juergen Gross
  2021-09-06  9:46       ` Marc Zyngier
  0 siblings, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2021-09-06 4:33 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Wanpeng Li, ehabkost, kvm, Catalin Marinas, Joerg Roedel, x86,
	H. Peter Anvin, linux-kernel, kvmarm, Will Deacon, Ingo Molnar,
	Sean Christopherson, Borislav Petkov, Paolo Bonzini,
	Vitaly Kuznetsov, Thomas Gleixner, linux-arm-kernel, Jim Mattson

On 03.09.21 16:41, Marc Zyngier wrote:
> On Fri, 03 Sep 2021 14:08:06 +0100,
> Juergen Gross <jgross@suse.com> wrote:
>>
>> Prepare support of very large vcpu numbers per guest by moving the
>> vcpu pointer array out of struct kvm.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> [...]
> 
> At this stage, I really wonder why we are not using an xarray instead.
> 
> I wrote this [1] a while ago, and nothing caught fire. It was also a
> net deletion of code...

Indeed, I'd prefer that solution!

Are you fine with me swapping my patch with yours in the series?


Juergen
* Re: [PATCH v2 5/6] kvm: allocate vcpu pointer array separately
  2021-09-06  4:33     ` Juergen Gross
@ 2021-09-06  9:46       ` Marc Zyngier
  2021-09-09 20:28         ` Sean Christopherson
  0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2021-09-06 9:46 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Wanpeng Li, ehabkost, kvm, Catalin Marinas, Joerg Roedel, x86,
	H. Peter Anvin, linux-kernel, kvmarm, Will Deacon, Ingo Molnar,
	Sean Christopherson, Borislav Petkov, Paolo Bonzini,
	Vitaly Kuznetsov, Thomas Gleixner, linux-arm-kernel, Jim Mattson

On Mon, 06 Sep 2021 05:33:35 +0100,
Juergen Gross <jgross@suse.com> wrote:
> 
> On 03.09.21 16:41, Marc Zyngier wrote:
> 
> > At this stage, I really wonder why we are not using an xarray instead.
> > 
> > I wrote this [1] a while ago, and nothing caught fire. It was also a
> > net deletion of code...
> 
> Indeed, I'd prefer that solution!
> 
> Are you fine with me swapping my patch with yours in the series?

Of course, feel free to grab the whole series. You'll probably need
the initial patches to set the scene for this. On their own, they are
a nice cleanup, and I trust you can write a decent commit message for
the three patches affecting mips/s390/x86.

I would normally do that myself, but my network connectivity is
reduced to almost nothing at the moment.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 5/6] kvm: allocate vcpu pointer array separately
  2021-09-06  9:46       ` Marc Zyngier
@ 2021-09-09 20:28         ` Sean Christopherson
  0 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2021-09-09 20:28 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Juergen Gross, Wanpeng Li, ehabkost, kvm, Catalin Marinas,
	Joerg Roedel, x86, linux-kernel, kvmarm, Will Deacon, Ingo Molnar,
	H. Peter Anvin, Borislav Petkov, Paolo Bonzini, Vitaly Kuznetsov,
	Thomas Gleixner, linux-arm-kernel, Jim Mattson

On Mon, Sep 06, 2021, Marc Zyngier wrote:
> On Mon, 06 Sep 2021 05:33:35 +0100,
> Juergen Gross <jgross@suse.com> wrote:
> > 
> > On 03.09.21 16:41, Marc Zyngier wrote:
> > 
> > > At this stage, I really wonder why we are not using an xarray instead.
> > > 
> > > I wrote this [1] a while ago, and nothing caught fire. It was also a
> > > net deletion of code...
> > 
> > Indeed, I'd prefer that solution!
> > 
> > Are you fine with me swapping my patch with yours in the series?
> 
> Of course, feel free to grab the whole series. You'll probably need
> the initial patches to set the scene for this. On their own, they are
> a nice cleanup, and I trust you can write a decent commit message for
> the three patches affecting mips/s390/x86.

It would also be a good idea to convert kvm_for_each_vcpu() to use
xa_for_each(), I assume that's more performant than 2x atomic_read() +
xa_load(). Unless I'm misreading the code, xa_for_each() is guaranteed
to iterate in ascending index order, i.e. preserves the current
vcpu0..vcpuN iteration order.
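The ordering property Sean relies on — iterating present entries in ascending index order, so an xarray-backed kvm_for_each_vcpu() keeps the vcpu0..vcpuN visit order — can be illustrated in userspace with a toy sparse map (the real code would use the kernel xarray and xa_for_each(); everything here is an illustrative stand-in):

```c
#include <stddef.h>

#define NR_SLOTS 64

/* Toy sparse index -> pointer map standing in for an xarray. */
struct sparse_map {
	void *slot[NR_SLOTS];
};

/* Visit only present entries, in ascending index order — the same
 * guarantee xa_for_each() gives, which is what preserves the
 * vcpu0..vcpuN iteration order regardless of insertion order. */
#define sparse_for_each(map, idx, entry)			\
	for ((idx) = 0; (idx) < NR_SLOTS; (idx)++)		\
		if (((entry) = (map)->slot[(idx)]) != NULL)
```

Note that entries inserted out of order (e.g. at indices 17, 0, 3) are still visited as 0, 3, 17 — insertion order is irrelevant, only the index matters.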