linuxppc-dev.lists.ozlabs.org archive mirror
From: Marc Zyngier <maz@kernel.org>
To: Sean Christopherson <seanjc@google.com>
Cc: Juergen Gross <jgross@suse.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Anup Patel <anup.patel@wdc.com>,
	Janosch Frank <frankja@linux.ibm.com>,
	kvm@vger.kernel.org,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	David Hildenbrand <david@redhat.com>,
	linux-mips@vger.kernel.org, Nicholas Piggin <npiggin@gmail.com>,
	Atish Patra <atish.patra@wdc.com>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Paul Mackerras <paulus@samba.org>,
	James Morse <james.morse@arm.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kernel-team@android.com,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	linuxppc-dev@lists.ozlabs.org, kvmarm@lists.cs.columbia.edu,
	Suzuki K Poulose <suzuki.poulose@arm.com>
Subject: Re: [PATCH 5/5] KVM: Convert the kvm->vcpus array to a xarray
Date: Mon, 08 Nov 2021 08:23:19 +0000	[thread overview]
Message-ID: <8400fc5dfa13c89c8786bfc809011a54@kernel.org> (raw)
In-Reply-To: <87mtmhec88.wl-maz@kernel.org>

On 2021-11-06 11:48, Marc Zyngier wrote:
> On Fri, 05 Nov 2021 20:21:36 +0000,
> Sean Christopherson <seanjc@google.com> wrote:
>> 
>> On Fri, Nov 05, 2021, Marc Zyngier wrote:
>> > At least on arm64 and x86, the vcpus array is pretty huge (512 entries),
>> > and is mostly empty in most cases (running 512 vcpu VMs is not that
>> > common). This means that we end up with a 4kB block of unused memory
>> > in the middle of the kvm structure.
>> 
>> Heh, x86 is now up to 1024 entries.
> 
> Humph. I don't want to know whether people are actually using that in
> practice. The only time I create VMs with 512 vcpus is to check
> whether it still works...
> 
>> 
>> > Instead of wasting this memory, let's use an xarray, which gives us
>> > almost the same flexibility as a normal array, but with reduced
>> > memory usage for smaller VMs.
>> >
>> > Signed-off-by: Marc Zyngier <maz@kernel.org>
>> > ---
>> > @@ -693,7 +694,7 @@ static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
>> >
>> >  	/* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu.  */
>> >  	smp_rmb();
>> > -	return kvm->vcpus[i];
>> > +	return xa_load(&kvm->vcpu_array, i);
>> >  }
>> 
>> It'd be nice for this series to convert kvm_for_each_vcpu() to use
>> xa_for_each() as well.  Maybe as a patch on top so that potential
>> explosions from that are isolated from the initial conversion?
>> 
>> Or maybe even use xa_for_each_range() to cap at online_vcpus?
>> That's technically a functional change, but IMO it's easier to
>> reason about iterating over a snapshot of vCPUs as opposed to being
>> able to iterate over vCPUs as they're being added.  In practice I
>> doubt it matters.
>> 
>> #define kvm_for_each_vcpu(idx, vcpup, kvm) \
>> 	xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0, \
>> 			  atomic_read(&kvm->online_vcpus))
>> 
> 
> I think that's already the behaviour of this iterator (we stop at the
> first empty slot, capped at online_vcpus). The only change in
> behaviour is that vcpup currently holds a pointer to the last vcpu if
> no empty slot has been encountered; xa_for_each{,_range}() would set
> the pointer to NULL at all times.
> 
> I doubt anyone relies on that, but it is probably worth eyeballing
> some of the use cases...
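
For reference, the array-based iterator being discussed above looks
roughly like this (quoting from memory, so treat it as a sketch rather
than the exact include/linux/kvm_host.h code):

  #define kvm_for_each_vcpu(idx, vcpup, kvm) \
          for (idx = 0; \
               idx < atomic_read(&kvm->online_vcpus) && \
               (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
               idx++)

If the loop terminates because idx reached online_vcpus, vcpup keeps
its last non-NULL value; if it terminates on an empty slot, vcpup is
NULL. That post-loop value is the behavioural difference in question.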

This turned out to be an interesting exercise, as we always use an
int for the index, and the xarray iterators insist on an unsigned
long (and even on a pointer to it). On the other hand, I couldn't
spot any case where we'd rely on the last value of the vcpu pointer.
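
As a rough idea of the churn involved (untested sketch, assuming
Sean's xa_for_each_range() suggestion; note that its last argument is
an inclusive bound, and kvm_vcpu_kick() is just an arbitrary example
of a per-vcpu operation):

  #define kvm_for_each_vcpu(idx, vcpup, kvm) \
          xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0, \
                            atomic_read(&kvm->online_vcpus) - 1)

  /* Callers need "unsigned long i" instead of "int i", as
   * xa_for_each_range() takes the address of the index internally. */
  unsigned long i;
  struct kvm_vcpu *vcpu;

  kvm_for_each_vcpu(i, vcpu, kvm)
          kvm_vcpu_kick(vcpu);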

I'll repost the series once we have a solution for patch #4, and
we can then decide whether we want the iterator churn.
-- 
Jazz is not dead. It just smells funny...

Thread overview: 24+ messages
2021-11-05 19:20 [PATCH 0/5] KVM: Turn the vcpu array into an xarray Marc Zyngier
2021-11-05 19:20 ` [PATCH 1/5] KVM: Move wiping of the kvm->vcpus array to common code Marc Zyngier
2021-11-05 20:12   ` Sean Christopherson
2021-11-06 11:17     ` Marc Zyngier
2021-11-16 13:49       ` Paolo Bonzini
2021-11-08 12:12   ` Claudio Imbrenda
2021-11-05 19:20 ` [PATCH 2/5] KVM: mips: Use kvm_get_vcpu() instead of open-coded access Marc Zyngier
2021-11-06 15:56   ` Philippe Mathieu-Daudé
2021-11-05 19:20 ` [PATCH 3/5] KVM: s390: " Marc Zyngier
2021-11-08 12:13   ` Claudio Imbrenda
2021-11-05 19:21 ` [PATCH 4/5] KVM: x86: " Marc Zyngier
2021-11-05 20:03   ` Sean Christopherson
2021-11-16 14:04     ` Paolo Bonzini
2021-11-16 16:07       ` Sean Christopherson
2021-11-16 16:48         ` Paolo Bonzini
2021-11-05 19:21 ` [PATCH 5/5] KVM: Convert the kvm->vcpus array to a xarray Marc Zyngier
2021-11-05 20:21   ` Sean Christopherson
2021-11-06 11:48     ` Marc Zyngier
2021-11-08  8:23       ` Marc Zyngier [this message]
2021-11-16 14:13 ` [PATCH 0/5] KVM: Turn the vcpu array into an xarray Juergen Gross
2021-11-16 14:21   ` Paolo Bonzini
2021-11-16 14:54     ` Juergen Gross
2021-11-16 15:03 ` Paolo Bonzini
2021-11-16 15:40   ` Marc Zyngier
