From: Sean Christopherson <seanjc@google.com>
To: Juergen Gross <jgross@suse.com>
Cc: kvm@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v3 3/4] x86/kvm: add max number of vcpus for hyperv emulation
Date: Wed, 17 Nov 2021 20:50:22 +0000	[thread overview]
Message-ID: <YZVrDpjW0aZjFxo1@google.com> (raw)
In-Reply-To: <20211116141054.17800-4-jgross@suse.com>

On Tue, Nov 16, 2021, Juergen Gross wrote:
> When emulating Hyper-V, the theoretical maximum of supported vcpus is
> 4096, as this is the architectural limit for sending IPIs via the PV
> interface.
> 
> To restrict the actual number of supported vcpus in that case,
> introduce another define, KVM_MAX_HYPERV_VCPUS, and set it to 1024,
> matching today's KVM_MAX_VCPUS. Make both values unsigned, as this
> will be needed later.
> 
> The actual number of vcpus supported for Hyper-V emulation will be the
> lower of the two defines.
> 
> This is preparation for future boot parameter support for the maximum
> number of vcpus of a KVM guest.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V3:
> - new patch
> ---
>  arch/x86/include/asm/kvm_host.h |  3 ++-
>  arch/x86/kvm/hyperv.c           | 15 ++++++++-------
>  2 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 886930ec8264..8ea03ff01c45 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -38,7 +38,8 @@
>  
>  #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
>  
> -#define KVM_MAX_VCPUS 1024
> +#define KVM_MAX_VCPUS 1024U
> +#define KVM_MAX_HYPERV_VCPUS 1024U

I don't see any reason to put this in kvm_host.h; it should never be used
outside of hyperv.c.
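
Something along these lines would keep it private to hyperv.c (untested
sketch, assuming nothing outside hyperv.c needs the limit):

	/* arch/x86/kvm/hyperv.c */

	/* Architectural limit on vCPUs addressable via the Hyper-V PV interface. */
	#define KVM_MAX_HYPERV_VCPUS	1024U

	#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_HYPERV_VCPUS, 64)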

>  #define KVM_MAX_VCPU_IDS kvm_max_vcpu_ids()
>  /* memory slots that are not exposed to userspace */
>  #define KVM_PRIVATE_MEM_SLOTS 3
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 4a555f32885a..c0fa837121f1 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -41,7 +41,7 @@
>  /* "Hv#1" signature */
>  #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
>  
> -#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_VCPUS, 64)
> +#define KVM_HV_MAX_SPARSE_VCPU_SET_BITS DIV_ROUND_UP(KVM_MAX_HYPERV_VCPUS, 64)
>  
>  static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
>  				bool vcpu_kick);
> @@ -166,7 +166,7 @@ static struct kvm_vcpu *get_vcpu_by_vpidx(struct kvm *kvm, u32 vpidx)
>  	struct kvm_vcpu *vcpu = NULL;
>  	int i;
>  
> -	if (vpidx >= KVM_MAX_VCPUS)
> +	if (vpidx >= min(KVM_MAX_VCPUS, KVM_MAX_HYPERV_VCPUS))

IMO, this is conceptually wrong.  KVM should refuse to allow Hyper-V to be enabled
if the max number of vCPUs exceeds what can be supported, or should refuse to create
the vCPUs.  I agree it makes sense to add a Hyper-V-specific limit, since there are
Hyper-V structures that have a hard limit, but detection of violations should be a
BUILD_BUG_ON, not a silent failure at runtime.
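
E.g. something like this (untested; kvm_hv_init_vm() is just an example
location, and this assumes KVM_MAX_VCPUS is still a compile-time constant
at this point in the series):

	/* In arch/x86/kvm/hyperv.c, e.g. in kvm_hv_init_vm(). */
	BUILD_BUG_ON(KVM_MAX_VCPUS > KVM_MAX_HYPERV_VCPUS);

That way, bumping KVM_MAX_VCPUS past the Hyper-V structure limits fails the
build instead of silently breaking VP_INDEX lookups at runtime.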
