From: Zeng Guang <guang.zeng@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	kvm@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
	Tony Luck <tony.luck@intel.com>,
	Kan Liang <kan.liang@linux.intel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Kim Phillips <kim.phillips@amd.com>,
	Jarkko Sakkinen <jarkko@kernel.org>,
	Jethro Beekman <jethro@fortanix.com>,
	Kai Huang <kai.huang@intel.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	Robert Hu <robert.hu@intel.com>, Gao Chao <chao.gao@intel.com>,
	Zeng Guang <guang.zeng@intel.com>
Subject: [PATCH v6 9/9] KVM: VMX: Optimize memory allocation for PID-pointer table
Date: Fri, 25 Feb 2022 16:22:23 +0800
Message-ID: <20220225082223.18288-10-guang.zeng@intel.com>
In-Reply-To: <20220225082223.18288-1-guang.zeng@intel.com>

Currently KVM allocates 8 pages in advance for the Posted-Interrupt
Descriptor pointer (PID-pointer) table, to accommodate vCPUs with APIC
IDs up to KVM_MAX_VCPU_IDS - 1. This wastes memory, because most VMs
have fewer than 512 vCPUs and therefore need only a single page.
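
As a rough illustration (assuming 4 KiB pages and a KVM_MAX_VCPU_IDS
value of 4096, consistent with the 8-page figure above), with one
64-bit PID-pointer entry per possible APIC ID the sizes work out to:

	get_order(4096 * sizeof(u64))	/* 32 KiB -> order 3, 8 pages */
	get_order( 512 * sizeof(u64))	/*  4 KiB -> order 0, 1 page  */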

If the userspace hypervisor specifies the maximum practical vCPU ID
before vCPU creation, IPI virtualization can allocate only the memory
the PID-pointer table actually needs, reducing the memory footprint of
such VMs.
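
As a minimal userspace sketch (assuming the VM-scoped KVM_ENABLE_CAP
handling of KVM_CAP_MAX_VCPU_ID added by patch 8/9 of this series; the
helper name is hypothetical and error handling is omitted), the limit
could be set before any KVM_CREATE_VCPU call:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Cap the usable vCPU IDs so the PID-pointer table can be sized
	 * to fit; e.g. 512 keeps the table within a single 4 KiB page.
	 */
	static int limit_vcpu_ids(int vm_fd, __u64 max_vcpu_id)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_MAX_VCPU_ID,
			.args[0] = max_vcpu_id,
		};

		/* Must be issued before the first KVM_CREATE_VCPU. */
		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}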

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 45 ++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0cb141c277ef..22bfb4953289 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -230,9 +230,6 @@ static const struct {
 };
 
 #define L1D_CACHE_ORDER 4
-
-/* PID(Posted-Interrupt Descriptor)-pointer table entry is 64-bit long */
-#define MAX_PID_TABLE_ORDER get_order(KVM_MAX_VCPU_IDS * sizeof(u64))
 #define PID_TABLE_ENTRY_VALID 1
 
 static void *vmx_l1d_flush_pages;
@@ -4434,6 +4431,24 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 	return exec_control;
 }
 
+static int vmx_alloc_pid_table(struct kvm_vmx *kvm_vmx)
+{
+	struct page *pages;
+
+	if (kvm_vmx->pid_table)
+		return 0;
+
+	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+			get_order(kvm_vmx->kvm.arch.max_vcpu_id * sizeof(u64)));
+
+	if (!pages)
+		return -ENOMEM;
+
+	kvm_vmx->pid_table = (void *)page_address(pages);
+	kvm_vmx->pid_last_index = kvm_vmx->kvm.arch.max_vcpu_id - 1;
+	return 0;
+}
+
 #define VMX_XSS_EXIT_BITMAP 0
 
 static void init_vmcs(struct vcpu_vmx *vmx)
@@ -7159,6 +7174,16 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 			goto free_vmcs;
 	}
 
+	if (enable_ipiv && kvm_vcpu_apicv_active(vcpu)) {
+		struct kvm_vmx *kvm_vmx = to_kvm_vmx(vcpu->kvm);
+
+		mutex_lock(&vcpu->kvm->lock);
+		err = vmx_alloc_pid_table(kvm_vmx);
+		mutex_unlock(&vcpu->kvm->lock);
+		if (err)
+			goto free_vmcs;
+	}
+
 	return 0;
 
 free_vmcs:
@@ -7202,17 +7227,6 @@ static int vmx_vm_init(struct kvm *kvm)
 		}
 	}
 
-	if (enable_ipiv) {
-		struct page *pages;
-
-		pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, MAX_PID_TABLE_ORDER);
-		if (!pages)
-			return -ENOMEM;
-
-		to_kvm_vmx(kvm)->pid_table = (void *)page_address(pages);
-		to_kvm_vmx(kvm)->pid_last_index = KVM_MAX_VCPU_IDS - 1;
-	}
-
 	return 0;
 }
 
@@ -7809,7 +7823,8 @@ static void vmx_vm_destroy(struct kvm *kvm)
 	struct kvm_vmx *kvm_vmx = to_kvm_vmx(kvm);
 
 	if (kvm_vmx->pid_table)
-		free_pages((unsigned long)kvm_vmx->pid_table, MAX_PID_TABLE_ORDER);
+		free_pages((unsigned long)kvm_vmx->pid_table,
+			get_order((kvm_vmx->pid_last_index + 1) * sizeof(u64)));
 }
 
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
-- 
2.27.0

