From: "Maciej S. Szmigiero"
To: Paolo Bonzini, Vitaly Kuznetsov
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose,
	Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, David Hildenbrand,
	Cornelia Huck, Claudio Imbrenda, Joerg Roedel,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/13] KVM: x86: Don't call kvm_mmu_change_mmu_pages() if the count hasn't changed
Date: Fri, 13 Aug 2021 21:33:15 +0200
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: kvm@vger.kernel.org

From: "Maciej S. Szmigiero"

There is no point in calling kvm_mmu_change_mmu_pages() for memslot
operations that don't change the total page count, so do it just for
KVM_MR_CREATE and KVM_MR_DELETE.

Signed-off-by: Maciej S. Szmigiero
---
 arch/x86/kvm/x86.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6f9d9c7457d7..6bbfc53518d8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11588,20 +11588,22 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	if (change == KVM_MR_CREATE)
-		kvm->arch.n_memslots_pages += new->npages;
-	else if (change == KVM_MR_DELETE) {
-		WARN_ON(kvm->arch.n_memslots_pages < old->npages);
-		kvm->arch.n_memslots_pages -= old->npages;
-	}
+	if (change == KVM_MR_CREATE || change == KVM_MR_DELETE) {
+		if (change == KVM_MR_CREATE)
+			kvm->arch.n_memslots_pages += new->npages;
+		else {
+			WARN_ON(kvm->arch.n_memslots_pages < old->npages);
+			kvm->arch.n_memslots_pages -= old->npages;
+		}
 
-	if (!kvm->arch.n_requested_mmu_pages) {
-		unsigned long nr_mmu_pages;
+		if (!kvm->arch.n_requested_mmu_pages) {
+			unsigned long nr_mmu_pages;
 
-		nr_mmu_pages = kvm->arch.n_memslots_pages *
-			       KVM_PERMILLE_MMU_PAGES / 1000;
-		nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
-		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+			nr_mmu_pages = kvm->arch.n_memslots_pages *
+				       KVM_PERMILLE_MMU_PAGES / 1000;
+			nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
+			kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+		}
 	}
 
 	kvm_mmu_slot_apply_flags(kvm, old, new, change);
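
For reference, below is a rough standalone sketch of the control flow this patch
produces, not kernel code: the struct, the enum and the constant values are
simplified stand-ins for the KVM definitions, kvm->arch.n_memslots_pages is
assumed to be the cached total page count the diff operates on, and early
returns are used instead of the nested ifs for brevity.

/*
 * Illustrative sketch only: models when the MMU page limit gets
 * recomputed after this patch.  Constants and types are local
 * stand-ins, not the real arch/x86 definitions.
 */
#include <stdio.h>

#define SKETCH_PERMILLE_MMU_PAGES	20UL	/* stand-in for KVM_PERMILLE_MMU_PAGES */
#define SKETCH_MIN_ALLOC_MMU_PAGES	64UL	/* stand-in for KVM_MIN_ALLOC_MMU_PAGES */

enum mr_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

struct vm_state {
	unsigned long n_memslots_pages;		/* cached total of memslot pages  */
	unsigned long n_requested_mmu_pages;	/* non-zero if set by userspace   */
};

/* Returns 1 when the MMU page limit had to be recomputed, 0 otherwise. */
static int commit_memory_region(struct vm_state *vm, enum mr_change change,
				unsigned long old_npages, unsigned long new_npages)
{
	if (change != MR_CREATE && change != MR_DELETE)
		return 0;	/* MOVE / FLAGS_ONLY: total page count unchanged */

	if (change == MR_CREATE)
		vm->n_memslots_pages += new_npages;
	else
		vm->n_memslots_pages -= old_npages;

	if (vm->n_requested_mmu_pages)
		return 0;	/* userspace pinned the limit explicitly */

	unsigned long nr_mmu_pages = vm->n_memslots_pages *
				     SKETCH_PERMILLE_MMU_PAGES / 1000;

	if (nr_mmu_pages < SKETCH_MIN_ALLOC_MMU_PAGES)
		nr_mmu_pages = SKETCH_MIN_ALLOC_MMU_PAGES;
	printf("recomputed limit: %lu pages\n", nr_mmu_pages);
	return 1;
}

int main(void)
{
	struct vm_state vm = { 0, 0 };

	commit_memory_region(&vm, MR_CREATE, 0, 1UL << 18);		/* recomputes */
	commit_memory_region(&vm, MR_FLAGS_ONLY, 1UL << 18, 1UL << 18);	/* skipped    */
	return 0;
}

The sketch makes the intent visible: only CREATE and DELETE touch the cached
total, so MOVE and FLAGS_ONLY updates skip both the counter update and the
kvm_mmu_change_mmu_pages() call, which is exactly what the changelog argues for.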