From: Sean Christopherson
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Cc: linux-arch@vger.kernel.org, Junaid Shahid, Wanpeng Li, kvm@vger.kernel.org,
    Joerg Roedel, Peter Shier, linux-mips@vger.kernel.org, Sean Christopherson,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Ben Gardon, Vitaly Kuznetsov, Peter Feiner, kvmarm@lists.cs.columbia.edu,
    Jim Mattson
Subject: [PATCH v2 11/21] KVM: x86/mmu: Zero allocate shadow pages (outside of mmu_lock)
Date: Mon, 22 Jun 2020 13:08:12 -0700
Message-Id: <20200622200822.4426-12-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200622200822.4426-1-sean.j.christopherson@intel.com>
References: <20200622200822.4426-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0

Set __GFP_ZERO for the shadow page memory cache and drop the explicit
clear_page() from kvm_mmu_get_page().  This moves the cost of zeroing a
page to the allocation time of the physical page, i.e. to when the
memory caches are topped up, and thus avoids having to zero out an
entire page while holding mmu_lock.

Cc: Peter Feiner
Cc: Peter Shier
Cc: Junaid Shahid
Cc: Jim Mattson
Suggested-by: Ben Gardon
Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6b0ec9060786..a8f8eebf67df 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2545,7 +2545,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		if (level > PG_LEVEL_4K && need_sync)
 			flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
 	}
-	clear_page(sp->spt);
 	trace_kvm_mmu_get_page(sp, true);

 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
@@ -5687,6 +5686,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

+	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;

-- 
2.26.0
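
[Illustrative note, not part of the patch] The saving comes from doing the
zeroing when the cache is refilled, outside the lock, rather than when a
shadow page is consumed under mmu_lock.  Below is a minimal userspace sketch
of that pattern; struct obj_cache, cache_topup() and cache_alloc() are
hypothetical stand-ins for kvm_mmu_memory_cache and its helpers, with
calloc() standing in for a __GFP_ZERO page allocation.

	#include <stdlib.h>
	#include <stdio.h>

	#define PAGE_SIZE	4096
	#define CACHE_CAPACITY	40

	/* Hypothetical analogue of struct kvm_mmu_memory_cache. */
	struct obj_cache {
		int nobjs;
		int zero_on_alloc;	/* analogue of gfp_zero = __GFP_ZERO */
		void *objs[CACHE_CAPACITY];
	};

	/* Refill the cache; runs outside the lock, so zeroing here is cheap. */
	static int cache_topup(struct obj_cache *mc, int min)
	{
		while (mc->nobjs < min) {
			void *obj = mc->zero_on_alloc ? calloc(1, PAGE_SIZE)
						      : malloc(PAGE_SIZE);
			if (!obj)
				return -1;
			mc->objs[mc->nobjs++] = obj;
		}
		return 0;
	}

	/* Consume an object; runs under the lock, no memset()/clear_page(). */
	static void *cache_alloc(struct obj_cache *mc)
	{
		return mc->nobjs ? mc->objs[--mc->nobjs] : NULL;
	}

	int main(void)
	{
		struct obj_cache mc = { .zero_on_alloc = 1 };
		unsigned char *page;

		if (cache_topup(&mc, 8))	/* outside the lock */
			return 1;

		page = cache_alloc(&mc);	/* "under the lock" */
		printf("first byte: %d (already zeroed)\n", (int)page[0]);

		free(page);
		while (mc.nobjs)
			free(mc.objs[--mc.nobjs]);
		return 0;
	}

With zero_on_alloc set, the consumer under the lock never needs an explicit
clear, which corresponds to the clear_page(sp->spt) call the patch removes
from kvm_mmu_get_page().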