From mboxrd@z Thu Jan 1 00:00:00 1970
From: Adalbert Lazăr <alazar@bitdefender.com>
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org,
	Paolo Bonzini, Radim Krčmář, Konrad Rzeszutek Wilk,
	Tamas K Lengyel, Mathieu Tarral, Samuel Laurén, Patrick Colp,
	Jan Kiszka, Stefan Hajnoczi, Weijiang Yang, Yu C Zhang,
	Mihai Donțu, Adalbert Lazăr
Subject: [RFC PATCH v6 41/92] KVM: MMU: Enable Lazy mode SPPT setup
Date: Fri, 9 Aug 2019 18:59:56 +0300
Message-Id: <20190809160047.8319-42-alazar@bitdefender.com>
In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com>
References: <20190809160047.8319-1-alazar@bitdefender.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

From: Yang Weijiang <weijiang.yang@intel.com>

If SPP subpages are configured while the physical page is not yet present
in an EPT leaf entry, the mapping is first recorded in the SPP access
bitmap buffer. SPPT setup is deferred until the protected page is
accessed: the SPPT entries are then set up in the EPT page fault handler.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20190717133751.12910-9-weijiang.yang@intel.com>
Signed-off-by: Adalbert Lazăr <alazar@bitdefender.com>
---
 arch/x86/kvm/mmu.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d59108a3ebbf..24222e3add91 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4400,6 +4400,26 @@ check_hugepage_cache_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, int level)
 	return kvm_mtrr_check_gfn_range_consistency(vcpu, gfn, page_num);
 }
 
+static int kvm_enable_spp_protection(struct kvm *kvm, u64 gfn)
+{
+	struct kvm_subpage spp_info = {0};
+	struct kvm_memory_slot *slot;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot)
+		return -EFAULT;
+
+	spp_info.base_gfn = gfn;
+	spp_info.npages = 1;
+
+	if (kvm_mmu_get_subpages(kvm, &spp_info, true) < 0)
+		return -EFAULT;
+
+	if (spp_info.access_map[0] != FULL_SPP_ACCESS)
+		kvm_mmu_set_subpages(kvm, &spp_info, true);
+
+	return 0;
+}
 static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 			  bool prefault)
 {
@@ -4451,6 +4471,10 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	if (likely(!force_pt_level))
 		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
 	r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
+
+	if (vcpu->kvm->arch.spp_active && level == PT_PAGE_TABLE_LEVEL)
+		kvm_enable_spp_protection(vcpu->kvm, gfn);
+
 	spin_unlock(&vcpu->kvm->mmu_lock);
 	return r;
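
For readers outside the SPP series, here is a minimal userspace sketch of the
deferred ("lazy") setup idea the commit message describes: the requested
subpage write mask is stored up front, and the protection structure is only
materialized when the page is first faulted in. This is an illustration, not
kernel code; the names spp_bitmap, page_mapped, sppt_built and FULL_ACCESS are
made up for the example and are not the KVM/SPP API.

/*
 * Userspace model of lazy subpage-protection setup (illustrative only).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_GFNS     8
#define FULL_ACCESS 0xFFFFFFFFu		/* all 32 subpages writable */

static uint32_t spp_bitmap[NR_GFNS];	/* requested subpage write mask per GFN */
static bool page_mapped[NR_GFNS];	/* does a leaf mapping exist yet?       */
static bool sppt_built[NR_GFNS];	/* has protection been installed?       */

/* Record the requested protection; only build it if a leaf already exists. */
static void set_subpage_protection(unsigned int gfn, uint32_t write_mask)
{
	spp_bitmap[gfn] = write_mask;
	if (page_mapped[gfn] && write_mask != FULL_ACCESS)
		sppt_built[gfn] = true;		/* eager path */
}

/* Page-fault analogue: map the page, then apply any deferred protection. */
static void handle_page_fault(unsigned int gfn)
{
	page_mapped[gfn] = true;
	if (spp_bitmap[gfn] != FULL_ACCESS)
		sppt_built[gfn] = true;		/* lazy path, as in this patch */
}

int main(void)
{
	for (unsigned int i = 0; i < NR_GFNS; i++)
		spp_bitmap[i] = FULL_ACCESS;	/* unprotected by default */

	set_subpage_protection(3, 0x0000FFFFu);	/* protect before any mapping */
	printf("before fault: sppt_built=%d\n", sppt_built[3]);
	handle_page_fault(3);			/* guest touches the page */
	printf("after fault:  sppt_built=%d\n", sppt_built[3]);
	return 0;
}

The second hunk of the patch is the kernel counterpart of handle_page_fault()
above: after __direct_map() installs the 4K leaf, kvm_enable_spp_protection()
re-reads the stored access map and only calls kvm_mmu_set_subpages() when the
map is not FULL_SPP_ACCESS.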