From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Weijiang <weijiang.yang@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com,
	jmattson@google.com, sean.j.christopherson@intel.com
Cc: yu.c.zhang@linux.intel.com, alazar@bitdefender.com, edwin.zhai@intel.com,
	Yang Weijiang <weijiang.yang@intel.com>
Subject: [RESEND PATCH v10 07/10] mmu: spp: Enable Lazy mode SPP protection
Date: Thu, 2 Jan 2020 13:19:01 +0800
Message-Id: <20200102051904.32090-8-weijiang.yang@intel.com>
In-Reply-To: <20200102051904.32090-1-weijiang.yang@intel.com>
References: <20200102051904.32090-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

To deal with SPP-protected 4KB pages inside a hugepage (2MB, 1GB, etc.),
the hugepage entry is first zapped when subpage permissions are set.
Later, in tdp_page_fault(), KVM checks whether the faulting gfn should be
mapped at PT_PAGE_TABLE_LEVEL or PT_DIRECTORY_LEVEL, depending on whether
the gfn falls within an SPP-protected page range.
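For illustration only (not part of the patch below), here is a minimal
standalone sketch of the lazy level-demotion check. lookup_wp_bitmap(),
demo_bitmap, PAGES_PER_HPAGE() and the sample gfn values are hypothetical
stand-ins for gfn_to_subpage_wp_info() and KVM_PAGES_PER_HPAGE();
FULL_SPP_ACCESS stands for the all-subpages-writable mask, as in the
patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

#define PT_PAGE_TABLE_LEVEL	1
#define PT_DIRECTORY_LEVEL	2
#define FULL_SPP_ACCESS		0xffffffffu
/* 4KB pages per hugepage at a given level: 1 at 4KB, 512 at 2MB */
#define PAGES_PER_HPAGE(level)	(1ULL << (((level) - 1) * 9))

/* hypothetical stand-in for gfn_to_subpage_wp_info(): exactly one
 * protected gfn (0x201), with only its first 16 subpages writable */
static uint32_t demo_bitmap = 0x0000ffffu;

static uint32_t *lookup_wp_bitmap(gfn_t gfn)
{
	return gfn == 0x201 ? &demo_bitmap : NULL;
}

/* mirrors the shape of is_spp_protected() in the patch: align the
 * gfn down to its hugepage frame, then scan every 4KB page in the
 * frame for a write-permission mask other than "full access" */
static bool range_has_spp_protection(gfn_t gfn, int level)
{
	gfn_t npages = PAGES_PER_HPAGE(level);
	gfn_t base = gfn & ~(npages - 1);

	for (gfn_t i = 0; i < npages; i++) {
		uint32_t *access = lookup_wp_bitmap(base + i);

		if (access && *access != FULL_SPP_ACCESS)
			return true;
	}
	return false;
}

int main(void)
{
	int level = PT_DIRECTORY_LEVEL;	/* fault wants a 2MB mapping */
	gfn_t gfn = 0x300;		/* same 2MB frame as gfn 0x201 */

	if (range_has_spp_protection(gfn, level))
		level = PT_PAGE_TABLE_LEVEL;	/* demote to 4KB */

	printf("mapping level: %d\n", level);	/* prints 1 */
	return 0;
}

At PT_DIRECTORY_LEVEL this scans up to 512 4KB frames per fault, the
same bound KVM_PAGES_PER_HPAGE(level) gives the loop in
is_spp_protected() below.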
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 20 ++++++++++++++++++++
 arch/x86/kvm/mmu/spp.c | 43 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/spp.h |  4 +++-
 3 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c41791ebee65..aada0a3552b2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3246,6 +3246,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	unsigned access = sp->role.access;
 	int i, ret;
 	gfn_t gfn;
+	u32 *wp_bitmap;
 
 	gfn = kvm_mmu_page_get_gfn(sp, start - sp->spt);
 	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, access & ACC_WRITE_MASK);
@@ -3259,6 +3260,13 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	for (i = 0; i < ret; i++, gfn++, start++) {
 		mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
 			     page_to_pfn(pages[i]), true, true);
+		if (vcpu->kvm->arch.spp_active) {
+			wp_bitmap = gfn_to_subpage_wp_info(slot, gfn);
+			if (wp_bitmap && *wp_bitmap != FULL_SPP_ACCESS)
+				kvm_spp_mark_protection(vcpu->kvm,
+							gfn,
+							*wp_bitmap);
+		}
 		put_page(pages[i]);
 	}
 
@@ -3372,6 +3380,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
 			       map_writable);
 	direct_pte_prefetch(vcpu, it.sptep);
 	++vcpu->stat.pf_fixed;
+	if (level == PT_PAGE_TABLE_LEVEL) {
+		int ret;
+		u32 access;
+
+		ret = kvm_spp_get_permission(vcpu->kvm, gfn, 1, &access);
+		if (ret == 1 && access != FULL_SPP_ACCESS)
+			kvm_spp_mark_protection(vcpu->kvm, gfn, access);
+	}
+
 	return ret;
 }
 
@@ -4338,6 +4355,9 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 		if (level > PT_DIRECTORY_LEVEL &&
 		    !check_hugepage_cache_consistency(vcpu, gfn, level))
 			level = PT_DIRECTORY_LEVEL;
+
+		check_spp_protection(vcpu, gfn, &force_pt_level, &level);
+
 		gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
 	}
 
diff --git a/arch/x86/kvm/mmu/spp.c b/arch/x86/kvm/mmu/spp.c
index 3ec434140967..f63541b42385 100644
--- a/arch/x86/kvm/mmu/spp.c
+++ b/arch/x86/kvm/mmu/spp.c
@@ -571,6 +571,49 @@ inline u64 construct_spptp(unsigned long root_hpa)
 }
 EXPORT_SYMBOL_GPL(construct_spptp);
 
+static bool is_spp_protected(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	int page_num = KVM_PAGES_PER_HPAGE(level);
+	u32 *access;
+	gfn_t gfn_max;
+
+	gfn &= ~(page_num - 1);
+	gfn_max = gfn + page_num - 1;
+	for (; gfn <= gfn_max; gfn++) {
+		access = gfn_to_subpage_wp_info(slot, gfn);
+		if (access && *access != FULL_SPP_ACCESS)
+			return true;
+	}
+	return false;
+}
+
+bool check_spp_protection(struct kvm_vcpu *vcpu, gfn_t gfn,
+			  bool *force_pt_level, int *level)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memory_slot *slot;
+	bool protected;
+	int old_level = *level;
+
+	if (!kvm->arch.spp_active)
+		return false;
+
+	slot = gfn_to_memslot(kvm, gfn);
+
+	if (!slot)
+		return false;
+	protected = is_spp_protected(slot, gfn, PT_DIRECTORY_LEVEL);
+
+	if (protected) {
+		*level = PT_PAGE_TABLE_LEVEL;
+		*force_pt_level = true;
+	} else if (*level == PT_PDPE_LEVEL &&
+		   is_spp_protected(slot, gfn, PT_PDPE_LEVEL))
+		*level = PT_DIRECTORY_LEVEL;
+
+	return (old_level != *level);
+}
+
 int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
 			      u64 gfn,
 			      u32 npages,
diff --git a/arch/x86/kvm/mmu/spp.h b/arch/x86/kvm/mmu/spp.h
index 5ffe06d2cd6f..6b2810965d6a 100644
--- a/arch/x86/kvm/mmu/spp.h
+++ b/arch/x86/kvm/mmu/spp.h
@@ -13,7 +13,9 @@ int kvm_spp_mark_protection(struct kvm *kvm, u64 gfn, u32 access);
 bool is_spp_spte(struct kvm_mmu_page *sp);
 void restore_spp_bit(u64 *spte);
 bool was_spp_armed(u64 spte);
-u64 construct_spptp(unsigned long root_hpa);
+bool check_spp_protection(struct kvm_vcpu *vcpu, gfn_t gfn,
+			  bool *force_pt_level, int *level);
+inline u64 construct_spptp(unsigned long root_hpa);
 int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
 			      u64 gfn,
 			      u32 npages,
-- 
2.17.2