From: Yang Weijiang <weijiang.yang@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
pbonzini@redhat.com, jmattson@google.com,
sean.j.christopherson@intel.com
Cc: yu.c.zhang@linux.intel.com, alazar@bitdefender.com,
edwin.zhai@intel.com, Yang Weijiang <weijiang.yang@intel.com>
Subject: [RESEND PATCH v10 07/10] mmu: spp: Enable Lazy mode SPP protection
Date: Thu, 2 Jan 2020 14:13:16 +0800
Message-ID: <20200102061319.10077-8-weijiang.yang@intel.com>
In-Reply-To: <20200102061319.10077-1-weijiang.yang@intel.com>

To deal with SPP-protected 4KB pages that lie within a hugepage (2MB,
1GB, etc.), the hugepage entry is first zapped when subpage permissions
are set; then, on the subsequent fault, tdp_page_fault() checks whether
the gfn should be mapped at PT_PAGE_TABLE_LEVEL or PT_DIRECTORY_LEVEL,
depending on whether the gfn falls within an SPP-protected page range.
SPP enforcement is thus applied lazily, at fault time, rather than
eagerly when the permissions are set.
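
As background for the checks below: gfn_to_subpage_wp_info() yields a
per-gfn u32 write-permission bitmap, and any value other than
FULL_SPP_ACCESS means some sub-page region of that 4KB page is
write-protected. A minimal sketch of the bitmap semantics, assuming the
128-byte sub-page granularity described in the Intel SDM;
SPP_SUBPAGE_SIZE and subpage_writable() are illustrative names, not
part of this series:

#include <stdbool.h>
#include <stdint.h>

#define SPP_SUBPAGE_SIZE 128u            /* 4096-byte page / 32 regions */
#define FULL_SPP_ACCESS  ((uint32_t)~0u) /* all 32 regions writable */

/*
 * Bit i of the access bitmap grants write access to bytes
 * [i * 128, i * 128 + 127] of the guest 4KB page.
 */
static bool subpage_writable(uint32_t access_map, uint32_t page_offset)
{
	return access_map & (1u << (page_offset / SPP_SUBPAGE_SIZE));
}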
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
arch/x86/kvm/mmu/mmu.c | 20 ++++++++++++++++++++
arch/x86/kvm/mmu/spp.c | 43 ++++++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/mmu/spp.h | 2 ++
3 files changed, 65 insertions(+)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c41791ebee65..aada0a3552b2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3246,6 +3246,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	unsigned access = sp->role.access;
 	int i, ret;
 	gfn_t gfn;
+	u32 *wp_bitmap;
 
 	gfn = kvm_mmu_page_get_gfn(sp, start - sp->spt);
 	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, access & ACC_WRITE_MASK);
@@ -3259,6 +3260,13 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	for (i = 0; i < ret; i++, gfn++, start++) {
 		mmu_set_spte(vcpu, start, access, 0, sp->role.level, gfn,
 			     page_to_pfn(pages[i]), true, true);
+		if (vcpu->kvm->arch.spp_active) {
+			wp_bitmap = gfn_to_subpage_wp_info(slot, gfn);
+			if (wp_bitmap && *wp_bitmap != FULL_SPP_ACCESS)
+				kvm_spp_mark_protection(vcpu->kvm,
+							gfn,
+							*wp_bitmap);
+		}
 		put_page(pages[i]);
 	}
 
@@ -3372,6 +3380,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
 			   map_writable);
 	direct_pte_prefetch(vcpu, it.sptep);
 	++vcpu->stat.pf_fixed;
+	if (level == PT_PAGE_TABLE_LEVEL) {
+		int ret;
+		u32 access;
+
+		ret = kvm_spp_get_permission(vcpu->kvm, gfn, 1, &access);
+		if (ret == 1 && access != FULL_SPP_ACCESS)
+			kvm_spp_mark_protection(vcpu->kvm, gfn, access);
+	}
+
 	return ret;
 }
@@ -4338,6 +4355,9 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 		if (level > PT_DIRECTORY_LEVEL &&
 		    !check_hugepage_cache_consistency(vcpu, gfn, level))
 			level = PT_DIRECTORY_LEVEL;
+
+		check_spp_protection(vcpu, gfn, &force_pt_level, &level);
+
 		gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
 	}
 
diff --git a/arch/x86/kvm/mmu/spp.c b/arch/x86/kvm/mmu/spp.c
index 3ec434140967..f63541b42385 100644
--- a/arch/x86/kvm/mmu/spp.c
+++ b/arch/x86/kvm/mmu/spp.c
@@ -571,6 +571,49 @@ inline u64 construct_spptp(unsigned long root_hpa)
 }
 EXPORT_SYMBOL_GPL(construct_spptp);
 
+static bool is_spp_protected(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	int page_num = KVM_PAGES_PER_HPAGE(level);
+	u32 *access;
+	gfn_t gfn_max;
+
+	gfn &= ~(page_num - 1);
+	gfn_max = gfn + page_num - 1;
+	for (; gfn <= gfn_max; gfn++) {
+		access = gfn_to_subpage_wp_info(slot, gfn);
+		if (access && *access != FULL_SPP_ACCESS)
+			return true;
+	}
+	return false;
+}
+
+bool check_spp_protection(struct kvm_vcpu *vcpu, gfn_t gfn,
+			  bool *force_pt_level, int *level)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memory_slot *slot;
+	bool protected;
+	int old_level = *level;
+
+	if (!kvm->arch.spp_active)
+		return false;
+
+	slot = gfn_to_memslot(kvm, gfn);
+
+	if (!slot)
+		return false;
+	protected = is_spp_protected(slot, gfn, PT_DIRECTORY_LEVEL);
+
+	if (protected) {
+		*level = PT_PAGE_TABLE_LEVEL;
+		*force_pt_level = true;
+	} else if (*level == PT_PDPE_LEVEL &&
+		   is_spp_protected(slot, gfn, PT_PDPE_LEVEL))
+		*level = PT_DIRECTORY_LEVEL;
+
+	return (old_level != *level);
+}
+
 int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
 			      u64 gfn,
 			      u32 npages,
diff --git a/arch/x86/kvm/mmu/spp.h b/arch/x86/kvm/mmu/spp.h
index 5ffe06d2cd6f..0baf0f9ac135 100644
--- a/arch/x86/kvm/mmu/spp.h
+++ b/arch/x86/kvm/mmu/spp.h
@@ -13,6 +13,8 @@ int kvm_spp_mark_protection(struct kvm *kvm, u64 gfn, u32 access);
 bool is_spp_spte(struct kvm_mmu_page *sp);
 void restore_spp_bit(u64 *spte);
 bool was_spp_armed(u64 spte);
+bool check_spp_protection(struct kvm_vcpu *vcpu, gfn_t gfn,
+			  bool *force_pt_level, int *level);
 u64 construct_spptp(unsigned long root_hpa);
 int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
 			      u64 gfn,
--
2.17.2
Thread overview: 34+ messages
2020-01-02 6:13 [RESEND PATCH v10 00/10] Enable Sub-Page Write Protection Support Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 01/10] Documentation: Add EPT based Subpage Protection and related APIs Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 02/10] vmx: spp: Add control flags for Sub-Page Protection(SPP) Yang Weijiang
2020-01-10 16:58 ` Sean Christopherson
2020-01-13 5:44 ` Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 03/10] mmu: spp: Add SPP Table setup functions Yang Weijiang
2020-01-10 17:26 ` Sean Christopherson
2020-01-13 6:00 ` Yang Weijiang
2020-01-10 17:40 ` Sean Christopherson
2020-01-13 6:04 ` Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 04/10] mmu: spp: Add functions to operate SPP access bitmap Yang Weijiang
2020-01-10 17:38 ` Sean Christopherson
2020-01-13 6:15 ` Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 05/10] x86: spp: Introduce user-space SPP IOCTLs Yang Weijiang
2020-01-10 18:10 ` Sean Christopherson
2020-01-13 8:21 ` Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 06/10] vmx: spp: Set up SPP paging table at vmentry/vmexit Yang Weijiang
2020-01-10 17:55 ` Sean Christopherson
2020-01-13 6:50 ` Yang Weijiang
2020-01-21 14:01 ` Paolo Bonzini
2020-01-10 18:04 ` Sean Christopherson
2020-01-13 8:10 ` Yang Weijiang
2020-01-13 17:33 ` Sean Christopherson
2020-01-13 18:55 ` Adalbert Lazăr
2020-01-13 21:47 ` Sean Christopherson
2020-01-14 3:08 ` Yang Weijiang
2020-01-14 18:58 ` Sean Christopherson
2020-01-15 1:36 ` Yang Weijiang
2020-01-21 14:14 ` Paolo Bonzini
2020-01-02 6:13 ` Yang Weijiang [this message]
2020-01-02 6:13 ` [RESEND PATCH v10 08/10] mmu: spp: Handle SPP protected pages when VM memory changes Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 09/10] x86: spp: Add SPP protection check in emulation Yang Weijiang
2020-01-02 6:13 ` [RESEND PATCH v10 10/10] kvm: selftests: selftest for Sub-Page protection Yang Weijiang