From: Yang Weijiang <weijiang.yang@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	pbonzini@redhat.com, jmattson@google.com,
	sean.j.christopherson@intel.com
Cc: yu.c.zhang@linux.intel.com, alazar@bitdefender.com,
	edwin.zhai@intel.com, Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH v7 7/9] mmu: spp: Enable Lazy mode SPP protection
Date: Tue, 19 Nov 2019 16:49:47 +0800
Message-ID: <20191119084949.15471-8-weijiang.yang@intel.com>
In-Reply-To: <20191119084949.15471-1-weijiang.yang@intel.com>

To handle SPP-protected 4KB pages that fall within a hugepage (2MB,
1GB etc.), the hugepage SPTE is first zapped when the subpage
permission is set. On the subsequent fault, tdp_page_fault() checks
whether the gfn should be mapped at PT_PAGE_TABLE_LEVEL or
PT_DIRECTORY_LEVEL, depending on whether the gfn falls within an
SPP-protected page range.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
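Note (not part of the commit message): below is a minimal, stand-alone
model of the level-demotion decision that check_spp_protection() makes
in this patch. The FULL_SPP_ACCESS constant mirrors spp.h; the
array-backed access map, range_is_spp_protected(), PAGES_PER_2M and
the main() driver are illustrative assumptions, not kernel code.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Mirrors FULL_SPP_ACCESS in spp.h: all 32 subpage-write bits set. */
  #define FULL_SPP_ACCESS  ((uint32_t)((1ULL << 32) - 1))
  #define PAGES_PER_2M     512  /* 4KB pages covered by one 2MB mapping */

  /*
   * Model of is_spp_protected() at PT_DIRECTORY_LEVEL: the 2MB range
   * containing gfn is "protected" if any 4KB page inside it carries a
   * restricted access map.
   */
  static bool range_is_spp_protected(const uint32_t *access_map,
                                     uint64_t gfn)
  {
          uint64_t base = gfn & ~(uint64_t)(PAGES_PER_2M - 1);
          int i;

          for (i = 0; i < PAGES_PER_2M; i++)
                  if (access_map[base + i] != FULL_SPP_ACCESS)
                          return true;
          return false;
  }

  int main(void)
  {
          static uint32_t access_map[PAGES_PER_2M];
          int i;

          for (i = 0; i < PAGES_PER_2M; i++)
                  access_map[i] = FULL_SPP_ACCESS;
          access_map[7] = 0x3;  /* one subpage loses full write access */

          /* Any fault in this 2MB range must now map at 4KB level. */
          printf("force PT_PAGE_TABLE_LEVEL: %s\n",
                 range_is_spp_protected(access_map, 0) ? "yes" : "no");
          return 0;
  }

This walk is also what makes the protection lazy: nothing is split
eagerly when the permission is set; the hugepage is only demoted when
the zapped range is next faulted in.
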
 arch/x86/kvm/mmu.c     | 14 ++++++++++++++
 arch/x86/kvm/vmx/spp.c | 39 +++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/spp.h |  4 ++++
 3 files changed, 57 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a632c6b3c326..9c5be402a0b2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3240,6 +3240,17 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, int write,
 			   map_writable);
 	direct_pte_prefetch(vcpu, it.sptep);
 	++vcpu->stat.pf_fixed;
+	if (level == PT_PAGE_TABLE_LEVEL) {
+		struct kvm_subpage sbp = {0};
+		int pages;
+
+		sbp.base_gfn = gfn;
+		sbp.npages = 1;
+		pages = kvm_spp_get_permission(vcpu->kvm, &sbp);
+		if (pages == 1 && sbp.access_map[0] != FULL_SPP_ACCESS)
+			kvm_spp_mark_protection(vcpu->kvm, &sbp);
+	}
+
 	return ret;
 }
 
@@ -4183,6 +4194,9 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 		if (level > PT_DIRECTORY_LEVEL &&
 		    !check_hugepage_cache_consistency(vcpu, gfn, level))
 			level = PT_DIRECTORY_LEVEL;
+
+		check_spp_protection(vcpu, gfn, &force_pt_level, &level);
+
 		gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
 	}
 
diff --git a/arch/x86/kvm/vmx/spp.c b/arch/x86/kvm/vmx/spp.c
index 0ff23b97970a..111e11bb2598 100644
--- a/arch/x86/kvm/vmx/spp.c
+++ b/arch/x86/kvm/vmx/spp.c
@@ -550,6 +550,45 @@ inline u64 construct_spptp(unsigned long root_hpa)
 }
 EXPORT_SYMBOL_GPL(construct_spptp);
 
+bool is_spp_protected(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	int page_num = KVM_PAGES_PER_HPAGE(level);
+	int i;
+
+	gfn &= ~(page_num - 1);
+	for (i = 0; i < page_num; ++i) {
+		if (*gfn_to_subpage_wp_info(slot, gfn + i) != FULL_SPP_ACCESS)
+			return true;
+	}
+	return false;
+}
+
+bool check_spp_protection(struct kvm_vcpu *vcpu, gfn_t gfn,
+			  bool *force_pt_level, int *level)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memory_slot *slot;
+	bool protected;
+	int old_level = *level;
+
+	if (!kvm->arch.spp_active)
+		return false;
+
+	slot = gfn_to_memslot(kvm, gfn);
+
+	if (!slot)
+		return false;
+	protected = is_spp_protected(slot, gfn, PT_DIRECTORY_LEVEL);
+
+	if (protected) {
+		*level = PT_PAGE_TABLE_LEVEL;
+		*force_pt_level = true;
+	} else if (is_spp_protected(slot, gfn, PT_PDPE_LEVEL))
+		*level = PT_DIRECTORY_LEVEL;
+
+	return (old_level != *level);
+}
+
 int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
 			      struct kvm_subpage *spp_info)
 {
diff --git a/arch/x86/kvm/vmx/spp.h b/arch/x86/kvm/vmx/spp.h
index 208b557cac7d..1ad526866977 100644
--- a/arch/x86/kvm/vmx/spp.h
+++ b/arch/x86/kvm/vmx/spp.h
@@ -4,9 +4,13 @@
 
 #define FULL_SPP_ACCESS		((u32)((1ULL << 32) - 1))
 
+int kvm_spp_get_permission(struct kvm *kvm, struct kvm_subpage *spp_info);
+int kvm_spp_mark_protection(struct kvm *kvm, struct kvm_subpage *spp_info);
 bool is_spp_spte(struct kvm_mmu_page *sp);
 void restore_spp_bit(u64 *spte);
 bool was_spp_armed(u64 spte);
+bool check_spp_protection(struct kvm_vcpu *vcpu, gfn_t gfn,
+			  bool *force_pt_level, int *level);
 inline u64 construct_spptp(unsigned long root_hpa);
 int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
 			      struct kvm_subpage *spp_info);
-- 
2.17.2


