From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jun Nakajima
Subject: [PATCH v2 04/13] nEPT: Define EPT-specific link_shadow_page()
Date: Mon, 6 May 2013 00:04:23 -0700
Message-ID: <1367823872-25895-4-git-send-email-jun.nakajima@intel.com>
References: <1367823872-25895-1-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-2-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-3-git-send-email-jun.nakajima@intel.com>
To: kvm@vger.kernel.org
Return-path:
Received: from mail-pb0-f44.google.com ([209.85.160.44]:43442 "EHLO mail-pb0-f44.google.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752817Ab3EFHEm (ORCPT );
 Mon, 6 May 2013 03:04:42 -0400
Received: by mail-pb0-f44.google.com with SMTP id wz17so1845529pbc.17 for ;
 Mon, 06 May 2013 00:04:42 -0700 (PDT)
Received: from localhost (c-98-207-34-191.hsd1.ca.comcast.net. [98.207.34.191])
 by mx.google.com with ESMTPSA id at4sm22705543pbc.40.2013.05.06.00.04.40 for
 (version=TLSv1.2 cipher=RC4-SHA bits=128/128); Mon, 06 May 2013 00:04:41 -0700 (PDT)
In-Reply-To: <1367823872-25895-3-git-send-email-jun.nakajima@intel.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Since link_shadow_page() is used by a routine in mmu.c, add an
EPT-specific link_shadow_page() in paging_tmpl.h, rather than moving
it.

Signed-off-by: Nadav Har'El
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
---
 arch/x86/kvm/paging_tmpl.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 5644f61..51dca23 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -461,6 +461,18 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 	}
 }
 
+#if PTTYPE == PTTYPE_EPT
+static void FNAME(link_shadow_page)(u64 *sptep, struct kvm_mmu_page *sp)
+{
+	u64 spte;
+
+	spte = __pa(sp->spt) | VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
+		VMX_EPT_EXECUTABLE_MASK;
+
+	mmu_spte_set(sptep, spte);
+}
+#endif
+
 /*
  * Fetch a shadow pte for a specific level in the paging hierarchy.
  * If the guest tries to write a write-protected page, we need to
@@ -513,7 +525,11 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			goto out_gpte_changed;
 
 		if (sp)
+#if PTTYPE == PTTYPE_EPT
+			FNAME(link_shadow_page)(it.sptep, sp);
+#else
 			link_shadow_page(it.sptep, sp);
+#endif
 	}
 
 	for (;
@@ -533,7 +549,11 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
 		sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
 				      true, direct_access, it.sptep);
+#if PTTYPE == PTTYPE_EPT
+		FNAME(link_shadow_page)(it.sptep, sp);
+#else
 		link_shadow_page(it.sptep, sp);
+#endif
 	}
 
 	clear_sp_write_flooding_count(it.sptep);
-- 
1.8.1.2
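
For comparison, the generic link_shadow_page() in arch/x86/kvm/mmu.c that the
fetch path otherwise calls builds the spte from the x86 page-table bits
(present/writable plus the shadow_* masks), which have no meaning in the EPT
format. A rough sketch of that mmu.c helper as it looks in trees of this
vintage (not part of this patch, shown only for contrast):

static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
{
	u64 spte;

	/* x86 page-table encoding: present + writable + shadow_* masks */
	spte = __pa(sp->spt) | PT_PRESENT_MASK | PT_WRITABLE_MASK |
	       shadow_user_mask | shadow_x_mask | shadow_accessed_mask;

	mmu_spte_set(sptep, spte);
}

The EPT variant added above instead encodes the page with the
VMX_EPT_{READABLE,WRITABLE,EXECUTABLE}_MASK bits, which is why a separate
FNAME(link_shadow_page)() is defined in paging_tmpl.h rather than reusing the
mmu.c helper directly.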