From: Andrew Cooper
Subject: Re: [PATCH v2 02/12] VMX: implement suppress #VE.
Date: Wed, 24 Jun 2015 10:35:55 +0100
Message-ID: <558A79FB.6060304@citrix.com>
In-Reply-To: <1434999372-3688-3-git-send-email-edmund.h.white@intel.com>
References: <1434999372-3688-1-git-send-email-edmund.h.white@intel.com> <1434999372-3688-3-git-send-email-edmund.h.white@intel.com>
To: Ed White, xen-devel@lists.xen.org
Cc: Ravi Sahita, Wei Liu, Tim Deegan, Ian Jackson, Jan Beulich, tlengyel@novetta.com, Daniel De Graaf
List-Id: xen-devel@lists.xenproject.org

On 22/06/15 19:56, Ed White wrote:
> In preparation for selectively enabling #VE in a later patch, set
> suppress #VE on all EPTE's.
>
> Suppress #VE should always be the default condition for two reasons:
> it is generally not safe to deliver #VE into a guest unless that guest
> has been modified to receive it; and even then for most EPT violations only
> the hypervisor is able to handle the violation.
>
> Signed-off-by: Ed White
> ---
>  xen/arch/x86/mm/p2m-ept.c | 25 ++++++++++++++++++++++++-
>  1 file changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index a6c9adf..5de3387 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -41,7 +41,7 @@
>  #define is_epte_superpage(ept_entry) ((ept_entry)->sp)
>  static inline bool_t is_epte_valid(ept_entry_t *e)
>  {
> -    return (e->epte != 0 && e->sa_p2mt != p2m_invalid);
> +    return ((e->epte & ~(1ul << 63)) != 0 && e->sa_p2mt != p2m_invalid);

It might be nice to leave a comment explaining that epte.suppress_ve is
not considered as part of validity.  This avoids a rather opaque mask
against a magic number.
Otherwise, Reviewed-by: Andrew Cooper

>  }
>
>  /* returns : 0 for success, -errno otherwise */
> @@ -219,6 +219,8 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
>  static int ept_set_middle_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry)
>  {
>      struct page_info *pg;
> +    ept_entry_t *table;
> +    unsigned int i;
>
>      pg = p2m_alloc_ptp(p2m, 0);
>      if ( pg == NULL )
> @@ -232,6 +234,15 @@ static int ept_set_middle_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry)
>      /* Manually set A bit to avoid overhead of MMU having to write it later. */
>      ept_entry->a = 1;
>
> +    ept_entry->suppress_ve = 1;
> +
> +    table = __map_domain_page(pg);
> +
> +    for ( i = 0; i < EPT_PAGETABLE_ENTRIES; i++ )
> +        table[i].suppress_ve = 1;
> +
> +    unmap_domain_page(table);
> +
>      return 1;
>  }
>
> @@ -281,6 +292,7 @@ static int ept_split_super_page(struct p2m_domain *p2m, ept_entry_t *ept_entry,
>          epte->sp = (level > 1);
>          epte->mfn += i * trunk;
>          epte->snp = (iommu_enabled && iommu_snoop);
> +        epte->suppress_ve = 1;
>
>          ept_p2m_type_to_flags(p2m, epte, epte->sa_p2mt, epte->access);
>
> @@ -790,6 +802,8 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>          ept_p2m_type_to_flags(p2m, &new_entry, p2mt, p2ma);
>      }
>
> +    new_entry.suppress_ve = 1;
> +
>      rc = atomic_write_ept_entry(ept_entry, new_entry, target);
>      if ( unlikely(rc) )
>          old_entry.epte = 0;
> @@ -1111,6 +1125,8 @@ static void ept_flush_pml_buffers(struct p2m_domain *p2m)
>  int ept_p2m_init(struct p2m_domain *p2m)
>  {
>      struct ept_data *ept = &p2m->ept;
> +    ept_entry_t *table;
> +    unsigned int i;
>
>      p2m->set_entry = ept_set_entry;
>      p2m->get_entry = ept_get_entry;
> @@ -1134,6 +1150,13 @@ int ept_p2m_init(struct p2m_domain *p2m)
>          p2m->flush_hardware_cached_dirty = ept_flush_pml_buffers;
>      }
>
> +    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> +
> +    for ( i = 0; i < EPT_PAGETABLE_ENTRIES; i++ )
> +        table[i].suppress_ve = 1;
> +
> +    unmap_domain_page(table);
> +
>      if ( !zalloc_cpumask_var(&ept->synced_mask) )
>          return -ENOMEM;
>