From: "Jan Beulich"
To: Wei Liu
Cc: Ian Campbell, Stefano Stabellini, Andrew Cooper, Ian Jackson,
 Paul Durrant, Anthony PERARD, Xen-devel
Subject: Re: Domctl and physdevop for passthrough (Was: Re: Stabilising some tools only HVMOPs?)
Date: Tue, 01 Mar 2016 04:10:29 -0700
Message-ID: <56D586B502000078000D7C28@prv-mh.provo.novell.com>
In-Reply-To: <20160301105258.GK17111@citrix.com>
References: <20160217172808.GB3723@citrix.com>
 <20160219160539.GV3723@citrix.com>
 <56CAFEE302000078000D4A74@prv-mh.provo.novell.com>
 <20160223143130.GE3723@citrix.com>
 <20160229122309.GG17111@citrix.com>
 <56D447D202000078000D74C9@prv-mh.provo.novell.com>
 <20160229181236.GI17111@citrix.com>
 <56D558B202000078000D7A6F@prv-mh.provo.novell.com>
 <20160301105258.GK17111@citrix.com>
List-Id: xen-devel@lists.xenproject.org

>>> On 01.03.16 at 11:52, wrote:
> On Tue, Mar 01, 2016 at 12:54:09AM -0700, Jan Beulich wrote:
>> >>> On 29.02.16 at 19:12, wrote:
>> > I read the XSA-154 patch and thought a little about whether making a
>> > dedicated hypercall is feasible.
>> >
>> > 1. The patch for XSA-154 mentions that only MMIO mappings with
>> >    inconsistent attributes can cause system instability.
>> > 2. The PV case is hard, but the device model library is only of
>> >    interest to HVM domains, so PV can be ignored.
>> > 3. We want to continue honoring pinned cacheability attributes for
>> >    HVM domains.
>> >
>> > It seems we have a way forward. Say we have a new hypercall just for
>> > pinning the video RAM cacheability attribute.
>> >
>> > The new hypercall has the following properties:
>> >
>> > 1. It can only be used on HVM domains.
>> > 2. It can only be used on MFNs that are not in MMIO ranges, because
>> >    video RAM is just normal RAM.
>> > 3. It can only set the cacheability attribute to WC (as used by
>> >    video RAM).
>> > 4. It is not considered stable.
>> >
>> > so that it won't be abused to change the cacheability attributes of
>> > MMIO mappings in a PV guest and thereby make the host unstable. The
>> > stale data issue is of no relevance, as stated in the XSA-154 patch.
>> >
>> > Does this sound plausible?
>>
>> Yes, it does, but it extends our dependency on what we've been told in
>> the context of XSA-154 actually being true (and having been true for
>> all earlier processor generations, and continuing to be true in the
>> future). But then I don't immediately see why the existing pinning
>> operation won't suffice: it's a domctl (i.e. we can change it), you
>> say you don't need it to be stable, and it's already documented as
>> being intended for RAM only (albeit, iirc, that's not
>> getting enforced anywhere right now). The main present problem (which
>> I don't see a new hypercall solving) is that it's GFN-based, and the
>> GFN->MFN mapping can change after such a pinning got established. Otoh
>> I think that by changing the placement of the
>> hvm_get_mem_pinned_cacheattr() calls we could enforce the RAM-only
>> aspect quite easily. Let me put together a patch ...
>
> That would be good. Thank you very much.

Actually, here you go, albeit compile-tested only for now. Maybe you,
Andrew, or someone else has an early comment or opinion on it
nevertheless.

One thing to consider, cache-flushing-wise, is whether, when deleting a
WC range, it wouldn't suffice to just force the write-combining buffers
to be drained instead of doing a full cache flush. But I guess that
would better be a second patch anyway.
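(A rough sketch of that alternative, for illustration only and not part
of the patch below: flush_wc_buffers_local() is a made-up name, and
whether an IPI-based SFENCE broadcast ends up cheaper than the current
flush_all(FLUSH_CACHE) is exactly the open question.)

static void flush_wc_buffers_local(void *unused)
{
    /*
     * SFENCE drains the executing CPU's write-combining buffers; unlike
     * the WBINVD issued by flush_all(FLUSH_CACHE), no cache lines get
     * written back or invalidated.
     */
    asm volatile ( "sfence" ::: "memory" );
}

/*
 * Cross-CPU propagation would use the usual cross-call machinery, e.g.
 * on_selected_cpus(&cpu_online_map, flush_wc_buffers_local, NULL, 1);
 */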
Jan

- call hvm_get_mem_pinned_cacheattr() for RAM ranges only (requires
  some re-ordering in epte_get_entry_emt(), to fully handle all MMIO
  aspects first)
- remove unnecessary indirection for hvm_get_mem_pinned_cacheattr()'s
  return of the type
- make hvm_set_mem_pinned_cacheattr() return an error on bad domain
  kind or obviously bad GFN range
- also avoid cache flush on EPT when removing a UC- range
- other code structure adjustments without intended functional change

--- unstable.orig/xen/arch/x86/hvm/mtrr.c
+++ unstable/xen/arch/x86/hvm/mtrr.c
@@ -521,14 +521,12 @@ struct hvm_mem_pinned_cacheattr_range {

 static DEFINE_RCU_READ_LOCK(pinned_cacheattr_rcu_lock);

-void hvm_init_cacheattr_region_list(
-    struct domain *d)
+void hvm_init_cacheattr_region_list(struct domain *d)
 {
     INIT_LIST_HEAD(&d->arch.hvm_domain.pinned_cacheattr_ranges);
 }

-void hvm_destroy_cacheattr_region_list(
-    struct domain *d)
+void hvm_destroy_cacheattr_region_list(struct domain *d)
 {
     struct list_head *head = &d->arch.hvm_domain.pinned_cacheattr_ranges;
     struct hvm_mem_pinned_cacheattr_range *range;
@@ -543,20 +541,14 @@ void hvm_destroy_cacheattr_region_list(
     }
 }

-int hvm_get_mem_pinned_cacheattr(
-    struct domain *d,
-    uint64_t guest_fn,
-    unsigned int order,
-    uint32_t *type)
+int hvm_get_mem_pinned_cacheattr(struct domain *d, uint64_t guest_fn,
+                                 unsigned int order)
 {
     struct hvm_mem_pinned_cacheattr_range *range;
     uint64_t mask = ~(uint64_t)0 << order;
-    int rc = 0;
+    int rc = -ENXIO;

-    *type = ~0;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
+    ASSERT(has_hvm_container_domain(d));

     rcu_read_lock(&pinned_cacheattr_rcu_lock);
     list_for_each_entry_rcu ( range,
@@ -566,14 +558,13 @@ int hvm_get_mem_pinned_cacheattr(
         if ( ((guest_fn & mask) >= range->start) &&
              ((guest_fn | ~mask) <= range->end) )
         {
-            *type = range->type;
-            rc = 1;
+            rc = range->type;
             break;
         }
         if ( ((guest_fn & mask) <= range->end) &&
              (range->start <= (guest_fn | ~mask)) )
         {
-            rc = -1;
+            rc = -EADDRNOTAVAIL;
             break;
         }
     }
@@ -587,20 +578,21 @@ static void free_pinned_cacheattr_entry(
     xfree(container_of(rcu, struct hvm_mem_pinned_cacheattr_range, rcu));
 }

-int32_t hvm_set_mem_pinned_cacheattr(
-    struct domain *d,
-    uint64_t gfn_start,
-    uint64_t gfn_end,
-    uint32_t type)
+int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
+                                 uint64_t gfn_end, uint32_t type)
 {
     struct hvm_mem_pinned_cacheattr_range *range;
     int rc = 1;

-    if ( !is_hvm_domain(d) || gfn_end < gfn_start )
-        return 0;
+    if ( !is_hvm_domain(d) )
+        return -EOPNOTSUPP;
+
+    if ( gfn_end < gfn_start || (gfn_start | gfn_end) >> paddr_bits )
+        return -EINVAL;

-    if ( type == XEN_DOMCTL_DELETE_MEM_CACHEATTR )
+    switch ( type )
     {
+    case XEN_DOMCTL_DELETE_MEM_CACHEATTR:
         /* Remove the requested range. */
         rcu_read_lock(&pinned_cacheattr_rcu_lock);
         list_for_each_entry_rcu ( range,
@@ -613,22 +605,37 @@ int32_t hvm_set_mem_pinned_cacheattr(
                 type = range->type;
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
                 p2m_memory_type_changed(d);
-                if ( type != PAT_TYPE_UNCACHABLE )
+                switch ( type )
+                {
+                case PAT_TYPE_UC_MINUS:
+                    /*
+                     * For EPT we can also avoid the flush in this case;
+                     * see epte_get_entry_emt().
+                     */
+                    if ( hap_enabled(d) && cpu_has_vmx )
+                case PAT_TYPE_UNCACHABLE:
+                        break;
+                    /* fall through */
+                default:
                     flush_all(FLUSH_CACHE);
+                    break;
+                }
                 return 0;
             }
         rcu_read_unlock(&pinned_cacheattr_rcu_lock);
         return -ENOENT;
-    }

-    if ( !((type == PAT_TYPE_UNCACHABLE) ||
-           (type == PAT_TYPE_WRCOMB) ||
-           (type == PAT_TYPE_WRTHROUGH) ||
-           (type == PAT_TYPE_WRPROT) ||
-           (type == PAT_TYPE_WRBACK) ||
-           (type == PAT_TYPE_UC_MINUS)) ||
-         !is_hvm_domain(d) )
+    case PAT_TYPE_UC_MINUS:
+    case PAT_TYPE_UNCACHABLE:
+    case PAT_TYPE_WRBACK:
+    case PAT_TYPE_WRCOMB:
+    case PAT_TYPE_WRPROT:
+    case PAT_TYPE_WRTHROUGH:
+        break;
+
+    default:
         return -EINVAL;
+    }

     rcu_read_lock(&pinned_cacheattr_rcu_lock);
     list_for_each_entry_rcu ( range,
@@ -762,7 +769,6 @@ int epte_get_entry_emt(struct domain *d,
                        unsigned int order, uint8_t *ipat, bool_t direct_mmio)
 {
     int gmtrr_mtype, hmtrr_mtype;
-    uint32_t type;
     struct vcpu *v = current;

     *ipat = 0;
@@ -782,30 +788,28 @@ int epte_get_entry_emt(struct domain *d,
                                  mfn_x(mfn) + (1UL << order) - 1) )
         return -1;

-    switch ( hvm_get_mem_pinned_cacheattr(d, gfn, order, &type) )
+    if ( direct_mmio )
     {
-    case 1:
+        if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
+            return MTRR_TYPE_UNCACHABLE;
+        if ( order )
+            return -1;
         *ipat = 1;
-        return type != PAT_TYPE_UC_MINUS ? type : PAT_TYPE_UNCACHABLE;
-    case -1:
-        return -1;
+        return MTRR_TYPE_WRBACK;
     }

-    if ( !need_iommu(d) && !cache_flush_permitted(d) )
+    gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, gfn, order);
+    if ( gmtrr_mtype >= 0 )
     {
-        ASSERT(!direct_mmio ||
-               !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
-                 order));
         *ipat = 1;
-        return MTRR_TYPE_WRBACK;
+        return gmtrr_mtype != PAT_TYPE_UC_MINUS ? gmtrr_mtype
+                                                : MTRR_TYPE_UNCACHABLE;
     }
+    if ( gmtrr_mtype == -EADDRNOTAVAIL )
+        return -1;

-    if ( direct_mmio )
+    if ( !need_iommu(d) && !cache_flush_permitted(d) )
     {
-        if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
-            return MTRR_TYPE_UNCACHABLE;
-        if ( order )
-            return -1;
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
     }
--- unstable.orig/xen/arch/x86/mm/shadow/multi.c
+++ unstable/xen/arch/x86/mm/shadow/multi.c
@@ -607,7 +607,7 @@ _sh_propagate(struct vcpu *v,
     if ( (level == 1) && is_hvm_domain(d) &&
          !is_xen_heap_mfn(mfn_x(target_mfn)) )
     {
-        unsigned int type;
+        int type;

         ASSERT(!(sflags & (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)));

@@ -618,7 +618,9 @@ _sh_propagate(struct vcpu *v,
          * 3) if disables snoop control, compute the PAT index with
          *    gMTRR and gPAT.
          */
-        if ( hvm_get_mem_pinned_cacheattr(d, gfn_x(target_gfn), 0, &type) )
+        if ( !mmio_mfn &&
+             (type = hvm_get_mem_pinned_cacheattr(d, gfn_x(target_gfn),
+                                                  0)) >= 0 )
             sflags |= pat_type_2_pte_flags(type);
         else if ( d->arch.hvm_domain.is_in_uc_mode )
             sflags |= pat_type_2_pte_flags(PAT_TYPE_UNCACHABLE);
--- unstable.orig/xen/include/asm-x86/hvm/cacheattr.h
+++ unstable/xen/include/asm-x86/hvm/cacheattr.h
@@ -1,29 +1,23 @@
 #ifndef __HVM_CACHEATTR_H__
 #define __HVM_CACHEATTR_H__

-void hvm_init_cacheattr_region_list(
-    struct domain *d);
-void hvm_destroy_cacheattr_region_list(
-    struct domain *d);
+#include <xen/types.h>
+
+struct domain;
+void hvm_init_cacheattr_region_list(struct domain *d);
+void hvm_destroy_cacheattr_region_list(struct domain *d);

 /*
  * To see guest_fn is in the pinned range or not,
- * if yes, return 1, and set type to value in this range
- * if no, return 0, setting type to ~0
- * if ambiguous, return -1, setting type to ~0 (possible only for order > 0)
+ * if yes, return the (non-negative) type
+ * if no or ambiguous, return a negative error code
  */
-int hvm_get_mem_pinned_cacheattr(
-    struct domain *d,
-    uint64_t guest_fn,
-    unsigned int order,
-    uint32_t *type);
+int hvm_get_mem_pinned_cacheattr(struct domain *d, uint64_t guest_fn,
+                                 unsigned int order);


 /* Set pinned caching type for a domain. */
-int32_t hvm_set_mem_pinned_cacheattr(
-    struct domain *d,
-    uint64_t gfn_start,
-    uint64_t gfn_end,
-    uint32_t type);
+int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
+                                 uint64_t gfn_end, uint32_t type);

 #endif /* __HVM_CACHEATTR_H__ */
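(For completeness, a sketch of the consumer side discussed earlier in the
thread, i.e. pinning a guest's video RAM as write-combining through the
existing, unstable domctl. It assumes the libxc wrapper
xc_domain_pin_memory_cacheattr() and the XEN_DOMCTL_MEM_CACHEATTR_WC
constant from the public domctl interface; the helper name pin_vram_wc()
and the GFN range are made up for illustration.)

#include <stdint.h>
#include <xenctrl.h>

/*
 * Pin [vram_start_gfn, vram_start_gfn + vram_nr_frames - 1] as WC.
 * The domctl takes guest frame numbers and an inclusive range, hence
 * the "- 1" on the end frame.
 */
static int pin_vram_wc(xc_interface *xch, uint32_t domid,
                       uint64_t vram_start_gfn, uint64_t vram_nr_frames)
{
    return xc_domain_pin_memory_cacheattr(xch, domid,
                                          vram_start_gfn,
                                          vram_start_gfn + vram_nr_frames - 1,
                                          XEN_DOMCTL_MEM_CACHEATTR_WC);
}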