From: David Woodhouse

For some MMIO regions, such as those high above RAM, mfn_valid() will
return false. Since the fix for XSA-154 in commit c61a6f74f80e ("x86:
enforce consistent cachability of MMIO mappings"), guests have not been
able to use PAT to obtain write-combining on such regions, because the
'ignore PAT' bit is set in EPT.

We probably want to err on the side of caution and preserve that
behaviour for addresses in mmio_ro_ranges, but not for normal MMIO
mappings. That necessitates a slight refactoring to check mfn_valid()
later, and let the MMIO case get through to the right code path.

Since we're not bailing out for !mfn_valid() immediately, the range
checks need to be adjusted to cope: simply by masking in the low bits
to account for 'order' instead of adding, to avoid overflow when the
mfn is INVALID_MFN (which happens on unmap, since we carefully call
this function to fill in the EMT even though the PTE won't be valid).

The range checks are also slightly refactored so that only one of them
is in the fast path for the common case. If the range doesn't overlap,
then it *definitely* isn't contained, so we don't need both checks. And
if it overlaps and is only one page, then it definitely *is* contained.

Finally, add a comment clarifying how that 'return -1' works: it isn't
returning an error and causing the mapping to fail; it relies on
resolve_misconfig() being able to split the mapping later. So it's
*only* sane to do it where order>0 and the 'problem' will be solved by
splitting the large page, not for blindly returning 'error', which I
was tempted to do in my first attempt.

Signed-off-by: David Woodhouse
---
 xen/arch/x86/hvm/mtrr.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 709759c..8fef756 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -773,17 +773,19 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
 
-    if ( !mfn_valid(mfn_x(mfn)) ||
-         rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
-    {
-        *ipat = 1;
-        return MTRR_TYPE_UNCACHABLE;
-    }
-
+    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
     if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
+                                 mfn_x(mfn) | ((1UL << order) - 1)) )
+    {
+        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
+                                               mfn_x(mfn) | ((1UL << order) - 1)) )
+        {
+            *ipat = 1;
+            return MTRR_TYPE_UNCACHABLE;
+        }
+        /* Force invalid memory type so resolve_misconfig() will split it */
         return -1;
+    }
 
     if ( direct_mmio )
     {
@@ -795,6 +797,12 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
+    if ( !mfn_valid(mfn_x(mfn)) )
+    {
+        *ipat = 1;
+        return MTRR_TYPE_UNCACHABLE;
+    }
+
     if ( !need_iommu(d) && !cache_flush_permitted(d) )
     {
         *ipat = 1;
-- 
2.7.4
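
[Editor's note, not part of the patch: a minimal standalone C sketch of
the overflow the commit message describes when computing the range end.
It assumes INVALID_MFN's raw value is ~0UL and picks order 9 (a 2MB
superpage) purely for illustration; the names below are local to the
sketch, not Xen's.]

#include <stdio.h>

#define INVALID_MFN_RAW (~0UL)   /* assumed raw value of INVALID_MFN */

int main(void)
{
    unsigned long mfn = INVALID_MFN_RAW;
    unsigned int order = 9;                  /* 2MB superpage, 512 frames */

    /* Old bound: adding wraps around to a small value when mfn is ~0UL. */
    unsigned long end_add  = mfn + (1UL << order) - 1;

    /* New bound: OR-ing in the low bits leaves ~0UL unchanged. */
    unsigned long end_mask = mfn | ((1UL << order) - 1);

    printf("add : %#lx\n", end_add);    /* 0x1fe on 64-bit: wrapped */
    printf("mask: %#lx\n", end_mask);   /* 0xffffffffffffffff */
    return 0;
}

The '+' form would hand the rangeset checks an end below the start on
the unmap path the commit message mentions; masking keeps the bound at
~0UL, so the checks behave sensibly there.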