* [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination
@ 2014-04-23  8:21 Jan Beulich
  2014-04-23  8:39 ` [PATCH 1/4] x86/EPT: consider page order when checking for APIC MFN Jan Beulich
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Jan Beulich @ 2014-04-23  8:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan

1: consider page order when checking for APIC MFN
2: refine direct MMIO checking when determining EMT
3: fix pinned cache attribute range checking
4: also force EMT re-evaluation if pinned ranges change

Signed-off-by: Jan Beulich <jbeulich@suse.com>


* [PATCH 1/4] x86/EPT: consider page order when checking for APIC MFN
  2014-04-23  8:21 [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Jan Beulich
@ 2014-04-23  8:39 ` Jan Beulich
  2014-04-28  7:38   ` Tian, Kevin
  2014-04-23  8:40 ` [PATCH 2/4] x86/EPT: refine direct MMIO checking when determining EMT Jan Beulich
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2014-04-23  8:39 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan


This was overlooked in 3d90d6e6 ("x86/EPT: split super pages upon
mismatching memory types").

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -816,15 +816,18 @@ int epte_get_entry_emt(struct domain *d,
           !has_arch_pdevs(d)) )
     {
         ASSERT(!direct_mmio ||
-               mfn_x(mfn) == d->arch.hvm_domain.vmx.apic_access_mfn);
+               !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
+                 order));
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
     }
 
     if ( direct_mmio )
     {
-        if ( mfn_x(mfn) != d->arch.hvm_domain.vmx.apic_access_mfn )
+        if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
             return MTRR_TYPE_UNCACHABLE;
+        if ( order )
+            return -1;
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
     }





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* [PATCH 2/4] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-23  8:21 [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Jan Beulich
  2014-04-23  8:39 ` [PATCH 1/4] x86/EPT: consider page order when checking for APIC MFN Jan Beulich
@ 2014-04-23  8:40 ` Jan Beulich
  2014-04-25 12:05   ` [PATCH 2/4 v2] " Jan Beulich
  2014-04-28  7:41   ` [PATCH 2/4] " Tian, Kevin
  2014-04-23  8:41 ` [PATCH 3/4] x86/EPT: fix pinned cache attribute range checking Jan Beulich
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Jan Beulich @ 2014-04-23  8:40 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan


With need_iommu() only ever true when iommu_enabled is also true, and
with the former getting set when a PCI device gets added to a guest,
the checks can be consolidated. The range set checks are left in place
just in case raw MMIO or I/O port ranges get passed to a guest.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -810,10 +810,9 @@ int epte_get_entry_emt(struct domain *d,
         return -1;
     }
 
-    if ( !iommu_enabled ||
-         (rangeset_is_empty(d->iomem_caps) &&
-          rangeset_is_empty(d->arch.ioport_caps) &&
-          !has_arch_pdevs(d)) )
+    if ( !need_iommu(d) &&
+         rangeset_is_empty(d->iomem_caps) &&
+         rangeset_is_empty(d->arch.ioport_caps) )
     {
         ASSERT(!direct_mmio ||
                !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>





* [PATCH 3/4] x86/EPT: fix pinned cache attribute range checking
  2014-04-23  8:21 [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Jan Beulich
  2014-04-23  8:39 ` [PATCH 1/4] x86/EPT: consider page order when checking for APIC MFN Jan Beulich
  2014-04-23  8:40 ` [PATCH 2/4] x86/EPT: refine direct MMIO checking when determining EMT Jan Beulich
@ 2014-04-23  8:41 ` Jan Beulich
  2014-04-28  7:42   ` Tian, Kevin
  2014-04-23  8:41 ` [PATCH 4/4] x86/EPT: also force EMT re-evaluation if pinned ranges change Jan Beulich
  2014-05-01 12:44 ` [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Tim Deegan
  4 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2014-04-23  8:41 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan


This wasn't done properly by 4d66f069 ("x86: fix pinned cache attribute
handling"): The passed in GFN shouldn't be assumed to be order aligned.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -589,6 +589,7 @@ int hvm_get_mem_pinned_cacheattr(
     uint32_t *type)
 {
     struct hvm_mem_pinned_cacheattr_range *range;
+    uint64_t mask = ~(uint64_t)0 << order;
     int rc = 0;
 
     *type = ~0;
@@ -601,15 +602,15 @@ int hvm_get_mem_pinned_cacheattr(
                               &d->arch.hvm_domain.pinned_cacheattr_ranges,
                               list )
     {
-        if ( (guest_fn >= range->start) &&
-             (guest_fn + (1UL << order) - 1 <= range->end) )
+        if ( ((guest_fn & mask) >= range->start) &&
+             ((guest_fn | ~mask) <= range->end) )
         {
             *type = range->type;
             rc = 1;
             break;
         }
-        if ( (guest_fn <= range->end) &&
-             (range->start <= guest_fn + (1UL << order) - 1) )
+        if ( ((guest_fn & mask) <= range->end) &&
+             (range->start <= (guest_fn | ~mask)) )
         {
             rc = -1;
             break;





* [PATCH 4/4] x86/EPT: also force EMT re-evaluation if pinned ranges change
  2014-04-23  8:21 [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Jan Beulich
                   ` (2 preceding siblings ...)
  2014-04-23  8:41 ` [PATCH 3/4] x86/EPT: fix pinned cache attribute range checking Jan Beulich
@ 2014-04-23  8:41 ` Jan Beulich
  2014-04-28  7:43   ` Tian, Kevin
  2014-05-01 12:44 ` [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Tim Deegan
  4 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2014-04-23  8:41 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan


This was inadvertently left out of aa9114ed ("x86/EPT: force
re-evaluation of memory type as necessary"). Note that this
intentionally doesn't use memory_type_changed(): Changes to the pinned
ranges are independent of IOMMU presence, which that function uses to
determine whether to call the underlying p2m_memory_type_changed().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -650,6 +650,7 @@ int32_t hvm_set_mem_pinned_cacheattr(
                 rcu_read_unlock(&pinned_cacheattr_rcu_lock);
                 list_del_rcu(&range->list);
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
+                p2m_memory_type_changed(d);
                 return 0;
             }
         rcu_read_unlock(&pinned_cacheattr_rcu_lock);
@@ -695,6 +696,7 @@ int32_t hvm_set_mem_pinned_cacheattr(
     range->type = type;
 
     list_add_rcu(&range->list, &d->arch.hvm_domain.pinned_cacheattr_ranges);
+    p2m_memory_type_changed(d);
 
     return 0;
 }





* [PATCH 2/4 v2] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-23  8:40 ` [PATCH 2/4] x86/EPT: refine direct MMIO checking when determining EMT Jan Beulich
@ 2014-04-25 12:05   ` Jan Beulich
  2014-04-28  7:59     ` Tian, Kevin
  2014-04-28  7:41   ` [PATCH 2/4] " Tian, Kevin
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2014-04-25 12:05 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan


With need_iommu() only ever true when iommu_enabled is also true, and
with the former getting set when a PCI device gets added to a guest,
the checks can be consolidated. The range set checks are left in place
just in case raw MMIO or I/O port ranges get passed to a guest.

At once drop open-coding of cache_flush_permitted().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Also drop open-coding of cache_flush_permitted().

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -20,6 +20,7 @@
 #include <public/hvm/e820.h>
 #include <xen/types.h>
 #include <asm/e820.h>
+#include <asm/iocap.h>
 #include <asm/mm.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
@@ -810,10 +811,7 @@ int epte_get_entry_emt(struct domain *d,
         return -1;
     }
 
-    if ( !iommu_enabled ||
-         (rangeset_is_empty(d->iomem_caps) &&
-          rangeset_is_empty(d->arch.ioport_caps) &&
-          !has_arch_pdevs(d)) )
+    if ( !need_iommu(d) && !cache_flush_permitted(d) )
     {
         ASSERT(!direct_mmio ||
                !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>





* Re: [PATCH 1/4] x86/EPT: consider page order when checking for APIC MFN
  2014-04-23  8:39 ` [PATCH 1/4] x86/EPT: consider page order when checking for APIC MFN Jan Beulich
@ 2014-04-28  7:38   ` Tian, Kevin
  0 siblings, 0 replies; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28  7:38 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, April 23, 2014 4:40 PM
> 
> This was overlooked in 3d90d6e6 ("x86/EPT: split super pages upon
> mismatching memory types").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -816,15 +816,18 @@ int epte_get_entry_emt(struct domain *d,
>            !has_arch_pdevs(d)) )
>      {
>          ASSERT(!direct_mmio ||
> -               mfn_x(mfn) == d->arch.hvm_domain.vmx.apic_access_mfn);
> +               !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
> +                 order));
>          *ipat = 1;
>          return MTRR_TYPE_WRBACK;
>      }
> 
>      if ( direct_mmio )
>      {
> -        if ( mfn_x(mfn) != d->arch.hvm_domain.vmx.apic_access_mfn )
> +        if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
>              return MTRR_TYPE_UNCACHABLE;
> +        if ( order )
> +            return -1;
>          *ipat = 1;
>          return MTRR_TYPE_WRBACK;
>      }
> 
> 


* Re: [PATCH 2/4] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-23  8:40 ` [PATCH 2/4] x86/EPT: refine direct MMIO checking when determining EMT Jan Beulich
  2014-04-25 12:05   ` [PATCH 2/4 v2] " Jan Beulich
@ 2014-04-28  7:41   ` Tian, Kevin
  1 sibling, 0 replies; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28  7:41 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, April 23, 2014 4:40 PM
> 
> With need_iommu() only ever true when iommu_enabled is also true, and
> with the former getting set when a PCI device gets added to a guest,
> the checks can be consolidated. The range set check are left in place
> just in case raw MMIO or I/O port ranges get passed to a guest.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -810,10 +810,9 @@ int epte_get_entry_emt(struct domain *d,
>          return -1;
>      }
> 
> -    if ( !iommu_enabled ||
> -         (rangeset_is_empty(d->iomem_caps) &&
> -          rangeset_is_empty(d->arch.ioport_caps) &&
> -          !has_arch_pdevs(d)) )
> +    if ( !need_iommu(d) &&
> +         rangeset_is_empty(d->iomem_caps) &&
> +         rangeset_is_empty(d->arch.ioport_caps) )
>      {
>          ASSERT(!direct_mmio ||
>                 !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
> 
> 


* Re: [PATCH 3/4] x86/EPT: fix pinned cache attribute range checking
  2014-04-23  8:41 ` [PATCH 3/4] x86/EPT: fix pinned cache attribute range checking Jan Beulich
@ 2014-04-28  7:42   ` Tian, Kevin
  0 siblings, 0 replies; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28  7:42 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, April 23, 2014 4:41 PM
> 
> This wasn't done properly by 4d66f069 ("x86: fix pinned cache attribute
> handling"): The passed in GFN shouldn't be assumed to be order aligned.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -589,6 +589,7 @@ int hvm_get_mem_pinned_cacheattr(
>      uint32_t *type)
>  {
>      struct hvm_mem_pinned_cacheattr_range *range;
> +    uint64_t mask = ~(uint64_t)0 << order;
>      int rc = 0;
> 
>      *type = ~0;
> @@ -601,15 +602,15 @@ int hvm_get_mem_pinned_cacheattr(
>                                &d->arch.hvm_domain.pinned_cacheattr_ranges,
>                                list )
>      {
> -        if ( (guest_fn >= range->start) &&
> -             (guest_fn + (1UL << order) - 1 <= range->end) )
> +        if ( ((guest_fn & mask) >= range->start) &&
> +             ((guest_fn | ~mask) <= range->end) )
>          {
>              *type = range->type;
>              rc = 1;
>              break;
>          }
> -        if ( (guest_fn <= range->end) &&
> -             (range->start <= guest_fn + (1UL << order) - 1) )
> +        if ( ((guest_fn & mask) <= range->end) &&
> +             (range->start <= (guest_fn | ~mask)) )
>          {
>              rc = -1;
>              break;
> 
> 


* Re: [PATCH 4/4] x86/EPT: also force EMT re-evaluation if pinned ranges change
  2014-04-23  8:41 ` [PATCH 4/4] x86/EPT: also force EMT re-evaluation if pinned ranges change Jan Beulich
@ 2014-04-28  7:43   ` Tian, Kevin
  0 siblings, 0 replies; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28  7:43 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, April 23, 2014 4:42 PM
> 
> This was inadvertently left out of aa9114ed ("x86/EPT: force
> re-evaluation of memory type as necessary"). Note that this
> intentionally doesn't use memory_type_changed(): Changes to the pinned
> ranges are independent of IOMMU presence, which that function uses to
> determine whether to call the underlying p2m_memory_type_changed().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -650,6 +650,7 @@ int32_t hvm_set_mem_pinned_cacheattr(
>                  rcu_read_unlock(&pinned_cacheattr_rcu_lock);
>                  list_del_rcu(&range->list);
>                  call_rcu(&range->rcu, free_pinned_cacheattr_entry);
> +                p2m_memory_type_changed(d);
>                  return 0;
>              }
>          rcu_read_unlock(&pinned_cacheattr_rcu_lock);
> @@ -695,6 +696,7 @@ int32_t hvm_set_mem_pinned_cacheattr(
>      range->type = type;
> 
> +    list_add_rcu(&range->list, &d->arch.hvm_domain.pinned_cacheattr_ranges);
> +    p2m_memory_type_changed(d);
> 
>      return 0;
>  }
> 
> 


* Re: [PATCH 2/4 v2] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-25 12:05   ` [PATCH 2/4 v2] " Jan Beulich
@ 2014-04-28  7:59     ` Tian, Kevin
  2014-04-28  8:24       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28  7:59 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, April 25, 2014 8:06 PM
> 
> With need_iommu() only ever true when iommu_enabled is also true, and
> with the former getting set when a PCI device gets added to a guest,
> the checks can be consolidated. The range set check are left in place
> just in case raw MMIO or I/O port ranges get passed to a guest.
> 
> At once drop open-coding of cache_flush_permitted().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Also drop open-coding of cache_flush_permitted().
> 
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -20,6 +20,7 @@
>  #include <public/hvm/e820.h>
>  #include <xen/types.h>
>  #include <asm/e820.h>
> +#include <asm/iocap.h>
>  #include <asm/mm.h>
>  #include <asm/paging.h>
>  #include <asm/p2m.h>
> @@ -810,10 +811,7 @@ int epte_get_entry_emt(struct domain *d,
>          return -1;
>      }
> 
> -    if ( !iommu_enabled ||
> -         (rangeset_is_empty(d->iomem_caps) &&
> -          rangeset_is_empty(d->arch.ioport_caps) &&
> -          !has_arch_pdevs(d)) )
> +    if ( !need_iommu(d) && !cache_flush_permitted(d) )

&&->||

>      {
>          ASSERT(!direct_mmio ||
>                 !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
> 


* Re: [PATCH 2/4 v2] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-28  7:59     ` Tian, Kevin
@ 2014-04-28  8:24       ` Jan Beulich
  2014-04-28  9:06         ` Tian, Kevin
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2014-04-28  8:24 UTC (permalink / raw)
  To: Kevin Tian; +Cc: xen-devel, Keir Fraser, Eddie Dong, Jun Nakajima, Tim Deegan

>>> On 28.04.14 at 09:59, <kevin.tian@intel.com> wrote:
>>  From: Jan Beulich [mailto:JBeulich@suse.com]
>> @@ -810,10 +811,7 @@ int epte_get_entry_emt(struct domain *d,
>>          return -1;
>>      }
>> 
>> -    if ( !iommu_enabled ||
>> -         (rangeset_is_empty(d->iomem_caps) &&
>> -          rangeset_is_empty(d->arch.ioport_caps) &&
>> -          !has_arch_pdevs(d)) )
>> +    if ( !need_iommu(d) && !cache_flush_permitted(d) )
> 
> &&->||

No, specifically not - we shouldn't force WB in either case.

Jan


* Re: [PATCH 2/4 v2] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-28  8:24       ` Jan Beulich
@ 2014-04-28  9:06         ` Tian, Kevin
  2014-04-28  9:51           ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28  9:06 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, April 28, 2014 4:25 PM
> 
> >>> On 28.04.14 at 09:59, <kevin.tian@intel.com> wrote:
> >>  From: Jan Beulich [mailto:JBeulich@suse.com]
> >> @@ -810,10 +811,7 @@ int epte_get_entry_emt(struct domain *d,
> >>          return -1;
> >>      }
> >>
> >> -    if ( !iommu_enabled ||
> >> -         (rangeset_is_empty(d->iomem_caps) &&
> >> -          rangeset_is_empty(d->arch.ioport_caps) &&
> >> -          !has_arch_pdevs(d)) )
> >> +    if ( !need_iommu(d) && !cache_flush_permitted(d) )
> >
> > &&->||
> 
> No, specifically not - we shouldn't force WB in either case.
> 

well, I gave my original comments under the impression that this was a cleanup
patch rather than a fix, e.g. iommu_enabled -> need_iommu and open-coding ->
cache_flush_permitted(). On that reading, && changes the original meaning.

so you're actually fixing the conditions here. I haven't followed the latest
advances in device assignment recently. Could you elaborate on the situations
in which I/O resources are assigned while an IOMMU is not required?

Thanks
Kevin


* Re: [PATCH 2/4 v2] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-28  9:06         ` Tian, Kevin
@ 2014-04-28  9:51           ` Jan Beulich
  2014-04-28 12:02             ` Tian, Kevin
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2014-04-28  9:51 UTC (permalink / raw)
  To: Kevin Tian; +Cc: xen-devel, Keir Fraser, Eddie Dong, Jun Nakajima, Tim Deegan

>>> On 28.04.14 at 11:06, <kevin.tian@intel.com> wrote:
>>  From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Monday, April 28, 2014 4:25 PM
>> 
>> >>> On 28.04.14 at 09:59, <kevin.tian@intel.com> wrote:
>> >>  From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> @@ -810,10 +811,7 @@ int epte_get_entry_emt(struct domain *d,
>> >>          return -1;
>> >>      }
>> >>
>> >> -    if ( !iommu_enabled ||
>> >> -         (rangeset_is_empty(d->iomem_caps) &&
>> >> -          rangeset_is_empty(d->arch.ioport_caps) &&
>> >> -          !has_arch_pdevs(d)) )
>> >> +    if ( !need_iommu(d) && !cache_flush_permitted(d) )
>> >
>> > &&->||
>> 
>> No, specifically not - we shouldn't force WB in either case.
>> 
> 
> well, I gave original comments with the impression that it's a cleanup patch
> instead of fixing anything, e.g. iommu_enabled->need_iommu, and open-coding
> to cache_flush_permitted. Based on that thought, && changes original meaning.
> 
> so you actually fix the conditions here. I didn't follow up latest advance
> in device assignment recently. Could you elaborate in which situation I/O
> resources are assigned while iommu is not required?

It's not so much a question of "required": IOMMU setup only
happens when a PCI device gets assigned to a guest. It specifically
does not happen (and would make no sense, because the IOMMU
concepts are PCI device based) when a raw MMIO or port range gets
assigned. While the latter is insecure, I don't think we currently do
anything to disallow this, and hence we shouldn't break it here.

Jan


* Re: [PATCH 2/4 v2] x86/EPT: refine direct MMIO checking when determining EMT
  2014-04-28  9:51           ` Jan Beulich
@ 2014-04-28 12:02             ` Tian, Kevin
  0 siblings, 0 replies; 16+ messages in thread
From: Tian, Kevin @ 2014-04-28 12:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Keir Fraser, Dong, Eddie, Nakajima, Jun, Tim Deegan

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, April 28, 2014 5:52 PM
> 
> >>> On 28.04.14 at 11:06, <kevin.tian@intel.com> wrote:
> >>  From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Monday, April 28, 2014 4:25 PM
> >>
> >> >>> On 28.04.14 at 09:59, <kevin.tian@intel.com> wrote:
> >> >>  From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> @@ -810,10 +811,7 @@ int epte_get_entry_emt(struct domain *d,
> >> >>          return -1;
> >> >>      }
> >> >>
> >> >> -    if ( !iommu_enabled ||
> >> >> -         (rangeset_is_empty(d->iomem_caps) &&
> >> >> -          rangeset_is_empty(d->arch.ioport_caps) &&
> >> >> -          !has_arch_pdevs(d)) )
> >> >> +    if ( !need_iommu(d) && !cache_flush_permitted(d) )
> >> >
> >> > &&->||
> >>
> >> No, specifically not - we shouldn't force WB in either case.
> >>
> >
> > well, I gave original comments with the impression that it's a cleanup patch
> > instead of fixing anything, e.g. iommu_enabled->need_iommu, and open-coding
> > to cache_flush_permitted. Based on that thought, && changes original meaning.
> >
> > so you actually fix the conditions here. I didn't follow up latest advance
> > in device assignment recently. Could you elaborate in which situation I/O
> > resources are assigned while iommu is not required?
> 
> It's not so much a question of "required": IOMMU setup only
> happens when a PCI device gets assigned to a guest. It specifically
> does not happen (and would make no sense, because the IOMMU
> concepts are PCI device based) when a raw MMIO or port range gets
> assigned. While the latter is insecure, I don't think we currently do
> anything to disallow this, and hence we shouldn't break it here.
> 

Got it. Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination
  2014-04-23  8:21 [PATCH 0/4] x86/EPT: miscellaneous further fixes to EMT determination Jan Beulich
                   ` (3 preceding siblings ...)
  2014-04-23  8:41 ` [PATCH 4/4] x86/EPT: also force EMT re-evaluation if pinned ranges change Jan Beulich
@ 2014-05-01 12:44 ` Tim Deegan
  4 siblings, 0 replies; 16+ messages in thread
From: Tim Deegan @ 2014-05-01 12:44 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Kevin Tian, Keir Fraser, Eddie Dong, Jun Nakajima

At 09:21 +0100 on 23 Apr (1398241319), Jan Beulich wrote:
> 1: consider page order when checking for APIC MFN
> 2: refine direct MMIO checking when determining EMT
> 3: fix pinned cache attribute range checking
> 4: also force EMT re-evaluation if pinned ranges change
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tim Deegan <tim@xen.org>

