xen-devel.lists.xenproject.org archive mirror
* [PATCH v7 00/11] Check VT-d Device-TLB flush error
@ 2016-06-08  8:58 Xu, Quan
  2016-06-08  8:58 ` [PATCH v7 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
                   ` (10 more replies)
  0 siblings, 11 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, Quan Xu

From: Quan Xu <quan.xu@intel.com>

This patch set is a prerequisite for the patch series 'VT-d Device-TLB flush issue'.

When an IOMMU Device-TLB flush times out, Xen currently calls panic(). However, the existing
panic() is going to be eliminated, so we must propagate the IOMMU Device-TLB flush error up the call trees.

This patch set is also based on the discussion of the 'abstract model of IOMMU unmapping/mapping failures'.
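The propagation pattern the series introduces can be illustrated with a minimal, self-contained sketch. Everything below (the function names, the stub flush, the -110 timeout code) is illustrative only, not the real Xen API: a leaf flush reports a timeout as an error code instead of calling panic(), and each caller checks and forwards that code.

```c
/* Illustrative sketch of replacing panic() with error propagation.
 * All names and codes here are stand-ins, not real Xen functions. */
#include <assert.h>
#include <stdio.h>

/* Stand-in for a leaf Device-TLB flush that can time out. */
static int device_tlb_flush(int timeout_ms)
{
    if (timeout_ms <= 0)
        return -110; /* stand-in for -ETIMEDOUT: report, don't panic() */
    return 0;
}

/* Callers must check the result and forward it up the call tree. */
static int iotlb_flush_all(int timeout_ms)
{
    int rc = device_tlb_flush(timeout_ms);

    if (rc)
        fprintf(stderr, "Device-TLB flush failed: %d\n", rc);

    return rc; /* propagated, not swallowed */
}
```

With __must_check annotations on such functions (as the later patches add), the compiler warns whenever a caller ignores the returned error.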


Quan Xu (11):
  IOMMU: handle IOMMU mapping and unmapping failures
  IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping
    (top level ones)
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top
    level ones)
  IOMMU/MMU: propagate IOMMU Device-TLB flush error up to
    iommu_iotlb_flush{,_all} (top level ones)
  propagate IOMMU Device-TLB flush error up to EPT update (top level
    ones)
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
    (top level ones)
  IOMMU: propagate IOMMU Device-TLB flush error (leaf ones).
  vt-d: fix the IOMMU flush issue
  vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom
    functions
  vt-d: add __must_check annotation to IOMMU flush pointers and handlers

 xen/arch/arm/p2m.c                            |   4 +-
 xen/arch/x86/acpi/power.c                     |  73 +++++--
 xen/arch/x86/mm.c                             |  13 +-
 xen/arch/x86/mm/p2m-ept.c                     |  36 +++-
 xen/arch/x86/mm/p2m-pt.c                      |  23 ++-
 xen/arch/x86/mm/p2m.c                         |  18 +-
 xen/arch/x86/x86_64/mm.c                      |   4 +-
 xen/common/memory.c                           |  12 +-
 xen/drivers/passthrough/amd/iommu_init.c      |   9 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  17 +-
 xen/drivers/passthrough/arm/smmu.c            |  19 +-
 xen/drivers/passthrough/iommu.c               |  62 +++++-
 xen/drivers/passthrough/vtd/extern.h          |   3 +-
 xen/drivers/passthrough/vtd/iommu.c           | 284 ++++++++++++++++++--------
 xen/drivers/passthrough/vtd/iommu.h           |  11 +-
 xen/drivers/passthrough/vtd/qinval.c          |  14 +-
 xen/drivers/passthrough/vtd/quirks.c          |  27 ++-
 xen/drivers/passthrough/vtd/x86/vtd.c         |  15 +-
 xen/drivers/passthrough/x86/iommu.c           |   5 +-
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
 xen/include/asm-x86/iommu.h                   |   3 +-
 xen/include/xen/iommu.h                       |  26 +--
 22 files changed, 491 insertions(+), 195 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* [PATCH v7 01/11] IOMMU: handle IOMMU mapping and unmapping failures
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
@ 2016-06-08  8:58 ` Xu, Quan
  2016-06-08  8:58 ` [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Treat IOMMU mapping and unmapping failures as fatal to the DomU:
if an IOMMU mapping or unmapping operation fails, crash the DomU and
propagate the error up the call trees.

No spamming of the log can occur. For DomU, we avoid logging any
message for already dying domains. For Dom0, that'll still be more
verbose than we'd really like, but it at least wouldn't outright
flood the console.
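The failure-handling pattern described above (rate-limited logging, silence for dying domains, crash only for non-hardware domains) can be sketched in isolation. The struct, stubs, and helper name below are hypothetical stand-ins used to model the behaviour, not the real Xen types:

```c
/* Sketch of the failure handling added to iommu_map_page()/
 * iommu_unmap_page(); all types and helpers are stubs. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct domain {
    int domain_id;
    bool is_shutting_down;
    bool is_hardware_domain;
    bool crashed;
};

static bool printk_ratelimit(void) { return true; } /* stub */
static void domain_crash(struct domain *d) { d->crashed = true; } /* stub */

static int handle_iommu_failure(struct domain *d, int rc)
{
    if (rc) {
        /* Avoid log spam: no message for already dying domains,
         * and rate-limit the rest. */
        if (!d->is_shutting_down && printk_ratelimit())
            fprintf(stderr, "d%d: IOMMU operation failed: %d\n",
                    d->domain_id, rc);

        /* Fatal for a DomU; the hardware domain only gets the message. */
        if (!d->is_hardware_domain)
            domain_crash(d);
    }

    return rc; /* the error still propagates to the caller */
}
```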

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/drivers/passthrough/iommu.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9d104d2..673e126 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -240,21 +240,47 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
+                   d->domain_id, gfn, mfn, rc);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
 }
 
 int iommu_unmap_page(struct domain *d, unsigned long gfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_page(d, gfn);
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU unmapping gfn %#lx failed: %d\n",
+                   d->domain_id, gfn, rc);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
 }
 
 static void iommu_free_pagetables(unsigned long unused)
-- 
1.9.1



* [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
  2016-06-08  8:58 ` [PATCH v7 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
@ 2016-06-08  8:58 ` Xu, Quan
  2016-06-08 14:40   ` Jan Beulich
                     ` (2 more replies)
  2016-06-08  8:58 ` [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
                   ` (8 subsequent siblings)
  10 siblings, 3 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Jan Beulich, George Dunlap,
	Andrew Cooper, dario.faggioli, Jun Nakajima, Quan Xu

From: Quan Xu <quan.xu@intel.com>

When an IOMMU mapping operation fails, we issue a best-effort rollback:
stop further IOMMU mapping, unmap the previously established IOMMU
mappings, and then report the error up the call trees. When rollback is
not feasible for the hardware domain (in the early initialization phase,
or as a trade-off against complexity), we proceed on a best-effort
basis, only emitting an error message.

IOMMU unmapping should perhaps continue despite an error, in an attempt
to do best effort cleanup.
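The rollback loop this patch applies at each call site can be sketched as follows. The array, the failing gfn, and the -12 error code are simulated stand-ins, not real Xen state: on a mapping failure at page i, pages [0, i) are unmapped again (ignoring unmap errors, per the best-effort policy above) and the original error is returned.

```c
/* Sketch of the best-effort rollback loop used when mapping page i
 * of a range fails. map_one()/unmap_one() are simulated stubs. */
#include <assert.h>

#define FAIL_AT 3              /* simulate a failure at this gfn */
static int mapped[8];          /* toy per-gfn mapping state */

static int map_one(unsigned long gfn)
{
    if (gfn == FAIL_AT)
        return -12;            /* stand-in for -ENOMEM */
    mapped[gfn] = 1;
    return 0;
}

static int unmap_one(unsigned long gfn)
{
    mapped[gfn] = 0;
    return 0;
}

static int map_range(unsigned int count)
{
    unsigned int i;
    int rc = 0;

    for (i = 0; i < count; i++) {
        rc = map_one(i);
        if (rc) {
            /* Best-effort rollback: undo pages [0, i),
             * deliberately ignoring unmap errors. */
            while (i--)
                unmap_one(i);
            break;
        }
    }

    return rc;                 /* first error propagates */
}
```

The unmapping direction uses the complementary pattern visible in the diff below: keep unmapping all pages despite errors, and return the first error encountered.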

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

v7:
  1. add __must_check annotation to iommu_map_page() / iommu_unmap_page().
  2. use the same code structure for p2m_pt_set_entry() as I do on the
     EPT side.
  3. fix amd_iommu_hwdom_init() / iommu_hwdom_init() here.
  4. enhance the printed information; there is no need to mention
     "hardware domain" in the printk of iommu_hwdom_init().
  5. replace the assignment by making its right side the variable's
     initializer.
---
 xen/arch/x86/mm.c                           | 13 ++++++-----
 xen/arch/x86/mm/p2m-ept.c                   | 34 +++++++++++++++++++++++------
 xen/arch/x86/mm/p2m-pt.c                    | 23 ++++++++++++++++---
 xen/arch/x86/mm/p2m.c                       | 18 ++++++++++++---
 xen/arch/x86/x86_64/mm.c                    |  4 +++-
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 15 +++++++++++--
 xen/drivers/passthrough/iommu.c             | 13 ++++++++++-
 xen/drivers/passthrough/vtd/x86/vtd.c       | 15 +++++++++++--
 xen/include/xen/iommu.h                     |  6 ++---
 9 files changed, 114 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8d10a3e..ae7c8ab 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2467,7 +2467,7 @@ static int __get_page_type(struct page_info *page, unsigned long type,
                            int preemptible)
 {
     unsigned long nx, x, y = page->u.inuse.type_info;
-    int rc = 0;
+    int rc = 0, iommu_ret = 0;
 
     ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
 
@@ -2578,11 +2578,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                iommu_ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
             else if ( type == PGT_writable_page )
-                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                               page_to_mfn(page),
-                               IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
+                                           page_to_mfn(page),
+                                           IOMMUF_readable|IOMMUF_writable);
         }
     }
 
@@ -2599,6 +2599,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
     if ( (x & PGT_partial) && !(nx & PGT_partial) )
         put_page(page);
 
+    if ( !rc )
+        rc = iommu_ret;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..5aebc24 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -667,6 +667,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     unsigned long gfn_remainder = gfn;
     unsigned int i, target = order / EPT_TABLE_ORDER;
     int ret, rc = 0;
+    bool_t entry_written = 0;
     bool_t direct_mmio = (p2mt == p2m_mmio_direct);
     uint8_t ipat = 0;
     bool_t need_modify_vtd_table = 1;
@@ -812,10 +813,15 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     rc = atomic_write_ept_entry(ept_entry, new_entry, target);
     if ( unlikely(rc) )
         old_entry.epte = 0;
-    else if ( p2mt != p2m_invalid &&
-              (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
-        /* Track the highest gfn for which we have ever had a valid mapping */
-        p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    else
+    {
+        entry_written = 1;
+
+        if ( p2mt != p2m_invalid &&
+             (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
+            /* Track the highest gfn for which we have ever had a valid mapping */
+            p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    }
 
 out:
     if ( needs_sync )
@@ -831,10 +837,24 @@ out:
         {
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                {
+                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                                continue;
+
+                        break;
+                    }
+                }
             else
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_unmap_page(d, gfn + i);
+                {
+                    ret = iommu_unmap_page(d, gfn + i);
+                    if ( !rc )
+                        rc = ret;
+                }
         }
     }
 
@@ -847,7 +867,7 @@ out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    if ( rc == 0 && p2m_is_hostp2m(p2m) )
+    if ( entry_written && p2m_is_hostp2m(p2m) )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
     return rc;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..a3870da 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -673,6 +673,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     if ( iommu_enabled && need_iommu(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
     {
+        ASSERT(rc == 0);
+
         if ( iommu_use_hap_pt(p2m->domain) )
         {
             if ( iommu_old_flags )
@@ -680,11 +682,26 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
         else if ( iommu_pte_flags )
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
+            {
+                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                    iommu_pte_flags);
+                if ( unlikely(rc) )
+                {
+                    while ( i-- )
+                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                            continue;
+
+                    break;
+                }
+            }
         else
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+            {
+                int ret = iommu_unmap_page(p2m->domain, gfn + i);
+
+                if ( !rc )
+                    rc = ret;
+            }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9b19769..7ce25d2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -641,10 +641,20 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
+        int rc = 0;
+
         if ( need_iommu(p2m->domain) )
+        {
             for ( i = 0; i < (1 << page_order); i++ )
-                iommu_unmap_page(p2m->domain, mfn + i);
-        return 0;
+            {
+                int ret = iommu_unmap_page(p2m->domain, mfn + i);
+
+                if ( !rc )
+                    rc = ret;
+            }
+        }
+
+        return rc;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
@@ -700,7 +710,9 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
                 if ( rc != 0 )
                 {
                     while ( i-- > 0 )
-                        iommu_unmap_page(d, mfn + i);
+                        if ( iommu_unmap_page(d, mfn + i) )
+                            continue;
+
                     return rc;
                 }
             }
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index e07e69e..6dab7ff 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1436,7 +1436,9 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
         if ( i != epfn )
         {
             while (i-- > old_max)
-                iommu_unmap_page(hardware_domain, i);
+                if ( iommu_unmap_page(hardware_domain, i) )
+                    continue;
+
             goto destroy_m2p;
         }
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index fce9827..4a860af 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -282,6 +282,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 
     if ( !iommu_passthrough && !need_iommu(d) )
     {
+        int rc = 0;
+
         /* Set up 1:1 page table for dom0 */
         for ( i = 0; i < max_pdx; i++ )
         {
@@ -292,12 +294,21 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              * a pfn_valid() check would seem desirable here.
              */
             if ( mfn_valid(pfn) )
-                amd_iommu_map_page(d, pfn, pfn, 
-                                   IOMMUF_readable|IOMMUF_writable);
+            {
+                int ret = amd_iommu_map_page(d, pfn, pfn,
+                                             IOMMUF_readable|IOMMUF_writable);
+
+                if ( !rc )
+                    rc = ret;
+            }
 
             if ( !(i & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed: %d\n",
+                            d->domain_id, rc);
     }
 
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 673e126..ec85352 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -171,20 +171,31 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     {
         struct page_info *page;
         unsigned int i = 0;
+        int rc = 0;
+
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
+            int ret;
 
             if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
                  ((page->u.inuse.type_info & PGT_type_mask)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
-            hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            if ( !rc )
+                rc = ret;
+
             if ( !(i++ & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
+                   d->domain_id, rc);
     }
 
     return hd->platform_ops->hwdom_init(d);
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index c0d6aab..3d67909 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -118,6 +118,8 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
 
     for ( i = 0; i < top; i++ )
     {
+        int rc = 0;
+
         /*
          * Set up 1:1 mapping for dom0. Default to use only conventional RAM
          * areas and let RMRRs include needed reserved regions. When set, the
@@ -140,8 +142,17 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
 
         tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
         for ( j = 0; j < tmp; j++ )
-            iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
-                           IOMMUF_readable|IOMMUF_writable);
+        {
+            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
+                                     IOMMUF_readable|IOMMUF_writable);
+
+            if( !rc )
+               rc = ret;
+        }
+
+        if ( rc )
+           printk(XENLOG_WARNING VTDPREFIX " d%d: IOMMU mapping failed: %d\n",
+                  d->domain_id, rc);
 
         if (!(i & (0xfffff >> (PAGE_SHIFT - PAGE_SHIFT_4K))))
             process_pending_softirqs();
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 19ba976..eaa2c77 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -74,9 +74,9 @@ void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                   unsigned int flags);
-int iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
+                                unsigned long mfn, unsigned int flags);
+int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
 
 enum iommu_feature
 {
-- 
1.9.1



* [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones)
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
  2016-06-08  8:58 ` [PATCH v7 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
  2016-06-08  8:58 ` [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
@ 2016-06-08  8:58 ` Xu, Quan
  2016-06-08 14:42   ` Jan Beulich
                     ` (2 more replies)
  2016-06-08  8:58 ` [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
                   ` (7 subsequent siblings)
  10 siblings, 3 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Jan Beulich,
	Andrew Cooper, dario.faggioli, Julien Grall, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>

v7: drop 'Reviewed-by: Jan Beulich <jbeulich@suse.com>',
    as the __must_check annotation had not yet been added to
    iommu_unmap_page() in v6.
---
 xen/drivers/passthrough/arm/smmu.c            |  2 +-
 xen/drivers/passthrough/vtd/iommu.c           | 15 ++++++++-------
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
 xen/include/xen/iommu.h                       |  2 +-
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 54a03a6..1ce4ddf 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2774,7 +2774,7 @@ static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	return guest_physmap_add_entry(d, gfn, mfn, 0, t);
 }
 
-static int arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index db83949..4844193 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -609,7 +609,7 @@ static void intel_iommu_iotlb_flush_all(struct domain *d)
 }
 
 /* clear one page's page table */
-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
 {
     struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
@@ -621,7 +621,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        return;
+        return 0;
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -631,7 +631,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return;
+        return 0;
     }
 
     dma_clear_pte(*pte);
@@ -642,6 +642,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
         __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
 
     unmap_vtd_domain_page(page);
+
+    return 0;
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1739,15 +1741,14 @@ static int intel_iommu_map_page(
     return 0;
 }
 
-static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check intel_iommu_unmap_page(struct domain *d,
+                                               unsigned long gfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
-
-    return 0;
+    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 9c51172..57b6cc1 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -53,7 +53,7 @@ int amd_iommu_update_ivrs_mapping_acpi(void);
 /* mapping functions */
 int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                        unsigned int flags);
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        u64 phys_addr, unsigned long size,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index eaa2c77..f45fa5a 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -168,7 +168,7 @@ struct iommu_ops {
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
-    int (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
-- 
1.9.1



* [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (2 preceding siblings ...)
  2016-06-08  8:58 ` [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
@ 2016-06-08  8:58 ` Xu, Quan
  2016-06-08 14:43   ` Jan Beulich
  2016-06-09 10:30   ` Julien Grall
  2016-06-08  8:58 ` [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
                   ` (6 subsequent siblings)
  10 siblings, 2 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich, Suravee Suthikulpanit

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>

CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>

v7: drop the amd_iommu_hwdom_init() fix, which has been moved to
    patch 2 of this v7.
---
 xen/drivers/passthrough/arm/smmu.c            | 4 ++--
 xen/drivers/passthrough/vtd/iommu.c           | 7 ++++---
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h | 4 ++--
 xen/include/xen/iommu.h                       | 4 ++--
 4 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 1ce4ddf..ee5c89d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2745,8 +2745,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
-			     unsigned long mfn, unsigned int flags)
+static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
+                                          unsigned long mfn, unsigned int flags)
 {
 	p2m_type_t t;
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 4844193..e900019 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1687,9 +1687,10 @@ static void iommu_domain_teardown(struct domain *d)
     spin_unlock(&hd->arch.mapping_lock);
 }
 
-static int intel_iommu_map_page(
-    struct domain *d, unsigned long gfn, unsigned long mfn,
-    unsigned int flags)
+static int __must_check intel_iommu_map_page(struct domain *d,
+                                             unsigned long gfn,
+                                             unsigned long mfn,
+                                             unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 57b6cc1..ac9f036 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -51,8 +51,8 @@ int amd_iommu_init(void);
 int amd_iommu_update_ivrs_mapping_acpi(void);
 
 /* mapping functions */
-int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                       unsigned int flags);
+int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
+                                    unsigned long mfn, unsigned int flags);
 int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index f45fa5a..2b86710 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -166,8 +166,8 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
-                    unsigned int flags);
+    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
+                                 unsigned long mfn, unsigned int flags);
     int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
-- 
1.9.1



* [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (3 preceding siblings ...)
  2016-06-08  8:58 ` [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
@ 2016-06-08  8:58 ` Xu, Quan
  2016-06-09 11:12   ` Julien Grall
  2016-06-12  6:46   ` Tian, Kevin
  2016-06-08  8:58 ` [PATCH v7 06/11] propagate IOMMU Device-TLB flush error up to EPT update " Xu, Quan
                   ` (5 subsequent siblings)
  10 siblings, 2 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/arm/p2m.c                  |  4 +++-
 xen/common/memory.c                 | 12 ++++++++++--
 xen/drivers/passthrough/iommu.c     | 13 +++++++++----
 xen/drivers/passthrough/x86/iommu.c |  5 +++--
 xen/include/xen/iommu.h             |  5 +++--
 5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a19c57..65d8f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        if ( !rc )
+            rc = ret;
     }
 
     while ( (pg = page_list_remove_head(&free_pages)) )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ccc6436..46b1d9f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -683,9 +683,17 @@ static int xenmem_add_to_physmap(struct domain *d,
 #ifdef CONFIG_HAS_PASSTHROUGH
     if ( need_iommu(d) )
     {
+        int ret;
+
         this_cpu(iommu_dont_flush_iotlb) = 0;
-        iommu_iotlb_flush(d, xatp->idx - done, done);
-        iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
+
+        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ec85352..3a73fab 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -311,24 +311,29 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                      unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush(d, gfn, page_count);
+
+    return 0;
 }
 
-void iommu_iotlb_flush_all(struct domain *d)
+int iommu_iotlb_flush_all(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush_all(d);
+
+    return 0;
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b64b98f..a18a608 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -104,8 +104,9 @@ int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
+        rc = iommu_iotlb_flush_all(d);
+
+    if ( rc && rc != -ERESTART )
         iommu_teardown(d);
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 2b86710..57c9fbc 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -200,8 +200,9 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
-void iommu_iotlb_flush_all(struct domain *d);
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                   unsigned int page_count);
+int __must_check iommu_iotlb_flush_all(struct domain *d);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
-- 
1.9.1




* [PATCH v7 06/11] propagate IOMMU Device-TLB flush error up to EPT update (top level ones)
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (4 preceding siblings ...)
  2016-06-08  8:58 ` [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
@ 2016-06-08  8:58 ` Xu, Quan
  2016-06-08  8:59 ` [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:58 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Feng Wu, Jun Nakajima, George Dunlap, Andrew Cooper,
	dario.faggioli, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Propagate the IOMMU Device-TLB flush error up to ept_set_entry(), when
VT-d shares the EPT page table.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/mm/p2m-ept.c           | 2 +-
 xen/drivers/passthrough/vtd/iommu.c | 6 ++++--
 xen/include/asm-x86/iommu.h         | 3 ++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 5aebc24..234d76a 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -832,7 +832,7 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
-            iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
+            rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
         else
         {
             if ( iommu_flags )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e900019..5366267 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1752,8 +1752,8 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
-                     int order, int present)
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                    int order, int present)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
@@ -1778,6 +1778,8 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
                                    order, !present, flush_dev_iotlb) )
             iommu_flush_write_buffer(iommu);
     }
+
+    return 0;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index e82a2f0..815d77e 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -27,7 +27,8 @@ int iommu_setup_hpet_msi(struct msi_desc *);
 
 /* While VT-d specific, this must get declared in a generic header. */
 int adjust_vtd_irq_affinities(void);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int __must_check iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                                 int order, int present);
 bool_t iommu_supports_eim(void);
 int iommu_enable_x2apic_IR(void);
 void iommu_disable_x2apic_IR(void);
-- 
1.9.1




* [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (5 preceding siblings ...)
  2016-06-08  8:58 ` [PATCH v7 06/11] propagate IOMMU Device-TLB flush error up to EPT update " Xu, Quan
@ 2016-06-08  8:59 ` Xu, Quan
  2016-06-08 14:51   ` Jan Beulich
                     ` (2 more replies)
  2016-06-08  8:59 ` [PATCH v7 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones) Xu, Quan
                   ` (3 subsequent siblings)
  10 siblings, 3 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Keir Fraser, Quan Xu,
	Liu Jinsong, dario.faggioli, Julien Grall, Jan Beulich,
	Andrew Cooper, Feng Wu, Suravee Suthikulpanit

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Liu Jinsong <jinsong.liu@alibaba-inc.com>
CC: Keir Fraser <keir@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>

v7:
  1. return SAVED_ALL at the bottom of device_power_down(), instead
     of SAVED_NONE.
  2. drop the 'if ( error > 0 )', calling device_power_up(error)
     without any if().
  3. for vtd_suspend():
       - drop pointless initializer.
       - return 0 at the bottom to make obvious that no error path
         comes there.
---
 xen/arch/x86/acpi/power.c                     | 73 ++++++++++++++++++++-------
 xen/drivers/passthrough/amd/iommu_init.c      |  9 +++-
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  2 +-
 xen/drivers/passthrough/iommu.c               |  6 ++-
 xen/drivers/passthrough/vtd/iommu.c           | 35 +++++++++----
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
 xen/include/xen/iommu.h                       |  4 +-
 7 files changed, 96 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 2885e31..717a809 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -43,36 +43,70 @@ struct acpi_sleep_info acpi_sinfo;
 
 void do_suspend_lowlevel(void);
 
+enum dev_power_saved
+{
+    SAVED_NONE,
+    SAVED_CONSOLE,
+    SAVED_TIME,
+    SAVED_I8259A,
+    SAVED_IOAPIC,
+    SAVED_IOMMU,
+    SAVED_LAPIC,
+    SAVED_ALL,
+};
+
 static int device_power_down(void)
 {
-    console_suspend();
+    if ( console_suspend() )
+        return SAVED_NONE;
 
-    time_suspend();
+    if ( time_suspend() )
+        return SAVED_CONSOLE;
 
-    i8259A_suspend();
+    if ( i8259A_suspend() )
+        return SAVED_TIME;
 
+    /* ioapic_suspend cannot fail */
     ioapic_suspend();
 
-    iommu_suspend();
+    if ( iommu_suspend() )
+        return SAVED_IOAPIC;
 
-    lapic_suspend();
+    if ( lapic_suspend() )
+        return SAVED_IOMMU;
 
-    return 0;
+    return SAVED_ALL;
 }
 
-static void device_power_up(void)
+static void device_power_up(enum dev_power_saved saved)
 {
-    lapic_resume();
-
-    iommu_resume();
-
-    ioapic_resume();
-
-    i8259A_resume();
-
-    time_resume();
-
-    console_resume();
+    switch ( saved )
+    {
+    case SAVED_ALL:
+    case SAVED_LAPIC:
+        lapic_resume();
+        /* fall through */
+    case SAVED_IOMMU:
+        iommu_resume();
+        /* fall through */
+    case SAVED_IOAPIC:
+        ioapic_resume();
+        /* fall through */
+    case SAVED_I8259A:
+        i8259A_resume();
+        /* fall through */
+    case SAVED_TIME:
+        time_resume();
+        /* fall through */
+    case SAVED_CONSOLE:
+        console_resume();
+        /* fall through */
+    case SAVED_NONE:
+        break;
+    default:
+        BUG();
+        break;
+    }
 }
 
 static void freeze_domains(void)
@@ -169,6 +203,7 @@ static int enter_state(u32 state)
     {
         printk(XENLOG_ERR "Some devices failed to power down.");
         system_state = SYS_STATE_resume;
+        device_power_up(error);
         goto done;
     }
 
@@ -196,7 +231,7 @@ static int enter_state(u32 state)
     write_cr4(cr4 & ~X86_CR4_MCE);
     write_efer(read_efer());
 
-    device_power_up();
+    device_power_up(SAVED_ALL);
 
     mcheck_init(&boot_cpu_data, 0);
     write_cr4(cr4);
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 4536106..0b68596 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -1339,7 +1339,14 @@ static void invalidate_all_devices(void)
     iterate_ivrs_mappings(_invalidate_all_devices);
 }
 
-void amd_iommu_suspend(void)
+int amd_iommu_suspend(void)
+{
+    amd_iommu_crash_shutdown();
+
+    return 0;
+}
+
+void amd_iommu_crash_shutdown(void)
 {
     struct amd_iommu *iommu;
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 4a860af..7761241 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -633,6 +633,6 @@ const struct iommu_ops amd_iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
-    .crash_shutdown = amd_iommu_suspend,
+    .crash_shutdown = amd_iommu_crash_shutdown,
     .dump_p2m_table = amd_dump_p2m_table,
 };
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 3a73fab..a9898fc 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -379,10 +379,12 @@ int __init iommu_setup(void)
     return rc;
 }
 
-void iommu_suspend()
+int iommu_suspend()
 {
     if ( iommu_enabled )
-        iommu_get_ops()->suspend();
+        return iommu_get_ops()->suspend();
+
+    return 0;
 }
 
 void iommu_resume()
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5366267..0f17afb 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -541,7 +541,7 @@ static int iommu_flush_iotlb_psi(
     return status;
 }
 
-static void iommu_flush_all(void)
+static int __must_check iommu_flush_all(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
@@ -555,6 +555,8 @@ static void iommu_flush_all(void)
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
     }
+
+    return 0;
 }
 
 static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
@@ -1259,7 +1261,9 @@ static void __hwdom_init intel_iommu_hwdom_init(struct domain *d)
     setup_hwdom_pci_devices(d, setup_hwdom_device);
     setup_hwdom_rmrr(d);
 
-    iommu_flush_all();
+    if ( iommu_flush_all() )
+        printk(XENLOG_WARNING VTDPREFIX
+               " IOMMU flush all failed for hardware domain\n");
 
     for_each_drhd_unit ( drhd )
     {
@@ -2001,7 +2005,7 @@ int adjust_vtd_irq_affinities(void)
 }
 __initcall(adjust_vtd_irq_affinities);
 
-static int init_vtd_hw(void)
+static int __must_check init_vtd_hw(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
@@ -2099,8 +2103,8 @@ static int init_vtd_hw(void)
             return -EIO;
         }
     }
-    iommu_flush_all();
-    return 0;
+
+    return iommu_flush_all();
 }
 
 static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
@@ -2389,16 +2393,25 @@ static int intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 }
 
 static u32 iommu_state[MAX_IOMMUS][MAX_IOMMU_REGS];
-static void vtd_suspend(void)
+
+static int __must_check vtd_suspend(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     u32    i;
+    int rc;
 
     if ( !iommu_enabled )
-        return;
+        return 0;
 
-    iommu_flush_all();
+    rc = iommu_flush_all();
+    if ( unlikely(rc) )
+    {
+        printk(XENLOG_WARNING VTDPREFIX
+               " suspend: IOMMU flush all failed: %d\n", rc);
+
+        return rc;
+    }
 
     for_each_drhd_unit ( drhd )
     {
@@ -2427,6 +2440,8 @@ static void vtd_suspend(void)
         if ( !iommu_intremap && iommu_qinval )
             disable_qinval(iommu);
     }
+
+    return 0;
 }
 
 static void vtd_crash_shutdown(void)
@@ -2437,7 +2452,9 @@ static void vtd_crash_shutdown(void)
     if ( !iommu_enabled )
         return;
 
-    iommu_flush_all();
+    if ( iommu_flush_all() )
+        printk(XENLOG_WARNING VTDPREFIX
+               " crash shutdown: IOMMU flush all failed\n");
 
     for_each_drhd_unit ( drhd )
     {
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index ac9f036..d08dc0b 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -119,7 +119,7 @@ extern unsigned long *shared_intremap_inuse;
 
 /* power management support */
 void amd_iommu_resume(void);
-void amd_iommu_suspend(void);
+int __must_check amd_iommu_suspend(void);
 void amd_iommu_crash_shutdown(void);
 
 /* guest iommu support */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 57c9fbc..6535937 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -175,7 +175,7 @@ struct iommu_ops {
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
 #endif /* CONFIG_X86 */
-    void (*suspend)(void);
+    int __must_check (*suspend)(void);
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
@@ -185,7 +185,7 @@ struct iommu_ops {
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_suspend(void);
+int __must_check iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
-- 
1.9.1




* [PATCH v7 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones).
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (6 preceding siblings ...)
  2016-06-08  8:59 ` [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
@ 2016-06-08  8:59 ` Xu, Quan
  2016-06-08  8:59 ` [PATCH v7 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/drivers/passthrough/arm/smmu.c  | 13 ++++++++-----
 xen/drivers/passthrough/iommu.c     |  8 ++------
 xen/drivers/passthrough/vtd/iommu.c | 32 ++++++++++++++++++++------------
 xen/include/xen/iommu.h             |  5 +++--
 4 files changed, 33 insertions(+), 25 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index ee5c89d..1d21568 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2540,7 +2540,7 @@ static int force_stage = 2;
  */
 static u32 platform_features = ARM_SMMU_FEAT_COHERENT_WALK;
 
-static void arm_smmu_iotlb_flush_all(struct domain *d)
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 {
 	struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
 	struct iommu_domain *cfg;
@@ -2557,13 +2557,16 @@ static void arm_smmu_iotlb_flush_all(struct domain *d)
 		arm_smmu_tlb_inv_context(cfg->priv);
 	}
 	spin_unlock(&smmu_domain->lock);
+
+	return 0;
 }
 
-static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
-                                 unsigned int page_count)
+static int __must_check arm_smmu_iotlb_flush(struct domain *d,
+                                             unsigned long gfn,
+                                             unsigned int page_count)
 {
-    /* ARM SMMU v1 doesn't have flush by VMA and VMID */
-    arm_smmu_iotlb_flush_all(d);
+	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
+	return arm_smmu_iotlb_flush_all(d);
 }
 
 static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index a9898fc..ef80b3c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -319,9 +319,7 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    hd->platform_ops->iotlb_flush(d, gfn, page_count);
-
-    return 0;
+    return hd->platform_ops->iotlb_flush(d, gfn, page_count);
 }
 
 int iommu_iotlb_flush_all(struct domain *d)
@@ -331,9 +329,7 @@ int iommu_iotlb_flush_all(struct domain *d)
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
         return 0;
 
-    hd->platform_ops->iotlb_flush_all(d);
-
-    return 0;
+    return hd->platform_ops->iotlb_flush_all(d);
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 0f17afb..48edb67 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -559,8 +559,10 @@ static int __must_check iommu_flush_all(void)
     return 0;
 }
 
-static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
-        int dma_old_pte_present, unsigned int page_count)
+static int __must_check iommu_flush_iotlb(struct domain *d,
+                                          unsigned long gfn,
+                                          bool_t dma_old_pte_present,
+                                          unsigned int page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -598,16 +600,20 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
                 iommu_flush_write_buffer(iommu);
         }
     }
+
+    return 0;
 }
 
-static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+static int __must_check iommu_flush_iotlb_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int page_count)
 {
-    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+    return iommu_flush_iotlb(d, gfn, 1, page_count);
 }
 
-static void intel_iommu_iotlb_flush_all(struct domain *d)
+static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    __intel_iommu_iotlb_flush(d, INVALID_GFN, 0, 0);
+    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
 }
 
 /* clear one page's page table */
@@ -616,6 +622,7 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
+    int rc = 0;
 
     spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
@@ -641,11 +648,11 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = iommu_flush_iotlb_pages(domain, addr >> PAGE_SHIFT_4K, 1);
 
     unmap_vtd_domain_page(page);
 
-    return 0;
+    return rc;
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1699,6 +1706,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
     u64 pg_maddr;
+    int rc = 0;
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -1741,9 +1749,9 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
 
-    return 0;
+    return rc;
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
@@ -2574,8 +2582,8 @@ const struct iommu_ops intel_iommu_ops = {
     .resume = vtd_resume,
     .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
-    .iotlb_flush = intel_iommu_iotlb_flush,
-    .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .iotlb_flush = iommu_flush_iotlb_pages,
+    .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_p2m_table = vtd_dump_p2m_table,
 };
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6535937..8b23cc9 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -179,8 +179,9 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
-    void (*iotlb_flush_all)(struct domain *d);
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
+                                    unsigned int page_count);
+    int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
     void (*dump_p2m_table)(struct domain *d);
 };
-- 
1.9.1




* [PATCH v7 09/11] vt-d: fix the IOMMU flush issue
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (7 preceding siblings ...)
  2016-06-08  8:59 ` [PATCH v7 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones) Xu, Quan
@ 2016-06-08  8:59 ` Xu, Quan
  2016-06-12  7:32   ` Tian, Kevin
  2016-06-08  8:59 ` [PATCH v7 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
  2016-06-08  8:59 ` [PATCH v7 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
  10 siblings, 1 reply; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Quan Xu, Andrew Cooper, dario.faggioli,
	Jan Beulich, Feng Wu

From: Quan Xu <quan.xu@intel.com>

The value propagated from the IOMMU flush interfaces may be positive,
which indicates that the caller needs to flush the cache; it is not a
failure.

When the propagated value is positive, this patch fixes the flush issue
as follows:
  - call iommu_flush_write_buffer() to flush the cache.
  - return zero.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>

v7:
  1. Drop blank lines.
  2. Replace the assignment by making its right side the variable's
     initializer.
  3. Leave the comments as they are; there has been no reply from the
     VT-d maintainers yet.
---
 xen/drivers/passthrough/vtd/iommu.c | 167 ++++++++++++++++++++++++++----------
 1 file changed, 124 insertions(+), 43 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 48edb67..2a55985 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -388,17 +388,18 @@ static int flush_context_reg(
     return 0;
 }
 
-static int iommu_flush_context_global(
-    struct iommu *iommu, int flush_non_present_entry)
+static int __must_check iommu_flush_context_global(struct iommu *iommu,
+                                                   int flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
                                  flush_non_present_entry);
 }
 
-static int iommu_flush_context_device(
-    struct iommu *iommu, u16 did, u16 source_id,
-    u8 function_mask, int flush_non_present_entry)
+static int __must_check iommu_flush_context_device(struct iommu *iommu,
+                                                   u16 did, u16 source_id,
+                                                   u8 function_mask,
+                                                   int flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, did, source_id, function_mask,
@@ -473,8 +474,9 @@ static int flush_iotlb_reg(void *_iommu, u16 did,
     return 0;
 }
 
-static int iommu_flush_iotlb_global(struct iommu *iommu,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_global(struct iommu *iommu,
+                                                 int flush_non_present_entry,
+                                                 int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -491,8 +493,9 @@ static int iommu_flush_iotlb_global(struct iommu *iommu,
     return status;
 }
 
-static int iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -509,9 +512,10 @@ static int iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
     return status;
 }
 
-static int iommu_flush_iotlb_psi(
-    struct iommu *iommu, u16 did, u64 addr, unsigned int order,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
+                                              u64 addr, unsigned int order,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -545,18 +549,42 @@ static int __must_check iommu_flush_all(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
-    int flush_dev_iotlb;
+    int rc = 0;
 
     flush_all_cache();
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
-        iommu_flush_context_global(iommu, 0);
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        /*
+         * The current logic for rc returns:
+         *   - positive  invoke iommu_flush_write_buffer to flush cache.
+         *   - zero      on success.
+         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+         *               best effort basis.
+         *
+         * Moreover, the IOMMU flush handlers flush_context_qi and
+         * flush_iotlb_qi (or flush_context_reg and flush_iotlb_reg, deeper
+         * in the call trees of iommu_flush_context_global and
+         * iommu_flush_iotlb_global) bubble up positive values the same way.
+         */
+        rc = iommu_flush_context_global(iommu, 0);
+        if ( rc <= 0 )
+        {
+            int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+            int ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+
+            ASSERT(ret <= 0);
+            if ( !rc )
+                rc = ret;
+        }
+        else
+        {
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
 
-    return 0;
+    return rc;
 }
 
 static int __must_check iommu_flush_iotlb(struct domain *d,
@@ -569,6 +597,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
     struct iommu *iommu;
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     /*
      * No need pcideves_lock here because we have flush
@@ -587,21 +616,23 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
             continue;
 
         if ( page_count != 1 || gfn == INVALID_GFN )
-        {
-            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
-                        0, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
-        }
+            rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                                       0, flush_dev_iotlb);
         else
+            rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       PAGE_ORDER_4K,
+                                       !dma_old_pte_present,
+                                       flush_dev_iotlb);
+
+        if ( rc > 0 )
         {
-            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                        (paddr_t)gfn << PAGE_SHIFT_4K, PAGE_ORDER_4K,
-                        !dma_old_pte_present, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
         }
     }
 
-    return 0;
+    return rc;
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
@@ -1291,6 +1322,7 @@ int domain_context_mapping_one(
     u64 maddr, pgd_maddr;
     u16 seg = iommu->intel->drhd->segment;
     int agaw;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1404,13 +1436,34 @@ int domain_context_mapping_one(
     spin_unlock(&iommu->lock);
 
     /* Context entry was previously non-present (with domid 0). */
-    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 1) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, 0, PCI_BDF2(bus, devfn),
+                                    DMA_CCMD_MASK_NOBIT, 1);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      on success.
+     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+     *               best effort basis.
+     *
+     * Moreover, the IOMMU flush handlers flush_context_qi and flush_iotlb_qi
+     * (or flush_context_reg and flush_iotlb_reg, deeper in the call trees
+     * of iommu_flush_context_device and iommu_flush_iotlb_dsi) bubble up
+     * positive values the same way.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+        int ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+
+        ASSERT(ret <= 0);
+        if ( !rc )
+            rc = ret;
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     set_bit(iommu->index, &hd->arch.iommu_bitmap);
@@ -1420,7 +1473,7 @@ int domain_context_mapping_one(
     if ( !seg )
         me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_mapping(
@@ -1515,6 +1568,7 @@ int domain_context_unmap_one(
     struct context_entry *context, *context_entries;
     u64 maddr;
     int iommu_domid;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1542,14 +1596,35 @@ int domain_context_unmap_one(
         return -EINVAL;
     }
 
-    if ( iommu_flush_context_device(iommu, iommu_domid,
-                                    (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 0) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, iommu_domid,
+                                    PCI_BDF2(bus, devfn),
+                                    DMA_CCMD_MASK_NOBIT, 0);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      on success.
+     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+     *               best effort basis.
+     *
+     * Moreover, the IOMMU flush handlers flush_context_qi and flush_iotlb_qi
+     * (or flush_context_reg and flush_iotlb_reg, deeper in the call trees
+     * of iommu_flush_context_device and iommu_flush_iotlb_dsi) bubble up
+     * positive values the same way.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        int ret = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+
+        ASSERT(ret <= 0);
+        if ( !rc )
+            rc = ret;
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     spin_unlock(&iommu->lock);
@@ -1558,7 +1633,7 @@ int domain_context_unmap_one(
     if ( !iommu->intel->drhd->segment )
         me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_unmap(
@@ -1772,6 +1847,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
     struct domain_iommu *hd = dom_iommu(d);
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
@@ -1785,13 +1861,18 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
         iommu_domid= domain_iommu_domid(d, iommu);
         if ( iommu_domid == -1 )
             continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+
+        rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
                                    (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
+                                   order, !present, flush_dev_iotlb);
+        if ( rc > 0 )
+        {
             iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
 
-    return 0;
+    return rc;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH v7 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (8 preceding siblings ...)
  2016-06-08  8:59 ` [PATCH v7 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
@ 2016-06-08  8:59 ` Xu, Quan
  2016-06-12  7:34   ` Tian, Kevin
  2016-06-08  8:59 ` [PATCH v7 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
  10 siblings, 1 reply; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:59 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Feng Wu, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/drivers/passthrough/vtd/extern.h |  3 ++-
 xen/drivers/passthrough/vtd/iommu.c  |  8 ++++----
 xen/drivers/passthrough/vtd/quirks.c | 27 +++++++++++++++++----------
 3 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index cbe0286..6772839 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -91,7 +91,8 @@ int is_igd_vt_enabled_quirk(void);
 void platform_quirks_init(void);
 void vtd_ops_preamble_quirk(struct iommu* iommu);
 void vtd_ops_postamble_quirk(struct iommu* iommu);
-void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map);
+int __must_check me_wifi_quirk(struct domain *domain,
+                               u8 bus, u8 devfn, int map);
 void pci_vtd_quirk(const struct pci_dev *);
 bool_t platform_supports_intremap(void);
 bool_t platform_supports_x2apic(void);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 2a55985..8ad862e 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1470,8 +1470,8 @@ int domain_context_mapping_one(
 
     unmap_vtd_domain_page(context_entries);
 
-    if ( !seg )
-        me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
+    if ( !seg && !rc )
+        rc = me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
     return rc;
 }
@@ -1630,8 +1630,8 @@ int domain_context_unmap_one(
     spin_unlock(&iommu->lock);
     unmap_vtd_domain_page(context_entries);
 
-    if ( !iommu->intel->drhd->segment )
-        me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
+    if ( !iommu->intel->drhd->segment && !rc )
+        rc = me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
     return rc;
 }
diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index 473d1fc..91f96ac 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -331,10 +331,12 @@ void __init platform_quirks_init(void)
  * assigning Intel integrated wifi device to a guest.
  */
 
-static void map_me_phantom_function(struct domain *domain, u32 dev, int map)
+static int __must_check map_me_phantom_function(struct domain *domain,
+                                                u32 dev, int map)
 {
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
+    int rc;
 
     /* find ME VT-d engine base on a real ME device */
     pdev = pci_get_pdev(0, 0, PCI_DEVFN(dev, 0));
@@ -342,23 +344,26 @@ static void map_me_phantom_function(struct domain *domain, u32 dev, int map)
 
     /* map or unmap ME phantom function */
     if ( map )
-        domain_context_mapping_one(domain, drhd->iommu, 0,
-                                   PCI_DEVFN(dev, 7), NULL);
+        rc = domain_context_mapping_one(domain, drhd->iommu, 0,
+                                        PCI_DEVFN(dev, 7), NULL);
     else
-        domain_context_unmap_one(domain, drhd->iommu, 0,
-                                 PCI_DEVFN(dev, 7));
+        rc = domain_context_unmap_one(domain, drhd->iommu, 0,
+                                      PCI_DEVFN(dev, 7));
+
+    return rc;
 }
 
-void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
+int me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
 {
     u32 id;
+    int rc = 0;
 
     id = pci_conf_read32(0, 0, 0, 0, 0);
     if ( IS_CTG(id) )
     {
         /* quit if ME does not exist */
         if ( pci_conf_read32(0, 0, 3, 0, 0) == 0xffffffff )
-            return;
+            return 0;
 
         /* if device is WLAN device, map ME phantom device 0:3.7 */
         id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
@@ -372,7 +377,7 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
             case 0x423b8086:
             case 0x423c8086:
             case 0x423d8086:
-                map_me_phantom_function(domain, 3, map);
+                rc = map_me_phantom_function(domain, 3, map);
                 break;
             default:
                 break;
@@ -382,7 +387,7 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
     {
         /* quit if ME does not exist */
         if ( pci_conf_read32(0, 0, 22, 0, 0) == 0xffffffff )
-            return;
+            return 0;
 
         /* if device is WLAN device, map ME phantom device 0:22.7 */
         id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
@@ -398,12 +403,14 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
             case 0x42388086:        /* Puma Peak */
             case 0x422b8086:
             case 0x422c8086:
-                map_me_phantom_function(domain, 22, map);
+                rc = map_me_phantom_function(domain, 22, map);
                 break;
             default:
                 break;
         }
     }
+
+    return rc;
 }
 
 void pci_vtd_quirk(const struct pci_dev *pdev)
-- 
1.9.1



* [PATCH v7 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers
  2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (9 preceding siblings ...)
  2016-06-08  8:59 ` [PATCH v7 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
@ 2016-06-08  8:59 ` Xu, Quan
  2016-06-12  7:35   ` Tian, Kevin
  10 siblings, 1 reply; 45+ messages in thread
From: Xu, Quan @ 2016-06-08  8:59 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Feng Wu, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>

v7: use !! instead of ?:.
---
 xen/drivers/passthrough/vtd/iommu.c  | 44 ++++++++++++++++++------------------
 xen/drivers/passthrough/vtd/iommu.h  | 11 +++++----
 xen/drivers/passthrough/vtd/qinval.c | 14 ++++++------
 3 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 8ad862e..fc5c76c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -335,10 +335,9 @@ static void iommu_flush_write_buffer(struct iommu *iommu)
 }
 
 /* return value determine if we need a write buffer flush */
-static int flush_context_reg(
-    void *_iommu,
-    u16 did, u16 source_id, u8 function_mask, u64 type,
-    int flush_non_present_entry)
+static int __must_check flush_context_reg(void *_iommu, u16 did, u16 source_id,
+                                          u8 function_mask, u64 type,
+                                          bool_t flush_non_present_entry)
 {
     struct iommu *iommu = (struct iommu *) _iommu;
     u64 val = 0;
@@ -389,7 +388,7 @@ static int flush_context_reg(
 }
 
 static int __must_check iommu_flush_context_global(struct iommu *iommu,
-                                                   int flush_non_present_entry)
+                                                   bool_t flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
@@ -399,7 +398,7 @@ static int __must_check iommu_flush_context_global(struct iommu *iommu,
 static int __must_check iommu_flush_context_device(struct iommu *iommu,
                                                    u16 did, u16 source_id,
                                                    u8 function_mask,
-                                                   int flush_non_present_entry)
+                                                   bool_t flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, did, source_id, function_mask,
@@ -408,9 +407,10 @@ static int __must_check iommu_flush_context_device(struct iommu *iommu,
 }
 
 /* return value determine if we need a write buffer flush */
-static int flush_iotlb_reg(void *_iommu, u16 did,
-                           u64 addr, unsigned int size_order, u64 type,
-                           int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check flush_iotlb_reg(void *_iommu, u16 did, u64 addr,
+                                        unsigned int size_order, u64 type,
+                                        bool_t flush_non_present_entry,
+                                        bool_t flush_dev_iotlb)
 {
     struct iommu *iommu = (struct iommu *) _iommu;
     int tlb_offset = ecap_iotlb_offset(iommu->ecap);
@@ -475,8 +475,8 @@ static int flush_iotlb_reg(void *_iommu, u16 did,
 }
 
 static int __must_check iommu_flush_iotlb_global(struct iommu *iommu,
-                                                 int flush_non_present_entry,
-                                                 int flush_dev_iotlb)
+                                                 bool_t flush_non_present_entry,
+                                                 bool_t flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -494,8 +494,8 @@ static int __must_check iommu_flush_iotlb_global(struct iommu *iommu,
 }
 
 static int __must_check iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
-                                              int flush_non_present_entry,
-                                              int flush_dev_iotlb)
+                                              bool_t flush_non_present_entry,
+                                              bool_t flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -514,8 +514,8 @@ static int __must_check iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
 
 static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
                                               u64 addr, unsigned int order,
-                                              int flush_non_present_entry,
-                                              int flush_dev_iotlb)
+                                              bool_t flush_non_present_entry,
+                                              bool_t flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -570,7 +570,7 @@ static int __must_check iommu_flush_all(void)
         rc = iommu_flush_context_global(iommu, 0);
         if ( rc <= 0 )
         {
-            int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+            bool_t flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
             int ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
 
             ASSERT(ret <= 0);
@@ -595,7 +595,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
-    int flush_dev_iotlb;
+    bool_t flush_dev_iotlb;
     int iommu_domid;
     int rc = 0;
 
@@ -610,7 +610,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
         iommu_domid= domain_iommu_domid(d, iommu);
         if ( iommu_domid == -1 )
             continue;
@@ -1453,7 +1453,7 @@ int domain_context_mapping_one(
      */
     if ( rc <= 0 )
     {
-        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        bool_t flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
         int ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
 
         ASSERT(ret <= 0);
@@ -1614,7 +1614,7 @@ int domain_context_unmap_one(
      */
     if ( rc <= 0 )
     {
-        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        bool_t flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
         int ret = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
 
         ASSERT(ret <= 0);
@@ -1845,7 +1845,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
     struct domain_iommu *hd = dom_iommu(d);
-    int flush_dev_iotlb;
+    bool_t flush_dev_iotlb;
     int iommu_domid;
     int rc = 0;
 
@@ -1857,7 +1857,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
         if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
         iommu_domid= domain_iommu_domid(d, iommu);
         if ( iommu_domid == -1 )
             continue;
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index c55ee08..e6cf738 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -514,10 +514,13 @@ struct ir_ctrl {
 };
 
 struct iommu_flush {
-    int (*context)(void *iommu, u16 did, u16 source_id,
-                   u8 function_mask, u64 type, int non_present_entry_flush);
-    int (*iotlb)(void *iommu, u16 did, u64 addr, unsigned int size_order,
-                 u64 type, int flush_non_present_entry, int flush_dev_iotlb);
+    int __must_check (*context)(void *iommu, u16 did, u16 source_id,
+                                u8 function_mask, u64 type,
+                                bool_t non_present_entry_flush);
+    int __must_check (*iotlb)(void *iommu, u16 did, u64 addr,
+                              unsigned int size_order, u64 type,
+                              bool_t flush_non_present_entry,
+                              bool_t flush_dev_iotlb);
 };
 
 struct intel_iommu {
diff --git a/xen/drivers/passthrough/vtd/qinval.c b/xen/drivers/passthrough/vtd/qinval.c
index b81b0bd..aa7841a 100644
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -274,9 +274,9 @@ int iommu_flush_iec_index(struct iommu *iommu, u8 im, u16 iidx)
    return __iommu_flush_iec(iommu, IEC_INDEX_INVL, im, iidx);
 }
 
-static int flush_context_qi(
-    void *_iommu, u16 did, u16 sid, u8 fm, u64 type,
-    int flush_non_present_entry)
+static int __must_check flush_context_qi(void *_iommu, u16 did,
+                                         u16 sid, u8 fm, u64 type,
+                                         bool_t flush_non_present_entry)
 {
     int ret = 0;
     struct iommu *iommu = (struct iommu *)_iommu;
@@ -305,10 +305,10 @@ static int flush_context_qi(
     return ret;
 }
 
-static int flush_iotlb_qi(
-    void *_iommu, u16 did,
-    u64 addr, unsigned int size_order, u64 type,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check flush_iotlb_qi(void *_iommu, u16 did, u64 addr,
+                                       unsigned int size_order, u64 type,
+                                       bool_t flush_non_present_entry,
+                                       bool_t flush_dev_iotlb)
 {
     u8 dr = 0, dw = 0;
     int ret = 0;
-- 
1.9.1



* Re: [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-06-08  8:58 ` [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
@ 2016-06-08 14:40   ` Jan Beulich
       [not found]   ` <57584D8602000078000F32D0@prv-mh.provo.novell.com>
  2016-06-12  6:41   ` Tian, Kevin
  2 siblings, 0 replies; 45+ messages in thread
From: Jan Beulich @ 2016-06-08 14:40 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Jun Nakajima

>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
> @@ -831,10 +837,24 @@ out:
>          {
>              if ( iommu_flags )
>                  for ( i = 0; i < (1 << order); i++ )
> -                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> +                {
> +                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> +                    if ( unlikely(rc) )
> +                    {
> +                        while ( i-- )
> +                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
> +                                continue;

Nice idea. I would have preferred a brief comment explaining this,
but anyway.

> @@ -140,8 +142,17 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
>  
>          tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
>          for ( j = 0; j < tmp; j++ )
> -            iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
> -                           IOMMUF_readable|IOMMUF_writable);
> +        {
> +            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
> +                                     IOMMUF_readable|IOMMUF_writable);
> +
> +            if( !rc )

Missing blank (could be fixed upon commit if no other reason for
another iteration arises).

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones)
  2016-06-08  8:58 ` [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
@ 2016-06-08 14:42   ` Jan Beulich
  2016-06-08 14:54   ` Jan Beulich
  2016-06-09 10:28   ` Julien Grall
  2 siblings, 0 replies; 45+ messages in thread
From: Jan Beulich @ 2016-06-08 14:42 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Andrew Cooper,
	dario.faggioli, xen-devel, Julien Grall

>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-08  8:58 ` [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
@ 2016-06-08 14:43   ` Jan Beulich
  2016-06-09 18:45     ` Suravee Suthikulanit
  2016-06-09 10:30   ` Julien Grall
  1 sibling, 1 reply; 45+ messages in thread
From: Jan Beulich @ 2016-06-08 14:43 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-08  8:59 ` [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
@ 2016-06-08 14:51   ` Jan Beulich
  2016-06-12  7:42     ` Xu, Quan
  2016-06-09 18:58   ` Suravee Suthikulanit
  2016-06-12  6:58   ` Tian, Kevin
  2 siblings, 1 reply; 45+ messages in thread
From: Jan Beulich @ 2016-06-08 14:51 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

>>> On 08.06.16 at 10:59, <quan.xu@intel.com> wrote:
> @@ -169,6 +203,7 @@ static int enter_state(u32 state)

Right above here we have

    if ( (error = device_power_down()) )

which is now wrong as long as SAVED_ALL is not zero.

>      {
>          printk(XENLOG_ERR "Some devices failed to power down.");
>          system_state = SYS_STATE_resume;
> +        device_power_up(error);
>          goto done;

For the goto you need to adjust "error", or else you return
something meaningless (a sort of random positive number) to your
caller.

Jan



* Re: [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones)
  2016-06-08  8:58 ` [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
  2016-06-08 14:42   ` Jan Beulich
@ 2016-06-08 14:54   ` Jan Beulich
  2016-06-09 19:30     ` Suravee Suthikulanit
  2016-06-09 10:28   ` Julien Grall
  2 siblings, 1 reply; 45+ messages in thread
From: Jan Beulich @ 2016-06-08 14:54 UTC (permalink / raw)
  To: Suravee Suthikulpanit, Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Andrew Cooper,
	dario.faggioli, xen-devel, Julien Grall

>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
> 
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Feng Wu <feng.wu@intel.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>

CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

> v7: just drop 'Reviewed-by: Jan Beulich <jbeulich@suse.com>',
>     as I haven't added __must_check annotation to iommu_unmap_page()
>     in previous v6.
> ---
>  xen/drivers/passthrough/arm/smmu.c            |  2 +-
>  xen/drivers/passthrough/vtd/iommu.c           | 15 ++++++++-------
>  xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
>  xen/include/xen/iommu.h                       |  2 +-
>  4 files changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 54a03a6..1ce4ddf 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -2774,7 +2774,7 @@ static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
>  	return guest_physmap_add_entry(d, gfn, mfn, 0, t);
>  }
>  
> -static int arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
> +static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
>  {
>  	/*
>  	 * This function should only be used by gnttab code when the domain
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index db83949..4844193 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -609,7 +609,7 @@ static void intel_iommu_iotlb_flush_all(struct domain *d)
>  }
>  
>  /* clear one page's page table */
> -static void dma_pte_clear_one(struct domain *domain, u64 addr)
> +static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
>  {
>      struct domain_iommu *hd = dom_iommu(domain);
>      struct dma_pte *page = NULL, *pte = NULL;
> @@ -621,7 +621,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 
> addr)
>      if ( pg_maddr == 0 )
>      {
>          spin_unlock(&hd->arch.mapping_lock);
> -        return;
> +        return 0;
>      }
>  
>      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> @@ -631,7 +631,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 
> addr)
>      {
>          spin_unlock(&hd->arch.mapping_lock);
>          unmap_vtd_domain_page(page);
> -        return;
> +        return 0;
>      }
>  
>      dma_clear_pte(*pte);
> @@ -642,6 +642,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 
> addr)
>          __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
>  
>      unmap_vtd_domain_page(page);
> +
> +    return 0;
>  }
>  
>  static void iommu_free_pagetable(u64 pt_maddr, int level)
> @@ -1739,15 +1741,14 @@ static int intel_iommu_map_page(
>      return 0;
>  }
>  
> -static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
> +static int __must_check intel_iommu_unmap_page(struct domain *d,
> +                                               unsigned long gfn)
>  {
>      /* Do nothing if hardware domain and iommu supports pass thru. */
>      if ( iommu_passthrough && is_hardware_domain(d) )
>          return 0;
>  
> -    dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
> -
> -    return 0;
> +    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
>  }
>  
>  void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
> diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h 
> b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
> index 9c51172..57b6cc1 100644
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
> @@ -53,7 +53,7 @@ int amd_iommu_update_ivrs_mapping_acpi(void);
>  /* mapping functions */
>  int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long 
> mfn,
>                         unsigned int flags);
> -int amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
> +int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
>  u64 amd_iommu_get_next_table_from_pte(u32 *entry);
>  int amd_iommu_reserve_domain_unity_map(struct domain *domain,
>                                         u64 phys_addr, unsigned long size,
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index eaa2c77..f45fa5a 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -168,7 +168,7 @@ struct iommu_ops {
>      void (*teardown)(struct domain *d);
>      int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
>                      unsigned int flags);
> -    int (*unmap_page)(struct domain *d, unsigned long gfn);
> +    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
>      void (*free_page_table)(struct page_info *);
>  #ifdef CONFIG_X86
>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, 
> unsigned int value);
> -- 
> 1.9.1



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones)
  2016-06-08  8:58 ` [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
  2016-06-08 14:42   ` Jan Beulich
  2016-06-08 14:54   ` Jan Beulich
@ 2016-06-09 10:28   ` Julien Grall
  2 siblings, 0 replies; 45+ messages in thread
From: Julien Grall @ 2016-06-09 10:28 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Andrew Cooper,
	dario.faggioli, Jan Beulich

Hello,

On 08/06/16 09:58, Xu, Quan wrote:
> From: Quan Xu <quan.xu@intel.com>
>
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>

For the ARM bits:

Acked-by: Julien Grall <julien.grall@arm.com>

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-08  8:58 ` [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
  2016-06-08 14:43   ` Jan Beulich
@ 2016-06-09 10:30   ` Julien Grall
  1 sibling, 0 replies; 45+ messages in thread
From: Julien Grall @ 2016-06-09 10:30 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Suravee Suthikulpanit,
	dario.faggioli, Jan Beulich

Hello,

On 08/06/16 09:58, Xu, Quan wrote:
> From: Quan Xu <quan.xu@intel.com>
>
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Feng Wu <feng.wu@intel.com>
> CC: Jan Beulich <jbeulich@suse.com>
>
> v7: drop the amd_iommu_hwdom_init() fix, which has been added in
>      patch 2 in this v7.
> ---
>   xen/drivers/passthrough/arm/smmu.c            | 4 ++--
>   xen/drivers/passthrough/vtd/iommu.c           | 7 ++++---
>   xen/include/asm-x86/hvm/svm/amd-iommu-proto.h | 4 ++--
>   xen/include/xen/iommu.h                       | 4 ++--
>   4 files changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 1ce4ddf..ee5c89d 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -2745,8 +2745,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
>   	xfree(xen_domain);
>   }
>
> -static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
> -			     unsigned long mfn, unsigned int flags)
> +static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
> +                                          unsigned long mfn, unsigned int flags)

This file uses the Linux coding style, so you should use hard tabs here.

>   {
>   	p2m_type_t t;

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-08  8:58 ` [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
@ 2016-06-09 11:12   ` Julien Grall
  2016-06-09 11:45     ` Jan Beulich
  2016-06-12  6:46   ` Tian, Kevin
  1 sibling, 1 reply; 45+ messages in thread
From: Julien Grall @ 2016-06-09 11:12 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: dario.faggioli, Stefano Stabellini, Kevin Tian, Jan Beulich,
	Steve Capper

Hello,

On 08/06/16 09:58, Xu, Quan wrote:
> From: Quan Xu <quan.xu@intel.com>
>
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> ---
>   xen/arch/arm/p2m.c                  |  4 +++-
>   xen/common/memory.c                 | 12 ++++++++++--
>   xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>   xen/drivers/passthrough/x86/iommu.c |  5 +++--
>   xen/include/xen/iommu.h             |  5 +++--
>   5 files changed, 28 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 6a19c57..65d8f1a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1178,7 +1178,9 @@ out:
>       if ( flush )
>       {
>           flush_tlb_domain(d);
> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);

Sorry for coming late in the discussion. What kind of error do you 
expect iommu_iotlb_flush to return?

Today the ARM SMMU always returns 0, even if the TLB flush times out 
(see arm_smmu_tlb_inv_context).

We may want in the future to return an error when it has timed out; 
however, only returning an error is not safe at all. The TLB may still 
contain entries which are invalid (because we removed the mapping 
earlier), and a device/domain could take advantage of that.

So I am not sure we should let the guest keep running when a flush has 
timed out. Any thoughts?

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 11:12   ` Julien Grall
@ 2016-06-09 11:45     ` Jan Beulich
  2016-06-09 12:03       ` Julien Grall
  0 siblings, 1 reply; 45+ messages in thread
From: Jan Beulich @ 2016-06-09 11:45 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu

>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
> On 08/06/16 09:58, Xu, Quan wrote:
>> From: Quan Xu <quan.xu@intel.com>
>>
>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Kevin Tian <kevin.tian@intel.com>
>> ---
>>   xen/arch/arm/p2m.c                  |  4 +++-
>>   xen/common/memory.c                 | 12 ++++++++++--
>>   xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>   xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>   xen/include/xen/iommu.h             |  5 +++--
>>   5 files changed, 28 insertions(+), 11 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 6a19c57..65d8f1a 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1178,7 +1178,9 @@ out:
>>       if ( flush )
>>       {
>>           flush_tlb_domain(d);
>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
> 
> Sorry for coming late in the discussion. What kind of error do you 
> expect iommu_iotlb_flush to return?
> 
> Today the ARM SMMU always returns 0, even if the TLB flush times out 
> (see arm_smmu_tlb_inv_context).
> 
> We may want in the future to return an error when it has timed out; 
> however, only returning an error is not safe at all. The TLB may still 
> contain entries which are invalid (because we removed the mapping 
> earlier), and a device/domain could take advantage of that.
> 
> So I am not sure we should let the guest keep running when a flush has 
> timed out. Any thoughts?

Well, did you look at the rest of this series, and the other dependent
one? Guests (other than Dom0) get crashed when a flush times out. I
guess that's what you will want on ARM then too.

Jan



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 11:45     ` Jan Beulich
@ 2016-06-09 12:03       ` Julien Grall
  2016-06-09 12:08         ` Julien Grall
  2016-06-09 12:14         ` Jan Beulich
  0 siblings, 2 replies; 45+ messages in thread
From: Julien Grall @ 2016-06-09 12:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu



On 09/06/16 12:45, Jan Beulich wrote:
>>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
>> On 08/06/16 09:58, Xu, Quan wrote:
>>> From: Quan Xu <quan.xu@intel.com>
>>>
>>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien.grall@arm.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Kevin Tian <kevin.tian@intel.com>
>>> ---
>>>    xen/arch/arm/p2m.c                  |  4 +++-
>>>    xen/common/memory.c                 | 12 ++++++++++--
>>>    xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>>    xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>>    xen/include/xen/iommu.h             |  5 +++--
>>>    5 files changed, 28 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 6a19c57..65d8f1a 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -1178,7 +1178,9 @@ out:
>>>        if ( flush )
>>>        {
>>>            flush_tlb_domain(d);
>>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>
>> Sorry for coming late in the discussion. What kind of error do you
>> expect iommu_iotlb_flush to return?
>>
>> Today the ARM SMMU always returns 0, even if the TLB flush times out
>> (see arm_smmu_tlb_inv_context).
>>
>> We may want in the future to return an error when it has timed out;
>> however, only returning an error is not safe at all. The TLB may still
>> contain entries which are invalid (because we removed the mapping
>> earlier), and a device/domain could take advantage of that.
>>
>> So I am not sure we should let the guest keep running when a flush has
>> timed out. Any thoughts?
>
> Well, did you look at the rest of this series, and the other dependent
> one? Guests (other than Dom0) get crashed when a flush times out. I
> guess that's what you will want on ARM then too.

I have found a call to domain_crash in iommu_map_page and 
iommu_unmap_page. However, I have not found any for iommu_iotlb_flush.

iommu_unmap_page and iommu_map_page are not called in the p2m code 
because the page tables are shared with the SMMU. So the guest will not 
be crashed if the flush has timed out.

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:03       ` Julien Grall
@ 2016-06-09 12:08         ` Julien Grall
  2016-06-09 12:15           ` Jan Beulich
  2016-06-09 12:14         ` Jan Beulich
  1 sibling, 1 reply; 45+ messages in thread
From: Julien Grall @ 2016-06-09 12:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu



On 09/06/16 13:03, Julien Grall wrote:
> On 09/06/16 12:45, Jan Beulich wrote:
>>>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
>>> On 08/06/16 09:58, Xu, Quan wrote:
>>>> From: Quan Xu <quan.xu@intel.com>
>>>>
>>>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>> CC: Kevin Tian <kevin.tian@intel.com>
>>>> ---
>>>>    xen/arch/arm/p2m.c                  |  4 +++-
>>>>    xen/common/memory.c                 | 12 ++++++++++--
>>>>    xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>>>    xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>>>    xen/include/xen/iommu.h             |  5 +++--
>>>>    5 files changed, 28 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 6a19c57..65d8f1a 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1178,7 +1178,9 @@ out:
>>>>        if ( flush )
>>>>        {
>>>>            flush_tlb_domain(d);
>>>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>
>>> Sorry for coming late in the discussion. What kind of error do you
>>> expect iommu_iotlb_flush to return?
>>>
>>> Today the ARM SMMU always returns 0, even if the TLB flush times out
>>> (see arm_smmu_tlb_inv_context).
>>>
>>> We may want in the future to return an error when it has timed out;
>>> however, only returning an error is not safe at all. The TLB may still
>>> contain entries which are invalid (because we removed the mapping
>>> earlier), and a device/domain could take advantage of that.
>>>
>>> So I am not sure we should let the guest keep running when a flush has
>>> timed out. Any thoughts?
>>
>> Well, did you look at the rest of this series, and the other dependent
>> one? Guests (other than Dom0) get crashed when a flush times out. I

I missed the bit "other dependent one". Which series are you talking 
about? The cover letter does not give any dependent series...

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:03       ` Julien Grall
  2016-06-09 12:08         ` Julien Grall
@ 2016-06-09 12:14         ` Jan Beulich
  1 sibling, 0 replies; 45+ messages in thread
From: Jan Beulich @ 2016-06-09 12:14 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu

>>> On 09.06.16 at 14:03, <julien.grall@arm.com> wrote:

> 
> On 09/06/16 12:45, Jan Beulich wrote:
>>>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
>>> On 08/06/16 09:58, Xu, Quan wrote:
>>>> From: Quan Xu <quan.xu@intel.com>
>>>>
>>>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>> CC: Kevin Tian <kevin.tian@intel.com>
>>>> ---
>>>>    xen/arch/arm/p2m.c                  |  4 +++-
>>>>    xen/common/memory.c                 | 12 ++++++++++--
>>>>    xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>>>    xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>>>    xen/include/xen/iommu.h             |  5 +++--
>>>>    5 files changed, 28 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 6a19c57..65d8f1a 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1178,7 +1178,9 @@ out:
>>>>        if ( flush )
>>>>        {
>>>>            flush_tlb_domain(d);
>>>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>
>>> Sorry for coming late in the discussion. What kind of error do you
>>> expect iommu_iotlb_flush to return?
>>>
>>> Today the ARM SMMU always returns 0, even if the TLB flush times out
>>> (see arm_smmu_tlb_inv_context).
>>>
>>> We may want in the future to return an error when it has timed out;
>>> however, only returning an error is not safe at all. The TLB may still
>>> contain entries which are invalid (because we removed the mapping
>>> earlier), and a device/domain could take advantage of that.
>>>
>>> So I am not sure we should let the guest keep running when a flush has
>>> timed out. Any thoughts?
>>
>> Well, did you look at the rest of this series, and the other dependent
>> one? Guests (other than Dom0) get crashed when a flush times out. I
>> guess that's what you will want on ARM then too.
> 
> I have found a call to domain_crash in iommu_map_page and 
> iommu_unmap_page. However, I have not found any for iommu_iotlb_flush.

Perhaps in that other, 3-patch series then?

Jan



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:08         ` Julien Grall
@ 2016-06-09 12:15           ` Jan Beulich
  2016-06-09 12:24             ` Julien Grall
  0 siblings, 1 reply; 45+ messages in thread
From: Jan Beulich @ 2016-06-09 12:15 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu

>>> On 09.06.16 at 14:08, <julien.grall@arm.com> wrote:

> 
> On 09/06/16 13:03, Julien Grall wrote:
>> On 09/06/16 12:45, Jan Beulich wrote:
>>>>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
>>>> On 08/06/16 09:58, Xu, Quan wrote:
>>>>> From: Quan Xu <quan.xu@intel.com>
>>>>>
>>>>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>>> CC: Kevin Tian <kevin.tian@intel.com>
>>>>> ---
>>>>>    xen/arch/arm/p2m.c                  |  4 +++-
>>>>>    xen/common/memory.c                 | 12 ++++++++++--
>>>>>    xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>>>>    xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>>>>    xen/include/xen/iommu.h             |  5 +++--
>>>>>    5 files changed, 28 insertions(+), 11 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>>> index 6a19c57..65d8f1a 100644
>>>>> --- a/xen/arch/arm/p2m.c
>>>>> +++ b/xen/arch/arm/p2m.c
>>>>> @@ -1178,7 +1178,9 @@ out:
>>>>>        if ( flush )
>>>>>        {
>>>>>            flush_tlb_domain(d);
>>>>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>>
>>>> Sorry for coming late in the discussion. What kind of error do you
>>>> expect iommu_iotlb_flush to return?
>>>>
>>>> Today the ARM SMMU always returns 0, even if the TLB flush times out
>>>> (see arm_smmu_tlb_inv_context).
>>>>
>>>> We may want in the future to return an error when it has timed out;
>>>> however, only returning an error is not safe at all. The TLB may still
>>>> contain entries which are invalid (because we removed the mapping
>>>> earlier), and a device/domain could take advantage of that.
>>>>
>>>> So I am not sure we should let the guest keep running when a flush has
>>>> timed out. Any thoughts?
>>>
>>> Well, did you look at the rest of this series, and the other dependent
>>> one? Guests (other than Dom0) get crashed when a flush times out. I
> 
> I missed the bit "other dependent one". Which series are you talking 
> about? The cover letter does not give any dependent series...

"[Patch v11 0/3] VT-d Device-TLB flush issue"

Jan



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:15           ` Jan Beulich
@ 2016-06-09 12:24             ` Julien Grall
  2016-06-09 12:32               ` Jan Beulich
  0 siblings, 1 reply; 45+ messages in thread
From: Julien Grall @ 2016-06-09 12:24 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu

On 09/06/16 13:15, Jan Beulich wrote:
>>>> On 09.06.16 at 14:08, <julien.grall@arm.com> wrote:
>
>>
>> On 09/06/16 13:03, Julien Grall wrote:
>>> On 09/06/16 12:45, Jan Beulich wrote:
>>>>>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
>>>>> On 08/06/16 09:58, Xu, Quan wrote:
>>>>>> From: Quan Xu <quan.xu@intel.com>
>>>>>>
>>>>>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>>>> CC: Kevin Tian <kevin.tian@intel.com>
>>>>>> ---
>>>>>>     xen/arch/arm/p2m.c                  |  4 +++-
>>>>>>     xen/common/memory.c                 | 12 ++++++++++--
>>>>>>     xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>>>>>     xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>>>>>     xen/include/xen/iommu.h             |  5 +++--
>>>>>>     5 files changed, 28 insertions(+), 11 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>>>> index 6a19c57..65d8f1a 100644
>>>>>> --- a/xen/arch/arm/p2m.c
>>>>>> +++ b/xen/arch/arm/p2m.c
>>>>>> @@ -1178,7 +1178,9 @@ out:
>>>>>>         if ( flush )
>>>>>>         {
>>>>>>             flush_tlb_domain(d);
>>>>>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>>>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>>>
>>>>> Sorry for coming late in the discussion. What kind of error do you
>>>>> expect iommu_iotlb_flush to return?
>>>>>
>>>>> Today the ARM SMMU always returns 0, even if the TLB flush times out
>>>>> (see arm_smmu_tlb_inv_context).
>>>>>
>>>>> We may want in the future to return an error when it has timed out;
>>>>> however, only returning an error is not safe at all. The TLB may still
>>>>> contain entries which are invalid (because we removed the mapping
>>>>> earlier), and a device/domain could take advantage of that.
>>>>>
>>>>> So I am not sure we should let the guest keep running when a flush has
>>>>> timed out. Any thoughts?
>>>>
>>>> Well, did you look at the rest of this series, and the other dependent
>>>> one? Guests (other than Dom0) get crashed when a flush times out. I
>>
>> I missed the bit "other dependent one". Which series are you talking
>> about? The cover letter does not give any dependent series...
>
> "[Patch v11 0/3] VT-d Device-TLB flush issue"

To be honest, I do not usually go through x86 patches. In this case, 
the only call to domain_crash I can spot is in patch #2, which is Intel 
VT-d specific.

So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst 
the behavior of iommu_{,un}map_page is defined by the common code.

This looks odd to me. If it is expected, then it needs to be 
documented somewhere.

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:24             ` Julien Grall
@ 2016-06-09 12:32               ` Jan Beulich
  2016-06-09 12:39                 ` Julien Grall
  0 siblings, 1 reply; 45+ messages in thread
From: Jan Beulich @ 2016-06-09 12:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu

>>> On 09.06.16 at 14:24, <julien.grall@arm.com> wrote:
> On 09/06/16 13:15, Jan Beulich wrote:
>>>>> On 09.06.16 at 14:08, <julien.grall@arm.com> wrote:
>>
>>>
>>> On 09/06/16 13:03, Julien Grall wrote:
>>>> On 09/06/16 12:45, Jan Beulich wrote:
>>>>>>>> On 09.06.16 at 13:12, <julien.grall@arm.com> wrote:
>>>>>> On 08/06/16 09:58, Xu, Quan wrote:
>>>>>>> From: Quan Xu <quan.xu@intel.com>
>>>>>>>
>>>>>>> Signed-off-by: Quan Xu <quan.xu@intel.com>
>>>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>>>
>>>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>>>>> CC: Kevin Tian <kevin.tian@intel.com>
>>>>>>> ---
>>>>>>>     xen/arch/arm/p2m.c                  |  4 +++-
>>>>>>>     xen/common/memory.c                 | 12 ++++++++++--
>>>>>>>     xen/drivers/passthrough/iommu.c     | 13 +++++++++----
>>>>>>>     xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>>>>>>     xen/include/xen/iommu.h             |  5 +++--
>>>>>>>     5 files changed, 28 insertions(+), 11 deletions(-)
>>>>>>>
>>>>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>>>>> index 6a19c57..65d8f1a 100644
>>>>>>> --- a/xen/arch/arm/p2m.c
>>>>>>> +++ b/xen/arch/arm/p2m.c
>>>>>>> @@ -1178,7 +1178,9 @@ out:
>>>>>>>         if ( flush )
>>>>>>>         {
>>>>>>>             flush_tlb_domain(d);
>>>>>>> -        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>>>>> +        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>>>>
>>>>>> Sorry for coming late in the discussion. What kind of error do you
>>>>>> expect to return with iommu_tlb_flush?
>>>>>>
>>>>>> Today the ARM SMMU will always return 0 if the TLB flush timeout (see
>>>>>> arm_smmu_tlb_inv_context).
>>>>>>
>>>>>> We may want in the future to return an error when it has timeout,
>>>>>> however only returning an error is not safe at all. The TLB may contain
>>>>>> entries which are invalid (because we remove the mapping earlier) and a
>>>>>> device/domain could take advantage of that.
>>>>>>
>>>>>> So I am not sure if we should let running the guest when a flush has
>>>>>> timeout. Any thoughts?
>>>>>
>>>>> Well, did you look at the rest of this series, and the other dependent
>>>>> one? Guests (other than Dom0) get crashed when a flush times out. I
>>>
>>> I missed the bit "other dependent one". Which series are you talking
>>> about? The cover letter does not give any dependent series...
>>
>> "[Patch v11 0/3] VT-d Device-TLB flush issue"
> 
> To be honest, I am usually not going through x86 patch. In this case, 
> the only call to domain_crash I can spot is in patch #2 which is Intel 
> VTD specific.

Which is why I said you probably want this on ARM too.

> So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst 
> the behavior of iommu_{,un}map_page is defined by the common code.

I'm certainly up for moving the logic up to the generic IOMMU layer,
if that's feasible.

Jan



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:32               ` Jan Beulich
@ 2016-06-09 12:39                 ` Julien Grall
  2016-06-12  6:55                   ` Tian, Kevin
  0 siblings, 1 reply; 45+ messages in thread
From: Julien Grall @ 2016-06-09 12:39 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Steve Capper, dario.faggioli,
	xen-devel, Quan Xu

Hi Jan,

On 09/06/16 13:32, Jan Beulich wrote:
>>>> On 09.06.16 at 14:24, <julien.grall@arm.com> wrote:
>> So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst
>> the behavior of iommu_{,un}map_page is defined by the common code.
>
> I'm certainly up for moving the logic up to the generic IOMMU layer,
> if that's feasible.

That would be my preference.

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-08 14:43   ` Jan Beulich
@ 2016-06-09 18:45     ` Suravee Suthikulanit
  0 siblings, 0 replies; 45+ messages in thread
From: Suravee Suthikulanit @ 2016-06-09 18:45 UTC (permalink / raw)
  To: Jan Beulich, Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall

On 6/8/2016 9:43 AM, Jan Beulich wrote:
>>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
>> > From: Quan Xu <quan.xu@intel.com>
>> >
>> > Signed-off-by: Quan Xu <quan.xu@intel.com>
>> > Acked-by: Kevin Tian <kevin.tian@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Thanks,
Suravee



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-08  8:59 ` [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
  2016-06-08 14:51   ` Jan Beulich
@ 2016-06-09 18:58   ` Suravee Suthikulanit
  2016-06-12  6:58   ` Tian, Kevin
  2 siblings, 0 replies; 45+ messages in thread
From: Suravee Suthikulanit @ 2016-06-09 18:58 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Keir Fraser, Liu Jinsong,
	dario.faggioli, Julien Grall, Jan Beulich, Andrew Cooper,
	Feng Wu

On 6/8/2016 3:59 AM, Xu, Quan wrote:
> From: Quan Xu <quan.xu@intel.com>
>
> Signed-off-by: Quan Xu <quan.xu@intel.com>
>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Liu Jinsong <jinsong.liu@alibaba-inc.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Feng Wu <feng.wu@intel.com>
>
> v7:
>   1. return SAVED_ALL at the bottom of device_power_down(), instead
>      of SAVED_NONE.
>   2. drop the 'if ( error > 0 )', calling device_power_up(error)
>      without any if().
>   3. for vtd_suspend():
>        - drop pointless initializer.
>        - return 0 at the bottom to make obvious that no error path
>          comes there.

Shouldn't the change log for v7 go ...
> ---
... HERE instead, so that it doesn't end up in the commit log?


For AMD part,
Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Thanks,
Suravee


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones)
  2016-06-08 14:54   ` Jan Beulich
@ 2016-06-09 19:30     ` Suravee Suthikulanit
  0 siblings, 0 replies; 45+ messages in thread
From: Suravee Suthikulanit @ 2016-06-09 19:30 UTC (permalink / raw)
  To: Jan Beulich, Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Andrew Cooper,
	dario.faggioli, xen-devel, Julien Grall

On 6/8/2016 9:54 AM, Jan Beulich wrote:
>>>> On 08.06.16 at 10:58, <quan.xu@intel.com> wrote:
>> > From: Quan Xu <quan.xu@intel.com>
>> >
>> > Signed-off-by: Quan Xu <quan.xu@intel.com>
>> > Acked-by: Kevin Tian <kevin.tian@intel.com>
>> >
>> > CC: Stefano Stabellini <sstabellini@kernel.org>
>> > CC: Julien Grall <julien.grall@arm.com>
>> > CC: Kevin Tian <kevin.tian@intel.com>
>> > CC: Feng Wu <feng.wu@intel.com>
>> > CC: Jan Beulich <jbeulich@suse.com>
>> > CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>

Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Thanks,
Suravee


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
       [not found]     ` <421857ad-4f6b-e99c-1e81-034b19effdfb@amd.com>
@ 2016-06-10  6:56       ` Jan Beulich
  0 siblings, 0 replies; 45+ messages in thread
From: Jan Beulich @ 2016-06-10  6:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Suravee Suthikulanit, George Dunlap,
	Andrew Cooper, dario.faggioli, Jun Nakajima, Quan Xu

>>> On 09.06.16 at 20:37, <suravee.suthikulpanit@amd.com> wrote:
> On 6/8/2016 9:53 AM, Jan Beulich wrote:
>> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> b/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> index fce9827..4a860af 100644
>>> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> @@ -282,6 +282,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct
>>> domain *d)
>>>
>>>      if ( !iommu_passthrough && !need_iommu(d) )
>>>      {
>>> +        int rc = 0;
>>> +
>>>          /* Set up 1:1 page table for dom0 */
>>>          for ( i = 0; i < max_pdx; i++ )
>>>          {
>>> @@ -292,12 +294,21 @@ static void __hwdom_init amd_iommu_hwdom_init(struct
>>> domain *d)
>>>               * a pfn_valid() check would seem desirable here.
>>>               */
>>>              if ( mfn_valid(pfn) )
>>> -                amd_iommu_map_page(d, pfn, pfn,
>>> -                                   IOMMUF_readable|IOMMUF_writable);
>>> +            {
>>> +                int ret = amd_iommu_map_page(d, pfn, pfn,
>>> +
>>> IOMMUF_readable|IOMMUF_writable);
>>> +
>>> +                if ( !rc )
>>> +                    rc = ret;
>>> +            }
>>>
>>>              if ( !(i & 0xfffff) )
>>>                  process_pending_softirqs();
>>>          }
>>> +
>>> +        if ( rc )
>>> +            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed: %d\n",
>>> +                            d->domain_id, rc);
>>>      }
>>>
>>>      for_each_amd_iommu ( iommu )
> 
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

For the record to the list, since it had got dropped for an unknown
reason.

Jan



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-06-08  8:58 ` [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
  2016-06-08 14:40   ` Jan Beulich
       [not found]   ` <57584D8602000078000F32D0@prv-mh.provo.novell.com>
@ 2016-06-12  6:41   ` Tian, Kevin
  2 siblings, 0 replies; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  6:41 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: Keir Fraser, Jan Beulich, George Dunlap, Andrew Cooper,
	dario.faggioli, Nakajima, Jun

> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu <quan.xu@intel.com>
> 
> When IOMMU mapping is failed, we issue a best effort rollback, stopping
> IOMMU mapping, unmapping the previous IOMMU maps and then reporting the
> error up to the call trees. When rollback is not feasible (in early
> initialization phase or trade-off of complexity) for the hardware domain,
> we do things on a best effort basis, only throwing out an error message.
> 
> IOMMU unmapping should perhaps continue despite an error, in an attempt
> to do best effort cleanup.
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> 

Acked-by: Kevin Tian <kevin.tian@intel.com>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-08  8:58 ` [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
  2016-06-09 11:12   ` Julien Grall
@ 2016-06-12  6:46   ` Tian, Kevin
  1 sibling, 0 replies; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  6:46 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: dario.faggioli, Stefano Stabellini, Julien Grall, Jan Beulich

> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-09 12:39                 ` Julien Grall
@ 2016-06-12  6:55                   ` Tian, Kevin
  2016-06-12  7:02                     ` Xu, Quan
  2016-06-12 15:35                     ` Julien Grall
  0 siblings, 2 replies; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  6:55 UTC (permalink / raw)
  To: Julien Grall, Jan Beulich
  Cc: dario.faggioli, Stefano Stabellini, xen-devel, Xu, Quan, Steve Capper

> From: Julien Grall [mailto:julien.grall@arm.com]
> Sent: Thursday, June 09, 2016 8:40 PM
> 
> Hi Jan,
> 
> On 09/06/16 13:32, Jan Beulich wrote:
> >>>> On 09.06.16 at 14:24, <julien.grall@arm.com> wrote:
> >> So the behavior of iommu_iotlb_flush is up to the IOMMU driver. Whilst
> >> the behavior of iommu_{,un}map_page are defined by the common code.
> >
> > I'm certainly up for moving the logic up to the generic IOMMU layer,
> > if that's feasible.
> 
> That would be my preference.
> 

I suppose the above comment is about "[Patch v11 0/3] VT-d Device-TLB
flush issue", where the crash logic had better be moved up. Do you have
any further objection to this patch, then?

Thanks
Kevin

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-08  8:59 ` [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
  2016-06-08 14:51   ` Jan Beulich
  2016-06-09 18:58   ` Suravee Suthikulanit
@ 2016-06-12  6:58   ` Tian, Kevin
  2 siblings, 0 replies; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  6:58 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: Stefano Stabellini, Keir Fraser, Jan Beulich, Liu Jinsong,
	dario.faggioli, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Wu, Feng

> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> 
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Liu Jinsong <jinsong.liu@alibaba-inc.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Feng Wu <feng.wu@intel.com>
> 

Acked-by: Kevin Tian <kevin.tian@intel.com>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-12  6:55                   ` Tian, Kevin
@ 2016-06-12  7:02                     ` Xu, Quan
  2016-06-12 15:35                     ` Julien Grall
  1 sibling, 0 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-12  7:02 UTC (permalink / raw)
  To: Tian, Kevin, Julien Grall, Jan Beulich
  Cc: dario.faggioli, Stefano Stabellini, xen-devel, Steve Capper

On June 12, 2016 2:56 PM, Tian, Kevin <kevin.tian@intel.com> wrote:
> > From: Julien Grall [mailto:julien.grall@arm.com]
> > Sent: Thursday, June 09, 2016 8:40 PM
> >
> > Hi Jan,
> >
> > On 09/06/16 13:32, Jan Beulich wrote:
> > >>>> On 09.06.16 at 14:24, <julien.grall@arm.com> wrote:
> > >> So the behavior of iommu_iotlb_flush is up to the IOMMU driver.
> > >> Whilst the behavior of iommu_{,un}map_page are defined by the
> common code.
> > >
> > > I'm certainly up for moving the logic up to the generic IOMMU layer,
> > > if that's feasible.
> >
> > That would be my preference.
> >
> 
> I suppose above comment is for "[Patch v11 0/3] VT-d Device-TLB flush
> issue", where the crash logic better be moved up. 

I think so too. However, I am still reading the AMD/ARM related code.

> Do you have further against
> on this patch then?

Quan

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 09/11] vt-d: fix the IOMMU flush issue
  2016-06-08  8:59 ` [PATCH v7 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
@ 2016-06-12  7:32   ` Tian, Kevin
  2016-06-12  9:27     ` Xu, Quan
  0 siblings, 1 reply; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  7:32 UTC (permalink / raw)
  To: Xu, Quan, xen-devel
  Cc: Keir Fraser, dario.faggioli, Wu, Feng, Jan Beulich, Andrew Cooper

> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> @@ -545,18 +549,42 @@ static int __must_check iommu_flush_all(void)
>  {
>      struct acpi_drhd_unit *drhd;
>      struct iommu *iommu;
> -    int flush_dev_iotlb;
> +    int rc = 0;
> 
>      flush_all_cache();
>      for_each_drhd_unit ( drhd )
>      {
>          iommu = drhd->iommu;
> -        iommu_flush_context_global(iommu, 0);
> -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +        /*
> +         * The current logic for rc returns:
> +         *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +         *   - zero      on success.
> +         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> +         *               best effort basis.
> +         *
> +         * Moreover, IOMMU flush handlers flush_context_qi and flush_iotlb_qi
> +         * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> +         * call trees of iommu_flush_context_global and iommu_flush_iotlb_global)
> +         * are with the same logic to bubble up positive return value.
> +         */
> +        rc = iommu_flush_context_global(iommu, 0);
> +        if ( rc <= 0 )
> +        {
> +            int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> +            int ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +
> +            ASSERT(ret <= 0);
> +            if ( !rc )
> +                rc = ret;

I'm dubious about the assertion here. Why can't the above call
return 1 upon an error in the earlier flush? I dug up your
earlier reply:

> Yes, the iommu_flush_iotlb_dsi() can also return 1.
> Look at the call tree, at the beginning of 
> flush_context_qi()/flush_iotlb_qi(), or 
> flush_context_reg()/flush_iotlb_reg()..
> 
> If rc was negative when we call iommu_flush_context_device(), it is 
> impossible to return 1 for iommu_flush_iotlb_dsi().

But I don't think it is a good idea to make so many assumptions
about the internal implementation of those low-level interfaces.
Also, flush_context may fail for one specific reason that doesn't
block flush_iotlb, which could return 1 when caching mode
is disabled. We'd better have the return-1 case correctly handled here.

Thanks
Kevin


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions
  2016-06-08  8:59 ` [PATCH v7 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
@ 2016-06-12  7:34   ` Tian, Kevin
  0 siblings, 0 replies; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  7:34 UTC (permalink / raw)
  To: Xu, Quan, xen-devel; +Cc: dario.faggioli, Wu, Feng, Jan Beulich

> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

Acked-by: Kevin Tian <kevin.tian@intel.com>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers
  2016-06-08  8:59 ` [PATCH v7 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
@ 2016-06-12  7:35   ` Tian, Kevin
  0 siblings, 0 replies; 45+ messages in thread
From: Tian, Kevin @ 2016-06-12  7:35 UTC (permalink / raw)
  To: Xu, Quan, xen-devel; +Cc: dario.faggioli, Wu, Feng, Jan Beulich

> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu <quan.xu@intel.com>
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-08 14:51   ` Jan Beulich
@ 2016-06-12  7:42     ` Xu, Quan
  2016-06-13  9:25       ` Jan Beulich
  0 siblings, 1 reply; 45+ messages in thread
From: Xu, Quan @ 2016-06-12  7:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan
> Beulich
> Sent: Wednesday, June 08, 2016 10:52 PM
> To: Xu, Quan <quan.xu@intel.com>
> Cc: Tian, Kevin <kevin.tian@intel.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wu, Feng <feng.wu@intel.com>; Liu Jinsong
> <jinsong.liu@alibaba-inc.com>; dario.faggioli@citrix.com; xen-
> devel@lists.xen.org; Julien Grall <julien.grall@arm.com>; Suravee
> Suthikulpanit <suravee.suthikulpanit@amd.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Keir Fraser <keir@xen.org>
> Subject: Re: [Xen-devel] [PATCH v7 07/11] IOMMU: propagate IOMMU
> Device-TLB flush error up to IOMMU suspending (top level ones)
> 

On 
> >>> On 08.06.16 at 10:59, <quan.xu@intel.com> wrote:
> > @@ -169,6 +203,7 @@ static int enter_state(u32 state)
> 
> Right above here we have
> 
>     if ( (error = device_power_down()) )
> 
> which is now wrong as long as SAVED_ALL is not zero.
> 
> >      {
> >          printk(XENLOG_ERR "Some devices failed to power down.");
> >          system_state = SYS_STATE_resume;
> > +        device_power_up(error);
> >          goto done;
> 
> For the goto you need to adjust "error", or else you return something
> meaningless (a sort of random positive number) to your caller.
> 

Yes, it is still not correct. Could I change it as follows:


-    if ( (error = device_power_down()) )
+    if ( (error = device_power_down()) != SAVED_ALL )
     {
         printk(XENLOG_ERR "Some devices failed to power down.");
         system_state = SYS_STATE_resume;
+        device_power_up(error);
+        error = -EIO;
         goto done;
     }

Quan

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 09/11] vt-d: fix the IOMMU flush issue
  2016-06-12  7:32   ` Tian, Kevin
@ 2016-06-12  9:27     ` Xu, Quan
  2016-06-13  8:50       ` Xu, Quan
  0 siblings, 1 reply; 45+ messages in thread
From: Xu, Quan @ 2016-06-12  9:27 UTC (permalink / raw)
  To: Tian, Kevin, xen-devel, Jan Beulich
  Cc: Keir Fraser, dario.faggioli, Wu, Feng, Andrew Cooper

On June 12, 2016 3:33 PM, Tian, Kevin <kevin.tian@intel.com> wrote:
> > From: Xu, Quan
> > Sent: Wednesday, June 08, 2016 4:59 PM @@ -545,18 +549,42 @@ static
> > int __must_check iommu_flush_all(void)  {
> >      struct acpi_drhd_unit *drhd;
> >      struct iommu *iommu;
> > -    int flush_dev_iotlb;
> > +    int rc = 0;
> >
> >      flush_all_cache();
> >      for_each_drhd_unit ( drhd )
> >      {
> >          iommu = drhd->iommu;
> > -        iommu_flush_context_global(iommu, 0);
> > -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> > +        /*
> > +         * The current logic for rc returns:
> > +         *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +         *   - zero      on success.
> > +         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> > +         *               best effort basis.
> > +         *
> > +         * Moreover, IOMMU flush handlers flush_context_qi and
> flush_iotlb_qi
> > +         * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> > +         * call trees of iommu_flush_context_global and
> iommu_flush_iotlb_global)
> > +         * are with the same logic to bubble up positive return value.
> > +         */
> > +        rc = iommu_flush_context_global(iommu, 0);
> > +        if ( rc <= 0 )
> > +        {
> > +            int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > +            int ret = iommu_flush_iotlb_global(iommu, 0,
> > + flush_dev_iotlb);
> > +
> > +            ASSERT(ret <= 0);
> > +            if ( !rc )
> > +                rc = ret;
> 
> I'm dubious about the assertion here. Why can't the above call return 1 upon
> an error in the earlier flush? I dug up your earlier reply:
> 
> > Yes, the iommu_flush_iotlb_dsi() can also return 1.
> > Look at the call tree, at the beginning of
> > flush_context_qi()/flush_iotlb_qi(), or
> > flush_context_reg()/flush_iotlb_reg()..
> >
> > If rc was negative when we call iommu_flush_context_device(), it is
> > impossible to return 1 for iommu_flush_iotlb_dsi().
> 
> But I don't think it is a good idea to make so many assumptions about the
> internal implementation of those low-level interfaces.
> Also, flush_context may fail for one specific reason that doesn't block
> flush_iotlb, which could return 1 when caching mode is disabled. We'd
> better have the return-1 case correctly handled here.
> 

Your comment looks reasonable. Could I change it as follows:

-static int iommu_flush_iotlb_psi(
-    struct iommu *iommu, u16 did, u64 addr, unsigned int order,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
+                                              u64 addr, unsigned int order,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -546,17 +550,35 @@ static int __must_check iommu_flush_all(void)
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
+    int rc = 0;

     flush_all_cache();
     for_each_drhd_unit ( drhd )
     {
+        int ret;
+
         iommu = drhd->iommu;
-        iommu_flush_context_global(iommu, 0);
+        /*
+         * The current logic for rc returns:
+         *   - positive  invoke iommu_flush_write_buffer to flush cache.
+         *   - zero      on success.
+         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+         *               best effort basis.
+         */
+        rc = iommu_flush_context_global(iommu, 0);
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        if ( !rc )
+            rc = ret;
+
+        if ( rc > 0 || ret > 0 )
+            iommu_flush_write_buffer(iommu);
     }

-    return 0;
+    if ( rc > 0 )
+        rc = 0;
+
+    return rc;
 }

Also, Jan, what's your opinion?

Quan


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)
  2016-06-12  6:55                   ` Tian, Kevin
  2016-06-12  7:02                     ` Xu, Quan
@ 2016-06-12 15:35                     ` Julien Grall
  1 sibling, 0 replies; 45+ messages in thread
From: Julien Grall @ 2016-06-12 15:35 UTC (permalink / raw)
  To: Tian, Kevin, Jan Beulich
  Cc: dario.faggioli, Stefano Stabellini, xen-devel, Xu, Quan, Steve Capper

Hello,

On 12/06/2016 07:55, Tian, Kevin wrote:
>> From: Julien Grall [mailto:julien.grall@arm.com]
>> Sent: Thursday, June 09, 2016 8:40 PM
>>
>> Hi Jan,
>>
>> On 09/06/16 13:32, Jan Beulich wrote:
>>>>>> On 09.06.16 at 14:24, <julien.grall@arm.com> wrote:
>>>> So the behavior of iommu_iotlb_flush is up to the IOMMU driver. Whilst
>>>> the behavior of iommu_{,un}map_page are defined by the common code.
>>>
>>> I'm certainly up for moving the logic up to the generic IOMMU layer,
>>> if that's feasible.
>>
>> That would be my preference.
>>
>
> I suppose above comment is for "[Patch v11 0/3] VT-d Device-TLB flush
> issue", where the crash logic better be moved up. Do you have further
> against on this patch then?

None. I trust you to ensure that the patch to move the crash logic is 
upstreamed soon:

Acked-by: Julien Grall <julien.grall@arm.com>

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v7 09/11] vt-d: fix the IOMMU flush issue
  2016-06-12  9:27     ` Xu, Quan
@ 2016-06-13  8:50       ` Xu, Quan
  0 siblings, 0 replies; 45+ messages in thread
From: Xu, Quan @ 2016-06-13  8:50 UTC (permalink / raw)
  To: Xu, Quan, Tian, Kevin, xen-devel, Jan Beulich
  Cc: Wu, Feng, dario.faggioli, Keir Fraser, Andrew Cooper

On June 12, 2016 5:27 PM, Xu, Quan <quan.xu@intel.com> wrote:
> On June 12, 2016 3:33 PM, Tian, Kevin <kevin.tian@intel.com> wrote:
> > > From: Xu, Quan
> > > Sent: Wednesday, June 08, 2016 4:59 PM @@ -545,18 +549,42 @@
> static
> > > int __must_check iommu_flush_all(void)  {
> > >      struct acpi_drhd_unit *drhd;
> > >      struct iommu *iommu;
> > > -    int flush_dev_iotlb;
> > > +    int rc = 0;
> > >
> > >      flush_all_cache();
> > >      for_each_drhd_unit ( drhd )
> > >      {
> > >          iommu = drhd->iommu;
> > > -        iommu_flush_context_global(iommu, 0);
> > > -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > > -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> > > +        /*
> > > +         * The current logic for rc returns:
> > > +         *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > > +         *   - zero      on success.
> > > +         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> > > +         *               best effort basis.
> > > +         *
> > > +         * Moreover, IOMMU flush handlers flush_context_qi and
> > flush_iotlb_qi
> > > +         * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> > > +         * call trees of iommu_flush_context_global and
> > iommu_flush_iotlb_global)
> > > +         * are with the same logic to bubble up positive return value.
> > > +         */
> > > +        rc = iommu_flush_context_global(iommu, 0);
> > > +        if ( rc <= 0 )
> > > +        {
> > > +            int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > > +            int ret = iommu_flush_iotlb_global(iommu, 0,
> > > + flush_dev_iotlb);
> > > +
> > > +            ASSERT(ret <= 0);
> > > +            if ( !rc )
> > > +                rc = ret;
> >
>> I'm dubious about the assertion here. Why can't the above call return 1
>> upon an error in the earlier flush? I dug up your earlier reply:
> >
> > > Yes, the iommu_flush_iotlb_dsi() can also return 1.
> > > Look at the call tree, at the beginning of
> > > flush_context_qi()/flush_iotlb_qi(), or
> > > flush_context_reg()/flush_iotlb_reg()..
> > >
> > > If rc was negative when we call iommu_flush_context_device(), it is
> > > impossible to return 1 for iommu_flush_iotlb_dsi().
> >
>> But I don't think it is a good idea to make so many assumptions about
>> the internal implementation of those low-level interfaces.
>> Also, flush_context may fail for one specific reason that doesn't
>> block flush_iotlb, which could return 1 when caching mode is
>> disabled. We'd better have the return-1 case correctly handled here.
> >
> 
> Your comment looks reasonable here. Could I change it as below:
> 
> -static int iommu_flush_iotlb_psi(
> -    struct iommu *iommu, u16 did, u64 addr, unsigned int order,
> -    int flush_non_present_entry, int flush_dev_iotlb)
> +static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16
> did,
> +                                              u64 addr, unsigned int order,
> +                                              int flush_non_present_entry,
> +                                              int flush_dev_iotlb)
>  {
>      struct iommu_flush *flush = iommu_get_flush(iommu);
>      int status;
> @@ -546,17 +550,35 @@ static int __must_check iommu_flush_all(void)
>      struct acpi_drhd_unit *drhd;
>      struct iommu *iommu;
>      int flush_dev_iotlb;
> +    int rc = 0;
> 
>      flush_all_cache();
>      for_each_drhd_unit ( drhd )
>      {
> +        int ret;
> +
>          iommu = drhd->iommu;
> -        iommu_flush_context_global(iommu, 0);
> +        /*
> +         * The current logic for rc returns:
> +         *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +         *   - zero      on success.
> +         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> +         *               best effort basis.
> +         */
> +        rc = iommu_flush_context_global(iommu, 0);
>          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +        ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +        if ( !rc )
> +            rc = ret;
> +
> +        if ( rc > 0 || ret > 0 )
> +            iommu_flush_write_buffer(iommu);
>      }
> 
> -    return 0;
> +    if ( rc > 0 )
> +        rc = 0;
> +
> +    return rc;
>  }
> 
> 

Ah, this change is not correct, as a previous error return
value may be erased by a later positive/zero value.

I'll highlight that this change is not under Jan's R-b in the next v8.

.. I think the version below is correct.

+static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
+                                              u64 addr, unsigned int order,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -546,17 +550,37 @@ static int __must_check iommu_flush_all(void)
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
+    int rc = 0;

     flush_all_cache();
     for_each_drhd_unit ( drhd )
     {
+        int iommu_rc, iommu_ret;
+
         iommu = drhd->iommu;
-        iommu_flush_context_global(iommu, 0);
+        iommu_rc = iommu_flush_context_global(iommu, 0);
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        iommu_ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+
+        /*
+         * The current logic for return values:
+         *   - positive  invoke iommu_flush_write_buffer to flush cache.
+         *   - zero      on success.
+         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+         *               best effort basis.
+         */
+        if ( iommu_rc > 0 || iommu_ret > 0 )
+            iommu_flush_write_buffer(iommu);
+        if ( rc >= 0 )
+            rc = iommu_rc;
+        if ( rc >= 0 )
+            rc = iommu_ret;
     }

-    return 0;
+    if ( rc > 0 )
+        rc = 0;
+
+    return rc;
 }


> 
> Also, Jan, what's your opinion?
> 
> Quan
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-12  7:42     ` Xu, Quan
@ 2016-06-13  9:25       ` Jan Beulich
  0 siblings, 0 replies; 45+ messages in thread
From: Jan Beulich @ 2016-06-13  9:25 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, SuraveeSuthikulpanit,
	Andrew Cooper, Keir Fraser

>>> On 12.06.16 at 09:42, <quan.xu@intel.com> wrote:

> 
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan
>> Beulich
>> Sent: Wednesday, June 08, 2016 10:52 PM
>> To: Xu, Quan <quan.xu@intel.com>
>> Cc: Tian, Kevin <kevin.tian@intel.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Wu, Feng <feng.wu@intel.com>; Liu Jinsong
>> <jinsong.liu@alibaba-inc.com>; dario.faggioli@citrix.com; xen-
>> devel@lists.xen.org; Julien Grall <julien.grall@arm.com>; Suravee
>> Suthikulpanit <suravee.suthikulpanit@amd.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Keir Fraser <keir@xen.org>
>> Subject: Re: [Xen-devel] [PATCH v7 07/11] IOMMU: propagate IOMMU
>> Device-TLB flush error up to IOMMU suspending (top level ones)
>> 
> 
>> >>> On 08.06.16 at 10:59, <quan.xu@intel.com> wrote:
>> > @@ -169,6 +203,7 @@ static int enter_state(u32 state)
>> 
>> Right above here we have
>> 
>>     if ( (error = device_power_down()) )
>> 
>> which is now wrong as long as SAVED_ALL is not zero.
>> 
>> >      {
>> >          printk(XENLOG_ERR "Some devices failed to power down.");
>> >          system_state = SYS_STATE_resume;
>> > +        device_power_up(error);
>> >          goto done;
>> 
>> For the goto you need to adjust "error", or else you return something
>> meaningless (a sort of random positive number) to your caller.
>> 
> 
> Yes, it is still not correct. Could I change it as follows:
> 
> 
> -    if ( (error = device_power_down()) )
> +    if ( (error = device_power_down()) != SAVED_ALL )
>      {
>          printk(XENLOG_ERR "Some devices failed to power down.");
>          system_state = SYS_STATE_resume;
> +        device_power_up(error);
> +        error = -EIO;
>          goto done;
>      }

This would address only part of the issue afaict: with SAVED_ALL
not being zero, the function would still return a positive value
instead of zero in the success case. But to be honest I don't see
why an issue this simple to solve requires any discussion of how
to deal with it.

Jan




end of thread, other threads:[~2016-06-13  9:25 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-08  8:58 [PATCH v7 00/11] Check VT-d Device-TLB flush error Xu, Quan
2016-06-08  8:58 ` [PATCH v7 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
2016-06-08  8:58 ` [PATCH v7 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
2016-06-08 14:40   ` Jan Beulich
     [not found]   ` <57584D8602000078000F32D0@prv-mh.provo.novell.com>
     [not found]     ` <421857ad-4f6b-e99c-1e81-034b19effdfb@amd.com>
2016-06-10  6:56       ` Jan Beulich
2016-06-12  6:41   ` Tian, Kevin
2016-06-08  8:58 ` [PATCH v7 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
2016-06-08 14:42   ` Jan Beulich
2016-06-08 14:54   ` Jan Beulich
2016-06-09 19:30     ` Suravee Suthikulanit
2016-06-09 10:28   ` Julien Grall
2016-06-08  8:58 ` [PATCH v7 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
2016-06-08 14:43   ` Jan Beulich
2016-06-09 18:45     ` Suravee Suthikulanit
2016-06-09 10:30   ` Julien Grall
2016-06-08  8:58 ` [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
2016-06-09 11:12   ` Julien Grall
2016-06-09 11:45     ` Jan Beulich
2016-06-09 12:03       ` Julien Grall
2016-06-09 12:08         ` Julien Grall
2016-06-09 12:15           ` Jan Beulich
2016-06-09 12:24             ` Julien Grall
2016-06-09 12:32               ` Jan Beulich
2016-06-09 12:39                 ` Julien Grall
2016-06-12  6:55                   ` Tian, Kevin
2016-06-12  7:02                     ` Xu, Quan
2016-06-12 15:35                     ` Julien Grall
2016-06-09 12:14         ` Jan Beulich
2016-06-12  6:46   ` Tian, Kevin
2016-06-08  8:58 ` [PATCH v7 06/11] propagate IOMMU Device-TLB flush error up to EPT update " Xu, Quan
2016-06-08  8:59 ` [PATCH v7 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
2016-06-08 14:51   ` Jan Beulich
2016-06-12  7:42     ` Xu, Quan
2016-06-13  9:25       ` Jan Beulich
2016-06-09 18:58   ` Suravee Suthikulanit
2016-06-12  6:58   ` Tian, Kevin
2016-06-08  8:59 ` [PATCH v7 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones) Xu, Quan
2016-06-08  8:59 ` [PATCH v7 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
2016-06-12  7:32   ` Tian, Kevin
2016-06-12  9:27     ` Xu, Quan
2016-06-13  8:50       ` Xu, Quan
2016-06-08  8:59 ` [PATCH v7 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
2016-06-12  7:34   ` Tian, Kevin
2016-06-08  8:59 ` [PATCH v7 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
2016-06-12  7:35   ` Tian, Kevin
