xen-devel.lists.xenproject.org archive mirror
* [Patch v6 00/11] Check VT-d Device-TLB flush error
@ 2016-05-31 13:57 Xu, Quan
  2016-05-31 13:57 ` [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
                   ` (10 more replies)
  0 siblings, 11 replies; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, Quan Xu

From: Quan Xu <quan.xu@intel.com>

This patch set is a prerequisite for the patch 'VT-d Device-TLB flush issue'.

At present, when an IOMMU Device-TLB flush times out, Xen calls panic(). However, the existing
panic() is going to be eliminated, so we must propagate the IOMMU Device-TLB flush error up the call trees.

This patch set is also based on the discussion of the 'abstract model of IOMMU unmapping/mapping failures'.
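The intended direction (return an error instead of panicking, and let each caller hand it upward) can be sketched in plain C; all names below are illustrative stand-ins, not Xen's actual functions:

```c
#include <errno.h>

/* Hypothetical leaf operation standing in for a VT-d Device-TLB flush:
 * returns 0 on success, -ETIMEDOUT when the flush times out. */
static int device_tlb_flush(int timed_out)
{
    return timed_out ? -ETIMEDOUT : 0;
}

/* Mid-level caller: instead of calling panic() on a timed-out flush,
 * check the return value and pass the error to the next level up. */
static int iommu_flush(int timed_out)
{
    int rc = device_tlb_flush(timed_out);

    if ( rc )
        return rc;    /* propagate; the top-level caller decides what to do */

    return 0;
}

/* Top-level caller finally sees the error and can react, e.g. by
 * crashing the offending domain rather than the whole host. */
static int top_level_op(int timed_out)
{
    return iommu_flush(timed_out);
}
```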

Quan Xu (11):
  IOMMU: handle IOMMU mapping and unmapping failures
  IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping
    (top level ones)
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top
    level ones)
  IOMMU/MMU: propagate IOMMU Device-TLB flush error up to
    iommu_iotlb_flush{,_all} (top level ones)
  propagate IOMMU Device-TLB flush error up to EPT update (top level
    ones)
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
    (top level ones)
  IOMMU: propagate IOMMU Device-TLB flush error (leaf ones).
  vt-d: fix the IOMMU flush issue
  vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom
    functions
  vt-d: add __must_check annotation to IOMMU flush pointers and handlers

 xen/arch/arm/p2m.c                            |   4 +-
 xen/arch/x86/acpi/power.c                     |  76 +++++--
 xen/arch/x86/mm.c                             |  13 +-
 xen/arch/x86/mm/p2m-ept.c                     |  35 +++-
 xen/arch/x86/mm/p2m-pt.c                      |  24 ++-
 xen/arch/x86/mm/p2m.c                         |  16 +-
 xen/common/memory.c                           |  12 +-
 xen/drivers/passthrough/amd/iommu_init.c      |   9 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  18 +-
 xen/drivers/passthrough/arm/smmu.c            |  19 +-
 xen/drivers/passthrough/iommu.c               |  63 +++++-
 xen/drivers/passthrough/vtd/extern.h          |   3 +-
 xen/drivers/passthrough/vtd/iommu.c           | 285 ++++++++++++++++++--------
 xen/drivers/passthrough/vtd/iommu.h           |  11 +-
 xen/drivers/passthrough/vtd/qinval.c          |  14 +-
 xen/drivers/passthrough/vtd/quirks.c          |  27 ++-
 xen/drivers/passthrough/x86/iommu.c           |   5 +-
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
 xen/include/asm-x86/iommu.h                   |   3 +-
 xen/include/xen/iommu.h                       |  20 +-
 20 files changed, 479 insertions(+), 186 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

* [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-06-01  9:52   ` Jan Beulich
  2016-05-31 13:57 ` [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Treat IOMMU mapping and unmapping failures as fatal to the DomU.
If an IOMMU mapping or unmapping operation fails, crash the DomU and
propagate the error up the call trees.

No spamming of the log can occur. For DomU, we avoid logging any
message for already dying domains. For Dom0, that'll still be more
verbose than we'd really like, but it at least wouldn't outright
flood the console.
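The failure handling follows a simple wrapper pattern. A standalone sketch, with the domain structure and names simplified for illustration (the real code logs via printk() guarded by printk_ratelimit(), and domain_crash() does far more than set a flag):

```c
#include <errno.h>

struct dom { int id; int shutting_down; int hwdom; int crashed; };

/* Stand-in for hd->platform_ops->map_page(); forced to fail on demand. */
static int platform_map_page(struct dom *d, int fail)
{
    (void)d;
    return fail ? -EIO : 0;
}

/* Wrapper mirroring the patch: check the hook's result, crash a
 * non-hardware domain on failure, and propagate rc to the caller. */
static int iommu_map_page_sketch(struct dom *d, int fail)
{
    int rc = platform_map_page(d, fail);

    if ( rc )
    {
        /* real code logs here, skipping already-dying domains */
        if ( !d->hwdom )
            d->crashed = 1;   /* stand-in for domain_crash(d) */
    }

    return rc;
}
```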

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>

v6:
  1. Enhance commit message, changing "No spamming can occur"
     to "No spamming of the log can occur".
  2. Enhance log messages.
---
 xen/drivers/passthrough/iommu.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9d104d2..673e126 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -240,21 +240,47 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
+                   d->domain_id, gfn, mfn, rc);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
 }
 
 int iommu_unmap_page(struct domain *d, unsigned long gfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_page(d, gfn);
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU unmapping gfn %#lx failed: %d\n",
+                   d->domain_id, gfn, rc);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
 }
 
 static void iommu_free_pagetables(unsigned long unused)
-- 
1.9.1


* [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
  2016-05-31 13:57 ` [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-06-01 10:05   ` Jan Beulich
  2016-05-31 13:57 ` [Patch v6 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Jan Beulich, George Dunlap,
	Andrew Cooper, dario.faggioli, Jun Nakajima, Quan Xu

From: Quan Xu <quan.xu@intel.com>

When IOMMU mapping fails, we issue a best-effort rollback: stop the
IOMMU mapping, unmap the previously established IOMMU mappings, and then
report the error up the call trees. When rollback is not feasible for the
hardware domain (in the early initialization phase, or as a trade-off
against complexity), we proceed on a best-effort basis, only emitting an
error message.

IOMMU unmapping should continue despite an error, in an attempt to do
best-effort cleanup.
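The rollback added to the mapping loops can be sketched as a standalone C fragment; map_one()/unmap_one() and the bookkeeping array are illustrative stand-ins for the real per-page IOMMU operations:

```c
#include <errno.h>

#define NR_PAGES 8

static int mapped[NR_PAGES];
static int fail_at = -1;   /* index at which map_one() fails; -1 = never */

static int map_one(int i)
{
    if ( i == fail_at )
        return -EIO;
    mapped[i] = 1;
    return 0;
}

static void unmap_one(int i)
{
    mapped[i] = 0;
}

/* Map a range; on failure, unmap what was already mapped and return
 * the error (mirrors the loops added to ept_set_entry() and
 * p2m_pt_set_entry() in this patch). */
static int map_range(int n)
{
    int i, rc = 0;

    for ( i = 0; i < n; i++ )
    {
        rc = map_one(i);
        if ( rc )
        {
            while ( i-- )
                unmap_one(i);   /* best-effort rollback */

            break;
        }
    }

    return rc;
}
```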

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

v6:
  1. Limit __must_check annotation to IOMMU functions for this patch set.
  2. Replace 'ret' with a more specific name 'iommu_ret' in __get_page_type().
  3. Return the first error instead of the last one.
---
 xen/arch/x86/mm.c               | 13 ++++++++-----
 xen/arch/x86/mm/p2m-ept.c       | 33 ++++++++++++++++++++++++++-------
 xen/arch/x86/mm/p2m-pt.c        | 24 +++++++++++++++++++++---
 xen/arch/x86/mm/p2m.c           | 16 ++++++++++++++--
 xen/drivers/passthrough/iommu.c | 14 +++++++++++++-
 5 files changed, 82 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8d10a3e..ae7c8ab 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2467,7 +2467,7 @@ static int __get_page_type(struct page_info *page, unsigned long type,
                            int preemptible)
 {
     unsigned long nx, x, y = page->u.inuse.type_info;
-    int rc = 0;
+    int rc = 0, iommu_ret = 0;
 
     ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
 
@@ -2578,11 +2578,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                iommu_ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
             else if ( type == PGT_writable_page )
-                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                               page_to_mfn(page),
-                               IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
+                                           page_to_mfn(page),
+                                           IOMMUF_readable|IOMMUF_writable);
         }
     }
 
@@ -2599,6 +2599,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
     if ( (x & PGT_partial) && !(nx & PGT_partial) )
         put_page(page);
 
+    if ( !rc )
+        rc = iommu_ret;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..ebb4343 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -667,6 +667,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     unsigned long gfn_remainder = gfn;
     unsigned int i, target = order / EPT_TABLE_ORDER;
     int ret, rc = 0;
+    bool_t entry_written = 0;
     bool_t direct_mmio = (p2mt == p2m_mmio_direct);
     uint8_t ipat = 0;
     bool_t need_modify_vtd_table = 1;
@@ -812,10 +813,15 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     rc = atomic_write_ept_entry(ept_entry, new_entry, target);
     if ( unlikely(rc) )
         old_entry.epte = 0;
-    else if ( p2mt != p2m_invalid &&
-              (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
-        /* Track the highest gfn for which we have ever had a valid mapping */
-        p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    else
+    {
+        entry_written = 1;
+
+        if ( p2mt != p2m_invalid &&
+             (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
+            /* Track the highest gfn for which we have ever had a valid mapping */
+            p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    }
 
 out:
     if ( needs_sync )
@@ -831,10 +837,23 @@ out:
         {
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                {
+                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            iommu_unmap_page(p2m->domain, gfn + i);
+
+                        break;
+                    }
+                }
             else
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_unmap_page(d, gfn + i);
+                {
+                    ret = iommu_unmap_page(d, gfn + i);
+                    if ( !rc )
+                        rc = ret;
+                }
         }
     }
 
@@ -847,7 +866,7 @@ out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    if ( rc == 0 && p2m_is_hostp2m(p2m) )
+    if ( entry_written && p2m_is_hostp2m(p2m) )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
     return rc;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..262f500 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -673,6 +673,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     if ( iommu_enabled && need_iommu(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
     {
+        int ret;
+
         if ( iommu_use_hap_pt(p2m->domain) )
         {
             if ( iommu_old_flags )
@@ -680,11 +682,27 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
         else if ( iommu_pte_flags )
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
+            {
+                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                     iommu_pte_flags);
+                if ( unlikely(ret) )
+                {
+                    while ( i-- )
+                        iommu_unmap_page(p2m->domain, gfn + i);
+
+                    if ( !rc )
+                        rc = ret;
+
+                    break;
+                }
+            }
         else
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+            {
+                ret = iommu_unmap_page(p2m->domain, gfn + i);
+                if ( !rc )
+                    rc = ret;
+            }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9b19769..dd2c74c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -641,10 +641,21 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
+        int rc = 0;
+
         if ( need_iommu(p2m->domain) )
+        {
             for ( i = 0; i < (1 << page_order); i++ )
-                iommu_unmap_page(p2m->domain, mfn + i);
-        return 0;
+            {
+                int ret;
+
+                ret = iommu_unmap_page(p2m->domain, mfn + i);
+                if ( !rc )
+                    rc = ret;
+            }
+        }
+
+        return rc;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
@@ -2535,6 +2546,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
         if ( mfn_valid(mfn) )
             p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
         rc = 0;
+
         goto out;
     }
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 673e126..25501cd 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -171,20 +171,32 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     {
         struct page_info *page;
         unsigned int i = 0;
+        int rc = 0;
+
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
+            int ret;
 
             if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
                  ((page->u.inuse.type_info & PGT_type_mask)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
-            hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            if ( !rc )
+                rc = ret;
+
             if ( !(i++ & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            printk(XENLOG_WARNING
+                   "d%d: IOMMU mapping failed: %d for hardware domain\n",
+                   d->domain_id, rc);
     }
 
     return hd->platform_ops->hwdom_init(d);
-- 
1.9.1


* [Patch v6 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones)
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
  2016-05-31 13:57 ` [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
  2016-05-31 13:57 ` [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-05-31 13:57 ` [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Jan Beulich,
	Andrew Cooper, dario.faggioli, Julien Grall, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
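For reference, __must_check expands to the compiler's warn_unused_result attribute, so a caller that discards the return value of an annotated function gets a compile-time warning; that is what annotating unmap_page() buys. A minimal sketch (the stub function is illustrative):

```c
/* How Xen's __must_check is defined for GCC/Clang. */
#define __must_check __attribute__((warn_unused_result))

/* Annotated stub standing in for an unmap hook. */
static int __must_check unmap_page_stub(int fail)
{
    return fail ? -1 : 0;
}

/* A well-behaved caller consumes the result; writing
 * "unmap_page_stub(0);" alone would warn under -Wunused-result. */
static int caller(int fail)
{
    int rc = unmap_page_stub(fail);

    return rc;
}
```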
---
 xen/drivers/passthrough/arm/smmu.c            |  2 +-
 xen/drivers/passthrough/vtd/iommu.c           | 15 ++++++++-------
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
 xen/include/xen/iommu.h                       |  2 +-
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 54a03a6..1ce4ddf 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2774,7 +2774,7 @@ static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	return guest_physmap_add_entry(d, gfn, mfn, 0, t);
 }
 
-static int arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index db83949..4844193 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -609,7 +609,7 @@ static void intel_iommu_iotlb_flush_all(struct domain *d)
 }
 
 /* clear one page's page table */
-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
 {
     struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
@@ -621,7 +621,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        return;
+        return 0;
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -631,7 +631,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return;
+        return 0;
     }
 
     dma_clear_pte(*pte);
@@ -642,6 +642,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
         __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
 
     unmap_vtd_domain_page(page);
+
+    return 0;
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1739,15 +1741,14 @@ static int intel_iommu_map_page(
     return 0;
 }
 
-static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check intel_iommu_unmap_page(struct domain *d,
+                                               unsigned long gfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
-
-    return 0;
+    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 9c51172..57b6cc1 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -53,7 +53,7 @@ int amd_iommu_update_ivrs_mapping_acpi(void);
 /* mapping functions */
 int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                        unsigned int flags);
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        u64 phys_addr, unsigned long size,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 19ba976..73a7f1e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -168,7 +168,7 @@ struct iommu_ops {
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
-    int (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
-- 
1.9.1


* [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (2 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-06-01 10:24   ` Jan Beulich
  2016-05-31 13:57 ` [Patch v6 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{,_all} " Xu, Quan
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich, Suravee Suthikulpanit

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>

CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>

v6:
  1. Add __must_check annotation to amd_iommu_map_page().
  2. Return the first error instead of the last one.
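The "return the first error instead of the last one" change reduces to a small idiom used throughout the series; a standalone sketch with an illustrative helper:

```c
/* Continue best-effort over all entries, but report the earliest
 * failure: the "if ( !rc ) rc = ret;" idiom from these patches. */
static int process_all(const int *results, int n)
{
    int rc = 0;

    for ( int i = 0; i < n; i++ )
    {
        int ret = results[i];   /* stands in for a per-page map/unmap call */

        if ( !rc )
            rc = ret;           /* remember only the first failure */
    }

    return rc;
}
```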
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c   | 16 ++++++++++++++--
 xen/drivers/passthrough/arm/smmu.c            |  4 ++--
 xen/drivers/passthrough/vtd/iommu.c           |  7 ++++---
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  4 ++--
 xen/include/xen/iommu.h                       |  4 ++--
 5 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 70b7475..17c1f6d 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -285,6 +285,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 
     if ( !iommu_passthrough && !need_iommu(d) )
     {
+        int rc = 0;
+
         /* Set up 1:1 page table for dom0 */
         for ( i = 0; i < max_pdx; i++ )
         {
@@ -295,12 +297,22 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              * a pfn_valid() check would seem desirable here.
              */
             if ( mfn_valid(pfn) )
-                amd_iommu_map_page(d, pfn, pfn, 
-                                   IOMMUF_readable|IOMMUF_writable);
+            {
+                int ret;
+
+                ret = amd_iommu_map_page(d, pfn, pfn,
+                                         IOMMUF_readable|IOMMUF_writable);
+                if ( !rc )
+                    rc = ret;
+            }
 
             if ( !(i & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed: %d\n",
+                            d->domain_id, rc);
     }
 
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 1ce4ddf..ee5c89d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2745,8 +2745,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
-			     unsigned long mfn, unsigned int flags)
+static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
+                                          unsigned long mfn, unsigned int flags)
 {
 	p2m_type_t t;
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 4844193..e900019 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1687,9 +1687,10 @@ static void iommu_domain_teardown(struct domain *d)
     spin_unlock(&hd->arch.mapping_lock);
 }
 
-static int intel_iommu_map_page(
-    struct domain *d, unsigned long gfn, unsigned long mfn,
-    unsigned int flags)
+static int __must_check intel_iommu_map_page(struct domain *d,
+                                             unsigned long gfn,
+                                             unsigned long mfn,
+                                             unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 57b6cc1..ac9f036 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -51,8 +51,8 @@ int amd_iommu_init(void);
 int amd_iommu_update_ivrs_mapping_acpi(void);
 
 /* mapping functions */
-int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                       unsigned int flags);
+int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
+                                    unsigned long mfn, unsigned int flags);
 int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 73a7f1e..14041a1 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -166,8 +166,8 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
-                    unsigned int flags);
+    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
+                                 unsigned long mfn, unsigned int flags);
     int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
-- 
1.9.1


* [Patch v6 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{,_all} (top level ones)
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (3 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-05-31 13:57 ` [Patch v6 06/11] propagate IOMMU Device-TLB flush error up to EPT update " Xu, Quan
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>

v6: Remove __must_check annotation from xenmem_add_to_physmap().
---
 xen/arch/arm/p2m.c                  |  4 +++-
 xen/common/memory.c                 | 12 ++++++++++--
 xen/drivers/passthrough/iommu.c     | 13 +++++++++----
 xen/drivers/passthrough/x86/iommu.c |  5 +++--
 xen/include/xen/iommu.h             |  5 +++--
 5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 838d004..68679a3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        if ( !rc )
+            rc = ret;
     }
 
     while ( (pg = page_list_remove_head(&free_pages)) )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ccc6436..46b1d9f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -683,9 +683,17 @@ static int xenmem_add_to_physmap(struct domain *d,
 #ifdef CONFIG_HAS_PASSTHROUGH
     if ( need_iommu(d) )
     {
+        int ret;
+
         this_cpu(iommu_dont_flush_iotlb) = 0;
-        iommu_iotlb_flush(d, xatp->idx - done, done);
-        iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
+
+        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 25501cd..e9b2855 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -312,24 +312,29 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                      unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush(d, gfn, page_count);
+
+    return 0;
 }
 
-void iommu_iotlb_flush_all(struct domain *d)
+int iommu_iotlb_flush_all(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush_all(d);
+
+    return 0;
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b64b98f..a18a608 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -104,8 +104,9 @@ int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
+        rc = iommu_iotlb_flush_all(d);
+
+    if ( rc && rc != -ERESTART )
         iommu_teardown(d);
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 14041a1..22a2002 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -200,8 +200,9 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
-void iommu_iotlb_flush_all(struct domain *d);
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                   unsigned int page_count);
+int __must_check iommu_iotlb_flush_all(struct domain *d);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [Patch v6 06/11] propagate IOMMU Device-TLB flush error up to EPT update (top level ones)
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (4 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-05-31 13:57 ` [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Feng Wu, Jun Nakajima, George Dunlap, Andrew Cooper,
	dario.faggioli, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Propagate the IOMMU Device-TLB flush error up to ept_set_entry()
when VT-d shares the EPT page table.
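The change above is small but illustrates the whole series: a leaf flush result that used to be discarded is now assigned to rc and returned. A minimal standalone sketch of that pattern (hypothetical function names, not the actual Xen code; -EIO stands in for a Device-TLB flush timeout):

```c
#include <assert.h>
#include <errno.h>

/* Simulated leaf flush: 0 on success, -EIO on a (simulated) timeout. */
static int leaf_flush(int simulate_timeout)
{
    return simulate_timeout ? -EIO : 0;
}

/*
 * Before this series the caller effectively did
 *     leaf_flush(...);          (result dropped)
 * Afterwards the error reaches the top-level caller:
 */
int set_entry(int simulate_timeout)
{
    int rc;

    /* ... page table update would happen here ... */

    rc = leaf_flush(simulate_timeout);  /* propagate instead of ignore */

    return rc;
}
```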

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/mm/p2m-ept.c           | 2 +-
 xen/drivers/passthrough/vtd/iommu.c | 6 ++++--
 xen/include/asm-x86/iommu.h         | 3 ++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index ebb4343..dab3215 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -832,7 +832,7 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
-            iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
+            rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
         else
         {
             if ( iommu_flags )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e900019..5366267 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1752,8 +1752,8 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
-                     int order, int present)
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                    int order, int present)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
@@ -1778,6 +1778,8 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
                                    order, !present, flush_dev_iotlb) )
             iommu_flush_write_buffer(iommu);
     }
+
+    return 0;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index e82a2f0..815d77e 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -27,7 +27,8 @@ int iommu_setup_hpet_msi(struct msi_desc *);
 
 /* While VT-d specific, this must get declared in a generic header. */
 int adjust_vtd_irq_affinities(void);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int __must_check iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                                 int order, int present);
 bool_t iommu_supports_eim(void);
 int iommu_enable_x2apic_IR(void);
 void iommu_disable_x2apic_IR(void);
-- 
1.9.1



* [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (5 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 06/11] propagate IOMMU Device-TLB flush error up to EPT update " Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-06-01 10:39   ` Jan Beulich
  2016-05-31 13:57 ` [Patch v6 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones) Xu, Quan
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Keir Fraser, Quan Xu,
	Liu Jinsong, dario.faggioli, Julien Grall, Jan Beulich,
	Andrew Cooper, Feng Wu, Suravee Suthikulpanit

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Liu Jinsong <jinsong.liu@alibaba-inc.com>
CC: Keir Fraser <keir@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>

v6:
  1. Drop comments in enum dev_power_saved.
  2. If console_suspend() fails, return SAVED_NONE.
  3. Put off leaf ones (in iommu_flush_all()) in later patch.
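The device_power_down()/device_power_up() rework below uses a classic staged-teardown pattern: power-down reports how far it got, and power-up unwinds exactly that far by falling through switch cases in reverse order. A simplified, self-contained sketch with two hypothetical devices (note the real patch returns SAVED_NONE on full success; here power-down returns the last stage reached, to keep the sketch short):

```c
#include <assert.h>

enum dev_saved { SAVED_NONE, SAVED_A, SAVED_B, SAVED_ALL };

static int fail_b;              /* test knob: make device B's suspend fail */
static int a_up = 1, b_up = 1;

static int suspend_a(void) { a_up = 0; return 0; }
static int suspend_b(void) { if ( fail_b ) return -1; b_up = 0; return 0; }
static void resume_a(void) { a_up = 1; }
static void resume_b(void) { b_up = 1; }

static enum dev_saved device_power_down(void)
{
    if ( suspend_a() )
        return SAVED_NONE;      /* nothing suspended, nothing to unwind */
    if ( suspend_b() )
        return SAVED_A;         /* only A needs resuming */
    return SAVED_ALL;
}

/* Resume in reverse order; each case falls through to the earlier stages. */
static void device_power_up(enum dev_saved saved)
{
    switch ( saved )
    {
    case SAVED_ALL:
    case SAVED_B:
        resume_b();
        /* fall through */
    case SAVED_A:
        resume_a();
        /* fall through */
    case SAVED_NONE:
        break;
    }
}

/* Suspend, unwind whatever was saved, report whether both devices are up. */
int run_cycle(int b_fails)
{
    fail_b = b_fails;
    device_power_up(device_power_down());
    return a_up && b_up;
}
```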
---
 xen/arch/x86/acpi/power.c                     | 76 ++++++++++++++++++++-------
 xen/drivers/passthrough/amd/iommu_init.c      |  9 +++-
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  2 +-
 xen/drivers/passthrough/iommu.c               |  6 ++-
 xen/drivers/passthrough/vtd/iommu.c           | 35 ++++++++----
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
 xen/include/xen/iommu.h                       |  4 +-
 7 files changed, 99 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 2885e31..e516fbd 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -43,36 +43,70 @@ struct acpi_sleep_info acpi_sinfo;
 
 void do_suspend_lowlevel(void);
 
+enum dev_power_saved
+{
+    SAVED_NONE,
+    SAVED_CONSOLE,
+    SAVED_TIME,
+    SAVED_I8259A,
+    SAVED_IOAPIC,
+    SAVED_IOMMU,
+    SAVED_LAPIC,
+    SAVED_ALL,
+};
+
 static int device_power_down(void)
 {
-    console_suspend();
+    if ( console_suspend() )
+        return SAVED_NONE;
 
-    time_suspend();
+    if ( time_suspend() )
+        return SAVED_CONSOLE;
 
-    i8259A_suspend();
+    if ( i8259A_suspend() )
+        return SAVED_TIME;
 
+    /* ioapic_suspend cannot fail */
     ioapic_suspend();
 
-    iommu_suspend();
+    if ( iommu_suspend() )
+        return SAVED_IOAPIC;
 
-    lapic_suspend();
+    if ( lapic_suspend() )
+        return SAVED_IOMMU;
 
-    return 0;
+    return SAVED_NONE;
 }
 
-static void device_power_up(void)
+static void device_power_up(enum dev_power_saved saved)
 {
-    lapic_resume();
-
-    iommu_resume();
-
-    ioapic_resume();
-
-    i8259A_resume();
-
-    time_resume();
-
-    console_resume();
+    switch ( saved )
+    {
+    case SAVED_ALL:
+    case SAVED_LAPIC:
+        lapic_resume();
+        /* fall through */
+    case SAVED_IOMMU:
+        iommu_resume();
+        /* fall through */
+    case SAVED_IOAPIC:
+        ioapic_resume();
+        /* fall through */
+    case SAVED_I8259A:
+        i8259A_resume();
+        /* fall through */
+    case SAVED_TIME:
+        time_resume();
+        /* fall through */
+    case SAVED_CONSOLE:
+        console_resume();
+        /* fall through */
+    case SAVED_NONE:
+        break;
+    default:
+        BUG();
+        break;
+    }
 }
 
 static void freeze_domains(void)
@@ -169,6 +203,10 @@ static int enter_state(u32 state)
     {
         printk(XENLOG_ERR "Some devices failed to power down.");
         system_state = SYS_STATE_resume;
+
+        if ( error > 0 )
+            device_power_up(error);
+
         goto done;
     }
 
@@ -196,7 +234,7 @@ static int enter_state(u32 state)
     write_cr4(cr4 & ~X86_CR4_MCE);
     write_efer(read_efer());
 
-    device_power_up();
+    device_power_up(SAVED_ALL);
 
     mcheck_init(&boot_cpu_data, 0);
     write_cr4(cr4);
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 4536106..0b68596 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -1339,7 +1339,14 @@ static void invalidate_all_devices(void)
     iterate_ivrs_mappings(_invalidate_all_devices);
 }
 
-void amd_iommu_suspend(void)
+int amd_iommu_suspend(void)
+{
+    amd_iommu_crash_shutdown();
+
+    return 0;
+}
+
+void amd_iommu_crash_shutdown(void)
 {
     struct amd_iommu *iommu;
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 17c1f6d..9488eb0 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -638,6 +638,6 @@ const struct iommu_ops amd_iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
-    .crash_shutdown = amd_iommu_suspend,
+    .crash_shutdown = amd_iommu_crash_shutdown,
     .dump_p2m_table = amd_dump_p2m_table,
 };
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e9b2855..e611e72 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -380,10 +380,12 @@ int __init iommu_setup(void)
     return rc;
 }
 
-void iommu_suspend()
+int iommu_suspend()
 {
     if ( iommu_enabled )
-        iommu_get_ops()->suspend();
+        return iommu_get_ops()->suspend();
+
+    return 0;
 }
 
 void iommu_resume()
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5366267..c2bf1e2 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -541,7 +541,7 @@ static int iommu_flush_iotlb_psi(
     return status;
 }
 
-static void iommu_flush_all(void)
+static int __must_check iommu_flush_all(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
@@ -555,6 +555,8 @@ static void iommu_flush_all(void)
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
     }
+
+    return 0;
 }
 
 static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
@@ -1259,7 +1261,9 @@ static void __hwdom_init intel_iommu_hwdom_init(struct domain *d)
     setup_hwdom_pci_devices(d, setup_hwdom_device);
     setup_hwdom_rmrr(d);
 
-    iommu_flush_all();
+    if ( iommu_flush_all() )
+        printk(XENLOG_WARNING VTDPREFIX
+               " IOMMU flush all failed for hardware domain\n");
 
     for_each_drhd_unit ( drhd )
     {
@@ -2001,7 +2005,7 @@ int adjust_vtd_irq_affinities(void)
 }
 __initcall(adjust_vtd_irq_affinities);
 
-static int init_vtd_hw(void)
+static int __must_check init_vtd_hw(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
@@ -2099,8 +2103,8 @@ static int init_vtd_hw(void)
             return -EIO;
         }
     }
-    iommu_flush_all();
-    return 0;
+
+    return iommu_flush_all();
 }
 
 static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
@@ -2389,16 +2393,25 @@ static int intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 }
 
 static u32 iommu_state[MAX_IOMMUS][MAX_IOMMU_REGS];
-static void vtd_suspend(void)
+
+static int __must_check vtd_suspend(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     u32    i;
+    int rc = 0;
 
     if ( !iommu_enabled )
-        return;
+        return 0;
 
-    iommu_flush_all();
+    rc = iommu_flush_all();
+    if ( unlikely(rc) )
+    {
+        printk(XENLOG_WARNING VTDPREFIX
+               " suspend: IOMMU flush all failed: %d\n", rc);
+
+        return rc;
+    }
 
     for_each_drhd_unit ( drhd )
     {
@@ -2427,6 +2440,8 @@ static void vtd_suspend(void)
         if ( !iommu_intremap && iommu_qinval )
             disable_qinval(iommu);
     }
+
+    return rc;
 }
 
 static void vtd_crash_shutdown(void)
@@ -2437,7 +2452,9 @@ static void vtd_crash_shutdown(void)
     if ( !iommu_enabled )
         return;
 
-    iommu_flush_all();
+    if ( iommu_flush_all() )
+        printk(XENLOG_WARNING VTDPREFIX
+               " crash shutdown: IOMMU flush all failed\n");
 
     for_each_drhd_unit ( drhd )
     {
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index ac9f036..d08dc0b 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -119,7 +119,7 @@ extern unsigned long *shared_intremap_inuse;
 
 /* power management support */
 void amd_iommu_resume(void);
-void amd_iommu_suspend(void);
+int __must_check amd_iommu_suspend(void);
 void amd_iommu_crash_shutdown(void);
 
 /* guest iommu support */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 22a2002..d197cd0 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -175,7 +175,7 @@ struct iommu_ops {
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
 #endif /* CONFIG_X86 */
-    void (*suspend)(void);
+    int __must_check (*suspend)(void);
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
@@ -185,7 +185,7 @@ struct iommu_ops {
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_suspend(void);
+int __must_check iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
-- 
1.9.1



* [Patch v6 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones).
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (6 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-05-31 13:57 ` [Patch v6 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>

v6: Add __must_check annotation to intel_iommu_iotlb_flush_all().
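For readers unfamiliar with the annotation this series leans on: __must_check makes the compiler warn whenever a caller silently drops the return value, which is how the series enforces the new error propagation. A sketch using the GCC/clang attribute spelling (Xen defines its own __must_check macro; names here are hypothetical):

```c
#define must_check __attribute__((warn_unused_result))

/* An annotated flush handler: ignoring its result now draws a warning
 * (an error under -Werror), so callers are forced to propagate it. */
static must_check int iotlb_flush_all_ish(void)
{
    /* ... would issue the flush and report timeouts ... */
    return 0;
}

int caller(void)
{
    int rc = iotlb_flush_all_ish();  /* bare iotlb_flush_all_ish(); warns */

    return rc;
}
```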
---
 xen/drivers/passthrough/arm/smmu.c  | 13 ++++++++-----
 xen/drivers/passthrough/iommu.c     |  8 ++------
 xen/drivers/passthrough/vtd/iommu.c | 32 ++++++++++++++++++++------------
 xen/include/xen/iommu.h             |  5 +++--
 4 files changed, 33 insertions(+), 25 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index ee5c89d..1d21568 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2540,7 +2540,7 @@ static int force_stage = 2;
  */
 static u32 platform_features = ARM_SMMU_FEAT_COHERENT_WALK;
 
-static void arm_smmu_iotlb_flush_all(struct domain *d)
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 {
 	struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
 	struct iommu_domain *cfg;
@@ -2557,13 +2557,16 @@ static void arm_smmu_iotlb_flush_all(struct domain *d)
 		arm_smmu_tlb_inv_context(cfg->priv);
 	}
 	spin_unlock(&smmu_domain->lock);
+
+	return 0;
 }
 
-static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
-                                 unsigned int page_count)
+static int __must_check arm_smmu_iotlb_flush(struct domain *d,
+                                             unsigned long gfn,
+                                             unsigned int page_count)
 {
-    /* ARM SMMU v1 doesn't have flush by VMA and VMID */
-    arm_smmu_iotlb_flush_all(d);
+	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
+	return arm_smmu_iotlb_flush_all(d);
 }
 
 static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e611e72..098b601 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -320,9 +320,7 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    hd->platform_ops->iotlb_flush(d, gfn, page_count);
-
-    return 0;
+    return hd->platform_ops->iotlb_flush(d, gfn, page_count);
 }
 
 int iommu_iotlb_flush_all(struct domain *d)
@@ -332,9 +330,7 @@ int iommu_iotlb_flush_all(struct domain *d)
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
         return 0;
 
-    hd->platform_ops->iotlb_flush_all(d);
-
-    return 0;
+    return hd->platform_ops->iotlb_flush_all(d);
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index c2bf1e2..0788a59 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -559,8 +559,10 @@ static int __must_check iommu_flush_all(void)
     return 0;
 }
 
-static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
-        int dma_old_pte_present, unsigned int page_count)
+static int __must_check iommu_flush_iotlb(struct domain *d,
+                                          unsigned long gfn,
+                                          bool_t dma_old_pte_present,
+                                          unsigned int page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -598,16 +600,20 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
                 iommu_flush_write_buffer(iommu);
         }
     }
+
+    return 0;
 }
 
-static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+static int __must_check iommu_flush_iotlb_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int page_count)
 {
-    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+    return iommu_flush_iotlb(d, gfn, 1, page_count);
 }
 
-static void intel_iommu_iotlb_flush_all(struct domain *d)
+static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    __intel_iommu_iotlb_flush(d, INVALID_GFN, 0, 0);
+    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
 }
 
 /* clear one page's page table */
@@ -616,6 +622,7 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
+    int rc = 0;
 
     spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
@@ -641,11 +648,11 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = iommu_flush_iotlb_pages(domain, addr >> PAGE_SHIFT_4K, 1);
 
     unmap_vtd_domain_page(page);
 
-    return 0;
+    return rc;
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1699,6 +1706,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
     u64 pg_maddr;
+    int rc = 0;
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -1741,9 +1749,9 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
 
-    return 0;
+    return rc;
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
@@ -2574,8 +2582,8 @@ const struct iommu_ops intel_iommu_ops = {
     .resume = vtd_resume,
     .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
-    .iotlb_flush = intel_iommu_iotlb_flush,
-    .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .iotlb_flush = iommu_flush_iotlb_pages,
+    .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_p2m_table = vtd_dump_p2m_table,
 };
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index d197cd0..e917031 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -179,8 +179,9 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
-    void (*iotlb_flush_all)(struct domain *d);
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
+                                    unsigned int page_count);
+    int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
     void (*dump_p2m_table)(struct domain *d);
 };
-- 
1.9.1



* [Patch v6 09/11] vt-d: fix the IOMMU flush issue
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (7 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones) Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-06-01 15:36   ` Jan Beulich
  2016-05-31 13:57 ` [Patch v6 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
  2016-05-31 13:57 ` [Patch v6 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
  10 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Quan Xu, Andrew Cooper, dario.faggioli,
	Jan Beulich, Feng Wu

From: Quan Xu <quan.xu@intel.com>

The value propagated from the IOMMU flush interfaces may be positive, which
indicates that the caller needs to flush the write buffer; it is not a
failure.

When the propagated value is positive, this patch fixes the flush issue
as follows:
  - call iommu_flush_write_buffer() to flush the cache.
  - return zero.
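That tri-state convention (positive: flush the write buffer and treat as success; zero: success; negative: hard failure to propagate) can be sketched in isolation as follows (hypothetical names, not the VT-d code):

```c
static int write_buffer_flushed;

static void flush_write_buffer_ish(void)
{
    write_buffer_flushed = 1;
}

/* Normalize a raw handler result per the convention this patch adopts. */
int flush_and_normalize(int raw)
{
    int rc = raw;

    if ( rc > 0 )
    {
        flush_write_buffer_ish();   /* positive: flush the cache ... */
        rc = 0;                     /* ... and report success */
    }

    return rc;                      /* zero or negative propagate as-is */
}
```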

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>

v6:
  1. Drop blank lines.
  2. Change '(((u16)bus) << 8) | devfn' to 'PCI_BDF2(bus, devfn)'.
  3. Introduce rc for iommu_flush_context_device() / ret for
     iommu_flush_iotlb_dsi(), adding an ASSERT() after each call.
  4. Enhance comments.
  5. Fix iommu_flush_context_*().
---
 xen/drivers/passthrough/vtd/iommu.c | 170 +++++++++++++++++++++++++++---------
 1 file changed, 128 insertions(+), 42 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 0788a59..720b867 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -388,17 +388,18 @@ static int flush_context_reg(
     return 0;
 }
 
-static int iommu_flush_context_global(
-    struct iommu *iommu, int flush_non_present_entry)
+static int __must_check iommu_flush_context_global(struct iommu *iommu,
+                                                   int flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
                                  flush_non_present_entry);
 }
 
-static int iommu_flush_context_device(
-    struct iommu *iommu, u16 did, u16 source_id,
-    u8 function_mask, int flush_non_present_entry)
+static int __must_check iommu_flush_context_device(struct iommu *iommu,
+                                                   u16 did, u16 source_id,
+                                                   u8 function_mask,
+                                                   int flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, did, source_id, function_mask,
@@ -473,8 +474,9 @@ static int flush_iotlb_reg(void *_iommu, u16 did,
     return 0;
 }
 
-static int iommu_flush_iotlb_global(struct iommu *iommu,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_global(struct iommu *iommu,
+                                                 int flush_non_present_entry,
+                                                 int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -491,8 +493,9 @@ static int iommu_flush_iotlb_global(struct iommu *iommu,
     return status;
 }
 
-static int iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -509,9 +512,10 @@ static int iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
     return status;
 }
 
-static int iommu_flush_iotlb_psi(
-    struct iommu *iommu, u16 did, u64 addr, unsigned int order,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
+                                              u64 addr, unsigned int order,
+                                              int flush_non_present_entry,
+                                              int flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -546,17 +550,43 @@ static int __must_check iommu_flush_all(void)
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
+    int rc = 0;
 
     flush_all_cache();
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
-        iommu_flush_context_global(iommu, 0);
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        /*
+         * The current logic for rc returns:
+         *   - positive  invoke iommu_flush_write_buffer to flush cache.
+         *   - zero      on success.
+         *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+         *               best effort basis.
+         *
+         * Moreover, IOMMU flush handlers flush_context_qi and flush_iotlb_qi
+         * (or flush_context_reg and flush_iotlb_reg, deep functions in the
+         * call trees of iommu_flush_context_global and iommu_flush_iotlb_global)
+         * are with the same logic to bubble up positive return value.
+         */
+        rc = iommu_flush_context_global(iommu, 0);
+        if ( rc <= 0 )
+        {
+            int ret;
+
+            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+            ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+            ASSERT(ret <= 0);
+            if ( !rc )
+                rc = ret;
+        }
+        else
+        {
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
 
-    return 0;
+    return rc;
 }
 
 static int __must_check iommu_flush_iotlb(struct domain *d,
@@ -569,6 +599,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
     struct iommu *iommu;
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     /*
      * No need pcideves_lock here because we have flush
@@ -587,21 +618,23 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
             continue;
 
         if ( page_count != 1 || gfn == INVALID_GFN )
-        {
-            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
-                        0, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
-        }
+            rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                                       0, flush_dev_iotlb);
         else
+            rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       PAGE_ORDER_4K,
+                                       !dma_old_pte_present,
+                                       flush_dev_iotlb);
+
+        if ( rc > 0 )
         {
-            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                        (paddr_t)gfn << PAGE_SHIFT_4K, PAGE_ORDER_4K,
-                        !dma_old_pte_present, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
         }
     }
 
-    return 0;
+    return rc;
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
@@ -1291,6 +1324,7 @@ int domain_context_mapping_one(
     u64 maddr, pgd_maddr;
     u16 seg = iommu->intel->drhd->segment;
     int agaw;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1404,13 +1438,35 @@ int domain_context_mapping_one(
     spin_unlock(&iommu->lock);
 
     /* Context entry was previously non-present (with domid 0). */
-    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 1) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, 0, PCI_BDF2(bus, devfn),
+                                    DMA_CCMD_MASK_NOBIT, 1);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      on success.
+     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+     *               best effort basis.
+     *
+     * Moreover, IOMMU flush handlers flush_context_qi or flush_iotlb_qi
+     * (or flush_context_reg and flush_iotlb_reg, deep functions in the
+     * call trees of iommu_flush_context_device and iommu_flush_iotlb_dsi)
+     * are with the same logic to bubble up positive return value.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+        int ret;
+
+        ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+        ASSERT(ret <= 0);
+        if ( !rc )
+            rc = ret;
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     set_bit(iommu->index, &hd->arch.iommu_bitmap);
@@ -1420,7 +1476,7 @@ int domain_context_mapping_one(
     if ( !seg )
         me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_mapping(
@@ -1515,6 +1571,7 @@ int domain_context_unmap_one(
     struct context_entry *context, *context_entries;
     u64 maddr;
     int iommu_domid;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1535,6 +1592,7 @@ int domain_context_unmap_one(
     iommu_flush_cache_entry(context, sizeof(struct context_entry));
 
     iommu_domid= domain_iommu_domid(domain, iommu);
+
     if ( iommu_domid == -1 )
     {
         spin_unlock(&iommu->lock);
@@ -1542,14 +1600,36 @@ int domain_context_unmap_one(
         return -EINVAL;
     }
 
-    if ( iommu_flush_context_device(iommu, iommu_domid,
-                                    (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 0) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, iommu_domid,
+                                    PCI_BDF2(bus, devfn),
+                                    DMA_CCMD_MASK_NOBIT, 0);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      on success.
+     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
+     *               best effort basis.
+     *
+     * Moreover, IOMMU flush handlers flush_context_qi or flush_iotlb_qi
+     * (or flush_context_reg and flush_iotlb_reg, deep functions in the
+     * call trees of iommu_flush_context_device and iommu_flush_iotlb_dsi)
+     * are with the same logic to bubble up positive return value.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        int ret;
+
+        ret = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        ASSERT(ret <= 0);
+        if ( !rc )
+            rc = ret;
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     spin_unlock(&iommu->lock);
@@ -1558,7 +1638,7 @@ int domain_context_unmap_one(
     if ( !iommu->intel->drhd->segment )
         me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_unmap(
@@ -1772,6 +1852,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
     struct domain_iommu *hd = dom_iommu(d);
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
@@ -1785,13 +1866,18 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
         iommu_domid= domain_iommu_domid(d, iommu);
         if ( iommu_domid == -1 )
             continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+
+        rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
                                    (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
+                                   order, !present, flush_dev_iotlb);
+        if ( rc > 0 )
+        {
             iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
 
-    return 0;
+    return rc;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread
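The return-value convention this patch documents (positive means the caller must flush the IOMMU write buffer and then treat the call as a success, zero means success, negative means error with remaining flushes continued on a best-effort basis) can be modelled in isolation. Everything below is a toy sketch with invented names; it is not the actual Xen code:

```c
#include <assert.h>

/*
 * Toy model of the VT-d flush convention from the patch above:
 *   > 0  caller must flush the write buffer, then treat as success
 *   = 0  success
 *   < 0  error; callers continue remaining flushes best-effort
 */
static int wb_flushes;                  /* counts write-buffer flushes */

static void toy_flush_write_buffer(void)
{
    wb_flushes++;
}

/* Consume a raw flush result, performing the write-buffer flush if asked. */
static int toy_consume_flush_rc(int rc)
{
    if ( rc > 0 )
    {
        toy_flush_write_buffer();
        rc = 0;                         /* write-buffer flush is not an error */
    }

    return rc;
}
```

This mirrors the `if ( rc > 0 ) { iommu_flush_write_buffer(iommu); rc = 0; }` blocks above: only negative values ever propagate to callers.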

* [Patch v6 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (8 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-05-31 13:57 ` [Patch v6 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
  10 siblings, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Feng Wu, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>

v6: Don't needlessly split the function header onto two lines.
---
 xen/drivers/passthrough/vtd/extern.h |  3 ++-
 xen/drivers/passthrough/vtd/iommu.c  |  8 ++++----
 xen/drivers/passthrough/vtd/quirks.c | 27 +++++++++++++++++----------
 3 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index cbe0286..6772839 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -91,7 +91,8 @@ int is_igd_vt_enabled_quirk(void);
 void platform_quirks_init(void);
 void vtd_ops_preamble_quirk(struct iommu* iommu);
 void vtd_ops_postamble_quirk(struct iommu* iommu);
-void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map);
+int __must_check me_wifi_quirk(struct domain *domain,
+                               u8 bus, u8 devfn, int map);
 void pci_vtd_quirk(const struct pci_dev *);
 bool_t platform_supports_intremap(void);
 bool_t platform_supports_x2apic(void);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 720b867..2cc2c93 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1473,8 +1473,8 @@ int domain_context_mapping_one(
 
     unmap_vtd_domain_page(context_entries);
 
-    if ( !seg )
-        me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
+    if ( !seg && !rc )
+        rc = me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
     return rc;
 }
@@ -1635,8 +1635,8 @@ int domain_context_unmap_one(
     spin_unlock(&iommu->lock);
     unmap_vtd_domain_page(context_entries);
 
-    if ( !iommu->intel->drhd->segment )
-        me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
+    if ( !iommu->intel->drhd->segment && !rc )
+        rc = me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
     return rc;
 }
diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index 473d1fc..91f96ac 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -331,10 +331,12 @@ void __init platform_quirks_init(void)
  * assigning Intel integrated wifi device to a guest.
  */
 
-static void map_me_phantom_function(struct domain *domain, u32 dev, int map)
+static int __must_check map_me_phantom_function(struct domain *domain,
+                                                u32 dev, int map)
 {
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
+    int rc;
 
     /* find ME VT-d engine base on a real ME device */
     pdev = pci_get_pdev(0, 0, PCI_DEVFN(dev, 0));
@@ -342,23 +344,26 @@ static void map_me_phantom_function(struct domain *domain, u32 dev, int map)
 
     /* map or unmap ME phantom function */
     if ( map )
-        domain_context_mapping_one(domain, drhd->iommu, 0,
-                                   PCI_DEVFN(dev, 7), NULL);
+        rc = domain_context_mapping_one(domain, drhd->iommu, 0,
+                                        PCI_DEVFN(dev, 7), NULL);
     else
-        domain_context_unmap_one(domain, drhd->iommu, 0,
-                                 PCI_DEVFN(dev, 7));
+        rc = domain_context_unmap_one(domain, drhd->iommu, 0,
+                                      PCI_DEVFN(dev, 7));
+
+    return rc;
 }
 
-void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
+int me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
 {
     u32 id;
+    int rc = 0;
 
     id = pci_conf_read32(0, 0, 0, 0, 0);
     if ( IS_CTG(id) )
     {
         /* quit if ME does not exist */
         if ( pci_conf_read32(0, 0, 3, 0, 0) == 0xffffffff )
-            return;
+            return 0;
 
         /* if device is WLAN device, map ME phantom device 0:3.7 */
         id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
@@ -372,7 +377,7 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
             case 0x423b8086:
             case 0x423c8086:
             case 0x423d8086:
-                map_me_phantom_function(domain, 3, map);
+                rc = map_me_phantom_function(domain, 3, map);
                 break;
             default:
                 break;
@@ -382,7 +387,7 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
     {
         /* quit if ME does not exist */
         if ( pci_conf_read32(0, 0, 22, 0, 0) == 0xffffffff )
-            return;
+            return 0;
 
         /* if device is WLAN device, map ME phantom device 0:22.7 */
         id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
@@ -398,12 +403,14 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
             case 0x42388086:        /* Puma Peak */
             case 0x422b8086:
             case 0x422c8086:
-                map_me_phantom_function(domain, 22, map);
+                rc = map_me_phantom_function(domain, 22, map);
                 break;
             default:
                 break;
         }
     }
+
+    return rc;
 }
 
 void pci_vtd_quirk(const struct pci_dev *pdev)
-- 
1.9.1



^ permalink raw reply	[flat|nested] 30+ messages in thread
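Patch 10 keeps following the "first error wins, continue best-effort" rule used throughout the series (the `if ( !rc ) rc = ret;` idiom, and `if ( !seg && !rc )` above). A minimal, self-contained illustration with made-up names:

```c
#include <assert.h>

/* Latch `ret` only if no earlier error is already recorded in `rc`. */
static int toy_first_error(int rc, int ret)
{
    return rc ? rc : ret;
}

/* Run several best-effort steps; report the first failure, if any. */
static int toy_run_steps(const int *step_rcs, int n)
{
    int rc = 0, i;

    for ( i = 0; i < n; i++ )
        rc = toy_first_error(rc, step_rcs[i]);

    return rc;
}
```

All steps still run even after a failure; only the earliest error code is reported to the caller.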

* [Patch v6 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers
  2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
                   ` (9 preceding siblings ...)
  2016-05-31 13:57 ` [Patch v6 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
@ 2016-05-31 13:57 ` Xu, Quan
  2016-06-02 10:06   ` Jan Beulich
  10 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-05-31 13:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Feng Wu, Jan Beulich, Quan Xu

From: Quan Xu <quan.xu@intel.com>

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/drivers/passthrough/vtd/iommu.c  | 40 ++++++++++++++++++------------------
 xen/drivers/passthrough/vtd/iommu.h  | 11 ++++++----
 xen/drivers/passthrough/vtd/qinval.c | 14 ++++++-------
 3 files changed, 34 insertions(+), 31 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 2cc2c93..32202b6 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -335,10 +335,9 @@ static void iommu_flush_write_buffer(struct iommu *iommu)
 }
 
 /* return value determine if we need a write buffer flush */
-static int flush_context_reg(
-    void *_iommu,
-    u16 did, u16 source_id, u8 function_mask, u64 type,
-    int flush_non_present_entry)
+static int __must_check flush_context_reg(void *_iommu, u16 did, u16 source_id,
+                                          u8 function_mask, u64 type,
+                                          bool_t flush_non_present_entry)
 {
     struct iommu *iommu = (struct iommu *) _iommu;
     u64 val = 0;
@@ -389,7 +388,7 @@ static int flush_context_reg(
 }
 
 static int __must_check iommu_flush_context_global(struct iommu *iommu,
-                                                   int flush_non_present_entry)
+                                                   bool_t flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
@@ -399,7 +398,7 @@ static int __must_check iommu_flush_context_global(struct iommu *iommu,
 static int __must_check iommu_flush_context_device(struct iommu *iommu,
                                                    u16 did, u16 source_id,
                                                    u8 function_mask,
-                                                   int flush_non_present_entry)
+                                                   bool_t flush_non_present_entry)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     return flush->context(iommu, did, source_id, function_mask,
@@ -408,9 +407,10 @@ static int __must_check iommu_flush_context_device(struct iommu *iommu,
 }
 
 /* return value determine if we need a write buffer flush */
-static int flush_iotlb_reg(void *_iommu, u16 did,
-                           u64 addr, unsigned int size_order, u64 type,
-                           int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check flush_iotlb_reg(void *_iommu, u16 did, u64 addr,
+                                        unsigned int size_order, u64 type,
+                                        bool_t flush_non_present_entry,
+                                        bool_t flush_dev_iotlb)
 {
     struct iommu *iommu = (struct iommu *) _iommu;
     int tlb_offset = ecap_iotlb_offset(iommu->ecap);
@@ -475,8 +475,8 @@ static int flush_iotlb_reg(void *_iommu, u16 did,
 }
 
 static int __must_check iommu_flush_iotlb_global(struct iommu *iommu,
-                                                 int flush_non_present_entry,
-                                                 int flush_dev_iotlb)
+                                                 bool_t flush_non_present_entry,
+                                                 bool_t flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -494,8 +494,8 @@ static int __must_check iommu_flush_iotlb_global(struct iommu *iommu,
 }
 
 static int __must_check iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
-                                              int flush_non_present_entry,
-                                              int flush_dev_iotlb)
+                                              bool_t flush_non_present_entry,
+                                              bool_t flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -514,8 +514,8 @@ static int __must_check iommu_flush_iotlb_dsi(struct iommu *iommu, u16 did,
 
 static int __must_check iommu_flush_iotlb_psi(struct iommu *iommu, u16 did,
                                               u64 addr, unsigned int order,
-                                              int flush_non_present_entry,
-                                              int flush_dev_iotlb)
+                                              bool_t flush_non_present_entry,
+                                              bool_t flush_dev_iotlb)
 {
     struct iommu_flush *flush = iommu_get_flush(iommu);
     int status;
@@ -549,7 +549,7 @@ static int __must_check iommu_flush_all(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
-    int flush_dev_iotlb;
+    bool_t flush_dev_iotlb;
     int rc = 0;
 
     flush_all_cache();
@@ -597,7 +597,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
-    int flush_dev_iotlb;
+    bool_t flush_dev_iotlb;
     int iommu_domid;
     int rc = 0;
 
@@ -1455,7 +1455,7 @@ int domain_context_mapping_one(
      */
     if ( rc <= 0 )
     {
-        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        bool_t flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         int ret;
 
         ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
@@ -1618,7 +1618,7 @@ int domain_context_unmap_one(
      */
     if ( rc <= 0 )
     {
-        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        bool_t flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         int ret;
 
         ret = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
@@ -1850,7 +1850,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
     struct domain_iommu *hd = dom_iommu(d);
-    int flush_dev_iotlb;
+    bool_t flush_dev_iotlb;
     int iommu_domid;
     int rc = 0;
 
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index c55ee08..e6cf738 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -514,10 +514,13 @@ struct ir_ctrl {
 };
 
 struct iommu_flush {
-    int (*context)(void *iommu, u16 did, u16 source_id,
-                   u8 function_mask, u64 type, int non_present_entry_flush);
-    int (*iotlb)(void *iommu, u16 did, u64 addr, unsigned int size_order,
-                 u64 type, int flush_non_present_entry, int flush_dev_iotlb);
+    int __must_check (*context)(void *iommu, u16 did, u16 source_id,
+                                u8 function_mask, u64 type,
+                                bool_t non_present_entry_flush);
+    int __must_check (*iotlb)(void *iommu, u16 did, u64 addr,
+                              unsigned int size_order, u64 type,
+                              bool_t flush_non_present_entry,
+                              bool_t flush_dev_iotlb);
 };
 
 struct intel_iommu {
diff --git a/xen/drivers/passthrough/vtd/qinval.c b/xen/drivers/passthrough/vtd/qinval.c
index b81b0bd..aa7841a 100644
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -274,9 +274,9 @@ int iommu_flush_iec_index(struct iommu *iommu, u8 im, u16 iidx)
    return __iommu_flush_iec(iommu, IEC_INDEX_INVL, im, iidx);
 }
 
-static int flush_context_qi(
-    void *_iommu, u16 did, u16 sid, u8 fm, u64 type,
-    int flush_non_present_entry)
+static int __must_check flush_context_qi(void *_iommu, u16 did,
+                                         u16 sid, u8 fm, u64 type,
+                                         bool_t flush_non_present_entry)
 {
     int ret = 0;
     struct iommu *iommu = (struct iommu *)_iommu;
@@ -305,10 +305,10 @@ static int flush_context_qi(
     return ret;
 }
 
-static int flush_iotlb_qi(
-    void *_iommu, u16 did,
-    u64 addr, unsigned int size_order, u64 type,
-    int flush_non_present_entry, int flush_dev_iotlb)
+static int __must_check flush_iotlb_qi(void *_iommu, u16 did, u64 addr,
+                                       unsigned int size_order, u64 type,
+                                       bool_t flush_non_present_entry,
+                                       bool_t flush_dev_iotlb)
 {
     u8 dr = 0, dw = 0;
     int ret = 0;
-- 
1.9.1



^ permalink raw reply	[flat|nested] 30+ messages in thread
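Patch 11 relies on `__must_check`, which in Xen expands to GCC's `warn_unused_result` attribute so that silently dropped flush errors become compile-time warnings. A hedged stand-alone sketch (the macro definition here is illustrative only; Xen defines its own in its compiler headers, and the attribute is a GCC/Clang extension):

```c
#include <assert.h>

#if defined(__GNUC__)
#define must_check __attribute__((__warn_unused_result__))
#else
#define must_check
#endif

/* A flush whose result must not be silently dropped by callers. */
static must_check int toy_flush(int fail)
{
    return fail ? -5 : 0;   /* errno-style negative error vs. success */
}
```

With the attribute in place, a bare `toy_flush(1);` statement draws a warning, which is exactly the enforcement the series wants for the flush handlers and function pointers.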

* Re: [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures
  2016-05-31 13:57 ` [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
@ 2016-06-01  9:52   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2016-06-01  9:52 UTC (permalink / raw)
  To: Quan Xu; +Cc: dario.faggioli, Kevin Tian, xen-devel

>>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> From: Quan Xu <quan.xu@intel.com>
> 
> Treat IOMMU mapping and unmapping failures as fatal to the DomU.
> If IOMMU mapping or unmapping fails, crash the DomU and propagate
> the error up the call trees.
> 
> No spamming of the log can occur. For DomU, we avoid logging any
> message for already dying domains. For Dom0, that'll still be more
> verbose than we'd really like, but it at least wouldn't outright
> flood the console.
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-31 13:57 ` [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
@ 2016-06-01 10:05   ` Jan Beulich
  2016-06-02  6:00     ` Xu, Quan
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2016-06-01 10:05 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Jun Nakajima

>>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -673,6 +673,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>      if ( iommu_enabled && need_iommu(p2m->domain) &&
>           (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
>      {
> +        int ret;
> +
>          if ( iommu_use_hap_pt(p2m->domain) )
>          {
>              if ( iommu_old_flags )
> @@ -680,11 +682,27 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>          }
>          else if ( iommu_pte_flags )
>              for ( i = 0; i < (1UL << page_order); i++ )
> -                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> -                               iommu_pte_flags);
> +            {
> +                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> +                                     iommu_pte_flags);
> +                if ( unlikely(ret) )
> +                {
> +                    while ( i-- )
> +                        iommu_unmap_page(p2m->domain, gfn + i);
> +
> +                    if ( !rc )
> +                        rc = ret;
> +
> +                    break;
> +                }
> +            }

So why do you not use the same code structure here as you do
on the EPT side? I.e. do away with using "ret" altogether, moving
it all into ...

>          else
>              for ( i = 0; i < (1UL << page_order); i++ )
> -                iommu_unmap_page(p2m->domain, gfn + i);
> +            {
> +                ret = iommu_unmap_page(p2m->domain, gfn + i);
> +                if ( !rc )
> +                    rc = ret;
> +            }

... this extremely narrow scope?

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -641,10 +641,21 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
>  
>      if ( !paging_mode_translate(p2m->domain) )
>      {
> +        int rc = 0;
> +
>          if ( need_iommu(p2m->domain) )
> +        {
>              for ( i = 0; i < (1 << page_order); i++ )
> -                iommu_unmap_page(p2m->domain, mfn + i);
> -        return 0;
> +            {
> +                int ret;
> +
> +                ret = iommu_unmap_page(p2m->domain, mfn + i);

In cases like this I'd prefer the assignment to be replaced by its
right side becoming the variable's initializer.

> @@ -2535,6 +2546,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>          if ( mfn_valid(mfn) )
>              p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
>          rc = 0;
> +
>          goto out;

Stray addition of a blank line.

> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -171,20 +171,32 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>      {
>          struct page_info *page;
>          unsigned int i = 0;
> +        int rc = 0;
> +
>          page_list_for_each ( page, &d->page_list )
>          {
>              unsigned long mfn = page_to_mfn(page);
>              unsigned long gfn = mfn_to_gmfn(d, mfn);
>              unsigned int mapping = IOMMUF_readable;
> +            int ret;
>  
>              if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>                   ((page->u.inuse.type_info & PGT_type_mask)
>                    == PGT_writable_page) )
>                  mapping |= IOMMUF_writable;
> -            hd->platform_ops->map_page(d, gfn, mfn, mapping);
> +
> +            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
> +            if ( !rc )
> +                rc = ret;
> +
>              if ( !(i++ & 0xfffff) )
>                  process_pending_softirqs();
>          }
> +
> +        if ( rc )
> +            printk(XENLOG_WARNING
> +                   "d%d: IOMMU mapping failed: %d for hardware domain\n",
> +                   d->domain_id, rc);

No need to mention "hardware domain" here, as that's going to be clear
both from the domain ID and the point in time when this message would
appear.

Jan


^ permalink raw reply	[flat|nested] 30+ messages in thread
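The unwind logic under review here (map a range of pages and, on failure, unmap whatever was already mapped before returning the error) can be modelled independently of the p2m code. All names below are invented for the sketch:

```c
#include <assert.h>

#define TOY_PAGES 16

static int toy_mapped[TOY_PAGES];       /* 1 = page currently mapped     */
static int toy_fail_at = -1;            /* index at which mapping "fails" */

static int toy_map_one(int i)
{
    if ( i == toy_fail_at )
        return -12;                     /* e.g. an -ENOMEM-style error */
    toy_mapped[i] = 1;
    return 0;
}

static void toy_unmap_one(int i)
{
    toy_mapped[i] = 0;
}

/* Map pages [0, n); on error, unwind the partial mapping and return it. */
static int toy_map_range(int n)
{
    int rc = 0, i;

    for ( i = 0; i < n; i++ )
    {
        rc = toy_map_one(i);
        if ( rc )
        {
            while ( i-- )               /* same `while ( i-- )` unwind */
                toy_unmap_one(i);
            break;
        }
    }

    return rc;
}
```

The invariant the patch is after: on any non-zero return, no page of the range is left mapped, so a failed mapping never leaves a half-populated IOMMU page table.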

* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-05-31 13:57 ` [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
@ 2016-06-01 10:24   ` Jan Beulich
  2016-06-02  7:25     ` Xu, Quan
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2016-06-01 10:24 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -295,12 +297,22 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
>               * a pfn_valid() check would seem desirable here.
>               */
>              if ( mfn_valid(pfn) )
> -                amd_iommu_map_page(d, pfn, pfn, 
> -                                   IOMMUF_readable|IOMMUF_writable);
> +            {
> +                int ret;
> +
> +                ret = amd_iommu_map_page(d, pfn, pfn,
> +                                         IOMMUF_readable|IOMMUF_writable);

Same here as for the earlier patch regarding assignment vs initializer.

But overall the entire change to this function seems to rather belong
into patch 2. As would a respective change to
vtd_set_hwdom_mapping(), which I'm not sure which patch you've
put that in.

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -166,8 +166,8 @@ struct iommu_ops {
>  #endif /* HAS_PCI */
>  
>      void (*teardown)(struct domain *d);
> -    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
> -                    unsigned int flags);
> +    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
> +                                 unsigned long mfn, unsigned int flags);

With this and with the rule set forth in the context of the discussion
of v5, iommu_map_page() (as well as any other caller of this hook
that do not themselves _consume_ the error [e.g. hwdom init ones])
should become or already be __must_check, which afaict isn't the
case. The same then, btw., applies to patch 3, and hence I have to
withdraw the R-b that you've got there.

Jan



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-05-31 13:57 ` [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
@ 2016-06-01 10:39   ` Jan Beulich
  2016-06-02  2:58     ` Xu, Quan
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2016-06-01 10:39 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

>>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
>  static int device_power_down(void)
>  {
> -    console_suspend();
> +    if ( console_suspend() )
> +        return SAVED_NONE;
>  
> -    time_suspend();
> +    if ( time_suspend() )
> +        return SAVED_CONSOLE;
>  
> -    i8259A_suspend();
> +    if ( i8259A_suspend() )
> +        return SAVED_TIME;
>  
> +    /* ioapic_suspend cannot fail */
>      ioapic_suspend();
>  
> -    iommu_suspend();
> +    if ( iommu_suspend() )
> +        return SAVED_IOAPIC;
>  
> -    lapic_suspend();
> +    if ( lapic_suspend() )
> +        return SAVED_IOMMU;
>  
> -    return 0;
> +    return SAVED_NONE;

SAVED_ALL

> @@ -169,6 +203,10 @@ static int enter_state(u32 state)
>      {
>          printk(XENLOG_ERR "Some devices failed to power down.");
>          system_state = SYS_STATE_resume;
> +
> +        if ( error > 0 )
> +            device_power_up(error);

if ( error != SAVED_NONE )

(Or really you could just call this without any if().)

> @@ -2389,16 +2393,25 @@ static int intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
>  }
>  
>  static u32 iommu_state[MAX_IOMMUS][MAX_IOMMU_REGS];
> -static void vtd_suspend(void)
> +
> +static int __must_check vtd_suspend(void)
>  {
>      struct acpi_drhd_unit *drhd;
>      struct iommu *iommu;
>      u32    i;
> +    int rc = 0;

Pointless initializer.

>      if ( !iommu_enabled )
> -        return;
> +        return 0;
>  
> -    iommu_flush_all();
> +    rc = iommu_flush_all();
> +    if ( unlikely(rc) )
> +    {
> +        printk(XENLOG_WARNING VTDPREFIX
> +               " suspend: IOMMU flush all failed: %d\n", rc);
> +
> +        return rc;
> +    }
>  
>      for_each_drhd_unit ( drhd )
>      {
> @@ -2427,6 +2440,8 @@ static void vtd_suspend(void)
>          if ( !iommu_intremap && iommu_qinval )
>              disable_qinval(iommu);
>      }
> +
> +    return rc;
>  }

Perhaps better "return 0" to make obvious that no error path comes
here.

Jan



^ permalink raw reply	[flat|nested] 30+ messages in thread
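The staged power-down discussed in this thread returns how far suspend got, so the caller can resume exactly the devices that were already suspended and no others. A toy version of that idea with two stages (stage names shortened; nothing here is the real x86 suspend code):

```c
#include <assert.h>

enum toy_saved { TOY_SAVED_NONE, TOY_SAVED_A, TOY_SAVED_ALL };

static int toy_fail_stage = -1;         /* which step "fails"; -1 = none */
static int toy_up[2];                   /* 1 = device currently powered  */

static int toy_suspend(int stage)
{
    if ( stage == toy_fail_stage )
        return -5;
    toy_up[stage] = 0;
    return 0;
}

/* Suspend in order; report how far we got so the caller can unwind. */
static enum toy_saved toy_power_down(void)
{
    if ( toy_suspend(0) )
        return TOY_SAVED_NONE;          /* nothing to resume yet  */
    if ( toy_suspend(1) )
        return TOY_SAVED_A;             /* only stage A suspended */
    return TOY_SAVED_ALL;
}

/* Resume only the stages power-down reported as suspended. */
static void toy_power_up(enum toy_saved saved)
{
    switch ( saved )
    {
    case TOY_SAVED_ALL:
        toy_up[1] = 1;
        /* fall through */
    case TOY_SAVED_A:
        toy_up[0] = 1;
        /* fall through */
    case TOY_SAVED_NONE:
        break;
    }
}
```

This matches Jan's point about the success path: a full success must return the "everything saved" value (`SAVED_ALL` in the patch), not `SAVED_NONE`, or the resume path would wrongly skip every device.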

* Re: [Patch v6 09/11] vt-d: fix the IOMMU flush issue
  2016-05-31 13:57 ` [Patch v6 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
@ 2016-06-01 15:36   ` Jan Beulich
  2016-06-02  2:50     ` Xu, Quan
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2016-06-01 15:36 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, Andrew Cooper, dario.faggioli,
	xen-devel, Feng Wu

>>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> @@ -1404,13 +1438,35 @@ int domain_context_mapping_one(
>      spin_unlock(&iommu->lock);
>  
>      /* Context entry was previously non-present (with domid 0). */
> -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> -                                    DMA_CCMD_MASK_NOBIT, 1) )
> -        iommu_flush_write_buffer(iommu);
> -    else
> +    rc = iommu_flush_context_device(iommu, 0, PCI_BDF2(bus, devfn),
> +                                    DMA_CCMD_MASK_NOBIT, 1);
> +
> +    /*
> +     * The current logic for rc returns:
> +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +     *   - zero      on success.
> +     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> +     *               best effort basis.
> +     *
> +     * Moreover, IOMMU flush handlers flush_context_qi or flush_iotlb_qi
> +     * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> +     * call trees of iommu_flush_context_device and iommu_flush_iotlb_dsi)
> +     * are with the same logic to bubble up positive return value.
> +     */
> +    if ( rc <= 0 )
>      {
>          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> +        int ret;
> +
> +        ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);

Please make this the initializer again (at least one more such case
further down).
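For reference, the requested form makes the flush call the declaration's initializer, keeping `ret` in the narrowest scope. A sketch; `iommu_flush_iotlb_dsi_stub()` and `flush_tail_sketch()` are hypothetical stand-ins for the real functions:

```c
#include <assert.h>

static int iommu_flush_iotlb_dsi_stub(int flush_dev_iotlb)
{
    return flush_dev_iotlb ? 0 : -1;     /* simulated flush result */
}

/* The declare-then-assign pair is collapsed so the flush call is the
 * initializer of "ret", and only the first error is kept in "rc". */
static int flush_tail_sketch(int rc)
{
    if ( rc <= 0 )
    {
        int flush_dev_iotlb = 1;
        int ret = iommu_flush_iotlb_dsi_stub(flush_dev_iotlb); /* initializer */

        if ( !rc )
            rc = ret;                    /* keep the first error */
    }

    return rc;
}
```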

> @@ -1535,6 +1592,7 @@ int domain_context_unmap_one(
>      iommu_flush_cache_entry(context, sizeof(struct context_entry));
>  
>      iommu_domid= domain_iommu_domid(domain, iommu);
> +
>      if ( iommu_domid == -1 )

Once again a stray addition of a blank line, contradicting point 1 of
your v6 list of changes. Please actually _look_ at your patches
before sending them out.

> @@ -1542,14 +1600,36 @@ int domain_context_unmap_one(
>          return -EINVAL;
>      }
>  
> -    if ( iommu_flush_context_device(iommu, iommu_domid,
> -                                    (((u16)bus) << 8) | devfn,
> -                                    DMA_CCMD_MASK_NOBIT, 0) )
> -        iommu_flush_write_buffer(iommu);
> -    else
> +    rc = iommu_flush_context_device(iommu, iommu_domid,
> +                                    PCI_BDF2(bus, devfn),
> +                                    DMA_CCMD_MASK_NOBIT, 0);
> +
> +    /*
> +     * The current logic for rc returns:
> +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +     *   - zero      on success.
> +     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> +     *               best effort basis.
> +     *
> +     * Moreover, IOMMU flush handlers flush_context_qi or flush_iotlb_qi
> +     * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> +     * call trees of iommu_flush_context_device and iommu_flush_iotlb_dsi)
> +     * are with the same logic to bubble up positive return value.
> +     */

This is the 3rd instance of that comment. I'd prefer the latter ones to
simply refer to the first one, but I'll obviously leave it to the maintainers
to decide.

With those cosmetic issues taken care of
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [Patch v6 09/11] vt-d: fix the IOMMU flush issue
  2016-06-01 15:36   ` Jan Beulich
@ 2016-06-02  2:50     ` Xu, Quan
  0 siblings, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-06-02  2:50 UTC (permalink / raw)
  To: Tian, Kevin, Wu, Feng
  Cc: Andrew Cooper, dario.faggioli, Keir Fraser, Jan Beulich, xen-devel

On June 01, 2016 11:37 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> > @@ -1542,14 +1600,36 @@ int domain_context_unmap_one(
> >          return -EINVAL;
> >      }
> >
> > -    if ( iommu_flush_context_device(iommu, iommu_domid,
> > -                                    (((u16)bus) << 8) | devfn,
> > -                                    DMA_CCMD_MASK_NOBIT, 0) )
> > -        iommu_flush_write_buffer(iommu);
> > -    else
> > +    rc = iommu_flush_context_device(iommu, iommu_domid,
> > +                                    PCI_BDF2(bus, devfn),
> > +                                    DMA_CCMD_MASK_NOBIT, 0);
> > +
> > +    /*
> > +     * The current logic for rc returns:
> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +     *   - zero      on success.
> > +     *   - negative  on failure. Continue to flush IOMMU IOTLB on a
> > +     *               best effort basis.
> > +     *
> > +     * Moreover, IOMMU flush handlers flush_context_qi or flush_iotlb_qi
> > +     * (or flush_context_reg and flush_iotlb_reg, deep functions in the
> > +     * call trees of iommu_flush_context_device and iommu_flush_iotlb_dsi)
> > +     * are with the same logic to bubble up positive return value.
> > +     */
> 
> This is the 3rd instance of that comment. I'd prefer the latter ones to simply
> refer to the first one, but I'll obviously leave it to the maintainers to decide.
> 

Kevin / Feng, what's your opinion?

> With those cosmetic issues taken care of
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

-Quan

* Re: [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-01 10:39   ` Jan Beulich
@ 2016-06-02  2:58     ` Xu, Quan
  2016-06-02  9:22       ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-06-02  2:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

On June 01, 2016 6:39 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> >  static int device_power_down(void)
> >  {
> > -    console_suspend();
> > +    if ( console_suspend() )
> > +        return SAVED_NONE;
> >
> > -    time_suspend();
> > +    if ( time_suspend() )
> > +        return SAVED_CONSOLE;
> >
> > -    i8259A_suspend();
> > +    if ( i8259A_suspend() )
> > +        return SAVED_TIME;
> >
> > +    /* ioapic_suspend cannot fail */
> >      ioapic_suspend();
> >
> > -    iommu_suspend();
> > +    if ( iommu_suspend() )
> > +        return SAVED_IOAPIC;
> >
> > -    lapic_suspend();
> > +    if ( lapic_suspend() )
> > +        return SAVED_IOMMU;
> >
> > -    return 0;
> > +    return SAVED_NONE;
> 
> SAVED_ALL

Agreed.
I was confused by the 'if ( error > 0 )' below.
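With that correction applied, the overall shape would be as sketched below; the `SAVED_*` names, values, and stub flags are illustrative stand-ins, not Xen's actual definitions:

```c
#include <assert.h>

/* Hypothetical enumeration mirroring the SAVED_* indicators discussed
 * above; names and ordering are illustrative only. */
enum dev_power_saved {
    SAVED_NONE,
    SAVED_CONSOLE,
    SAVED_TIME,
    SAVED_IOAPIC,
    SAVED_IOMMU,
    SAVED_ALL,
};

/* Stub success flags simulating each suspend step. */
static int console_ok, time_ok, i8259A_ok, iommu_ok, lapic_ok;

/* On failure, return how far suspension got; on full success return
 * SAVED_ALL (per Jan's correction), not SAVED_NONE. */
static int device_power_down_sketch(void)
{
    if ( !console_ok )
        return SAVED_NONE;      /* nothing suspended yet */
    if ( !time_ok )
        return SAVED_CONSOLE;   /* console was suspended before the failure */
    if ( !i8259A_ok )
        return SAVED_TIME;
    /* ioapic_suspend() cannot fail */
    if ( !iommu_ok )
        return SAVED_IOAPIC;
    if ( !lapic_ok )
        return SAVED_IOMMU;

    return SAVED_ALL;           /* full success */
}
```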

> 
> > @@ -169,6 +203,10 @@ static int enter_state(u32 state)
> >      {
> >          printk(XENLOG_ERR "Some devices failed to power down.");
> >          system_state = SYS_STATE_resume;
> > +
> > +        if ( error > 0 )
> > +            device_power_up(error);
> 
> if ( error != SAVED_NONE )
> 
> (Or really you could just call this without any if().)

I prefer to drop this if().

> 
> > @@ -2389,16 +2393,25 @@ static int intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
> >  }
> >
> >  static u32 iommu_state[MAX_IOMMUS][MAX_IOMMU_REGS];
> > -static void vtd_suspend(void)
> > +
> > +static int __must_check vtd_suspend(void)
> >  {
> >      struct acpi_drhd_unit *drhd;
> >      struct iommu *iommu;
> >      u32    i;
> > +    int rc = 0;
> 
> Pointless initializer.
> 

Indeed, a literal "return 0" makes it obvious that no error path reaches the end of this function.

> >      if ( !iommu_enabled )
> > -        return;
> > +        return 0;
> >
> > -    iommu_flush_all();
> > +    rc = iommu_flush_all();
> > +    if ( unlikely(rc) )
> > +    {
> > +        printk(XENLOG_WARNING VTDPREFIX
> > +               " suspend: IOMMU flush all failed: %d\n", rc);
> > +
> > +        return rc;
> > +    }
> >
> >      for_each_drhd_unit ( drhd )
> >      {
> > @@ -2427,6 +2440,8 @@ static void vtd_suspend(void)
> >          if ( !iommu_intremap && iommu_qinval )
> >              disable_qinval(iommu);
> >      }
> > +
> > +    return rc;
> >  }
> 
> Perhaps better "return 0" to make obvious that no error path comes here.
> 

Agreed.


Quan


* Re: [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-06-01 10:05   ` Jan Beulich
@ 2016-06-02  6:00     ` Xu, Quan
  2016-06-02  9:13       ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-06-02  6:00 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Nakajima, Jun

On June 01, 2016 6:05 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -673,6 +673,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> >      if ( iommu_enabled && need_iommu(p2m->domain) &&
> >           (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
> >      {
> > +        int ret;
> > +
> >          if ( iommu_use_hap_pt(p2m->domain) )
> >          {
> >              if ( iommu_old_flags )
> > @@ -680,11 +682,27 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> >          }
> >          else if ( iommu_pte_flags )
> >              for ( i = 0; i < (1UL << page_order); i++ )
> > -                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> > -                               iommu_pte_flags);
> > +            {
> > +                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> > +                                     iommu_pte_flags);
> > +                if ( unlikely(ret) )
> > +                {
> > +                    while ( i-- )
> > +                        iommu_unmap_page(p2m->domain, gfn + i);
> > +
> > +                    if ( !rc )
> > +                        rc = ret;
> > +
> > +                    break;
> > +                }
> > +            }
> 
> So why do you not use the same code structure here as you do on the EPT
> side? 

I tried to modify this code in a minimal way, but if you point out any further issue, I am happy to fix it.
Furthermore, I was not sure whether adding the condition 'rc == 0' here was an arbitrary decision on my part,
even though I was aware that 'rc' is zero when execution reaches this point.

> I.e. do away with using "ret" altogether, moving it all into ...
> 
> >          else
> >              for ( i = 0; i < (1UL << page_order); i++ )
> > -                iommu_unmap_page(p2m->domain, gfn + i);
> > +            {
> > +                ret = iommu_unmap_page(p2m->domain, gfn + i);
> > +                if ( !rc )
> > +                    rc = ret;
> > +            }
> 
> ... this extremely narrow scope?
> 

What about this fix:

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -670,7 +670,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
          && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;

-    if ( iommu_enabled && need_iommu(p2m->domain) &&
+    if ( rc == 0 && iommu_enabled && need_iommu(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
     {
         if ( iommu_use_hap_pt(p2m->domain) )
@@ -678,13 +678,33 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             if ( iommu_old_flags )
                 amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
-        else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
         else
-            for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+        {
+            if ( iommu_pte_flags )
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                        iommu_pte_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            iommu_unmap_page(p2m->domain, gfn + i);
+
+                        break;
+                    }
+                }
+            else
+            {
+                int ret;
+
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    ret = iommu_unmap_page(p2m->domain, gfn + i);
+                    if ( !rc )
+                        rc = ret;
+                }
+            }
+        }
     }

Quan


* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-01 10:24   ` Jan Beulich
@ 2016-06-02  7:25     ` Xu, Quan
  2016-06-02  9:21       ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-06-02  7:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

On June 01, 2016 6:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> > --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> > +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> > @@ -295,12 +297,22 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
> >               * a pfn_valid() check would seem desirable here.
> >               */
> >              if ( mfn_valid(pfn) )
> > -                amd_iommu_map_page(d, pfn, pfn,
> > -                                   IOMMUF_readable|IOMMUF_writable);
> > +            {
> > +                int ret;
> > +
> > +                ret = amd_iommu_map_page(d, pfn, pfn,
> > +                                         IOMMUF_readable|IOMMUF_writable);
> 
> Same here as for the earlier patch regarding assignment vs initializer.
> 

I'll fix it in next v7.

> But overall the entire change to this function seems to rather belong into
> patch 2.

Indeed.

> As would a respective change to vtd_set_hwdom_mapping(), which
> I'm not sure which patch you've put that in.
> 

Sorry, I missed it. Indeed, it needs to be fixed in the same way as above.
I wonder whether I should add a __must_check annotation to iommu_map_page(), as that may be inconsistent with iommu_unmap_page().

These modifications should belong in patch 2.

> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -166,8 +166,8 @@ struct iommu_ops {
> >  #endif /* HAS_PCI */
> >
> >      void (*teardown)(struct domain *d);
> > -    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
> > -                    unsigned int flags);
> > +    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
> > +                                 unsigned long mfn, unsigned int flags);
> 
> With this and with the rule set forth in the context of the discussion of v5,
> iommu_map_page() (as well as any other caller of this hook that do not
> themselves _consume_ the error [e.g. hwdom init ones]) should become or
> already be __must_check, which afaict isn't the case.

But does this rule also apply to these 'void' functions? E.g., in the call trees of the hwdom init ones / domain crash ones, there is no need to bubble up
the error code, so those can be left as void.

> The same then, btw.,
> applies to patch 3, and hence I have to withdraw the R-b that you've got
> there.
> 

I find these callers are in grant_table/mm code, and we limit the __must_check annotation to IOMMU functions for this patch set.
So I think I can keep the R-b as is for patch 3.

Btw, your R-b is a very valuable tag to me, and I really don't want to drop it. :)

Quan




* Re: [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-06-02  6:00     ` Xu, Quan
@ 2016-06-02  9:13       ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2016-06-02  9:13 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Jun Nakajima

>>> On 02.06.16 at 08:00, <quan.xu@intel.com> wrote:
> On June 01, 2016 6:05 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
>> > @@ -680,11 +682,27 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>> >          }
>> >          else if ( iommu_pte_flags )
>> >              for ( i = 0; i < (1UL << page_order); i++ )
>> > -                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
>> > -                               iommu_pte_flags);
>> > +            {
>> > +                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
>> > +                                     iommu_pte_flags);
>> > +                if ( unlikely(ret) )
>> > +                {
>> > +                    while ( i-- )
>> > +                        iommu_unmap_page(p2m->domain, gfn + i);
>> > +
>> > +                    if ( !rc )
>> > +                        rc = ret;
>> > +
>> > +                    break;
>> > +                }
>> > +            }
>> 
>> So why do you not use the same code structure here as you do on the EPT
>> side? 
> 
> I tried to modify this code in a minimal way, but if you point out any
> further issue, I am happy to fix it.
> Furthermore, I was not sure whether adding the condition 'rc == 0' here was
> an arbitrary decision on my part, even though I was aware that 'rc' is zero
> when execution reaches this point.

If you want to document that rc can't be other than zero when getting
here, simply add an ASSERT(). I.e. ...

>> I.e. do away with using "ret" altogether, moving it all into ...
>> 
>> >          else
>> >              for ( i = 0; i < (1UL << page_order); i++ )
>> > -                iommu_unmap_page(p2m->domain, gfn + i);
>> > +            {
>> > +                ret = iommu_unmap_page(p2m->domain, gfn + i);
>> > +                if ( !rc )
>> > +                    rc = ret;
>> > +            }
>> 
>> ... this extremely narrow scope?
>> 
> 
> What about this fix:
> 
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -670,7 +670,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>           && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
> 
> -    if ( iommu_enabled && need_iommu(p2m->domain) &&
> +    if ( rc == 0 && iommu_enabled && need_iommu(p2m->domain) &&

... I'm not in favor of this, because it's still pointless code. The
nature of ASSERT()s is that - if everything works right - they're
pointless by definition (and hence ought to serve as just
documentation).
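In other words, the invariant gets documented rather than re-checked at run time, along these lines; `process_sketch()` is a hypothetical stand-in, and the standard `assert()` stands in for Xen's `ASSERT()`:

```c
#include <assert.h>

/* Illustration of the two styles discussed: a run-time "rc == 0 && ..."
 * guard vs. an ASSERT() that merely documents the invariant. */
static int process_sketch(int rc, int iommu_enabled)
{
    /* Instead of guarding with "rc == 0 && iommu_enabled", document
     * that rc is necessarily zero when execution reaches here. */
    assert(rc == 0);            /* Xen code would use ASSERT(!rc) */

    if ( iommu_enabled )
        return 1;               /* IOMMU update path taken */

    return 0;                   /* nothing to do */
}
```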

> @@ -678,13 +678,33 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>              if ( iommu_old_flags )
>                  amd_iommu_flush_pages(p2m->domain, gfn, page_order);
>          }
> -        else if ( iommu_pte_flags )

I also don't see why you replace this line, at once causing needlessly
deeper indentation.

> -            for ( i = 0; i < (1UL << page_order); i++ )
> -                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> -                               iommu_pte_flags);
>          else
> -            for ( i = 0; i < (1UL << page_order); i++ )
> -                iommu_unmap_page(p2m->domain, gfn + i);
> +        {
> +            if ( iommu_pte_flags )
> +                for ( i = 0; i < (1UL << page_order); i++ )
> +                {
> +                    rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> +                                        iommu_pte_flags);
> +                    if ( unlikely(rc) )
> +                    {
> +                        while ( i-- )
> +                            iommu_unmap_page(p2m->domain, gfn + i);
> +
> +                        break;
> +                    }
> +                }
> +            else
> +            {
> +                int ret;
> +
> +                for ( i = 0; i < (1UL << page_order); i++ )
> +                {
> +                    ret = iommu_unmap_page(p2m->domain, gfn + i);

And I said specially to move "ret" into this scope (the narrowest one
possible), instead of in the one surrounding it.
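That is, the requested shape declares `ret` inside the loop body itself. A sketch; `iommu_unmap_page_stub()` is a hypothetical stand-in simulating one failing unmap:

```c
#include <assert.h>

static int iommu_unmap_page_stub(unsigned long gfn)
{
    return (gfn == 2) ? -22 : 0;          /* simulate one failing unmap */
}

/* "ret" lives in the narrowest possible scope: the loop body itself. */
static int unmap_range_sketch(unsigned long count)
{
    int rc = 0;
    unsigned long i;

    for ( i = 0; i < count; i++ )
    {
        int ret = iommu_unmap_page_stub(i);   /* narrowest scope */

        if ( !rc )
            rc = ret;                         /* keep the first error */
    }

    return rc;
}
```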

Jan

> +                    if ( !rc )
> +                        rc = ret;
> +                }
> +            }
> +        }
>      }
> 
> Quan




* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-02  7:25     ` Xu, Quan
@ 2016-06-02  9:21       ` Jan Beulich
  2016-06-02 12:43         ` Xu, Quan
  2016-06-07  7:51         ` Xu, Quan
  0 siblings, 2 replies; 30+ messages in thread
From: Jan Beulich @ 2016-06-02  9:21 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 02.06.16 at 09:25, <quan.xu@intel.com> wrote:
> On June 01, 2016 6:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
>  As would a respective change to vtd_set_hwdom_mapping(), which
>> I'm not sure which patch you've put that in.
>> 
> 
> Sorry, I missed it. Indeed, it needs to be fixed in the same way as above.
> I wonder whether I should add a __must_check annotation to iommu_map_page(),
> as that may be inconsistent with iommu_unmap_page().

Urgh - are you saying that by the end of the series they aren't
_both_ __must_check? Then I must have overlooked something
while reviewing: They definitely both ought to be. Or wait - I've
pointed this out in the context of this patch, still seen below.

>> > --- a/xen/include/xen/iommu.h
>> > +++ b/xen/include/xen/iommu.h
>> > @@ -166,8 +166,8 @@ struct iommu_ops {
>> >  #endif /* HAS_PCI */
>> >
>> >      void (*teardown)(struct domain *d);
>> > -    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
>> > -                    unsigned int flags);
>> > +    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
>> > +                                 unsigned long mfn, unsigned int flags);
>> 
>> With this and with the rule set forth in the context of the discussion of 
> v5,
>> iommu_map_page() (as well as any other caller of this hook that do not
>> themselves _consume_ the error [e.g. hwdom init ones]) should become or
>> already be __must_check, which afaict isn't the case.
> 
> But does this rule also apply to these 'void' functions? E.g., in the call
> trees of the hwdom init ones / domain crash ones, there is no need to bubble
> up the error code, so those can be left as void.

Note that my previous reply already answered that question (as I
expected you would otherwise ask): I specifically excluded those
functions that _consume_ the error, and I did give an example. I
really don't know what else I could have done to make clear what
exceptions are to be expected.
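The distinction between propagating and consuming callers can be sketched as follows; `must_check` approximates Xen's `__must_check` via GCC's `warn_unused_result` attribute, and the stub names are hypothetical:

```c
#include <assert.h>
#include <stdio.h>

#if defined(__GNUC__)
#define must_check __attribute__((warn_unused_result))
#else
#define must_check
#endif

static must_check int iommu_map_page_stub(unsigned long gfn)
{
    return gfn ? 0 : -1;        /* simulated map result */
}

/* A propagating caller: it does nothing with the error itself, so it
 * must be must_check too, bubbling the value up the call tree. */
static must_check int propagate_sketch(unsigned long gfn)
{
    return iommu_map_page_stub(gfn);
}

/* A consuming caller (e.g. a hwdom init one): it logs the error and
 * carries on, so it may legitimately stay void. */
static void consume_sketch(unsigned long gfn)
{
    int rc = iommu_map_page_stub(gfn);

    if ( rc )
        fprintf(stderr, "map of gfn %lu failed: %d (continuing)\n", gfn, rc);
}
```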

>> The same then, btw.,
>> applies to patch 3, and hence I have to withdraw the R-b that you've got
>> there.
> 
> I find these callers are grant_table/mm, and we limit __must_check 
> annotation to IOMMU functions for this patch set..

We are not talking about those ones. The subject of patch 3 is unmapping, and
hence the parallel to the one here is that iommu_unmap_page()
needs to become __must_check there, along with the iommu_ops
unmap_page() hook.

> So I think I can keep the R-b as is for patch 3.

No.

Jan



* Re: [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending (top level ones)
  2016-06-02  2:58     ` Xu, Quan
@ 2016-06-02  9:22       ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2016-06-02  9:22 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

>>> On 02.06.16 at 04:58, <quan.xu@intel.com> wrote:
> On June 01, 2016 6:39 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
>> > @@ -2389,16 +2393,25 @@ static int intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
>> >  }
>> >
>> >  static u32 iommu_state[MAX_IOMMUS][MAX_IOMMU_REGS];
>> > -static void vtd_suspend(void)
>> > +
>> > +static int __must_check vtd_suspend(void)
>> >  {
>> >      struct acpi_drhd_unit *drhd;
>> >      struct iommu *iommu;
>> >      u32    i;
>> > +    int rc = 0;
>> 
>> Pointless initializer.
>> 
> 
> Indeed, if "return 0" to make obvious that no error path comes at the end of 
> this function.

No, it's pointless even without that because ...

>> >      if ( !iommu_enabled )
>> > -        return;
>> > +        return 0;
>> >
>> > -    iommu_flush_all();
>> > +    rc = iommu_flush_all();

... here you unconditionally initialize it (i.e. there's no code path
bypassing this).

Jan



* Re: [Patch v6 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers
  2016-05-31 13:57 ` [Patch v6 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
@ 2016-06-02 10:06   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2016-06-02 10:06 UTC (permalink / raw)
  To: Quan Xu; +Cc: dario.faggioli, Feng Wu, Kevin Tian, xen-devel

>>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> @@ -1455,7 +1455,7 @@ int domain_context_mapping_one(
>       */
>      if ( rc <= 0 )
>      {
> -        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> +        bool_t flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;

Please convert all of these that you touch anyway to the canonical
form used elsewhere: The plain expression if it's already of boolean
type, or else using ! or !! instead of ?:.
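The canonical form being asked for normalizes the pointer with `!!` rather than `? 1 : 0`. A sketch with hypothetical stub names:

```c
#include <assert.h>

/* Stand-in for find_ats_dev_drhd(): returns a pointer or NULL. */
static void *find_ats_dev_drhd_stub(int present)
{
    static int dummy;
    return present ? &dummy : 0;
}

/* The canonical form: "!!" normalizes a pointer to a boolean value,
 * instead of the "? 1 : 0" conditional expression. */
static int flush_dev_iotlb_flag(int present)
{
    int flush_dev_iotlb = !!find_ats_dev_drhd_stub(present);

    return flush_dev_iotlb;
}
```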

With that
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-02  9:21       ` Jan Beulich
@ 2016-06-02 12:43         ` Xu, Quan
  2016-06-07  7:51         ` Xu, Quan
  1 sibling, 0 replies; 30+ messages in thread
From: Xu, Quan @ 2016-06-02 12:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

On June 02, 2016 5:21 PM, Jan Beulich <JBeulich@suse.com> wrote:
> I really don't know what else I
> could have done to make clear what exceptions are to be expected.

Sorry, it may be frustrating for you, but what you said is very clear. I really need to slow down and read your emails along with the code context carefully.
Maybe I wanted to save time for that SR-IOV PT bug. Sorry again.


Quan


* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-02  9:21       ` Jan Beulich
  2016-06-02 12:43         ` Xu, Quan
@ 2016-06-07  7:51         ` Xu, Quan
  2016-06-07  8:19           ` Jan Beulich
  1 sibling, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-06-07  7:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

On June 02, 2016 5:21 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 02.06.16 at 09:25, <quan.xu@intel.com> wrote:
> > On June 01, 2016 6:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 31.05.16 at 15:57, <quan.xu@intel.com> wrote:
> >  As would a respective change to vtd_set_hwdom_mapping(), which
> >> I'm not sure which patch you've put that in.
> >>
> >
> > Sorry, I missed it. Indeed, it needs to be fixed in the same way as above.
> > I wonder whether I should add a __must_check annotation to
> > iommu_map_page(), as that may be inconsistent with
> > iommu_unmap_page().
> 
> Urgh - are you saying that by the end of the series they aren't _both_
> __must_check? Then I must have overlooked something while reviewing: They
> definitely both ought to be. Or wait - I've pointed this out in the context of this
> patch, still seen below.
> 
> >> > --- a/xen/include/xen/iommu.h
> >> > +++ b/xen/include/xen/iommu.h
> >> > @@ -166,8 +166,8 @@ struct iommu_ops {
> >> >  #endif /* HAS_PCI */
> >> >
> >> >      void (*teardown)(struct domain *d);
> >> > -    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
> >> > -                    unsigned int flags);
> >> > +    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
> >> > +                                 unsigned long mfn, unsigned int flags);
> >>
> >> With this and with the rule set forth in the context of the
> >> discussion of
> > v5,
> >> iommu_map_page() (as well as any other caller of this hook that do
> >> not themselves _consume_ the error [e.g. hwdom init ones]) should
> >> become or already be __must_check, which afaict isn't the case.
> >
> > But does this rule also apply to these 'void' annotation functions?
> > .e.g, in the call tree of hwdom init ones / domain crash ones, we are
> > no need to bubble up error code, leaving these void annotation as is.
> 
> Note that my previous reply already answered that question (as I expected
> you would otherwise ask): I specifically excluded those functions that
> _consume_ the error, and I did give an example. I really don't know what else I
> could have done to make clear what exceptions are to be expected.
> 
> >> The same then, btw.,
> >> applies to patch 3, and hence I have to withdraw the R-b that you've
> >> got there.
> >
> > I find these callers are grant_table/mm, and we limit __must_check
> > annotation to IOMMU functions for this patch set..
> 
> Talk isn't of those ones. The subject of patch 3 is unmapping, and hence the
> parallel to the one here is that iommu_unmap_page() needs to become
> __must_check there, along with the iommu_ops
> unmap_page() hook.
> 


Jan,
I still have one question: if I add a __must_check annotation to iommu_unmap_page(),
how do I fix this -- unmapping the previous IOMMU maps when an IOMMU mapping has failed?
e.g.,

ept_set_entry()
{
...
                for ( i = 0; i < (1 << order); i++ )
                {
                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
                    if ( unlikely(rc) )
                    {
                        while ( i-- )
                            iommu_unmap_page(p2m->domain, gfn + i);

                        break;
                    }
                }
...
}


If we leave it as is, it leads to compilation errors because of the __must_check annotation. Also, we return the first error, so there is no need to accumulate the error from iommu_unmap_page().
That's also why I hesitated to add the __must_check annotation to iommu_unmap_page().

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-07  7:51         ` Xu, Quan
@ 2016-06-07  8:19           ` Jan Beulich
  2016-06-07  8:40             ` Xu, Quan
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2016-06-07  8:19 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 07.06.16 at 09:51, <quan.xu@intel.com> wrote:
> I still have one question. If I add the __must_check annotation to
> iommu_unmap_page(), how do I fix this case -- unmapping the previously
> established IOMMU mappings when an IOMMU mapping fails?
> e.g.,
> 
> ept_set_entry()
> {
> ...
>                 for ( i = 0; i < (1 << order); i++ )
>                 {
>                     rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
>                     if ( unlikely(rc) )
>                     {
>                         while ( i-- )
>                             iommu_unmap_page(p2m->domain, gfn + i);
> 
>                         break;
>                     }
>                 }
> ...
> }
> 
> 
> If we leave it as is, the __must_check annotation leads to compilation
> errors (warnings promoted by -Werror). Also, since we return the first error,
> there is no need to accumulate the error from iommu_unmap_page().
> That's also why I hesitated to add the __must_check annotation to
> iommu_unmap_page().

Well, with there already being a message logged down the call
tree in case of error, I think the return value should simply be
latched into a local variable, and nothing really be done with it
(which should satisfy the compiler afaict, but it may be that
Coverity would grumble about such). We really don't want to omit
the __must_check just because there are a few cases where the
error is of no further relevance; the annotation gets put there to
make sure we catch all (current and future) cases where errors
must not be ignored.

Jan



* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-07  8:19           ` Jan Beulich
@ 2016-06-07  8:40             ` Xu, Quan
  2016-06-07 10:11               ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Xu, Quan @ 2016-06-07  8:40 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

On June 07, 2016 4:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 07.06.16 at 09:51, <quan.xu@intel.com> wrote:
> > I still have one question. If I add the __must_check annotation to
> > iommu_unmap_page(), how do I fix this case -- unmapping the previously
> > established IOMMU mappings when an IOMMU mapping fails?
> > e.g.,
> >
> > ept_set_entry()
> > {
> > ...
> >                 for ( i = 0; i < (1 << order); i++ )
> >                 {
> >                     rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> >                     if ( unlikely(rc) )
> >                     {
> >                         while ( i-- )
> >                             iommu_unmap_page(p2m->domain, gfn + i);
> >
> >                         break;
> >                     }
> >                 }
> > ...
> > }
> >
> >
> > If we leave it as is, the __must_check annotation leads to compilation
> > errors (warnings promoted by -Werror). Also, since we return the first
> > error, there is no need to accumulate the error from iommu_unmap_page().
> > That's also why I hesitated to add the __must_check annotation to
> > iommu_unmap_page().
> 
> Well, with there already being a message logged down the call tree in case of
> error, I think the return value should simply be latched into a local variable,
> and nothing really be done with it (which should satisfy the compiler afaict,
> but it may be that Coverity would grumble about such). We really don't want
> to omit the __must_check just because there are a few cases where the error
> is of no further relevance; the annotation gets put there to make sure we
> catch all (current and future) cases where errors must not be ignored.
> 

Agreed. We really need to add the __must_check annotation to iommu_map_page() / iommu_unmap_page().


Following the discussion of using a local variable in the for() loop, I need to define this local variable inside the while() loop:
e.g., for the case above:
 
+while ( i-- )
+{
+    int iommu_ret = iommu_unmap_page(p2m->domain, gfn + i);
+
+    break;
+}

,  right?


But I really like this way:

+ int iommu_ret;
+
+while ( i-- )
+{
+    iommu_ret = iommu_unmap_page(p2m->domain, gfn + i);
+
+    break;
+}

Quan

* Re: [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping (top level ones)
  2016-06-07  8:40             ` Xu, Quan
@ 2016-06-07 10:11               ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2016-06-07 10:11 UTC (permalink / raw)
  To: Andrew Cooper, Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 07.06.16 at 10:40, <quan.xu@intel.com> wrote:
> Following the discussion of using a local variable in the for() loop, I need
> to define this local variable inside the while() loop:
> e.g., for the case above:
>  
> +while ( i-- )
> +{
> +    int iommu_ret = iommu_unmap_page(p2m->domain, gfn + i);
> +
> +    break;
> +}
> 
> ,  right?

Something like this. As said, tricking Coverity into not reporting the
unused value as a possible problem would additionally be needed.
Andrew - do you know of any pre-existing pattern usable for this
purpose?

> But I really like this way:
> 
> + int iommu_ret;
> +
> +while ( i-- )
> +{
> +    iommu_ret = iommu_unmap_page(p2m->domain, gfn + i);
> +
> +    break;
> +}

Rather not - iommu_ret isn't used outside of the while() scope.
(Unless, of course, the Coverity-related adjustment makes this
desirable / necessary, but I would guess the tool would then still
be able to figure out that the assignment is pointless on all but the
last iteration.)

Jan



end of thread (last message: 2016-06-07 10:11 UTC)

Thread overview: 30+ messages
2016-05-31 13:57 [Patch v6 00/11] Check VT-d Device-TLB flush error Xu, Quan
2016-05-31 13:57 ` [Patch v6 01/11] IOMMU: handle IOMMU mapping and unmapping failures Xu, Quan
2016-06-01  9:52   ` Jan Beulich
2016-05-31 13:57 ` [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Xu, Quan
2016-06-01 10:05   ` Jan Beulich
2016-06-02  6:00     ` Xu, Quan
2016-06-02  9:13       ` Jan Beulich
2016-05-31 13:57 ` [Patch v6 03/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping (top level ones) Xu, Quan
2016-05-31 13:57 ` [Patch v6 04/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping " Xu, Quan
2016-06-01 10:24   ` Jan Beulich
2016-06-02  7:25     ` Xu, Quan
2016-06-02  9:21       ` Jan Beulich
2016-06-02 12:43         ` Xu, Quan
2016-06-07  7:51         ` Xu, Quan
2016-06-07  8:19           ` Jan Beulich
2016-06-07  8:40             ` Xu, Quan
2016-06-07 10:11               ` Jan Beulich
2016-05-31 13:57 ` [Patch v6 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} " Xu, Quan
2016-05-31 13:57 ` [Patch v6 06/11] propagate IOMMU Device-TLB flush error up to EPT update " Xu, Quan
2016-05-31 13:57 ` [Patch v6 07/11] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending " Xu, Quan
2016-06-01 10:39   ` Jan Beulich
2016-06-02  2:58     ` Xu, Quan
2016-06-02  9:22       ` Jan Beulich
2016-05-31 13:57 ` [Patch v6 08/11] IOMMU: propagate IOMMU Device-TLB flush error (leaf ones) Xu, Quan
2016-05-31 13:57 ` [Patch v6 09/11] vt-d: fix the IOMMU flush issue Xu, Quan
2016-06-01 15:36   ` Jan Beulich
2016-06-02  2:50     ` Xu, Quan
2016-05-31 13:57 ` [Patch v6 10/11] vt-d: propagate the IOMMU Device-TLB flush error up to ME phantom functions Xu, Quan
2016-05-31 13:57 ` [Patch v6 11/11] vt-d: add __must_check annotation to IOMMU flush pointers and handlers Xu, Quan
2016-06-02 10:06   ` Jan Beulich
