All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v5 00/10] Check VT-d Device-TLB flush error
@ 2016-05-18  8:08 Quan Xu
  2016-05-18  8:08 ` [PATCH v5 01/10] vt-d: fix the IOMMU flush issue Quan Xu
                   ` (10 more replies)
  0 siblings, 11 replies; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel; +Cc: dario.faggioli, Quan Xu

This patch set is a prereq patch set for Patch:'VT-d Device-TLB flush issue'.

While IOMMU Device-TLB flush timed out, xen calls panic() at present. However the existing panic()
is going to be eliminated, so we must propagate the IOMMU Device-TLB flush error up to the call trees.

This patch set is also based on the discussion of 'abstract model of IOMMU unmaping/mapping failures'

--Changes in v5:

patch 1:
  * add the missing blank line.
  * add comments.
  * if iommu_flush_context_device failed, continue to flush IOMMU IOTLB on a best effort basis.
  * __defer__ to:
      - rename __intel_iommu_iotlb_flush to iommu_flush_iotlb
      - rename intel_iommu_iotlb_flush to iommu_flush_iotlb_pages
      - rename intel_iommu_iotlb_flush_all to iommu_flush_iotlb_all
      - add __must_check annotation
    in patch 7 / 8,
    otherwise, this will disrupt the order due to __must_check annotation.


patch 2:
  * enhance the logging.
  * state why no spamming can occur in commit message.

patch 3:
  * keep the "rc == 0" untouched.
  * add __must_check annotation.
  * restricting the scope of "ret" to the innermost block.
  * in ept_set_entry(), in the case of iommu_map_page(), just use rc directly and not
    bother using ret at all.

patch 4:
  * add __must_check annotation.

patch 5:
  * add __must_check annotation.

patch 6:
  * switch the two sides of the && for "if ( rc >= 0 && unlikely(ret) )".
  * remove redundant __must_check.


patch 7:
  * rename __intel_iommu_iotlb_flush to iommu_flush_iotlb.
  * rename intel_iommu_iotlb_flush to iommu_flush_iotlb_pages.
  * rename intel_iommu_iotlb_flush_all to iommu_flush_iotlb_all.
  * add __must_check annotation.

patch 8:
  * change ret to rc.

patch 9:
  * change 'enum dev_power_type' to 'enum dev_power_saved'.
  * change 'TYPE_*' to 'SAVED_*'
  * drop '*_UNKNOWN' in enum.
  * change 'should' to 'cannot' for comment.
  * reorder in device_power_up().
  * enhance logging message.

patch 10:
  * if an earlier error occurred ( rc != 0 ), no use in calling me_wifi_quirk().

Quan Xu (10):
  vt-d: fix the IOMMU flush issue
  IOMMU: handle IOMMU mapping and unmapping failures
  IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping.
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping.
  IOMMU/MMU: propagate IOMMU Device-TLB flush error up to
    iommu_iotlb_flush{,_all} (top level ones).
  IOMMU: propagate IOMMU Device-TLB flush error up to
    iommu_iotlb_flush{,_all} (leaf ones).
  vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update.
  IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
  vt-d: propagate error up to ME phantom function mapping and unmapping

 xen/arch/arm/p2m.c                            |   5 +-
 xen/arch/x86/acpi/power.c                     |  72 +++++++---
 xen/arch/x86/mm.c                             |  18 ++-
 xen/arch/x86/mm/p2m-ept.c                     |  39 +++--
 xen/arch/x86/mm/p2m-pt.c                      |  28 +++-
 xen/arch/x86/mm/p2m.c                         |  34 ++++-
 xen/common/memory.c                           |  20 ++-
 xen/drivers/passthrough/amd/iommu_init.c      |   9 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  19 ++-
 xen/drivers/passthrough/arm/smmu.c            |  19 +--
 xen/drivers/passthrough/iommu.c               |  66 +++++++--
 xen/drivers/passthrough/vtd/extern.h          |   3 +-
 xen/drivers/passthrough/vtd/iommu.c           | 199 ++++++++++++++++++--------
 xen/drivers/passthrough/vtd/quirks.c          |  28 ++--
 xen/drivers/passthrough/x86/iommu.c           |   5 +-
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   4 +-
 xen/include/asm-x86/iommu.h                   |   3 +-
 xen/include/asm-x86/p2m.h                     |  12 +-
 xen/include/xen/iommu.h                       |  22 +--
 19 files changed, 441 insertions(+), 164 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-23 13:30   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures Quan Xu
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Quan Xu, Andrew Cooper, dario.faggioli,
	Jan Beulich, Feng Wu

The propagation value from IOMMU flush interfaces may be positive, which
indicates callers need to flush cache, not one of faliures.

when the propagation value is positive, this patch fixes this flush issue
as follows:
  - call iommu_flush_write_buffer() to flush cache.
  - return zero.

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c | 101 ++++++++++++++++++++++++++----------
 xen/include/asm-x86/iommu.h         |   2 +-
 2 files changed, 75 insertions(+), 28 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index db83949..3ece815 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -557,14 +557,16 @@ static void iommu_flush_all(void)
     }
 }
 
-static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
-        int dma_old_pte_present, unsigned int page_count)
+static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                     bool_t dma_old_pte_present,
+                                     unsigned int page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     /*
      * No need pcideves_lock here because we have flush
@@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         iommu_domid= domain_iommu_domid(d, iommu);
+
         if ( iommu_domid == -1 )
             continue;
 
         if ( page_count != 1 || gfn == INVALID_GFN )
-        {
-            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
-                        0, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
-        }
+            rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                                       0, flush_dev_iotlb);
         else
+            rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       PAGE_ORDER_4K,
+                                       !dma_old_pte_present,
+                                       flush_dev_iotlb);
+
+        if ( rc > 0 )
         {
-            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                        (paddr_t)gfn << PAGE_SHIFT_4K, PAGE_ORDER_4K,
-                        !dma_old_pte_present, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
         }
     }
+
+    return rc;
 }
 
 static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
@@ -1278,6 +1285,7 @@ int domain_context_mapping_one(
     u64 maddr, pgd_maddr;
     u16 seg = iommu->intel->drhd->segment;
     int agaw;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
     spin_unlock(&iommu->lock);
 
     /* Context entry was previously non-present (with domid 0). */
-    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 1) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
+                                    DMA_CCMD_MASK_NOBIT, 1);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      success.
+     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
+     *               effort basis.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+
+        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     set_bit(iommu->index, &hd->arch.iommu_bitmap);
@@ -1407,7 +1428,7 @@ int domain_context_mapping_one(
     if ( !seg )
         me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_mapping(
@@ -1502,6 +1523,7 @@ int domain_context_unmap_one(
     struct context_entry *context, *context_entries;
     u64 maddr;
     int iommu_domid;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
     iommu_flush_cache_entry(context, sizeof(struct context_entry));
 
     iommu_domid= domain_iommu_domid(domain, iommu);
+
     if ( iommu_domid == -1 )
     {
         spin_unlock(&iommu->lock);
@@ -1529,14 +1552,27 @@ int domain_context_unmap_one(
         return -EINVAL;
     }
 
-    if ( iommu_flush_context_device(iommu, iommu_domid,
+    rc = iommu_flush_context_device(iommu, iommu_domid,
                                     (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 0) )
-        iommu_flush_write_buffer(iommu);
-    else
+                                    DMA_CCMD_MASK_NOBIT, 0);
+
+    /*
+     * The current logic for rc returns:
+     *   - positive  invoke iommu_flush_write_buffer to flush cache.
+     *   - zero      success.
+     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
+     *               effort basis.
+     */
+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+
+        rc = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+    }
+    else
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     spin_unlock(&iommu->lock);
@@ -1545,7 +1581,7 @@ int domain_context_unmap_one(
     if ( !iommu->intel->drhd->segment )
         me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_unmap(
@@ -1750,32 +1786,43 @@ static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
-                     int order, int present)
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                    int order, bool_t present)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
     struct domain_iommu *hd = dom_iommu(d);
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
+
         if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
         iommu_domid= domain_iommu_domid(d, iommu);
+
         if ( iommu_domid == -1 )
             continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+
+        rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
                                    (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
+                                   order, !present, flush_dev_iotlb);
+
+        if ( rc > 0 )
+        {
             iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
+
+    return rc;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index e82a2f0..43f1620 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -27,7 +27,7 @@ int iommu_setup_hpet_msi(struct msi_desc *);
 
 /* While VT-d specific, this must get declared in a generic header. */
 int adjust_vtd_irq_affinities(void);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, bool_t present);
 bool_t iommu_supports_eim(void);
 int iommu_enable_x2apic_IR(void);
 void iommu_disable_x2apic_IR(void);
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
  2016-05-18  8:08 ` [PATCH v5 01/10] vt-d: fix the IOMMU flush issue Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-23 13:40   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Quan Xu
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Jan Beulich, Quan Xu

Treat IOMMU mapping and unmapping failures as a fatal to the DomU
If IOMMU mapping and unmapping failed, crash the DomU and propagate
the error up to the call trees.

No spamming can occur. For DomU, we avoid logging any message
for already dying domains. For Dom0, that'll still be more verbose
than we'd really like, but it at least wouldn't outright flood the
console.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/drivers/passthrough/iommu.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9d104d2..7c70306 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -240,21 +240,49 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU mapping gfn %#lx mfn %#lx failed %d.",
+                   d->domain_id, gfn, mfn, rc);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
 }
 
 int iommu_unmap_page(struct domain *d, unsigned long gfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_page(d, gfn);
+
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU unmapping gfn %#lx failed %d.",
+                   d->domain_id, gfn, rc);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
 }
 
 static void iommu_free_pagetables(unsigned long unused)
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
  2016-05-18  8:08 ` [PATCH v5 01/10] vt-d: fix the IOMMU flush issue Quan Xu
  2016-05-18  8:08 ` [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-23 14:19   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 04/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping Quan Xu
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Jan Beulich, George Dunlap,
	Andrew Cooper, dario.faggioli, Jun Nakajima, Quan Xu

When IOMMU mapping is failed, we issue a best effort rollback, stopping
IOMMU mapping, unmapping the previous IOMMU maps and then reporting the
error up to the call trees. When rollback is not feasible (in early
initialization phase or trade-off of complexity) for the hardware domain,
we do things on a best effort basis, only throwing out an error message.

IOMMU unmapping should perhaps continue despite an error, in an attempt
to do best effort cleanup.

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/arch/x86/mm.c               | 18 +++++++++++-------
 xen/arch/x86/mm/p2m-ept.c       | 37 +++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m-pt.c        | 28 ++++++++++++++++++++++++----
 xen/arch/x86/mm/p2m.c           | 34 ++++++++++++++++++++++++++++------
 xen/drivers/passthrough/iommu.c | 15 ++++++++++++++-
 xen/include/asm-x86/p2m.h       | 12 +++++++-----
 6 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 03a4d5b..fee3b9d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2463,11 +2463,12 @@ static int __put_page_type(struct page_info *page,
 }
 
 
-static int __get_page_type(struct page_info *page, unsigned long type,
-                           int preemptible)
+static int __must_check __get_page_type(struct page_info *page,
+                                        unsigned long type,
+                                        int preemptible)
 {
     unsigned long nx, x, y = page->u.inuse.type_info;
-    int rc = 0;
+    int rc = 0, ret = 0;
 
     ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
 
@@ -2578,11 +2579,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
             else if ( type == PGT_writable_page )
-                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                               page_to_mfn(page),
-                               IOMMUF_readable|IOMMUF_writable);
+                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
+                                     page_to_mfn(page),
+                                     IOMMUF_readable|IOMMUF_writable);
         }
     }
 
@@ -2599,6 +2600,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
     if ( (x & PGT_partial) && !(nx & PGT_partial) )
         put_page(page);
 
+    if ( !rc )
+        rc = ret;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..7d4809f 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -658,7 +658,7 @@ bool_t ept_handle_misconfig(uint64_t gpa)
  *
  * Returns: 0 for success, -errno for failure
  */
-static int
+static int __must_check
 ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 
               unsigned int order, p2m_type_t p2mt, p2m_access_t p2ma,
               int sve)
@@ -667,6 +667,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     unsigned long gfn_remainder = gfn;
     unsigned int i, target = order / EPT_TABLE_ORDER;
     int ret, rc = 0;
+    bool_t entry_written = 0;
     bool_t direct_mmio = (p2mt == p2m_mmio_direct);
     uint8_t ipat = 0;
     bool_t need_modify_vtd_table = 1;
@@ -812,10 +813,15 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     rc = atomic_write_ept_entry(ept_entry, new_entry, target);
     if ( unlikely(rc) )
         old_entry.epte = 0;
-    else if ( p2mt != p2m_invalid &&
-              (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
-        /* Track the highest gfn for which we have ever had a valid mapping */
-        p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    else
+    {
+        entry_written = 1;
+
+        if ( p2mt != p2m_invalid &&
+             (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
+            /* Track the highest gfn for which we have ever had a valid mapping */
+            p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
+    }
 
 out:
     if ( needs_sync )
@@ -831,10 +837,25 @@ out:
         {
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                {
+                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            iommu_unmap_page(p2m->domain, gfn + i);
+
+                        break;
+                    }
+                }
             else
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_unmap_page(d, gfn + i);
+                {
+                    ret = iommu_unmap_page(d, gfn + i);
+
+                    if ( !rc && unlikely(ret) )
+                        rc = ret;
+                }
         }
     }
 
@@ -847,7 +868,7 @@ out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    if ( rc == 0 && p2m_is_hostp2m(p2m) )
+    if ( entry_written && p2m_is_hostp2m(p2m) )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
     return rc;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..b50bd01 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -485,7 +485,7 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
 }
 
 /* Returns: 0 for success, -errno for failure */
-static int
+static int __must_check
 p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
                  unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma,
                  int sve)
@@ -673,6 +673,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     if ( iommu_enabled && need_iommu(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
     {
+        int ret;
+
         if ( iommu_use_hap_pt(p2m->domain) )
         {
             if ( iommu_old_flags )
@@ -680,11 +682,29 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
         else if ( iommu_pte_flags )
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
+            {
+                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                     iommu_pte_flags);
+
+                if ( unlikely(ret) )
+                {
+                    while ( i-- )
+                        iommu_unmap_page(p2m->domain, gfn + i);
+
+                    if ( !rc )
+                        rc = ret;
+
+                    break;
+                }
+            }
         else
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+            {
+                ret = iommu_unmap_page(p2m->domain, gfn + i);
+
+                if ( !rc && unlikely(ret) )
+                    rc = ret;
+            }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 94eabf6..6e33987 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -630,7 +630,7 @@ void p2m_final_teardown(struct domain *d)
 }
 
 
-static int
+static int __must_check
 p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
                 unsigned int page_order)
 {
@@ -641,10 +641,22 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
+        int rc = 0;
+
         if ( need_iommu(p2m->domain) )
+        {
             for ( i = 0; i < (1 << page_order); i++ )
-                iommu_unmap_page(p2m->domain, mfn + i);
-        return 0;
+            {
+                int ret;
+
+                ret = iommu_unmap_page(p2m->domain, mfn + i);
+
+                if ( !rc && unlikely(ret) )
+                    rc = ret;
+            }
+        }
+
+        return rc;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
@@ -798,7 +810,14 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
                 P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
                           ogfn , mfn_x(omfn));
                 if ( mfn_x(omfn) == (mfn + i) )
-                    p2m_remove_page(p2m, ogfn, mfn + i, 0);
+                {
+                    int ret;
+
+                    ret = p2m_remove_page(p2m, ogfn, mfn + i, 0);
+
+                    if ( !rc && unlikely(ret) )
+                        rc = ret;
+                }
             }
         }
     }
@@ -2513,9 +2532,12 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
 
     if ( gfn_x(new_gfn) == INVALID_GFN )
     {
-        if ( mfn_valid(mfn) )
-            p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K);
         rc = 0;
+
+        if ( mfn_valid(mfn) )
+            rc = p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn),
+                                 PAGE_ORDER_4K);
+
         goto out;
     }
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 7c70306..e04db16 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -171,20 +171,33 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     {
         struct page_info *page;
         unsigned int i = 0;
+        int rc = 0;
+
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = page_to_mfn(page);
             unsigned long gfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
+            int ret;
 
             if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
                  ((page->u.inuse.type_info & PGT_type_mask)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
-            hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            if ( unlikely(ret) )
+                rc = ret;
+
             if ( !(i++ & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            printk(XENLOG_WARNING
+                   "d%d: IOMMU mapping failed %d for hardware domain.",
+                   d->domain_id, rc);
     }
 
     return hd->platform_ops->hwdom_init(d);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 65675a2..dd51380 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -545,9 +545,11 @@ void p2m_teardown(struct p2m_domain *p2m);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
-int guest_physmap_add_entry(struct domain *d, unsigned long gfn,
-                            unsigned long mfn, unsigned int page_order, 
-                            p2m_type_t t);
+int __must_check guest_physmap_add_entry(struct domain *d,
+                                         unsigned long gfn,
+                                         unsigned long mfn,
+                                         unsigned int page_order,
+                                         p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility */
 static inline int guest_physmap_add_page(struct domain *d,
@@ -808,8 +810,8 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
 int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx);
 
 /* Change a gfn->mfn mapping */
-int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
-                          gfn_t old_gfn, gfn_t new_gfn);
+int __must_check p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
+                                       gfn_t old_gfn, gfn_t new_gfn);
 
 /* Propagate a host p2m change to all alternate p2m's */
 void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 04/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping.
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (2 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-18  8:08 ` [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping Quan Xu
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Jan Beulich,
	Andrew Cooper, dario.faggioli, Julien Grall, Quan Xu

Propagate the IOMMU Device-TLB flush error up to IOMMU unmapping.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/drivers/passthrough/arm/smmu.c            |  2 +-
 xen/drivers/passthrough/vtd/iommu.c           | 18 ++++++++++--------
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
 xen/include/xen/iommu.h                       |  2 +-
 4 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 54a03a6..1ce4ddf 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2774,7 +2774,7 @@ static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	return guest_physmap_add_entry(d, gfn, mfn, 0, t);
 }
 
-static int arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 3ece815..29fb7fd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -616,11 +616,12 @@ static void intel_iommu_iotlb_flush_all(struct domain *d)
 }
 
 /* clear one page's page table */
-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
 {
     struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
+    int rc = 0;
 
     spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
@@ -628,7 +629,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        return;
+        return 0;
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -638,7 +639,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return;
+        return 0;
     }
 
     dma_clear_pte(*pte);
@@ -646,9 +647,11 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
 
     unmap_vtd_domain_page(page);
+
+    return rc;
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1775,15 +1778,14 @@ static int intel_iommu_map_page(
     return 0;
 }
 
-static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check intel_iommu_unmap_page(struct domain *d,
+                                               unsigned long gfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
-
-    return 0;
+    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
 int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 9c51172..57b6cc1 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -53,7 +53,7 @@ int amd_iommu_update_ivrs_mapping_acpi(void);
 /* mapping functions */
 int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                        unsigned int flags);
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        u64 phys_addr, unsigned long size,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 19ba976..73a7f1e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -168,7 +168,7 @@ struct iommu_ops {
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
-    int (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping.
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (3 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 04/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-23 15:53   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones) Quan Xu
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich, Suravee Suthikulpanit

Propagate the IOMMU Device-TLB flush error up to IOMMU mapping.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>

CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
CC: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 17 +++++++++++++++--
 xen/drivers/passthrough/arm/smmu.c          |  4 ++--
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++-----
 xen/include/xen/iommu.h                     |  4 ++--
 4 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 70b7475..86d6fb3 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -285,6 +285,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 
     if ( !iommu_passthrough && !need_iommu(d) )
     {
+        int rc = 0;
+
         /* Set up 1:1 page table for dom0 */
         for ( i = 0; i < max_pdx; i++ )
         {
@@ -295,12 +297,23 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              * a pfn_valid() check would seem desirable here.
              */
             if ( mfn_valid(pfn) )
-                amd_iommu_map_page(d, pfn, pfn, 
-                                   IOMMUF_readable|IOMMUF_writable);
+            {
+                int ret;
+
+                ret = amd_iommu_map_page(d, pfn, pfn,
+                                         IOMMUF_readable|IOMMUF_writable);
+
+                if ( unlikely(ret) )
+                    rc = ret;
+            }
 
             if ( !(i & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed %d.",
+                            d->domain_id, rc);
     }
 
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 1ce4ddf..ee5c89d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2745,8 +2745,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int arm_smmu_map_page(struct domain *d, unsigned long gfn,
-			     unsigned long mfn, unsigned int flags)
+static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
+                                          unsigned long mfn, unsigned int flags)
 {
 	p2m_type_t t;
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 29fb7fd..461688c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1724,13 +1724,15 @@ static void iommu_domain_teardown(struct domain *d)
     spin_unlock(&hd->arch.mapping_lock);
 }
 
-static int intel_iommu_map_page(
-    struct domain *d, unsigned long gfn, unsigned long mfn,
-    unsigned int flags)
+static int __must_check intel_iommu_map_page(struct domain *d,
+                                             unsigned long gfn,
+                                             unsigned long mfn,
+                                             unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
     u64 pg_maddr;
+    int rc = 0;
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -1773,9 +1775,9 @@ static int intel_iommu_map_page(
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+        rc = __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
 
-    return 0;
+    return rc;
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 73a7f1e..14041a1 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -166,8 +166,8 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
-                    unsigned int flags);
+    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
+                                 unsigned long mfn, unsigned int flags);
     int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (4 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-23 16:05   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones) Quan Xu
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich

Propagate the IOMMU Device-TLB flush error up to the iommu_iotlb_flush{,_all}.

This patch fixes the top level ones (this patch doesn't fix the leaf ones but
the next patch does).

Signed-off-by: Quan Xu <quan.xu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/arm/p2m.c                  |  5 ++++-
 xen/common/memory.c                 | 20 +++++++++++++++-----
 xen/drivers/passthrough/iommu.c     | 13 +++++++++----
 xen/drivers/passthrough/x86/iommu.c |  5 +++--
 xen/include/xen/iommu.h             |  7 ++++---
 5 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index db21433..fe201d0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,10 @@ out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+
+        if ( !rc )
+            rc = ret;
     }
 
     while ( (pg = page_list_remove_head(&free_pages)) )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 644f81a..754c33c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -633,9 +633,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
     return rc;
 }
 
-static int xenmem_add_to_physmap(struct domain *d,
-                                 struct xen_add_to_physmap *xatp,
-                                 unsigned int start)
+static int __must_check xenmem_add_to_physmap(struct domain *d,
+                                              struct xen_add_to_physmap *xatp,
+                                              unsigned int start)
 {
     unsigned int done = 0;
     long rc = 0;
@@ -677,9 +677,19 @@ static int xenmem_add_to_physmap(struct domain *d,
 #ifdef CONFIG_HAS_PASSTHROUGH
     if ( need_iommu(d) )
     {
+        int ret;
+
         this_cpu(iommu_dont_flush_iotlb) = 0;
-        iommu_iotlb_flush(d, xatp->idx - done, done);
-        iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
+
+        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e04db16..54d307b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -315,24 +315,29 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                      unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush(d, gfn, page_count);
+
+    return 0;
 }
 
-void iommu_iotlb_flush_all(struct domain *d)
+int iommu_iotlb_flush_all(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush_all(d);
+
+    return 0;
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b64b98f..a18a608 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -104,8 +104,9 @@ int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
+        rc = iommu_iotlb_flush_all(d);
+
+    if ( rc && rc != -ERESTART )
         iommu_teardown(d);
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 14041a1..27e2d3e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -61,7 +61,7 @@ int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
 void arch_iommu_domain_destroy(struct domain *d);
 int arch_iommu_domain_init(struct domain *d);
-int arch_iommu_populate_page_table(struct domain *d);
+int __must_check arch_iommu_populate_page_table(struct domain *d);
 void arch_iommu_check_autotranslated_hwdom(struct domain *d);
 
 int iommu_construct(struct domain *d);
@@ -200,8 +200,9 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
-void iommu_iotlb_flush_all(struct domain *d);
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                   unsigned int page_count);
+int __must_check iommu_iotlb_flush_all(struct domain *d);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones).
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (5 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones) Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-24  3:42   ` Xu, Quan
  2016-05-18  8:08 ` [PATCH v5 08/10] vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update Quan Xu
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Quan Xu, dario.faggioli,
	Julien Grall, Jan Beulich

Propagate the IOMMU Device-TLB flush error up to the iommu_iotlb_flush{,_all}.

This patch fixes the leaf ones.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>

CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/drivers/passthrough/arm/smmu.c  | 13 ++++++++-----
 xen/drivers/passthrough/iommu.c     |  8 ++------
 xen/drivers/passthrough/vtd/iommu.c | 25 ++++++++++++++-----------
 xen/include/xen/iommu.h             |  5 +++--
 4 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index ee5c89d..1d21568 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2540,7 +2540,7 @@ static int force_stage = 2;
  */
 static u32 platform_features = ARM_SMMU_FEAT_COHERENT_WALK;
 
-static void arm_smmu_iotlb_flush_all(struct domain *d)
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 {
 	struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
 	struct iommu_domain *cfg;
@@ -2557,13 +2557,16 @@ static void arm_smmu_iotlb_flush_all(struct domain *d)
 		arm_smmu_tlb_inv_context(cfg->priv);
 	}
 	spin_unlock(&smmu_domain->lock);
+
+	return 0;
 }
 
-static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
-                                 unsigned int page_count)
+static int __must_check arm_smmu_iotlb_flush(struct domain *d,
+                                             unsigned long gfn,
+                                             unsigned int page_count)
 {
-    /* ARM SMMU v1 doesn't have flush by VMA and VMID */
-    arm_smmu_iotlb_flush_all(d);
+	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
+	return arm_smmu_iotlb_flush_all(d);
 }
 
 static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 54d307b..92d37c5 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -323,9 +323,7 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    hd->platform_ops->iotlb_flush(d, gfn, page_count);
-
-    return 0;
+    return hd->platform_ops->iotlb_flush(d, gfn, page_count);
 }
 
 int iommu_iotlb_flush_all(struct domain *d)
@@ -335,9 +333,7 @@ int iommu_iotlb_flush_all(struct domain *d)
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
         return 0;
 
-    hd->platform_ops->iotlb_flush_all(d);
-
-    return 0;
+    return hd->platform_ops->iotlb_flush_all(d);
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 461688c..a6caf97 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -557,9 +557,10 @@ static void iommu_flush_all(void)
     }
 }
 
-static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
-                                     bool_t dma_old_pte_present,
-                                     unsigned int page_count)
+static int __must_check iommu_flush_iotlb(struct domain *d,
+                                          unsigned long gfn,
+                                          bool_t dma_old_pte_present,
+                                          unsigned int page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -605,14 +606,16 @@ static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     return rc;
 }
 
-static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+static int __must_check iommu_flush_iotlb_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int page_count)
 {
-    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+    return iommu_flush_iotlb(d, gfn, 1, page_count);
 }
 
-static void intel_iommu_iotlb_flush_all(struct domain *d)
+static int iommu_flush_iotlb_all(struct domain *d)
 {
-    __intel_iommu_iotlb_flush(d, INVALID_GFN, 0, 0);
+    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
 }
 
 /* clear one page's page table */
@@ -647,7 +650,7 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = iommu_flush_iotlb_pages(domain, addr >> PAGE_SHIFT_4K, 1);
 
     unmap_vtd_domain_page(page);
 
@@ -1775,7 +1778,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
 
     return rc;
 }
@@ -2604,8 +2607,8 @@ const struct iommu_ops intel_iommu_ops = {
     .resume = vtd_resume,
     .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
-    .iotlb_flush = intel_iommu_iotlb_flush,
-    .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .iotlb_flush = iommu_flush_iotlb_pages,
+    .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_p2m_table = vtd_dump_p2m_table,
 };
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 27e2d3e..760cf65 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -179,8 +179,9 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
-    void (*iotlb_flush_all)(struct domain *d);
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
+                                    unsigned int page_count);
+    int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
     void (*dump_p2m_table)(struct domain *d);
 };
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 08/10] vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update.
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (6 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones) Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-24  7:58   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending Quan Xu
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Feng Wu, Jun Nakajima, George Dunlap, Andrew Cooper,
	dario.faggioli, Jan Beulich, Quan Xu

Propagate the IOMMU Device-TLB flush error up to the ept_set_entry(),
when VT-d shares EPT page table.

Signed-off-by: Quan Xu <quan.xu@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>

CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/mm/p2m-ept.c   | 2 +-
 xen/include/asm-x86/iommu.h | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 7d4809f..2b02f02 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -832,7 +832,7 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
-            iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
+            rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
         else
         {
             if ( iommu_flags )
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 43f1620..3d2c354 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -27,7 +27,8 @@ int iommu_setup_hpet_msi(struct msi_desc *);
 
 /* While VT-d specific, this must get declared in a generic header. */
 int adjust_vtd_irq_affinities(void);
-int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, bool_t present);
+int __must_check iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                                 int order, bool_t present);
 bool_t iommu_supports_eim(void);
 int iommu_enable_x2apic_IR(void);
 void iommu_disable_x2apic_IR(void);
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (7 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 08/10] vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-24  8:21   ` Jan Beulich
  2016-05-18  8:08 ` [PATCH v5 10/10] vt-d: propagate error up to ME phantom function mapping and unmapping Quan Xu
  2016-05-18 10:20 ` [PATCH v5 00/10] Check VT-d Device-TLB flush error Jan Beulich
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Keir Fraser, Quan Xu,
	Liu Jinsong, dario.faggioli, Julien Grall, Jan Beulich,
	Andrew Cooper, Feng Wu, Suravee Suthikulpanit

Propagate the IOMMU Device-TLB flush error up to IOMMU suspending.

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Liu Jinsong <jinsong.liu@alibaba-inc.com>
CC: Keir Fraser <keir@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/arch/x86/acpi/power.c                     | 72 ++++++++++++++++++++-------
 xen/drivers/passthrough/amd/iommu_init.c      |  9 +++-
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  2 +-
 xen/drivers/passthrough/iommu.c               |  6 ++-
 xen/drivers/passthrough/vtd/iommu.c           | 45 +++++++++++++----
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 +-
 xen/include/xen/iommu.h                       |  4 +-
 7 files changed, 105 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 2885e31..8e60c75 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -43,36 +43,68 @@ struct acpi_sleep_info acpi_sinfo;
 
 void do_suspend_lowlevel(void);
 
+enum dev_power_saved
+{
+    SAVED_NONE,    /* None of device power saved */
+    SAVED_CONSOLE, /* Device power of console saved */
+    SAVED_TIME,    /* Device power of time saved */
+    SAVED_I8259A,  /* Device power of I8259A saved */
+    SAVED_IOAPIC,  /* Device power of IOAPIC saved */
+    SAVED_IOMMU,   /* Device power of IOMMU saved */
+    SAVED_LAPIC,   /* Device power of LAPIC saved */
+};
+
 static int device_power_down(void)
 {
-    console_suspend();
+    if ( console_suspend() )
+        return SAVED_CONSOLE;
 
-    time_suspend();
+    if ( time_suspend() )
+        return SAVED_TIME;
 
-    i8259A_suspend();
+    if ( i8259A_suspend() )
+        return SAVED_I8259A;
 
+    /* ioapic_suspend cannot fail */
     ioapic_suspend();
 
-    iommu_suspend();
+    if ( iommu_suspend() )
+        return SAVED_IOMMU;
 
-    lapic_suspend();
+    if ( lapic_suspend() )
+        return SAVED_LAPIC;
 
     return 0;
 }
 
-static void device_power_up(void)
+static void device_power_up(enum dev_power_saved saved)
 {
-    lapic_resume();
-
-    iommu_resume();
-
-    ioapic_resume();
-
-    i8259A_resume();
-
-    time_resume();
-
-    console_resume();
+    switch ( saved )
+    {
+    case SAVED_NONE:
+        lapic_resume();
+        /* fall through */
+    case SAVED_LAPIC:
+        iommu_resume();
+        /* fall through */
+    case SAVED_IOMMU:
+        ioapic_resume();
+        /* fall through */
+    case SAVED_IOAPIC:
+        i8259A_resume();
+        /* fall through */
+    case SAVED_I8259A:
+        time_resume();
+        /* fall through */
+    case SAVED_TIME:
+        console_resume();
+        /* fall through */
+    case SAVED_CONSOLE:
+        break;
+    default:
+        BUG();
+        break;
+    }
 }
 
 static void freeze_domains(void)
@@ -169,6 +201,10 @@ static int enter_state(u32 state)
     {
         printk(XENLOG_ERR "Some devices failed to power down.");
         system_state = SYS_STATE_resume;
+
+        if ( error > 0 )
+            device_power_up(error);
+
         goto done;
     }
 
@@ -196,7 +232,7 @@ static int enter_state(u32 state)
     write_cr4(cr4 & ~X86_CR4_MCE);
     write_efer(read_efer());
 
-    device_power_up();
+    device_power_up(SAVED_NONE);
 
     mcheck_init(&boot_cpu_data, 0);
     write_cr4(cr4);
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 4536106..0b68596 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -1339,7 +1339,14 @@ static void invalidate_all_devices(void)
     iterate_ivrs_mappings(_invalidate_all_devices);
 }
 
-void amd_iommu_suspend(void)
+int amd_iommu_suspend(void)
+{
+    amd_iommu_crash_shutdown();
+
+    return 0;
+}
+
+void amd_iommu_crash_shutdown(void)
 {
     struct amd_iommu *iommu;
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 86d6fb3..f162703 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -639,6 +639,6 @@ const struct iommu_ops amd_iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
-    .crash_shutdown = amd_iommu_suspend,
+    .crash_shutdown = amd_iommu_crash_shutdown,
     .dump_p2m_table = amd_dump_p2m_table,
 };
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 92d37c5..b610a1f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -379,10 +379,12 @@ int __init iommu_setup(void)
     return rc;
 }
 
-void iommu_suspend()
+int iommu_suspend()
 {
     if ( iommu_enabled )
-        iommu_get_ops()->suspend();
+        return iommu_get_ops()->suspend();
+
+    return 0;
 }
 
 void iommu_resume()
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a6caf97..3d07139 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -541,20 +541,28 @@ static int iommu_flush_iotlb_psi(
     return status;
 }
 
-static void iommu_flush_all(void)
+static int __must_check iommu_flush_all(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
+    int rc = 0;
 
     flush_all_cache();
     for_each_drhd_unit ( drhd )
     {
+        int ret;
+
         iommu = drhd->iommu;
         iommu_flush_context_global(iommu, 0);
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+        ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
+
+        if ( unlikely(ret) )
+            rc = ret;
     }
+
+    return rc;
 }
 
 static int __must_check iommu_flush_iotlb(struct domain *d,
@@ -1270,7 +1278,9 @@ static void __hwdom_init intel_iommu_hwdom_init(struct domain *d)
     setup_hwdom_pci_devices(d, setup_hwdom_device);
     setup_hwdom_rmrr(d);
 
-    iommu_flush_all();
+    if ( iommu_flush_all() )
+        printk(XENLOG_WARNING VTDPREFIX
+               " IOMMU flush all failed for hardware domain.\n");
 
     for_each_drhd_unit ( drhd )
     {
@@ -2051,7 +2061,7 @@ int adjust_vtd_irq_affinities(void)
 }
 __initcall(adjust_vtd_irq_affinities);
 
-static int init_vtd_hw(void)
+static int __must_check init_vtd_hw(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
@@ -2149,8 +2159,8 @@ static int init_vtd_hw(void)
             return -EIO;
         }
     }
-    iommu_flush_all();
-    return 0;
+
+    return iommu_flush_all();
 }
 
 static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
@@ -2439,16 +2449,27 @@ static int intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 }
 
 static u32 iommu_state[MAX_IOMMUS][MAX_IOMMU_REGS];
-static void vtd_suspend(void)
+
+static int __must_check vtd_suspend(void)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     u32    i;
+    int rc;
 
     if ( !iommu_enabled )
-        return;
+        return 0;
+
+    rc = iommu_flush_all();
+
+    if ( unlikely(rc) )
+    {
+        printk(XENLOG_WARNING VTDPREFIX
+               " suspend: IOMMU flush all failed %d.\n",
+               rc);
 
-    iommu_flush_all();
+        return rc;
+    }
 
     for_each_drhd_unit ( drhd )
     {
@@ -2477,6 +2498,8 @@ static void vtd_suspend(void)
         if ( !iommu_intremap && iommu_qinval )
             disable_qinval(iommu);
     }
+
+    return 0;
 }
 
 static void vtd_crash_shutdown(void)
@@ -2487,7 +2510,9 @@ static void vtd_crash_shutdown(void)
     if ( !iommu_enabled )
         return;
 
-    iommu_flush_all();
+    if ( iommu_flush_all() )
+        printk(XENLOG_WARNING VTDPREFIX
+               " crash shutdown: IOMMU flush all failed.\n");
 
     for_each_drhd_unit ( drhd )
     {
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 57b6cc1..570885a 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -119,7 +119,7 @@ extern unsigned long *shared_intremap_inuse;
 
 /* power management support */
 void amd_iommu_resume(void);
-void amd_iommu_suspend(void);
+int __must_check amd_iommu_suspend(void);
 void amd_iommu_crash_shutdown(void);
 
 /* guest iommu support */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 760cf65..7851a81 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -175,7 +175,7 @@ struct iommu_ops {
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
 #endif /* CONFIG_X86 */
-    void (*suspend)(void);
+    int __must_check (*suspend)(void);
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
@@ -186,7 +186,7 @@ struct iommu_ops {
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_suspend(void);
+int __must_check iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v5 10/10] vt-d: propagate error up to ME phantom function mapping and unmapping
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (8 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending Quan Xu
@ 2016-05-18  8:08 ` Quan Xu
  2016-05-24  8:28   ` Jan Beulich
  2016-05-18 10:20 ` [PATCH v5 00/10] Check VT-d Device-TLB flush error Jan Beulich
  10 siblings, 1 reply; 50+ messages in thread
From: Quan Xu @ 2016-05-18  8:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, dario.faggioli, Feng Wu, Jan Beulich, Quan Xu

Propagate the IOMMU Device-TLB flush error up to ME phantom function
mapping and unmapping.

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Feng Wu <feng.wu@intel.com>
---
 xen/drivers/passthrough/vtd/extern.h |  3 ++-
 xen/drivers/passthrough/vtd/iommu.c  |  8 ++++----
 xen/drivers/passthrough/vtd/quirks.c | 28 ++++++++++++++++++----------
 3 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index cbe0286..6772839 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -91,7 +91,8 @@ int is_igd_vt_enabled_quirk(void);
 void platform_quirks_init(void);
 void vtd_ops_preamble_quirk(struct iommu* iommu);
 void vtd_ops_postamble_quirk(struct iommu* iommu);
-void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map);
+int __must_check me_wifi_quirk(struct domain *domain,
+                               u8 bus, u8 devfn, int map);
 void pci_vtd_quirk(const struct pci_dev *);
 bool_t platform_supports_intremap(void);
 bool_t platform_supports_x2apic(void);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 3d07139..9175ddf 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1441,8 +1441,8 @@ int domain_context_mapping_one(
 
     unmap_vtd_domain_page(context_entries);
 
-    if ( !seg )
-        me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
+    if ( !seg && !rc )
+        rc = me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
     return rc;
 }
@@ -1594,8 +1594,8 @@ int domain_context_unmap_one(
     spin_unlock(&iommu->lock);
     unmap_vtd_domain_page(context_entries);
 
-    if ( !iommu->intel->drhd->segment )
-        me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
+    if ( !iommu->intel->drhd->segment && !rc )
+        rc = me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
     return rc;
 }
diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index 473d1fc..3606b52 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -331,10 +331,12 @@ void __init platform_quirks_init(void)
  * assigning Intel integrated wifi device to a guest.
  */
 
-static void map_me_phantom_function(struct domain *domain, u32 dev, int map)
+static int __must_check map_me_phantom_function(struct domain *domain,
+                                                u32 dev, int map)
 {
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
+    int rc;
 
     /* find ME VT-d engine base on a real ME device */
     pdev = pci_get_pdev(0, 0, PCI_DEVFN(dev, 0));
@@ -342,23 +344,27 @@ static void map_me_phantom_function(struct domain *domain, u32 dev, int map)
 
     /* map or unmap ME phantom function */
     if ( map )
-        domain_context_mapping_one(domain, drhd->iommu, 0,
-                                   PCI_DEVFN(dev, 7), NULL);
+        rc = domain_context_mapping_one(domain, drhd->iommu, 0,
+                                        PCI_DEVFN(dev, 7), NULL);
     else
-        domain_context_unmap_one(domain, drhd->iommu, 0,
-                                 PCI_DEVFN(dev, 7));
+        rc = domain_context_unmap_one(domain, drhd->iommu, 0,
+                                      PCI_DEVFN(dev, 7));
+
+    return rc;
 }
 
-void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
+int me_wifi_quirk(struct domain *domain,
+                  u8 bus, u8 devfn, int map)
 {
     u32 id;
+    int rc = 0;
 
     id = pci_conf_read32(0, 0, 0, 0, 0);
     if ( IS_CTG(id) )
     {
         /* quit if ME does not exist */
         if ( pci_conf_read32(0, 0, 3, 0, 0) == 0xffffffff )
-            return;
+            return 0;
 
         /* if device is WLAN device, map ME phantom device 0:3.7 */
         id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
@@ -372,7 +378,7 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
             case 0x423b8086:
             case 0x423c8086:
             case 0x423d8086:
-                map_me_phantom_function(domain, 3, map);
+                rc = map_me_phantom_function(domain, 3, map);
                 break;
             default:
                 break;
@@ -382,7 +388,7 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
     {
         /* quit if ME does not exist */
         if ( pci_conf_read32(0, 0, 22, 0, 0) == 0xffffffff )
-            return;
+            return 0;
 
         /* if device is WLAN device, map ME phantom device 0:22.7 */
         id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
@@ -398,12 +404,14 @@ void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
             case 0x42388086:        /* Puma Peak */
             case 0x422b8086:
             case 0x422c8086:
-                map_me_phantom_function(domain, 22, map);
+                rc = map_me_phantom_function(domain, 22, map);
                 break;
             default:
                 break;
         }
     }
+
+    return rc;
 }
 
 void pci_vtd_quirk(const struct pci_dev *pdev)
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 00/10] Check VT-d Device-TLB flush error
  2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
                   ` (9 preceding siblings ...)
  2016-05-18  8:08 ` [PATCH v5 10/10] vt-d: propagate error up to ME phantom function mapping and unmapping Quan Xu
@ 2016-05-18 10:20 ` Jan Beulich
  2016-05-18 12:13   ` Xu, Quan
  10 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-18 10:20 UTC (permalink / raw)
  To: Quan Xu; +Cc: dario.faggioli, xen-devel

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> --Changes in v5:
> 
> patch 1:
>   * add the missing blank line.
>   * add comments.
>   * if iommu_flush_context_device failed, continue to flush IOMMU IOTLB on a best effort basis.
>   * __defer__ to:
>       - rename __intel_iommu_iotlb_flush to iommu_flush_iotlb
>       - rename intel_iommu_iotlb_flush to iommu_flush_iotlb_pages
>       - rename intel_iommu_iotlb_flush_all to iommu_flush_iotlb_all
>       - add __must_check annotation
>     in patch 7 / 8,
>     otherwise, this will disrupt the order due to __must_check annotation.

I've peeked into patch 1, and this information is still not there,
despite you having been asked before to put this change
information into the individual patches to aid reviewers.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 00/10] Check VT-d Device-TLB flush error
  2016-05-18 10:20 ` [PATCH v5 00/10] Check VT-d Device-TLB flush error Jan Beulich
@ 2016-05-18 12:13   ` Xu, Quan
  0 siblings, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-18 12:13 UTC (permalink / raw)
  To: Jan Beulich; +Cc: dario.faggioli, xen-devel

On May 18, 2016 6:21 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --Changes in v5:
> >
> > patch 1:
> >   * add the missing blank line.
> >   * add comments.
> >   * if iommu_flush_context_device failed, continue to flush IOMMU IOTLB on
> a best effort basis.
> >   * __defer__ to:
> >       - rename __intel_iommu_iotlb_flush to iommu_flush_iotlb
> >       - rename intel_iommu_iotlb_flush to iommu_flush_iotlb_pages
> >       - rename intel_iommu_iotlb_flush_all to iommu_flush_iotlb_all
> >       - add __must_check annotation
> >     in patch 7 / 8,
> >     otherwise, this will disrupt the order due to __must_check annotation.
> 
> I've peeked into patch 1, and this information is still not there, despite you
> having been asked before to put this change information into the individual
> patches to aid reviewers.
> 

Sorry, I misunderstood the 'Somewhere' in previous discussion, furthermore, I don't like to add so much change information at the beginning of the patch.
But if you like, I will follow it. Sorry again.

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-18  8:08 ` [PATCH v5 01/10] vt-d: fix the IOMMU flush issue Quan Xu
@ 2016-05-23 13:30   ` Jan Beulich
  2016-05-23 15:22     ` Xu, Quan
  2016-05-26  6:20     ` Xu, Quan
  0 siblings, 2 replies; 50+ messages in thread
From: Jan Beulich @ 2016-05-23 13:30 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, Andrew Cooper, dario.faggioli,
	xen-devel, Feng Wu

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
>      }
>  }
>  
> -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> -        int dma_old_pte_present, unsigned int page_count)
> +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> +                                     bool_t dma_old_pte_present,
> +                                     unsigned int page_count)

I realize you say so in the overview mail, but the continuing lack of
__must_check here causes review trouble again. And I have a hard
time seeing how adding these annotations right away would "disrupt
the order", as long as the series is properly ordered / broken up.

> @@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>  
>          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>          iommu_domid= domain_iommu_domid(d, iommu);
> +
>          if ( iommu_domid == -1 )

I appreciate you adding blank lines where needed, but this one
looks spurious.

> @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
>      spin_unlock(&iommu->lock);
>  
>      /* Context entry was previously non-present (with domid 0). */
> -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> -                                    DMA_CCMD_MASK_NOBIT, 1) )
> -        iommu_flush_write_buffer(iommu);
> -    else
> +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,

If you already touch such code, I'd appreciate if you did away with
the open coding of pre-canned macros or inline functions (PCI_BDF2()
in this case).

> +                                    DMA_CCMD_MASK_NOBIT, 1);
> +
> +    /*
> +     * The current logic for rc returns:
> +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> +     *   - zero      success.
> +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> +     *               effort basis.
> +     */
> +    if ( rc <= 0 )
>      {
>          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> +
> +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);

If rc was negative before this call, you may end up returning
success without having been successful. Furthermore I think it
was you who last time round reminded me that
iommu_flush_iotlb_dsi() can also return 1, which you don't take
care of.

> @@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
>      iommu_flush_cache_entry(context, sizeof(struct context_entry));
>  
>      iommu_domid= domain_iommu_domid(domain, iommu);
> +
>      if ( iommu_domid == -1 )

Seemingly stray addition of blank line again (more such below). And
the code below has the same issue as that above.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures
  2016-05-18  8:08 ` [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures Quan Xu
@ 2016-05-23 13:40   ` Jan Beulich
  2016-05-24  9:09     ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-23 13:40 UTC (permalink / raw)
  To: Quan Xu; +Cc: dario.faggioli, Kevin Tian, xen-devel

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> No spamming can occur.

May I suggest "No spamming of the log can occur", to set some
context for what follows?

> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -240,21 +240,49 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
>                     unsigned int flags)
>  {
>      const struct domain_iommu *hd = dom_iommu(d);
> +    int rc;
>  
>      if ( !iommu_enabled || !hd->platform_ops )
>          return 0;
>  
> -    return hd->platform_ops->map_page(d, gfn, mfn, flags);
> +    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
> +
> +    if ( unlikely(rc) )

A more general word on the use of blank lines: I think their use is
well advised to separate logically (mostly) independent steps. In
cases like this, where you check the return value of a function,
the two things really belong together imo. Using too little blank
lines negatively affects readability, but using too many easily
leads to not seeing enough context anymore when looking at
some code fragment.

> +    {
> +        if ( !d->is_shutting_down && printk_ratelimit() )
> +            printk(XENLOG_ERR
> +                   "d%d: IOMMU mapping gfn %#lx mfn %#lx failed %d.",

I really dislike having to repeat this yet another time: No stops in
log messages please. Also to the reader of the log it would be
unclear what the number at the end represents. May I suggest

            printk(XENLOG_ERR
                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d",
                   d->domain_id, gfn, mfn, rc);

(arguably one might then also remove the words "gfn" and "mfn").

Apart from these cosmetic issues I think this is fine now.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-18  8:08 ` [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Quan Xu
@ 2016-05-23 14:19   ` Jan Beulich
  2016-05-25 15:34     ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-23 14:19 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Jun Nakajima

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2463,11 +2463,12 @@ static int __put_page_type(struct page_info *page,
>  }
>  
>  
> -static int __get_page_type(struct page_info *page, unsigned long type,
> -                           int preemptible)
> +static int __must_check __get_page_type(struct page_info *page,

Such a __must_check is relatively pointless when all existing callers
already check the return value (and surely all code inside mm.c
knows it needs to), and you don't at the same time make the
non-static functions propagating its return value also __must_check.
Hence I think best is to limit your effort to IOMMU functions for this
patch set.

> +                                        unsigned long type,
> +                                        int preemptible)
>  {
>      unsigned long nx, x, y = page->u.inuse.type_info;
> -    int rc = 0;
> +    int rc = 0, ret = 0;
>  
>      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
>  
> @@ -2578,11 +2579,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
>          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
>          {
>              if ( (x & PGT_type_mask) == PGT_writable_page )
> -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
> +                ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
>              else if ( type == PGT_writable_page )
> -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> -                               page_to_mfn(page),
> -                               IOMMUF_readable|IOMMUF_writable);
> +                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> +                                     page_to_mfn(page),
> +                                     IOMMUF_readable|IOMMUF_writable);
>          }
>      }
>  
> @@ -2599,6 +2600,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
>      if ( (x & PGT_partial) && !(nx & PGT_partial) )
>          put_page(page);
>  
> +    if ( !rc )
> +        rc = ret;

I know I've seen this a couple of time already, but with the special
purpose of "ret" I really wonder whether a more specific name
wouldn't be warranted - e.g. "iommu_rc" or "iommu_ret".

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -658,7 +658,7 @@ bool_t ept_handle_misconfig(uint64_t gpa)
>   *
>   * Returns: 0 for success, -errno for failure
>   */
> -static int
> +static int __must_check
>  ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 

Now adding the annotation here, without also (first) adding it to
the p2m method which this gets used for (and which is this function's
sole purpose), is not useful at all. Please remember - we don't want
this annotation because it looks good, but in order to make sure
errors don't get wrongly ignored. That's why, from the beginning, I
have been telling you that adding such annotations to other than
new code means doing it top down (which you clearly don't do here).

> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -171,20 +171,33 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>      {
>          struct page_info *page;
>          unsigned int i = 0;
> +        int rc = 0;
> +
>          page_list_for_each ( page, &d->page_list )
>          {
>              unsigned long mfn = page_to_mfn(page);
>              unsigned long gfn = mfn_to_gmfn(d, mfn);
>              unsigned int mapping = IOMMUF_readable;
> +            int ret;
>  
>              if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>                   ((page->u.inuse.type_info & PGT_type_mask)
>                    == PGT_writable_page) )
>                  mapping |= IOMMUF_writable;
> -            hd->platform_ops->map_page(d, gfn, mfn, mapping);
> +
> +            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
> +
> +            if ( unlikely(ret) )
> +                rc = ret;

This should be good enough, but I think it would be better if, just
like elsewhere, you returned the first error instead of the last one.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-23 13:30   ` Jan Beulich
@ 2016-05-23 15:22     ` Xu, Quan
  2016-05-23 15:43       ` Jan Beulich
  2016-05-26  6:20     ` Xu, Quan
  1 sibling, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-23 15:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, Andrew Cooper, dario.faggioli,
	xen-devel, Wu, Feng

On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >      }
> >  }
> >
> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > -        int dma_old_pte_present, unsigned int page_count)
> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > +                                     bool_t dma_old_pte_present,
> > +                                     unsigned int page_count)
> 
> I realize you say so in the overview mail, but the continuing lack of
> __must_check here causes review trouble again. And I have a hard time seeing
> how adding these annotations right away would "disrupt the order", as long
> as the series is properly ordered / broken up.
> 

If I add __must_check annotations here right now, e.g. 

-static void intel_iommu_iotlb_flush()
+static int __must_check iommu_flush_iotlb_pages()

... 

@@ -179,8 +179,9 @@ struct iommu_ops {
-    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
...
}

Should belong  to here too. 




> > @@ -579,23 +581,28 @@ static void __intel_iommu_iotlb_flush(struct
> > domain *d, unsigned long gfn,
> >
> >          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> >          iommu_domid= domain_iommu_domid(d, iommu);
> > +
> >          if ( iommu_domid == -1 )
> 
> I appreciate you adding blank lines where needed, but this one looks spurious.
> 
> > @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
> >      spin_unlock(&iommu->lock);
> >
> >      /* Context entry was previously non-present (with domid 0). */
> > -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> > -                                    DMA_CCMD_MASK_NOBIT, 1) )
> > -        iommu_flush_write_buffer(iommu);
> > -    else
> > +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) |
> > + devfn,
> 
> If you already touch such code, I'd appreciate if you did away with the open
> coding of pre-canned macros or inline functions (PCI_BDF2() in this case).

I will enhance it in v6.

> 
> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> > +
> > +    /*
> > +     * The current logic for rc returns:
> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +     *   - zero      success.
> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> > +     *               effort basis.
> > +     */
> > +    if ( rc <= 0 )
> >      {
> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> > +
> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> 
> If rc was negative before this call, you may end up returning success without
> having been successful. Furthermore I think it was you who last time round
> reminded me that
> iommu_flush_iotlb_dsi() can also return 1, which you don't take care of.
> 

Yes, the iommu_flush_iotlb_dsi() can also return 1.
Look at the call tree, at the beginning of flush_context_qi()/flush_iotlb_qi(), or flush_context_reg()/flush_iotlb_reg()..

If rc was negative when we call iommu_flush_context_device(), it is impossible to return 1 for iommu_flush_iotlb_dsi().
 
IMO, furthermore, this should not belong  to comment.

> > @@ -1522,6 +1544,7 @@ int domain_context_unmap_one(
> >      iommu_flush_cache_entry(context, sizeof(struct context_entry));
> >
> >      iommu_domid= domain_iommu_domid(domain, iommu);
> > +
> >      if ( iommu_domid == -1 )
> 
> Seemingly stray addition of blank line again (more such below). And the code
> below has the same issue as that above.
> 
I will enhance it in v6.

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-23 15:22     ` Xu, Quan
@ 2016-05-23 15:43       ` Jan Beulich
  2016-05-25  8:04         ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-23 15:43 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, Andrew Cooper, dario.faggioli,
	xen-devel, Feng Wu

>>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
> On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> > --- a/xen/drivers/passthrough/vtd/iommu.c
>> > +++ b/xen/drivers/passthrough/vtd/iommu.c
>> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
>> >      }
>> >  }
>> >
>> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> > -        int dma_old_pte_present, unsigned int page_count)
>> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> > +                                     bool_t dma_old_pte_present,
>> > +                                     unsigned int page_count)
>> 
>> I realize you say so in the overview mail, but the continuing lack of
>> __must_check here causes review trouble again. And I have a hard time seeing
>> how adding these annotations right away would "disrupt the order", as long
>> as the series is properly ordered / broken up.
>> 
> 
> If I add __must_check annotations here right now, e.g. 
> 
> -static void intel_iommu_iotlb_flush()
> +static int __must_check iommu_flush_iotlb_pages()
> 
> ... 
> 
> @@ -179,8 +179,9 @@ struct iommu_ops {
> -    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int 
> page_count);
> +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn, 
> unsigned int page_count);
> ...
> }
> 
> Should belong  to here too. 

Correct. And where's the problem?

>> > +                                    DMA_CCMD_MASK_NOBIT, 1);
>> > +
>> > +    /*
>> > +     * The current logic for rc returns:
>> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
>> > +     *   - zero      success.
>> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
>> > +     *               effort basis.
>> > +     */
>> > +    if ( rc <= 0 )
>> >      {
>> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
>> > +
>> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
>> 
>> If rc was negative before this call, you may end up returning success 
> without
>> having been successful. Furthermore I think it was you who last time round
>> reminded me that
>> iommu_flush_iotlb_dsi() can also return 1, which you don't take care of.
>> 
> 
> Yes, the iommu_flush_iotlb_dsi() can also return 1.
> Look at the call tree, at the beginning of 
> flush_context_qi()/flush_iotlb_qi(), or 
> flush_context_reg()/flush_iotlb_reg()..
> 
> If rc was negative when we call iommu_flush_context_device(), it is 
> impossible to return 1 for iommu_flush_iotlb_dsi().

This is far from obvious, so please add a respective ASSERT() if
you want to rely on that.

> IMO, furthermore, this should not belong  to comment.

???

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping.
  2016-05-18  8:08 ` [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping Quan Xu
@ 2016-05-23 15:53   ` Jan Beulich
  2016-05-24  9:01     ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-23 15:53 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> Propagate the IOMMU Device-TLB flush error up to IOMMU mapping.

Btw - there's little reason to repeat the title here.

> @@ -295,12 +297,23 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
>               * a pfn_valid() check would seem desirable here.
>               */
>              if ( mfn_valid(pfn) )
> -                amd_iommu_map_page(d, pfn, pfn, 
> -                                   IOMMUF_readable|IOMMUF_writable);
> +            {
> +                int ret;
> +
> +                ret = amd_iommu_map_page(d, pfn, pfn,
> +                                         IOMMUF_readable|IOMMUF_writable);
> +
> +                if ( unlikely(ret) )
> +                    rc = ret;
> +            }

So you do the adjustment needed to add __must_check to
amd_iommu_map_page(), but you don't actually add the annotation.
Is there a reason for this?

And of course the comment to an earlier patch applies regarding
which error to return.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-18  8:08 ` [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones) Quan Xu
@ 2016-05-23 16:05   ` Jan Beulich
  2016-05-24  1:16     ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-23 16:05 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Julien Grall, Stefano Stabellini, dario.faggioli, xen-devel

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -633,9 +633,9 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>      return rc;
>  }
>  
> -static int xenmem_add_to_physmap(struct domain *d,
> -                                 struct xen_add_to_physmap *xatp,
> -                                 unsigned int start)
> +static int __must_check xenmem_add_to_physmap(struct domain *d,
> +                                              struct xen_add_to_physmap *xatp,
> +                                              unsigned int start)
>  {

As before - either you do this adding of annotations completely, or
you stop at the IOMMU / MM boundary. And note that this
addition was not covered by the R-b tag I had offered during the
v4 review.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-23 16:05   ` Jan Beulich
@ 2016-05-24  1:16     ` Xu, Quan
  2016-05-24  7:01       ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  1:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Julien Grall, Stefano Stabellini, dario.faggioli, xen-devel

On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -633,9 +633,9 @@ static long
> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
> arg)
> >      return rc;
> >  }
> >
> > -static int xenmem_add_to_physmap(struct domain *d,
> > -                                 struct xen_add_to_physmap *xatp,
> > -                                 unsigned int start)
> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
> > +                                              struct xen_add_to_physmap *xatp,
> > +                                              unsigned int start)
> >  {
> 
> As before - either you do this adding of annotations completely, or you stop at
> the IOMMU / MM boundary. 

I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is obvious, but what's the definition of MM boundary? I thought this is at MM boundary.

Quan

> And note that this addition was not covered by
> the R-b tag I had offered during the
> v4 review.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones).
  2016-05-18  8:08 ` [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones) Quan Xu
@ 2016-05-24  3:42   ` Xu, Quan
  2016-05-24  7:52     ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  3:42 UTC (permalink / raw)
  To: xen-devel, Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	Julien Grall, Xu, Quan

On May 18, 2016 4:08 PM, Quan Xu <quan.xu@intel.com> wrote:
> Propagate the IOMMU Device-TLB flush error up to the
> iommu_iotlb_flush{,_all}.
> 
> This patch fixes the leaf ones.
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> -static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> unsigned int page_count)
> +static int __must_check iommu_flush_iotlb_pages(struct domain *d,
> +                                                unsigned long gfn,
> +                                                unsigned int
> +page_count)
>  {
> -    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
> +    return iommu_flush_iotlb(d, gfn, 1, page_count);
>  }
> 
> -static void intel_iommu_iotlb_flush_all(struct domain *d)
> +static int iommu_flush_iotlb_all(struct domain *d)

Sorry,  I should add __must_check annotation here too.

Quan

>  {
> -    __intel_iommu_iotlb_flush(d, INVALID_GFN, 0, 0);
> +    return iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
>  }
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-24  1:16     ` Xu, Quan
@ 2016-05-24  7:01       ` Jan Beulich
  2016-05-24  7:08         ` Tian, Kevin
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  7:01 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Julien Grall, StefanoStabellini, dario.faggioli, xen-devel

>>> On 24.05.16 at 03:16, <quan.xu@intel.com> wrote:
> On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> > --- a/xen/common/memory.c
>> > +++ b/xen/common/memory.c
>> > @@ -633,9 +633,9 @@ static long
>> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
>> arg)
>> >      return rc;
>> >  }
>> >
>> > -static int xenmem_add_to_physmap(struct domain *d,
>> > -                                 struct xen_add_to_physmap *xatp,
>> > -                                 unsigned int start)
>> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
>> > +                                              struct xen_add_to_physmap 
> *xatp,
>> > +                                              unsigned int start)
>> >  {
>> 
>> As before - either you do this adding of annotations completely, or you stop 
> at
>> the IOMMU / MM boundary. 
> 
> I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is obvious, 
> but what's the definition of MM boundary? I thought this is at MM boundary.

Not sure what you mean to understand. The IOMMU / MM boundary
is the boundary between those two components, there's no talk of
two boundaries here, and hence the question is unclear to me.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-24  7:01       ` Jan Beulich
@ 2016-05-24  7:08         ` Tian, Kevin
  2016-05-24  8:11           ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Tian, Kevin @ 2016-05-24  7:08 UTC (permalink / raw)
  To: Jan Beulich, Xu, Quan
  Cc: Julien Grall, StefanoStabellini, dario.faggioli, xen-devel

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, May 24, 2016 3:02 PM
> 
> >>> On 24.05.16 at 03:16, <quan.xu@intel.com> wrote:
> > On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> > --- a/xen/common/memory.c
> >> > +++ b/xen/common/memory.c
> >> > @@ -633,9 +633,9 @@ static long
> >> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
> >> arg)
> >> >      return rc;
> >> >  }
> >> >
> >> > -static int xenmem_add_to_physmap(struct domain *d,
> >> > -                                 struct xen_add_to_physmap *xatp,
> >> > -                                 unsigned int start)
> >> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
> >> > +                                              struct xen_add_to_physmap
> > *xatp,
> >> > +                                              unsigned int start)
> >> >  {
> >>
> >> As before - either you do this adding of annotations completely, or you stop
> > at
> >> the IOMMU / MM boundary.
> >
> > I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is obvious,
> > but what's the definition of MM boundary? I thought this is at MM boundary.
> 
> Not sure what you mean to understand. The IOMMU / MM boundary
> is the boundary between those two components, there's no talk of
> two boundaries here, and hence the question is unclear to me.
> 
> Jan

Hi, Quan,

A file-based map about IOMMU/MM boundary is under arch/x86/mm. You
need focus on low-level interaction between IOMMU and MM components,
i.e. when some state change in MM code (mostly p2m change) needs to 
conduct IOMMU operations.

Above xenmem is much higher level, which will be routed to various MM 
operations internally so you don't need bother here.

Thanks
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones).
  2016-05-24  3:42   ` Xu, Quan
@ 2016-05-24  7:52     ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  7:52 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall

>>> On 24.05.16 at 05:42, <quan.xu@intel.com> wrote:
> On May 18, 2016 4:08 PM, Quan Xu <quan.xu@intel.com> wrote:
>> Propagate the IOMMU Device-TLB flush error up to the
>> iommu_iotlb_flush{,_all}.
>> 
>> This patch fixes the leaf ones.
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> -static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
>> unsigned int page_count)
>> +static int __must_check iommu_flush_iotlb_pages(struct domain *d,
>> +                                                unsigned long gfn,
>> +                                                unsigned int
>> +page_count)
>>  {
>> -    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
>> +    return iommu_flush_iotlb(d, gfn, 1, page_count);
>>  }
>> 
>> -static void intel_iommu_iotlb_flush_all(struct domain *d)
>> +static int iommu_flush_iotlb_all(struct domain *d)
> 
> Sorry,  I should add __must_check annotation here too.

And with that adjustment
Reviewed-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 08/10] vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update.
  2016-05-18  8:08 ` [PATCH v5 08/10] vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update Quan Xu
@ 2016-05-24  7:58   ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  7:58 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Feng Wu, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Jun Nakajima

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> Propagate the IOMMU Device-TLB flush error up to the ept_set_entry(),
> when VT-d shares EPT page table.
> 
> Signed-off-by: Quan Xu <quan.xu@intel.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-24  7:08         ` Tian, Kevin
@ 2016-05-24  8:11           ` Xu, Quan
  2016-05-24  8:34             ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  8:11 UTC (permalink / raw)
  To: Tian, Kevin, Jan Beulich
  Cc: Julien Grall, StefanoStabellini, dario.faggioli, xen-devel

On May 24, 2016 3:09 PM, Tian, Kevin <kevin.tian@intel.com> wrote:
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: Tuesday, May 24, 2016 3:02 PM
> >
> > >>> On 24.05.16 at 03:16, <quan.xu@intel.com> wrote:
> > > On May 24, 2016 12:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
> > >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > >> > --- a/xen/common/memory.c
> > >> > +++ b/xen/common/memory.c
> > >> > @@ -633,9 +633,9 @@ static long
> > >>
> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t)
> > >> arg)
> > >> >      return rc;
> > >> >  }
> > >> >
> > >> > -static int xenmem_add_to_physmap(struct domain *d,
> > >> > -                                 struct xen_add_to_physmap *xatp,
> > >> > -                                 unsigned int start)
> > >> > +static int __must_check xenmem_add_to_physmap(struct domain *d,
> > >> > +                                              struct
> > >> > +xen_add_to_physmap
> > > *xatp,
> > >> > +                                              unsigned int
> > >> > + start)
> > >> >  {
> > >>
> > >> As before - either you do this adding of annotations completely, or
> > >> you stop
> > > at
> > >> the IOMMU / MM boundary.
> > >
> > > I prefer to stop at the IOMMU / MM boundary. The IOMMU boundary is
> > > obvious, but what's the definition of MM boundary? I thought this is at
> MM boundary.
> >
> > Not sure what you mean to understand. The IOMMU / MM boundary is the
> > boundary between those two components, there's no talk of two
> > boundaries here, and hence the question is unclear to me.
> >
> > Jan
> 
> Hi, Quan,
> 
> A file-based map about IOMMU/MM boundary is under arch/x86/mm. You
> need focus on low-level interaction between IOMMU and MM components,
> i.e. when some state change in MM code (mostly p2m change) needs to
> conduct IOMMU operations.
> 
> Above xenmem is much higher level, which will be routed to various MM
> operations internally so you don't need bother here.

Jan / Kevin,

I thought the IOMMU / MM boundary is the MM functions (high level callers) which call iommu_* interfaces (such as,  iommu_map_page / iommu_unmap_page / iommu_iotlb_flush ...).
For this case, the xenmem_add_to_physmap() indeed calls iommu_iotlb_flush(),  but xenmem_add_to_physmap() may be hypervisor interface, instead of MM interface.

If I drop this __must_check and fix patch 3 / patch 5, then I think __must_check would not be a block issue.

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
  2016-05-18  8:08 ` [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending Quan Xu
@ 2016-05-24  8:21   ` Jan Beulich
  2016-05-25  6:41     ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  8:21 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -43,36 +43,68 @@ struct acpi_sleep_info acpi_sinfo;
>  
>  void do_suspend_lowlevel(void);
>  
> +enum dev_power_saved
> +{
> +    SAVED_NONE,    /* None of device power saved */
> +    SAVED_CONSOLE, /* Device power of console saved */
> +    SAVED_TIME,    /* Device power of time saved */
> +    SAVED_I8259A,  /* Device power of I8259A saved */
> +    SAVED_IOAPIC,  /* Device power of IOAPIC saved */
> +    SAVED_IOMMU,   /* Device power of IOMMU saved */
> +    SAVED_LAPIC,   /* Device power of LAPIC saved */
> +};

Please avoid comments saying nothing substantially different than
the code they accompany, and also not helping the reader to
understand the commented code.

>  static int device_power_down(void)
>  {
> -    console_suspend();
> +    if ( console_suspend() )
> +        return SAVED_CONSOLE;

I said so on the previous round, and I need to repeat it now: If
console_suspend() fails, you saved _nothing_.

> -    time_suspend();
> +    if ( time_suspend() )
> +        return SAVED_TIME;
>  
> -    i8259A_suspend();
> +    if ( i8259A_suspend() )
> +        return SAVED_I8259A;
>  
> +    /* ioapic_suspend cannot fail */
>      ioapic_suspend();
>  
> -    iommu_suspend();
> +    if ( iommu_suspend() )
> +        return SAVED_IOMMU;
>  
> -    lapic_suspend();
> +    if ( lapic_suspend() )
> +        return SAVED_LAPIC;
>  
>      return 0;

And this silently means SAVED_NONE, whereas here you saved
everything. Yielding clearly bogus code ...

> -static void device_power_up(void)
> +static void device_power_up(enum dev_power_saved saved)
>  {
> -    lapic_resume();
> -
> -    iommu_resume();
> -
> -    ioapic_resume();
> -
> -    i8259A_resume();
> -
> -    time_resume();
> -
> -    console_resume();
> +    switch ( saved )
> +    {
> +    case SAVED_NONE:
> +        lapic_resume();

... here and ...

> @@ -196,7 +232,7 @@ static int enter_state(u32 state)
>      write_cr4(cr4 & ~X86_CR4_MCE);
>      write_efer(read_efer());
>  
> -    device_power_up();
> +    device_power_up(SAVED_NONE);

... here.

> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -541,20 +541,28 @@ static int iommu_flush_iotlb_psi(
>      return status;
>  }
>  
> -static void iommu_flush_all(void)
> +static int __must_check iommu_flush_all(void)
>  {
>      struct acpi_drhd_unit *drhd;
>      struct iommu *iommu;
>      int flush_dev_iotlb;
> +    int rc = 0;
>  
>      flush_all_cache();
>      for_each_drhd_unit ( drhd )
>      {
> +        int ret;
> +
>          iommu = drhd->iommu;
>          iommu_flush_context_global(iommu, 0);
>          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +        ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> +
> +        if ( unlikely(ret) )
> +            rc = ret;

Same as for earlier patches - "if ( unlikely(!rc) )" please.

Also, having come here - did I miss iommu_flush_iotlb_global() gaining
a __must_check annotation somewhere? And the struct iommu_flush
pointers and handlers? And, by analogy, iommu_flush_context_*()?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 10/10] vt-d: propagate error up to ME phantom function mapping and unmapping
  2016-05-18  8:08 ` [PATCH v5 10/10] vt-d: propagate error up to ME phantom function mapping and unmapping Quan Xu
@ 2016-05-24  8:28   ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  8:28 UTC (permalink / raw)
  To: Quan Xu; +Cc: dario.faggioli, Feng Wu, Kevin Tian, xen-devel

>>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> -void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map)
> +int me_wifi_quirk(struct domain *domain,
> +                  u8 bus, u8 devfn, int map)

Please don't needlessly split the function header onto two lines.

With this adjusted,
Reviewed-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-24  8:11           ` Xu, Quan
@ 2016-05-24  8:34             ` Jan Beulich
  2016-05-24  8:44               ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  8:34 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Julien Grall, StefanoStabellini, dario.faggioli, xen-devel

>>> On 24.05.16 at 10:11, <quan.xu@intel.com> wrote:
> I thought the IOMMU / MM boundary is the MM functions (high level callers) 
> which call iommu_* interfaces (such as,  iommu_map_page / iommu_unmap_page / 
> iommu_iotlb_flush ...).

Exactly - the boundary is _in_ those MM functions, at the points where
they call IOMMU ones.

> For this case, the xenmem_add_to_physmap() indeed calls iommu_iotlb_flush(), 
>  but xenmem_add_to_physmap() may be hypervisor interface, instead of MM 
> interface.
> 
> If I drop this __must_check and fix patch 3 / patch 5, then I think 
> __must_check would not be a block issue.

Not sure what "a block issue" is supposed to mean here, but indeed
if you drop the annotations from non-IOMMU functions (unless, as
said, you mean to also add them further up the call trees), then I
think things should be coming close.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).
  2016-05-24  8:34             ` Jan Beulich
@ 2016-05-24  8:44               ` Xu, Quan
  0 siblings, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  8:44 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Julien Grall, StefanoStabellini, dario.faggioli, xen-devel

On May 24, 2016 4:34 PM, Jan Beulich <JBeulich@suse.com> wrote:
> but indeed if you
> drop the annotations from non-IOMMU functions (unless, as said, you mean
> to also add them further up the call trees), then I think things should be
> coming close.
> 

I'll drop the annotations from non-IOMMU functions.
Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping.
  2016-05-23 15:53   ` Jan Beulich
@ 2016-05-24  9:01     ` Xu, Quan
  2016-05-24  9:09       ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  9:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

On May 23, 2016 11:53 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > Propagate the IOMMU Device-TLB flush error up to IOMMU mapping.
> 
> Btw - there's little reason to repeat the title here.
> 

I'll drop it.

Can I apply it to  other patches?


> > @@ -295,12 +297,23 @@ static void __hwdom_init
> amd_iommu_hwdom_init(struct domain *d)
> >               * a pfn_valid() check would seem desirable here.
> >               */
> >              if ( mfn_valid(pfn) )
> > -                amd_iommu_map_page(d, pfn, pfn,
> > -                                   IOMMUF_readable|IOMMUF_writable);
> > +            {
> > +                int ret;
> > +
> > +                ret = amd_iommu_map_page(d, pfn, pfn,
> > +
> > + IOMMUF_readable|IOMMUF_writable);
> > +
> > +                if ( unlikely(ret) )
> > +                    rc = ret;
> > +            }
> 
> So you do the adjustment needed to add __must_check to
> amd_iommu_map_page(), but you don't actually add the annotation.
> Is there a reason for this?
>

Sorry, I missed it.
I really need a __must_check to amd_iommu_map_page() in include/asm-x86/hvm/svm/amd-iommu-proto.h.


 
> And of course the comment to an earlier patch applies regarding which error
> to return.

I'll apply it to all of my patch set.

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping.
  2016-05-24  9:01     ` Xu, Quan
@ 2016-05-24  9:09       ` Jan Beulich
  2016-05-24  9:14         ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-24  9:09 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

>>> On 24.05.16 at 11:01, <quan.xu@intel.com> wrote:
> On May 23, 2016 11:53 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> > Propagate the IOMMU Device-TLB flush error up to IOMMU mapping.
>> 
>> Btw - there's little reason to repeat the title here.
>> 
> 
> I'll drop it.
> 
> Can I apply it to  other patches?

You mean dropping a description that simply repeats the title - yes,
of course. Even better would of course be, where meaningful, if
you added actual descriptions.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures
  2016-05-23 13:40   ` Jan Beulich
@ 2016-05-24  9:09     ` Xu, Quan
  0 siblings, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  9:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: dario.faggioli, Tian, Kevin, xen-devel

On May 23, 2016 9:41 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > No spamming can occur.
> 
> May I suggest "No spamming of the log can occur", to set some context for
> what follows?
> 

Make sense.

> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -240,21 +240,49 @@ int iommu_map_page(struct domain *d,
> unsigned long gfn, unsigned long mfn,
> >                     unsigned int flags)  {
> >      const struct domain_iommu *hd = dom_iommu(d);
> > +    int rc;
> >
> >      if ( !iommu_enabled || !hd->platform_ops )
> >          return 0;
> >
> > -    return hd->platform_ops->map_page(d, gfn, mfn, flags);
> > +    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
> > +
> > +    if ( unlikely(rc) )
> 
> A more general word on the use of blank lines: I think their use is well advised
> to separate logically (mostly) independent steps. In cases like this, where you
> check the return value of a function, the two things really belong together
> imo. Using too little blank lines negatively affects readability, but using too
> many easily leads to not seeing enough context anymore when looking at
> some code fragment.
> 

I will also apply it to other patches.

> > +    {
> > +        if ( !d->is_shutting_down && printk_ratelimit() )
> > +            printk(XENLOG_ERR
> > +                   "d%d: IOMMU mapping gfn %#lx mfn %#lx failed %d.",
> 
> I really dislike having to repeat this yet another time: No stops in log messages
> please. Also to the reader of the log it would be unclear what the number at
> the end represents. May I suggest
> 
>             printk(XENLOG_ERR
>                    "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d",
>                    d->domain_id, gfn, mfn, rc);
> 
> (arguably one might then also remove the words "gfn" and "mfn").
> 

To me, we are better not to remove 'gfn' / 'mfn', but I really need to add a '\n'.. then

printk(XENLOG_ERR
       "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
       d->domain_id, gfn, mfn, rc);


Quan

> Apart from these cosmetic issues I think this is fine now.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping.
  2016-05-24  9:09       ` Jan Beulich
@ 2016-05-24  9:14         ` Xu, Quan
  0 siblings, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-24  9:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, dario.faggioli,
	xen-devel, Julien Grall, Suravee Suthikulpanit

On May 24, 2016 5:09 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 24.05.16 at 11:01, <quan.xu@intel.com> wrote:
> > On May 23, 2016 11:53 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> > Propagate the IOMMU Device-TLB flush error up to IOMMU mapping.
> >>
> >> Btw - there's little reason to repeat the title here.
> >>
> >
> > I'll drop it.
> >
> > Can I apply it to  other patches?
> 
> You mean dropping a description that simply repeats the title - yes, of course.

Yes, I think so too.

> Even better would of course be, where meaningful, if you added actual
> descriptions.
> 

I am afraid I would make things worse, so I will just simply drop  the description that simply repeats the title.

Quan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
  2016-05-24  8:21   ` Jan Beulich
@ 2016-05-25  6:41     ` Xu, Quan
  2016-05-25  8:06       ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-25  6:41 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

On May 24, 2016 4:22 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/arch/x86/acpi/power.c
> > +++ b/xen/arch/x86/acpi/power.c
> > @@ -43,36 +43,68 @@ struct acpi_sleep_info acpi_sinfo;
> >
> >  void do_suspend_lowlevel(void);
> >
> > +enum dev_power_saved
> > +{
> > +    SAVED_NONE,    /* None of device power saved */
> > +    SAVED_CONSOLE, /* Device power of console saved */
> > +    SAVED_TIME,    /* Device power of time saved */
> > +    SAVED_I8259A,  /* Device power of I8259A saved */
> > +    SAVED_IOAPIC,  /* Device power of IOAPIC saved */
> > +    SAVED_IOMMU,   /* Device power of IOMMU saved */
> > +    SAVED_LAPIC,   /* Device power of LAPIC saved */
> > +};
> 
> Please avoid comments saying nothing substantially different than the code
> they accompany, and also not helping the reader to understand the
> commented code.
>

I'll drop these comments in v6.

 
> >  static int device_power_down(void)
> >  {
> > -    console_suspend();
> > +    if ( console_suspend() )
> > +        return SAVED_CONSOLE;
> 
> I said so on the previous round, and I need to repeat it now: If
> console_suspend() fails, you saved _nothing_.
> 

Ah, we may have some different views for SAVED_*, which I mean has been saved and we are no need to resume.

e.g. if console_suspend() fails, I did return SAVED_CONSOLE, reading my patch again, and I really resume nothing at all.

device_power_up()
{
     ...
    +    case SAVED_CONSOLE:
    +        break;
    ...
}


I know we can also propagate SAVED_NONE for console_suspend() failure, then we need adjust device_power_up() relevantly.


> > -    time_suspend();
> > +    if ( time_suspend() )
> > +        return SAVED_TIME;
> >
> > -    i8259A_suspend();
> > +    if ( i8259A_suspend() )
> > +        return SAVED_I8259A;
> >
> > +    /* ioapic_suspend cannot fail */
> >      ioapic_suspend();
> >
> > -    iommu_suspend();
> > +    if ( iommu_suspend() )
> > +        return SAVED_IOMMU;
> >
> > -    lapic_suspend();
> > +    if ( lapic_suspend() )
> > +        return SAVED_LAPIC;
> >
> >      return 0;
> 
> And this silently means SAVED_NONE, whereas here you saved everything.
> Yielding clearly bogus code ...
>


 '0' is just on success here.  Look at the condition where we call device_power_up():

+        if ( error > 0 )
+            device_power_up(error);

Then, it is not bogus code.
 

> > -static void device_power_up(void)
> > +static void device_power_up(enum dev_power_saved saved)
> >  {
> > -    lapic_resume();
> > -
> > -    iommu_resume();
> > -
> > -    ioapic_resume();
> > -
> > -    i8259A_resume();
> > -
> > -    time_resume();
> > -
> > -    console_resume();
> > +    switch ( saved )
> > +    {
> > +    case SAVED_NONE:
> > +        lapic_resume();
> 
> ... here and ...
> 
> > @@ -196,7 +232,7 @@ static int enter_state(u32 state)
> >      write_cr4(cr4 & ~X86_CR4_MCE);
> >      write_efer(read_efer());
> >
> > -    device_power_up();
> > +    device_power_up(SAVED_NONE);
> 
> ... here.

Then we need to call all of *_resume(). I think the logic is correct, but the SAVED_*  may be not what you suggested.

> 
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -541,20 +541,28 @@ static int iommu_flush_iotlb_psi(
> >      return status;
> >  }
> >
> > -static void iommu_flush_all(void)
> > +static int __must_check iommu_flush_all(void)
> >  {
> >      struct acpi_drhd_unit *drhd;
> >      struct iommu *iommu;
> >      int flush_dev_iotlb;
> > +    int rc = 0;
> >
> >      flush_all_cache();
> >      for_each_drhd_unit ( drhd )
> >      {
> > +        int ret;
> > +
> >          iommu = drhd->iommu;
> >          iommu_flush_context_global(iommu, 0);
> >          flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> > +        ret = iommu_flush_iotlb_global(iommu, 0, flush_dev_iotlb);
> > +
> > +        if ( unlikely(ret) )
> > +            rc = ret;
> 
> Same as for earlier patches - "if ( unlikely(!rc) )" please.
> 


Yes,


> Also, having come here - did I miss iommu_flush_iotlb_global() gaining a
> __must_check annotation somewhere? 

I will add __must_check annotation to iommu_flush_iotlb_global().

> And the struct iommu_flush pointers
> and handlers? And, by analogy, iommu_flush_context_*()?
> 

I am better only add __must_check annotation to flush->iotlb and handlers,
but leaving flush->context/handers and  iommu_flush_context_*() as are in current patch set..
the coming patch set will fix them.

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-23 15:43       ` Jan Beulich
@ 2016-05-25  8:04         ` Xu, Quan
  2016-05-25  8:29           ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-25  8:04 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, Andrew Cooper, dario.faggioli,
	xen-devel, Wu, Feng

On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >> >      }
> >> >  }
> >> >
> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long
> gfn,
> >> > -        int dma_old_pte_present, unsigned int page_count)
> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long
> gfn,
> >> > +                                     bool_t dma_old_pte_present,
> >> > +                                     unsigned int page_count)
> >>
> >> I realize you say so in the overview mail, but the continuing lack of
> >> __must_check here causes review trouble again. And I have a hard time
> >> seeing how adding these annotations right away would "disrupt the
> >> order", as long as the series is properly ordered / broken up.
> >>
> >
> > If I add __must_check annotations here right now, e.g.
> >
> > -static void intel_iommu_iotlb_flush()
> > +static int __must_check iommu_flush_iotlb_pages()
> >
> > ...
> >
> > @@ -179,8 +179,9 @@ struct iommu_ops {
> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int
> > page_count);
> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long
> > + gfn,
> > unsigned int page_count);
> > ...
> > }
> >
> > Should belong  to here too.
> 
> Correct. And where's the problem?
>

IMO it is not a big deal..

I think this makes this patch 1 fat.. why not focus on the positive propagation value from IOMMU flush interfaces in this patch.
If we are clear I will add annotation and rename them in another patches, it is acceptable to me.

Furthermore, we also need to add (from patch 4):

-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
{
...
-        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
...
}

In this patch.

 
> >> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> >> > +
> >> > +    /*
> >> > +     * The current logic for rc returns:
> >> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> >> > +     *   - zero      success.
> >> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> >> > +     *               effort basis.
> >> > +     */
> >> > +    if ( rc <= 0 )
> >> >      {
> >> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> >> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> >> > +
> >> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> >>
> >> If rc was negative before this call, you may end up returning success
> > without
> >> having been successful. Furthermore I think it was you who last time
> >> round reminded me that
> >> iommu_flush_iotlb_dsi() can also return 1, which you don't take care of.
> >>
> >
> > Yes, the iommu_flush_iotlb_dsi() can also return 1.
> > Look at the call tree, at the beginning of
> > flush_context_qi()/flush_iotlb_qi(), or
> > flush_context_reg()/flush_iotlb_reg()..
> >
> > If rc was negative when we call iommu_flush_context_device(), it is
> > impossible to return 1 for iommu_flush_iotlb_dsi().
> 
> This is far from obvious, so please add a respective ASSERT() if you want to
> rely on that.
> 
> > IMO, furthermore, this should not belong  to comment.
> 
> ???

Think twice, I will add comments and a respective ASSERT()..

Quan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
  2016-05-25  6:41     ` Xu, Quan
@ 2016-05-25  8:06       ` Jan Beulich
  2016-05-25 15:13         ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-25  8:06 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Stefano Stabellini, Feng Wu, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

>>> On 25.05.16 at 08:41, <quan.xu@intel.com> wrote:
> On May 24, 2016 4:22 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> >  static int device_power_down(void)
>> >  {
>> > -    console_suspend();
>> > +    if ( console_suspend() )
>> > +        return SAVED_CONSOLE;
>> 
>> I said so on the previous round, and I need to repeat it now: If
>> console_suspend() fails, you saved _nothing_.
>> 
> 
> Ah, we may have some different views for SAVED_*, which I mean has been 
> saved and we are no need to resume.
> 
> e.g. if console_suspend() fails, I did return SAVED_CONSOLE, reading my 
> patch again, and I really resume nothing at all.
> 
> device_power_up()
> {
>      ...
>     +    case SAVED_CONSOLE:
>     +        break;
>     ...
> }
> 
> 
> I know we can also propagate SAVED_NONE for console_suspend() failure, then 
> we need adjust device_power_up() relevantly.

My main point is that the names of these enumerators should reflect
their purpose. If one reads "SAVED_CONSOLE", then (s)he should be
allowed this to mean that console state was saved (and hence needs
to be restored upon error / resume).

>> > -    time_suspend();
>> > +    if ( time_suspend() )
>> > +        return SAVED_TIME;
>> >
>> > -    i8259A_suspend();
>> > +    if ( i8259A_suspend() )
>> > +        return SAVED_I8259A;
>> >
>> > +    /* ioapic_suspend cannot fail */
>> >      ioapic_suspend();
>> >
>> > -    iommu_suspend();
>> > +    if ( iommu_suspend() )
>> > +        return SAVED_IOMMU;
>> >
>> > -    lapic_suspend();
>> > +    if ( lapic_suspend() )
>> > +        return SAVED_LAPIC;
>> >
>> >      return 0;
>> 
>> And this silently means SAVED_NONE, whereas here you saved everything.
>> Yielding clearly bogus code ...
>>
> 
> 
>  '0' is just on success here.  Look at the condition where we call 
> device_power_up():
> 
> +        if ( error > 0 )
> +            device_power_up(error);
> 
> Then, it is not bogus code.

See above: Zero should not mean both "nothing saved" and "saved
everything".

>> Also, having come here - did I miss iommu_flush_iotlb_global() gaining a
>> __must_check annotation somewhere? 
> 
> I will add __must_check annotation to iommu_flush_iotlb_global().
> 
>> And the struct iommu_flush pointers
>> and handlers? And, by analogy, iommu_flush_context_*()?
> 
> I am better only add __must_check annotation to flush->iotlb and handlers,
> but leaving flush->context/handers and  iommu_flush_context_*() as are in 
> current patch set..
> the coming patch set will fix them.

I don't follow the logic behind this: The purpose of this series is to
make sure flushing errors get properly bubbled up, which includes
adding __must_check annotations. I'm not saying this needs to
happen in this patch, but it should happen in this series (and please
following the same basic model: A caller or a __must_check function
should either already be __must_check, or should become so at the
same time).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-25  8:04         ` Xu, Quan
@ 2016-05-25  8:29           ` Jan Beulich
  2016-05-25  8:53             ` Xu, Quan
  2016-05-26 10:37             ` Xu, Quan
  0 siblings, 2 replies; 50+ messages in thread
From: Jan Beulich @ 2016-05-25  8:29 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, AndrewCooper, dario.faggioli, xen-devel,
	Feng Wu

>>> On 25.05.16 at 10:04, <quan.xu@intel.com> wrote:
> On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
>> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
>> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
>> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
>> >> >      }
>> >> >  }
>> >> >
>> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long
>> gfn,
>> >> > -        int dma_old_pte_present, unsigned int page_count)
>> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long
>> gfn,
>> >> > +                                     bool_t dma_old_pte_present,
>> >> > +                                     unsigned int page_count)
>> >>
>> >> I realize you say so in the overview mail, but the continuing lack of
>> >> __must_check here causes review trouble again. And I have a hard time
>> >> seeing how adding these annotations right away would "disrupt the
>> >> order", as long as the series is properly ordered / broken up.
>> >>
>> >
>> > If I add __must_check annotations here right now, e.g.
>> >
>> > -static void intel_iommu_iotlb_flush()
>> > +static int __must_check iommu_flush_iotlb_pages()
>> >
>> > ...
>> >
>> > @@ -179,8 +179,9 @@ struct iommu_ops {
>> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int
>> > page_count);
>> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long
>> > + gfn,
>> > unsigned int page_count);
>> > ...
>> > }
>> >
>> > Should belong  to here too.
>> 
>> Correct. And where's the problem?
>>
> 
> IMO it is not a big deal..
> 
> I think this makes this patch 1 fat.. why not focus on the positive 
> propagation value from IOMMU flush interfaces in this patch.
> If we are clear I will add annotation and rename them in another patches, it 
> is acceptable to me.

The patch getting too large is easy to deal with: Split it at a reasonable
boundary. It's one thing that I want to be clear: Any conversion of a
void return type of some function to non-void should be accompanied
by it at the same time becoming __must_check. I dislike having to
repeat yet again what I have been saying a number of times: Without
doing so, it is harder for you as the person writing the patch to verify
all callers deal with errors, and it's perhaps even harder for reviewers
to verify you did.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-25  8:29           ` Jan Beulich
@ 2016-05-25  8:53             ` Xu, Quan
  2016-05-26 10:37             ` Xu, Quan
  1 sibling, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-25  8:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, AndrewCooper, dario.faggioli,
	xen-devel, Wu, Feng

On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 25.05.16 at 10:04, <quan.xu@intel.com> wrote:
> > On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 23.05.16 at 17:22, <quan.xu@intel.com> wrote:
> >> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >> >> >      }
> >> >> >  }
> >> >> >
> >> >> > -static void __intel_iommu_iotlb_flush(struct domain *d,
> >> >> > unsigned long
> >> gfn,
> >> >> > -        int dma_old_pte_present, unsigned int page_count)
> >> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned
> >> >> > +long
> >> gfn,
> >> >> > +                                     bool_t dma_old_pte_present,
> >> >> > +                                     unsigned int page_count)
> >> >>
> >> >> I realize you say so in the overview mail, but the continuing lack
> >> >> of __must_check here causes review trouble again. And I have a
> >> >> hard time seeing how adding these annotations right away would
> >> >> "disrupt the order", as long as the series is properly ordered / broken up.
> >> >>
> >> >
> >> > If I add __must_check annotations here right now, e.g.
> >> >
> >> > -static void intel_iommu_iotlb_flush()
> >> > +static int __must_check iommu_flush_iotlb_pages()
> >> >
> >> > ...
> >> >
> >> > @@ -179,8 +179,9 @@ struct iommu_ops {
> >> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int
> >> > page_count);
> >> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned
> >> > + long gfn,
> >> > unsigned int page_count);
> >> > ...
> >> > }
> >> >
> >> > Should belong  to here too.
> >>
> >> Correct. And where's the problem?
> >>
> >
> > IMO it is not a big deal..
> >
> > I think this makes this patch 1 fat.. why not focus on the positive
> > propagation value from IOMMU flush interfaces in this patch.
> > If we are clear I will add annotation and rename them in another
> > patches, it is acceptable to me.
> 
> The patch getting too large is easy to deal with: Split it at a reasonable
> boundary. It's one thing that I want to be clear: Any conversion of a void
> return type of some function to non-void should be accompanied by it at the
> same time becoming __must_check. I dislike having to repeat yet again what I
> have been saying a number of times: Without doing so, it is harder for you as
> the person writing the patch to verify all callers deal with errors, and it's
> perhaps even harder for reviewers to verify you did.
> 

It is clear to me, but I was lock of attention on reviewers part.
I'll follow your suggestion from now on.. thanks.

Quan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending
  2016-05-25  8:06       ` Jan Beulich
@ 2016-05-25 15:13         ` Xu, Quan
  0 siblings, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-25 15:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Stefano Stabellini, Wu, Feng, Liu Jinsong,
	dario.faggioli, xen-devel, Julien Grall, Suravee Suthikulpanit,
	Andrew Cooper, Keir Fraser

On May 25, 2016 4:07 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 25.05.16 at 08:41, <quan.xu@intel.com> wrote:
> > On May 24, 2016 4:22 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> >  static int device_power_down(void)  {
> >> > -    console_suspend();
> >> > +    if ( console_suspend() )
> >> > +        return SAVED_CONSOLE;
> >>
> >> I said so on the previous round, and I need to repeat it now: If
> >> console_suspend() fails, you saved _nothing_.
> >>
> >
> > Ah, we may have some different views for SAVED_*, which I mean has
> > been saved and we are no need to resume.
> >
> > e.g. if console_suspend() fails, I did return SAVED_CONSOLE, reading
> > my patch again, and I really resume nothing at all.
> >
> > device_power_up()
> > {
> >      ...
> >     +    case SAVED_CONSOLE:
> >     +        break;
> >     ...
> > }
> >
> >
> > I know we can also propagate SAVED_NONE for console_suspend() failure,
> > then we need adjust device_power_up() relevantly.
> 
> My main point is that the names of these enumerators should reflect their
> purpose. If one reads "SAVED_CONSOLE", then (s)he should be allowed this
> to mean that console state was saved (and hence needs to be restored upon
> error / resume).
> 


I'll follow this.. it is much better than what I understand.


> >> > -    time_suspend();
> >> > +    if ( time_suspend() )
> >> > +        return SAVED_TIME;
> >> >
> >> > -    i8259A_suspend();
> >> > +    if ( i8259A_suspend() )
> >> > +        return SAVED_I8259A;
> >> >
> >> > +    /* ioapic_suspend cannot fail */
> >> >      ioapic_suspend();
> >> >
> >> > -    iommu_suspend();
> >> > +    if ( iommu_suspend() )
> >> > +        return SAVED_IOMMU;
> >> >
> >> > -    lapic_suspend();
> >> > +    if ( lapic_suspend() )
> >> > +        return SAVED_LAPIC;
> >> >
> >> >      return 0;
> >>
> >> And this silently means SAVED_NONE, whereas here you saved everything.
> >> Yielding clearly bogus code ...
> >>
> >
> >
> >  '0' is just on success here.  Look at the condition where we call
> > device_power_up():
> >
> > +        if ( error > 0 )
> > +            device_power_up(error);
> >
> > Then, it is not bogus code.
> 
> See above: Zero should not mean both "nothing saved" and "saved
> everything".
> 
> >> Also, having come here - did I miss iommu_flush_iotlb_global()
> >> gaining a __must_check annotation somewhere?
> >
> > I will add __must_check annotation to iommu_flush_iotlb_global().
> >
> >> And the struct iommu_flush pointers
> >> and handlers? And, by analogy, iommu_flush_context_*()?
> >
> > I am better only add __must_check annotation to flush->iotlb and
> > handlers, but leaving flush->context/handers and
> > iommu_flush_context_*() as are in current patch set..
> > the coming patch set will fix them.
> 
> I don't follow the logic behind this: The purpose of this series is to make sure
> flushing errors get properly bubbled up, which includes adding __must_check
> annotations. I'm not saying this needs to happen in this patch, but it should
> happen in this series

I will add a patch..

> (and please following the same basic model: A caller or a
> __must_check function should either already be __must_check, or should
> become so at the same time).
> 

Agreed.

Quan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-23 14:19   ` Jan Beulich
@ 2016-05-25 15:34     ` Xu, Quan
  2016-05-25 16:01       ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-25 15:34 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Nakajima, Jun

On May 23, 2016 10:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> 
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -2463,11 +2463,12 @@ static int __put_page_type(struct page_info
> > *page,  }
> >
> >
> > -static int __get_page_type(struct page_info *page, unsigned long type,
> > -                           int preemptible)
> > +static int __must_check __get_page_type(struct page_info *page,
> 
> Such a __must_check is relatively pointless when all existing callers already
> check the return value (and surely all code inside mm.c knows it needs to), and
> you don't at the same time make the non-static functions propagating its
> return value also __must_check.

I will drop this __must_check annotation.

> Hence I think best is to limit your effort to IOMMU functions for this patch set.
>

Good idea.
 
> > +                                        unsigned long type,
> > +                                        int preemptible)
> >  {
> >      unsigned long nx, x, y = page->u.inuse.type_info;
> > -    int rc = 0;
> > +    int rc = 0, ret = 0;
> >
> >      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
> >
> > @@ -2578,11 +2579,11 @@ static int __get_page_type(struct page_info
> *page, unsigned long type,
> >          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
> >          {
> >              if ( (x & PGT_type_mask) == PGT_writable_page )
> > -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
> > +                ret = iommu_unmap_page(d, mfn_to_gmfn(d,
> > + page_to_mfn(page)));
> >              else if ( type == PGT_writable_page )
> > -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> > -                               page_to_mfn(page),
> > -                               IOMMUF_readable|IOMMUF_writable);
> > +                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> > +                                     page_to_mfn(page),
> > +
> > + IOMMUF_readable|IOMMUF_writable);
> >          }
> >      }
> >
> > @@ -2599,6 +2600,9 @@ static int __get_page_type(struct page_info
> *page, unsigned long type,
> >      if ( (x & PGT_partial) && !(nx & PGT_partial) )
> >          put_page(page);
> >
> > +    if ( !rc )
> > +        rc = ret;
> 
> I know I've seen this a couple of time already, but with the special purpose of
> "ret" I really wonder whether a more specific name wouldn't be warranted -
> e.g. "iommu_rc" or "iommu_ret".


rc is return value for this function, and no directly association with IOMMU related code ( rc is only for alloc_page_type() ).
So the rc cannot be "iommu_rc"..

ret can be "iommu_ret", but I think the pair 'rc' / 'ret' may look good.


> 
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -658,7 +658,7 @@ bool_t ept_handle_misconfig(uint64_t gpa)
> >   *
> >   * Returns: 0 for success, -errno for failure
> >   */
> > -static int
> > +static int __must_check
> >  ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> 
> Now adding the annotation here, without also (first) adding it to the p2m
> method which this gets used for (and which is this function's sole purpose), is
> not useful at all. Please remember - we don't want this annotation because it
> looks good, but in order to make sure errors don't get wrongly ignored. That's
> why, from the beginning, I have been telling you that adding such annotations
> to other than new code means doing it top down (which you clearly don't do
> here).
> 

I will drop this __must_check annotation.

> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -171,20 +171,33 @@ void __hwdom_init iommu_hwdom_init(struct
> domain *d)
> >      {
> >          struct page_info *page;
> >          unsigned int i = 0;
> > +        int rc = 0;
> > +
> >          page_list_for_each ( page, &d->page_list )
> >          {
> >              unsigned long mfn = page_to_mfn(page);
> >              unsigned long gfn = mfn_to_gmfn(d, mfn);
> >              unsigned int mapping = IOMMUF_readable;
> > +            int ret;
> >
> >              if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
> >                   ((page->u.inuse.type_info & PGT_type_mask)
> >                    == PGT_writable_page) )
> >                  mapping |= IOMMUF_writable;
> > -            hd->platform_ops->map_page(d, gfn, mfn, mapping);
> > +
> > +            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
> > +
> > +            if ( unlikely(ret) )
> > +                rc = ret;
> 
> This should be good enough, but I think it would be better if, just like
> elsewhere, you returned the first error instead of the last one.
> 

I will also apply it to this series.


(Jan, I know It is not an easy work to review these little things. I very appreciate it. Thank you!! )
Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-25 15:34     ` Xu, Quan
@ 2016-05-25 16:01       ` Jan Beulich
  2016-05-26  1:42         ` Xu, Quan
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-25 16:01 UTC (permalink / raw)
  To: Quan Xu
  Cc: Kevin Tian, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Jun Nakajima

>>> On 25.05.16 at 17:34, <quan.xu@intel.com> wrote:
> On May 23, 2016 10:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> > +                                        unsigned long type,
>> > +                                        int preemptible)
>> >  {
>> >      unsigned long nx, x, y = page->u.inuse.type_info;
>> > -    int rc = 0;
>> > +    int rc = 0, ret = 0;
>> >
>> >      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
>> >
>> > @@ -2578,11 +2579,11 @@ static int __get_page_type(struct page_info
>> *page, unsigned long type,
>> >          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
>> >          {
>> >              if ( (x & PGT_type_mask) == PGT_writable_page )
>> > -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
>> > +                ret = iommu_unmap_page(d, mfn_to_gmfn(d,
>> > + page_to_mfn(page)));
>> >              else if ( type == PGT_writable_page )
>> > -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
>> > -                               page_to_mfn(page),
>> > -                               IOMMUF_readable|IOMMUF_writable);
>> > +                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
>> > +                                     page_to_mfn(page),
>> > +
>> > + IOMMUF_readable|IOMMUF_writable);
>> >          }
>> >      }
>> >
>> > @@ -2599,6 +2600,9 @@ static int __get_page_type(struct page_info
>> *page, unsigned long type,
>> >      if ( (x & PGT_partial) && !(nx & PGT_partial) )
>> >          put_page(page);
>> >
>> > +    if ( !rc )
>> > +        rc = ret;
>> 
>> I know I've seen this a couple of time already, but with the special purpose of
>> "ret" I really wonder whether a more specific name wouldn't be warranted -
>> e.g. "iommu_rc" or "iommu_ret".
> 
> 
> rc is return value for this function, and no directly association with IOMMU 
> related code ( rc is only for alloc_page_type() ).
> So the rc cannot be "iommu_rc"..
> 
> ret can be "iommu_ret", but I think the pair 'rc' / 'ret' may look good.

Well, I'm not entirely opposed to keeping the names, but I continue
to think that while at the call sites the shorter name is reasonable, it
is quite a bit less clear at the point where you conditionally update rc.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-25 16:01       ` Jan Beulich
@ 2016-05-26  1:42         ` Xu, Quan
  2016-05-26 15:49           ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Xu, Quan @ 2016-05-26  1:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, George Dunlap, Andrew Cooper,
	dario.faggioli, xen-devel, Nakajima, Jun

On May 26, 2016 12:02 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 25.05.16 at 17:34, <quan.xu@intel.com> wrote:
> > On May 23, 2016 10:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> >> > +                                        unsigned long type,
> >> > +                                        int preemptible)
> >> >  {
> >> >      unsigned long nx, x, y = page->u.inuse.type_info;
> >> > -    int rc = 0;
> >> > +    int rc = 0, ret = 0;
> >> >
> >> >      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
> >> >
> >> > @@ -2578,11 +2579,11 @@ static int __get_page_type(struct page_info
> >> *page, unsigned long type,
> >> >          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
> >> >          {
> >> >              if ( (x & PGT_type_mask) == PGT_writable_page )
> >> > -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
> >> > +                ret = iommu_unmap_page(d, mfn_to_gmfn(d,
> >> > + page_to_mfn(page)));
> >> >              else if ( type == PGT_writable_page )
> >> > -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> >> > -                               page_to_mfn(page),
> >> > -                               IOMMUF_readable|IOMMUF_writable);
> >> > +                ret = iommu_map_page(d, mfn_to_gmfn(d,
> page_to_mfn(page)),
> >> > +                                     page_to_mfn(page),
> >> > +
> >> > + IOMMUF_readable|IOMMUF_writable);
> >> >          }
> >> >      }
> >> >
> >> > @@ -2599,6 +2600,9 @@ static int __get_page_type(struct page_info
> >> *page, unsigned long type,
> >> >      if ( (x & PGT_partial) && !(nx & PGT_partial) )
> >> >          put_page(page);
> >> >
> >> > +    if ( !rc )
> >> > +        rc = ret;
> >>
> >> I know I've seen this a couple of time already, but with the special
> >> purpose of "ret" I really wonder whether a more specific name
> >> wouldn't be warranted - e.g. "iommu_rc" or "iommu_ret".
> >
> >
> > rc is return value for this function, and no directly association with
> > IOMMU related code ( rc is only for alloc_page_type() ).
> > So the rc cannot be "iommu_rc"..
> >
> > ret can be "iommu_ret", but I think the pair 'rc' / 'ret' may look good.
> 
> Well, I'm not entirely opposed to keeping the names, but I continue to think
> that while at the call sites the shorter name is reasonable, it is quite a bit less
> clear at the point where you conditionally update rc.
> 

I am open to it. What about 'rc' / 'iommu_ret' ?

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-23 13:30   ` Jan Beulich
  2016-05-23 15:22     ` Xu, Quan
@ 2016-05-26  6:20     ` Xu, Quan
  1 sibling, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-26  6:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, Andrew Cooper, dario.faggioli,
	xen-devel, Wu, Feng

On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
> >      spin_unlock(&iommu->lock);
> >
> >      /* Context entry was previously non-present (with domid 0). */
> > -    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
> > -                                    DMA_CCMD_MASK_NOBIT, 1) )
> > -        iommu_flush_write_buffer(iommu);
> > -    else
> > +    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) |
> > + devfn,
> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> > +
> > +    /*
> > +     * The current logic for rc returns:
> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> > +     *   - zero      success.
> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> > +     *               effort basis.
> > +     */
> > +    if ( rc <= 0 )
> >      {
> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> > +
> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> 
> If rc was negative before this call, you may end up returning success without
> having been successful. 

You are right.
IMO I need to add  'ret' here. Then ..

+    if ( rc <= 0 )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        int ret;
+
+        ret = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        ASSERT(ret <= 0);
+
+        if ( !rc )
+            rc = ret;
+    }


Quan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-25  8:29           ` Jan Beulich
  2016-05-25  8:53             ` Xu, Quan
@ 2016-05-26 10:37             ` Xu, Quan
  2016-05-26 14:37               ` Xu, Quan
  2016-05-26 15:56               ` Jan Beulich
  1 sibling, 2 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-26 10:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, AndrewCooper, dario.faggioli,
	xen-devel, Wu, Feng

On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> The patch getting too large is easy to deal with: Split it at a reasonable
> boundary.

Jan, 
If I follow the below rule, I need to merge most of patches into this one. I can't find a reasonable boundary.
I recall your suggestion: top one first, then low level one..
I am better not to make this patch as a first one, as this is really a low level one.
Then, I need to change condition from 'if ( !rc )'  to ' if ( rc < 0 )' in my series. (but if this series would be merged together, I don't need to think about it.)
Does it make sense?

Quan


> It's one thing that I want to be clear: Any conversion of a void
> return type of some function to non-void should be accompanied by it at the
> same time becoming __must_check. I dislike having to repeat yet again what I
> have been saying a number of times: Without doing so, it is harder for you as
> the person writing the patch to verify all callers deal with errors, and it's
> perhaps even harder for reviewers to verify you did.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-26 10:37             ` Xu, Quan
@ 2016-05-26 14:37               ` Xu, Quan
  2016-05-26 15:56               ` Jan Beulich
  1 sibling, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-26 14:37 UTC (permalink / raw)
  To: Xu, Quan, Jan Beulich
  Cc: Tian, Kevin, Keir Fraser, AndrewCooper, dario.faggioli,
	xen-devel, Wu, Feng

On May 26, 2016 6:38 PM, Xu, Quan <quan.xu@intel.com> wrote:
> On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > The patch getting too large is easy to deal with: Split it at a
> > reasonable boundary.
> 
> Jan,
> If I follow the below rule, I need to merge most of patches into this one. I can't
> find a reasonable boundary.
> I recall your suggestion: top one first, then low level one..
> I am better not to make this patch as a first one, as this is really a low level one.
> Then, I need to change condition from 'if ( !rc )'  

                            Sorry, a typo, 'if ( rc )'

btw, the __must_check annotation is helpful, and  we have  multiple rounds  review..
I think a big patch is not a big deal. 

Quan

> to ' if ( rc < 0 )' in my series.
> (but if this series would be merged together, I don't need to think about it.)
> Does it make sense?
> 
> Quan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping
  2016-05-26  1:42         ` Xu, Quan
@ 2016-05-26 15:49           ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2016-05-26 15:49 UTC (permalink / raw)
  To: quan.xu
  Cc: kevin.tian, keir, george.dunlap, andrew.cooper3, dario.faggioli,
	xen-devel, jun.nakajima

>>> "Xu, Quan" <quan.xu@intel.com> 05/26/16 3:42 AM >>>
>On May 26, 2016 12:02 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 25.05.16 at 17:34, <quan.xu@intel.com> wrote:
>> > On May 23, 2016 10:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>> On 18.05.16 at 10:08, <quan.xu@intel.com> wrote:
>> >> > +                                        unsigned long type,
>> >> > +                                        int preemptible)
>> >> >  {
>> >> >      unsigned long nx, x, y = page->u.inuse.type_info;
>> >> > -    int rc = 0;
>> >> > +    int rc = 0, ret = 0;
>> >> >
>> >> >      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
>> >> >
>> >> > @@ -2578,11 +2579,11 @@ static int __get_page_type(struct page_info
>> >> *page, unsigned long type,
>> >> >          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
>> >> >          {
>> >> >              if ( (x & PGT_type_mask) == PGT_writable_page )
>> >> > -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
>> >> > +                ret = iommu_unmap_page(d, mfn_to_gmfn(d,
>> >> > + page_to_mfn(page)));
>> >> >              else if ( type == PGT_writable_page )
>> >> > -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
>> >> > -                               page_to_mfn(page),
>> >> > -                               IOMMUF_readable|IOMMUF_writable);
>> >> > +                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
>> >> > +                                     page_to_mfn(page),
>> >> > +
>> >> > + IOMMUF_readable|IOMMUF_writable);
>> >> >          }
>> >> >      }
>> >> >
>> >> > @@ -2599,6 +2600,9 @@ static int __get_page_type(struct page_info
>> >> *page, unsigned long type,
>> >> >      if ( (x & PGT_partial) && !(nx & PGT_partial) )
>> >> >          put_page(page);
>> >> >
>> >> > +    if ( !rc )
>> >> > +        rc = ret;
>> >>
>> >> I know I've seen this a couple of time already, but with the special
>> >> purpose of "ret" I really wonder whether a more specific name
>> >> wouldn't be warranted - e.g. "iommu_rc" or "iommu_ret".
>> >
>> >
>> > rc is return value for this function, and no directly association with
>> > IOMMU related code ( rc is only for alloc_page_type() ).
>> > So the rc cannot be "iommu_rc"..
>> >
>> > ret can be "iommu_ret", but I think the pair 'rc' / 'ret' may look good.
>> 
>> Well, I'm not entirely opposed to keeping the names, but I continue to think
>> that while at the call sites the shorter name is reasonable, it is quite a bit less
>> clear at the point where you conditionally update rc.
>
>I am open to it. What about 'rc' / 'iommu_ret' ?

As that matches one of the two options I had suggested, I don't really see why
you ask back.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-26 10:37             ` Xu, Quan
  2016-05-26 14:37               ` Xu, Quan
@ 2016-05-26 15:56               ` Jan Beulich
  2016-05-26 16:20                 ` Xu, Quan
  1 sibling, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2016-05-26 15:56 UTC (permalink / raw)
  To: quan.xu
  Cc: kevin.tian, keir, andrew.cooper3, dario.faggioli, xen-devel, feng.wu

>>> "Xu, Quan" <quan.xu@intel.com> 05/26/16 12:38 PM >>>
>On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> The patch getting too large is easy to deal with: Split it at a reasonable
>> boundary.
>
>If I follow the below rule, I need to merge most of patches into this one. I can't find a reasonable boundary.

As before, boundaries are pretty easy to set: Just change one function at a time,
or when two or a few are very closely related, do the changes together. But try to
avoid changing ones in the same patch that call each other (unless of course
there's some sort of recursion).

But yes, as you say in the other reply, a big patch may not be a problem as
long as it remains reasonably understandable (e.g. many small hunks are
usually fine, but a single or a few hunks changing dozens or even hundreds
of lines in one go are usually hard to review).

>I recall your suggestion: top one first, then low level one..
>I am better not to make this patch as a first one, as this is really a low level one.
>Then, I need to change condition from 'if ( !rc )'  to ' if ( rc < 0 )' in my series. (but if this series would be merged together, I don't need to think about it.)
>Does it make sense?

I'm afraid I'm lacking context.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v5 01/10] vt-d: fix the IOMMU flush issue
  2016-05-26 15:56               ` Jan Beulich
@ 2016-05-26 16:20                 ` Xu, Quan
  0 siblings, 0 replies; 50+ messages in thread
From: Xu, Quan @ 2016-05-26 16:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tian, Kevin, keir, andrew.cooper3, dario.faggioli, xen-devel, Wu, Feng

On Thursday, May 26, 2016 11:57 PM, Jan Beulich <jbeulich@suse.com> wrote:
> >>> "Xu, Quan" <quan.xu@intel.com> 05/26/16 12:38 PM >>>
> >On May 25, 2016 4:30 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >> The patch getting too large is easy to deal with: Split it at a
> >> reasonable boundary.
> >I recall your suggestion: top one first, then low level one..
> >I am better not to make this patch as a first one, as this is really a low level
> one.
> >Then, I need to change condition from 'if ( !rc )'  to ' if ( rc < 0 )'
> >in my series. (but if this series would be merged together, I don't need to
> think about it.) Does it make sense?
> 
> I'm afraid I'm lacking context.

when the propagation value is positive, this patch fixes this flush issue
as follows:
  - call iommu_flush_write_buffer() to flush cache.
  - return zero.

So the condition can be 'if( rc )' for error, but if this patch is not a first one in series,
I need to change condition from 'if ( rc )'  to ' if ( rc < 0 )'.. as the propagation value may be positive..

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2016-05-26 16:20 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-05-18  8:08 [PATCH v5 00/10] Check VT-d Device-TLB flush error Quan Xu
2016-05-18  8:08 ` [PATCH v5 01/10] vt-d: fix the IOMMU flush issue Quan Xu
2016-05-23 13:30   ` Jan Beulich
2016-05-23 15:22     ` Xu, Quan
2016-05-23 15:43       ` Jan Beulich
2016-05-25  8:04         ` Xu, Quan
2016-05-25  8:29           ` Jan Beulich
2016-05-25  8:53             ` Xu, Quan
2016-05-26 10:37             ` Xu, Quan
2016-05-26 14:37               ` Xu, Quan
2016-05-26 15:56               ` Jan Beulich
2016-05-26 16:20                 ` Xu, Quan
2016-05-26  6:20     ` Xu, Quan
2016-05-18  8:08 ` [PATCH v5 02/10] IOMMU: handle IOMMU mapping and unmapping failures Quan Xu
2016-05-23 13:40   ` Jan Beulich
2016-05-24  9:09     ` Xu, Quan
2016-05-18  8:08 ` [PATCH v5 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping Quan Xu
2016-05-23 14:19   ` Jan Beulich
2016-05-25 15:34     ` Xu, Quan
2016-05-25 16:01       ` Jan Beulich
2016-05-26  1:42         ` Xu, Quan
2016-05-26 15:49           ` Jan Beulich
2016-05-18  8:08 ` [PATCH v5 04/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU unmapping Quan Xu
2016-05-18  8:08 ` [PATCH v5 05/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU mapping Quan Xu
2016-05-23 15:53   ` Jan Beulich
2016-05-24  9:01     ` Xu, Quan
2016-05-24  9:09       ` Jan Beulich
2016-05-24  9:14         ` Xu, Quan
2016-05-18  8:08 ` [PATCH v5 06/10] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones) Quan Xu
2016-05-23 16:05   ` Jan Beulich
2016-05-24  1:16     ` Xu, Quan
2016-05-24  7:01       ` Jan Beulich
2016-05-24  7:08         ` Tian, Kevin
2016-05-24  8:11           ` Xu, Quan
2016-05-24  8:34             ` Jan Beulich
2016-05-24  8:44               ` Xu, Quan
2016-05-18  8:08 ` [PATCH v5 07/10] IOMMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (leaf ones) Quan Xu
2016-05-24  3:42   ` Xu, Quan
2016-05-24  7:52     ` Jan Beulich
2016-05-18  8:08 ` [PATCH v5 08/10] vt-d/ept: propagate IOMMU Device-TLB flush error up to EPT update Quan Xu
2016-05-24  7:58   ` Jan Beulich
2016-05-18  8:08 ` [PATCH v5 09/10] IOMMU: propagate IOMMU Device-TLB flush error up to IOMMU suspending Quan Xu
2016-05-24  8:21   ` Jan Beulich
2016-05-25  6:41     ` Xu, Quan
2016-05-25  8:06       ` Jan Beulich
2016-05-25 15:13         ` Xu, Quan
2016-05-18  8:08 ` [PATCH v5 10/10] vt-d: propagate error up to ME phantom function mapping and unmapping Quan Xu
2016-05-24  8:28   ` Jan Beulich
2016-05-18 10:20 ` [PATCH v5 00/10] Check VT-d Device-TLB flush error Jan Beulich
2016-05-18 12:13   ` Xu, Quan

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.