From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@arm.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v6 02/14] iommu: make use of type-safe BFN and MFN in exported functions
Date: Thu, 23 Aug 2018 10:46:59 +0100	[thread overview]
Message-ID: <20180823094711.21376-3-paul.durrant@citrix.com> (raw)
In-Reply-To: <20180823094711.21376-1-paul.durrant@citrix.com>

This patch modifies the declarations of the entry points to the IOMMU
sub-system to use bfn_t and mfn_t in place of unsigned long. A subsequent
patch will similarly modify the methods in the iommu_ops structure.
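
For illustration only (not part of the change itself), a minimal sketch of
the resulting calling convention; 'd', 'mfn' and 'rc' are placeholders:

    /*
     * Illustrative sketch: bfn_t is TYPE_SAFE(uint64_t, bfn), so callers
     * wrap raw values with _bfn() and unwrap with bfn_x(); bfn_add() is
     * the helper added to xen/include/xen/iommu.h by this patch.
     */
    bfn_t bfn = _bfn(mfn_x(mfn));    /* wrap a raw frame number as a bfn_t */

    rc = iommu_map_page(d, bfn, mfn, IOMMUF_readable | IOMMUF_writable);
    if ( rc )
        printk(XENLOG_ERR "iommu map of bfn %"PRI_bfn" failed: %d\n",
               bfn_x(bfn), rc);      /* unwrap only at the print boundary */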

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>

v6:
 - Re-base.

v3:
 - Removed most of the uses of an intermediate 'frame' variable.

v2:
 - Addressed comments from Jan.
 - Use intermediate 'frame' variable to avoid directly encapsulating
   mfn or gfn values as bfns.
---
 xen/arch/arm/p2m.c                    |  3 ++-
 xen/arch/x86/mm.c                     | 10 ++++----
 xen/arch/x86/mm/p2m-ept.c             | 10 +++++---
 xen/arch/x86/mm/p2m-pt.c              | 45 ++++++++++++++++++++---------------
 xen/arch/x86/mm/p2m.c                 | 16 ++++++++-----
 xen/arch/x86/x86_64/mm.c              |  5 ++--
 xen/common/grant_table.c              | 12 +++++-----
 xen/common/memory.c                   |  4 ++--
 xen/drivers/passthrough/iommu.c       | 25 ++++++++++---------
 xen/drivers/passthrough/vtd/x86/vtd.c |  3 ++-
 xen/include/xen/iommu.h               | 14 +++++++----
 11 files changed, 85 insertions(+), 62 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 56b5474625..072029dfbe 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -957,7 +957,8 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 
     if ( need_iommu(p2m->domain) &&
          (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
-        rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
+        rc = iommu_iotlb_flush(p2m->domain, _bfn(gfn_x(sgfn)),
+                               1UL << page_order);
     else
         rc = 0;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8ac4412554..9e9fb9421e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2752,14 +2752,14 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         struct domain *d = page_get_owner(page);
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
-            gfn_t gfn = _gfn(mfn_to_gmfn(d, mfn_x(page_to_mfn(page))));
+            mfn_t mfn = page_to_mfn(page);
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_ret = iommu_unmap_page(d, gfn_x(gfn));
+                iommu_ret = iommu_unmap_page(d, _bfn(mfn_x(mfn)));
             else if ( type == PGT_writable_page )
-                iommu_ret = iommu_map_page(d, gfn_x(gfn),
-                                           mfn_x(page_to_mfn(page)),
-                                           IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_page(d, _bfn(mfn_x(mfn)), mfn,
+                                           IOMMUF_readable |
+                                           IOMMUF_writable);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 14b593923b..2089b5232d 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -870,15 +870,19 @@ out:
             rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
         else
         {
+            bfn_t bfn = _bfn(gfn);
+
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
                 {
-                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                    rc = iommu_map_page(d, bfn_add(bfn, i),
+                                        mfn_add(mfn, i), iommu_flags);
                     if ( unlikely(rc) )
                     {
                         while ( i-- )
                             /* If statement to satisfy __must_check. */
-                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                            if ( iommu_unmap_page(p2m->domain,
+                                                  bfn_add(bfn, i)) )
                                 continue;
 
                         break;
@@ -887,7 +891,7 @@ out:
             else
                 for ( i = 0; i < (1 << order); i++ )
                 {
-                    ret = iommu_unmap_page(d, gfn + i);
+                    ret = iommu_unmap_page(d, bfn_add(bfn, i));
                     if ( !rc )
                         rc = ret;
                 }
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index b8c5d2ed26..a441af388a 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -687,29 +687,36 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             if ( iommu_old_flags )
                 amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
-        else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                                    iommu_pte_flags);
-                if ( unlikely(rc) )
+        else
+        {
+            bfn_t bfn = _bfn(gfn);
+
+            if ( iommu_pte_flags )
+                for ( i = 0; i < (1UL << page_order); i++ )
                 {
-                    while ( i-- )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                            continue;
+                    rc = iommu_map_page(p2m->domain, bfn_add(bfn, i),
+                                        mfn_add(mfn, i), iommu_pte_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            /* If statement to satisfy __must_check. */
+                            if ( iommu_unmap_page(p2m->domain,
+                                                  bfn_add(bfn, i)) )
+                                continue;
 
-                    break;
+                        break;
+                    }
                 }
-            }
-        else
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, gfn + i);
+            else
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    int ret = iommu_unmap_page(p2m->domain,
+                                               bfn_add(bfn, i));
 
-                if ( !rc )
-                    rc = ret;
-            }
+                    if ( !rc )
+                        rc = ret;
+                }
+        }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8e9fbb5a14..fbf67def50 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -715,9 +715,11 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
 
         if ( need_iommu(p2m->domain) )
         {
+            bfn_t bfn = _bfn(mfn);
+
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                int ret = iommu_unmap_page(p2m->domain, mfn + i);
+                int ret = iommu_unmap_page(p2m->domain, bfn_add(bfn, i));
 
                 if ( !rc )
                     rc = ret;
@@ -774,16 +776,17 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     {
         if ( need_iommu(d) && t == p2m_ram_rw )
         {
+            bfn_t bfn = _bfn(mfn_x(mfn));
+
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                rc = iommu_map_page(d, mfn_x(mfn_add(mfn, i)),
-                                    mfn_x(mfn_add(mfn, i)),
+                rc = iommu_map_page(d, bfn_add(bfn, i), mfn_add(mfn, i),
                                     IOMMUF_readable|IOMMUF_writable);
                 if ( rc != 0 )
                 {
                     while ( i-- > 0 )
                         /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(d, mfn_x(mfn_add(mfn, i))) )
+                        if ( iommu_unmap_page(d, bfn_add(bfn, i)) )
                             continue;
 
                     return rc;
@@ -1158,7 +1161,8 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, gfn_l, gfn_l, IOMMUF_readable|IOMMUF_writable);
+        return iommu_map_page(d, _bfn(gfn_l), _mfn(gfn_l),
+                              IOMMUF_readable | IOMMUF_writable);
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1248,7 +1252,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, gfn_l);
+        return iommu_unmap_page(d, _bfn(gfn_l));
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cca4ae926e..cc58e4cef4 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1429,13 +1429,14 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     if ( iommu_enabled && !iommu_passthrough && !need_iommu(hardware_domain) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_map_page(hardware_domain, i, i, IOMMUF_readable|IOMMUF_writable) )
+            if ( iommu_map_page(hardware_domain, _bfn(i), _mfn(i),
+                                IOMMUF_readable | IOMMUF_writable) )
                 break;
         if ( i != epfn )
         {
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_unmap_page(hardware_domain, i) )
+                if ( iommu_unmap_page(hardware_domain, _bfn(i)) )
                     continue;
 
             goto destroy_m2p;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ad55cfa0ec..af41133953 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1143,13 +1143,13 @@ map_grant_ref(
              !(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) )
         {
             if ( !(kind & MAPKIND_WRITE) )
-                err = iommu_map_page(ld, mfn_x(mfn), mfn_x(mfn),
-                                     IOMMUF_readable|IOMMUF_writable);
+                err = iommu_map_page(ld, _bfn(mfn_x(mfn)), mfn,
+                                     IOMMUF_readable | IOMMUF_writable);
         }
         else if ( act_pin && !old_pin )
         {
             if ( !kind )
-                err = iommu_map_page(ld, mfn_x(mfn), mfn_x(mfn),
+                err = iommu_map_page(ld, _bfn(mfn_x(mfn)), mfn,
                                      IOMMUF_readable);
         }
         if ( err )
@@ -1398,10 +1398,10 @@ unmap_common(
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_unmap_page(ld, mfn_x(op->mfn));
+            err = iommu_unmap_page(ld, _bfn(mfn_x(op->mfn)));
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map_page(ld, mfn_x(op->mfn),
-                                 mfn_x(op->mfn), IOMMUF_readable);
+            err = iommu_map_page(ld, _bfn(mfn_x(op->mfn)), op->mfn,
+                                 IOMMUF_readable);
 
         double_gt_unlock(lgt, rgt);
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 996f94b103..8ba8921c79 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -834,11 +834,11 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
-        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+        ret = iommu_iotlb_flush(d, _bfn(xatp->idx - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
-        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        ret = iommu_iotlb_flush(d, _bfn(xatp->gpfn - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index f88dad0177..42b53b15e9 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -255,7 +255,7 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long bfn, unsigned long mfn,
+int iommu_map_page(struct domain *d, bfn_t bfn, mfn_t mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -264,13 +264,13 @@ int iommu_map_page(struct domain *d, unsigned long bfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, bfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, bfn_x(bfn), mfn_x(mfn), flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping bfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, bfn, mfn, rc);
+                   "d%d: IOMMU mapping bfn %"PRI_bfn" to mfn %"PRI_mfn" failed: %d\n",
+                   d->domain_id, bfn_x(bfn), mfn_x(mfn), rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -279,7 +279,7 @@ int iommu_map_page(struct domain *d, unsigned long bfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long bfn)
+int iommu_unmap_page(struct domain *d, bfn_t bfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -287,13 +287,13 @@ int iommu_unmap_page(struct domain *d, unsigned long bfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, bfn);
+    rc = hd->platform_ops->unmap_page(d, bfn_x(bfn));
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping bfn %#lx failed: %d\n",
-                   d->domain_id, bfn, rc);
+                   "d%d: IOMMU unmapping bfn %"PRI_bfn" failed: %d\n",
+                   d->domain_id, bfn_x(bfn), rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -319,8 +319,7 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-int iommu_iotlb_flush(struct domain *d, unsigned long bfn,
-                      unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, bfn_t bfn, unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -328,13 +327,13 @@ int iommu_iotlb_flush(struct domain *d, unsigned long bfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, bfn, page_count);
+    rc = hd->platform_ops->iotlb_flush(d, bfn_x(bfn), page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, bfn %#lx, page count %u\n",
-                   d->domain_id, rc, bfn, page_count);
+                   "d%d: IOMMU IOTLB flush failed: %d, bfn %"PRI_bfn", page count %u\n",
+                   d->domain_id, rc, bfn_x(bfn), page_count);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index 00a9891005..fb674cdc68 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -152,7 +152,8 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
              page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
             continue;
 
-        rc = iommu_map_page(d, pfn, pfn, IOMMUF_readable|IOMMUF_writable);
+        rc = iommu_map_page(d, _bfn(pfn), _mfn(pfn),
+			    IOMMUF_readable | IOMMUF_writable);
         if ( rc )
             printk(XENLOG_WARNING VTDPREFIX " d%d: IOMMU mapping failed: %d\n",
                    d->domain_id, rc);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 40af7bd7c9..3d4f3e7d26 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -24,6 +24,7 @@
 #include <xen/spinlock.h>
 #include <xen/pci.h>
 #include <xen/typesafe.h>
+#include <xen/mm.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
 #include <asm/device.h>
@@ -42,6 +43,11 @@ TYPE_SAFE(uint64_t, bfn);
 #undef bfn_x
 #endif
 
+static inline bfn_t bfn_add(bfn_t bfn, unsigned long i)
+{
+    return _bfn(bfn_x(bfn) + i);
+}
+
 #define IOMMU_PAGE_SHIFT 12
 #define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)
 #define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
@@ -86,9 +92,9 @@ void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long bfn,
-                                unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long bfn);
+int __must_check iommu_map_page(struct domain *d, bfn_t bfn,
+                                mfn_t mfn, unsigned int flags);
+int __must_check iommu_unmap_page(struct domain *d, bfn_t bfn);
 
 enum iommu_feature
 {
@@ -211,7 +217,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-int __must_check iommu_iotlb_flush(struct domain *d, unsigned long bfn,
+int __must_check iommu_iotlb_flush(struct domain *d, bfn_t bfn,
                                    unsigned int page_count);
 int __must_check iommu_iotlb_flush_all(struct domain *d);
 
-- 
2.11.0


