* [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
@ 2017-07-25 17:26 Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 01/13] xen/device-tree: Add dt_count_phandle_with_args helper Oleksandr Tyshchenko
                   ` (13 more replies)
  0 siblings, 14 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hi, all.

The purpose of this patch series is to create a base for porting
any "non-shared" IOMMU to Xen on ARM. By a "non-shared" IOMMU I mean
an IOMMU that can't share page tables with the CPU.
Primarily, we are interested in the IPMMU-VMSA, and I hope it will be the first candidate.
It is a VMSA-compatible IOMMU integrated in the newest Renesas R-Car Gen3 SoCs (ARM).
I plan to post IPMMU-VMSA support shortly.

This patch series was rebased on the Xen 4.9.0 release and tested on Renesas R-Car Gen3
H3/M3 based boards with IPMMU-VMSA support applied:
- Patches 1 and 3 have Julien's Rb.
- Patch 2 has Jan's Rb, but only for the x86 and generic parts.
- Patch 4 has Julien's Ab.
- Patches 5, 6, 9 and 10 were slightly reworked.
- Patch 7 was significantly reworked; it replaces the previous patch
  "iommu: Split iommu_hwdom_init() into arch specific parts".
- Patches 8, 11, 12 and 13 are new.

I am not really sure about the x86-related changes since I had no way to runtime-test them,
so the series is only compile-tested on x86.

You can find the current patch series here:
repo: https://github.com/otyshchenko1/xen.git branch: non_shared_iommu_v2

Previous patch series here:
[PATCH v1 00/10] "Non-shared" IOMMU support on ARM
https://www.mail-archive.com/xen-devel@lists.xen.org/msg107532.html

[RFC PATCH 0/9] "Non-shared" IOMMU support on ARM
https://www.mail-archive.com/xen-devel@lists.xen.org/msg100468.html

Thank you.

Oleksandr Tyshchenko (13):
  xen/device-tree: Add dt_count_phandle_with_args helper
  iommu: Add extra order argument to the IOMMU APIs and platform
    callbacks
  xen/arm: p2m: Add helper to convert p2m type to IOMMU flags
  xen/arm: p2m: Update IOMMU mapping whenever possible if page table is
    not shared
  iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
  iommu: Add extra use_iommu argument to iommu_domain_init()
  iommu: Make decision about needing IOMMU for hardware domains in
    advance
  iommu/arm: Misc fixes for arch specific part
  xen/arm: Add use_iommu flag to xen_arch_domainconfig
  xen/arm: domain_build: Don't expose IOMMU specific properties to the
    guest
  iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page
  [RFC] iommu: VT-d: Squash map_pages/unmap_pages with
    map_page/unmap_page
  [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with
    map_page/unmap_page

 tools/libxl/libxl_arm.c                       |   8 +
 xen/arch/arm/domain.c                         |   2 +-
 xen/arch/arm/domain_build.c                   |  10 ++
 xen/arch/arm/p2m.c                            |  10 +-
 xen/arch/x86/domain.c                         |   2 +-
 xen/arch/x86/mm.c                             |  11 +-
 xen/arch/x86/mm/p2m-ept.c                     |  21 +--
 xen/arch/x86/mm/p2m-pt.c                      |  26 +---
 xen/arch/x86/mm/p2m.c                         |  38 +----
 xen/arch/x86/x86_64/mm.c                      |   5 +-
 xen/common/device_tree.c                      |   7 +
 xen/common/grant_table.c                      |  10 +-
 xen/drivers/passthrough/amd/iommu_map.c       | 212 +++++++++++++++-----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  10 +-
 xen/drivers/passthrough/arm/iommu.c           |   7 +-
 xen/drivers/passthrough/arm/smmu.c            |  23 ++-
 xen/drivers/passthrough/iommu.c               |  73 ++++-----
 xen/drivers/passthrough/vtd/iommu.c           | 116 +++++++++-----
 xen/drivers/passthrough/vtd/x86/vtd.c         |   4 +-
 xen/drivers/passthrough/x86/iommu.c           |   6 +-
 xen/include/asm-arm/iommu.h                   |   4 +-
 xen/include/asm-arm/p2m.h                     |  34 +++++
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
 xen/include/public/arch-arm.h                 |   5 +
 xen/include/xen/device_tree.h                 |  19 +++
 xen/include/xen/iommu.h                       |  24 +--
 26 files changed, 402 insertions(+), 293 deletions(-)

-- 
2.7.4



* [PATCH v2 01/13] xen/device-tree: Add dt_count_phandle_with_args helper
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks Oleksandr Tyshchenko
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Port the Linux helper of_count_phandle_with_args for counting the
number of phandle + argument tuples in a property.
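
A minimal usage sketch (hypothetical caller; "np" is assumed to be a
valid node pointer, and the error convention is assumed to mirror the
Linux original, i.e. a negative errno on failure):

    /* Count the IOMMU specifiers referenced by a device node. */
    int nr = dt_count_phandle_with_args(np, "iommus", "#iommu-cells");

    if ( nr < 0 )
        return nr; /* e.g. the property is missing or malformed */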

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>

---
   Changes in v1:
      - Add Julien's reviewed-by

   Changes in v2:
      -
---
 xen/common/device_tree.c      |  7 +++++++
 xen/include/xen/device_tree.h | 19 +++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 7b009ea..60b0095 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1663,6 +1663,13 @@ int dt_parse_phandle_with_args(const struct dt_device_node *np,
                                         index, out_args);
 }
 
+int dt_count_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name)
+{
+    return __dt_parse_phandle_with_args(np, list_name, cells_name, 0, -1, NULL);
+}
+
 /**
  * unflatten_dt_node - Alloc and populate a device_node from the flat tree
  * @fdt: The parent device tree blob
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 0aecbe0..738f1b6 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -764,6 +764,25 @@ int dt_parse_phandle_with_args(const struct dt_device_node *np,
                                const char *cells_name, int index,
                                struct dt_phandle_args *out_args);
 
+/**
+ * dt_count_phandle_with_args() - Find the number of phandles references in a property
+ * @np: pointer to a device tree node containing a list
+ * @list_name: property name that contains a list
+ * @cells_name: property name that specifies phandles' arguments count
+ *
+ * Returns the number of phandle + argument tuples within a property. It
+ * is a typical pattern to encode a list of phandle and variable
+ * arguments into a single property. The number of arguments is encoded
+ * by a property in the phandle-target node. For example, a gpios
+ * property would contain a list of GPIO specifies consisting of a
+ * phandle and 1 or more arguments. The number of arguments are
+ * determined by the #gpio-cells property in the node pointed to by the
+ * phandle.
+ */
+int dt_count_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name);
+
 #ifdef CONFIG_DEVICE_TREE_DEBUG
 #define dt_dprintk(fmt, args...)  \
     printk(XENLOG_DEBUG fmt, ## args)
-- 
2.7.4



* [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 01/13] xen/device-tree: Add dt_count_phandle_with_args helper Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-03 11:21   ` Julien Grall
  2017-07-25 17:26 ` [PATCH v2 03/13] xen/arm: p2m: Add helper to convert p2m type to IOMMU flags Oleksandr Tyshchenko
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Oleksandr Tyshchenko,
	Julien Grall, Suravee Suthikulpanit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Replace the existing single-page IOMMU APIs and platform callbacks
with multi-page ones and modify all related parts accordingly.

The new map_pages/unmap_pages APIs do almost the same thing as the
old map_page/unmap_page ones, except that they take an extra order
argument and as a result can handle multiple pages at once. The same
applies to the new platform callbacks.

Although the current behavior was retained in all places (I hope),
it should be noted that the rollback logic was moved from the common code
to the IOMMU drivers. The IOMMU drivers are now responsible for unmapping
already mapped pages if something goes wrong while mapping a number
of pages (order > 0).
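
As a quick illustration (a sketch only, not part of this patch), mapping
512 contiguous 4K pages now takes a single call instead of a loop:

    /* order 9 == 512 contiguous 4K pages (one 2MB superpage worth). */
    rc = iommu_map_pages(d, gfn, mfn, 9, IOMMUF_readable | IOMMUF_writable);

    /*
     * On failure the driver has already torn down any partially
     * established mapping, so the caller needs no rollback loop.
     */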

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
(only for x86 and generic parts)
Reviewed-by/CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien.grall@arm.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Tim Deegan <tim@xen.org>
CC: Wei Liu <wei.liu2@citrix.com>

---
   Changes in v1:
      - Replace existing single-page IOMMU APIs/platform callbacks with
        multi-page ones instead of just keeping both variants of them.
      - Use order argument instead of page_count.
      - Clarify patch subject/description.

   Changes in v2:
      - Add maintainers in CC
---
 xen/arch/x86/mm.c                             | 11 +++---
 xen/arch/x86/mm/p2m-ept.c                     | 21 ++---------
 xen/arch/x86/mm/p2m-pt.c                      | 26 +++-----------
 xen/arch/x86/mm/p2m.c                         | 38 ++++----------------
 xen/arch/x86/x86_64/mm.c                      |  5 +--
 xen/common/grant_table.c                      | 10 +++---
 xen/drivers/passthrough/amd/iommu_map.c       | 50 +++++++++++++++++++++++++--
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  8 ++---
 xen/drivers/passthrough/arm/smmu.c            | 41 ++++++++++++++++++++--
 xen/drivers/passthrough/iommu.c               | 21 +++++------
 xen/drivers/passthrough/vtd/iommu.c           | 48 +++++++++++++++++++++++--
 xen/drivers/passthrough/vtd/x86/vtd.c         |  4 +--
 xen/drivers/passthrough/x86/iommu.c           |  6 ++--
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 +++--
 xen/include/xen/iommu.h                       | 20 ++++++-----
 15 files changed, 196 insertions(+), 121 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2dc7db9..33fcffe 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2623,11 +2623,14 @@ static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                iommu_ret = iommu_unmap_pages(d,
+                                              mfn_to_gmfn(d, page_to_mfn(page)),
+                                              0);
             else if ( type == PGT_writable_page )
-                iommu_ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                                           page_to_mfn(page),
-                                           IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_pages(d,
+                                            mfn_to_gmfn(d, page_to_mfn(page)),
+                                            page_to_mfn(page), 0,
+                                            IOMMUF_readable|IOMMUF_writable);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index ecab56f..0ccf451 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -870,26 +870,9 @@ out:
         else
         {
             if ( iommu_flags )
-                for ( i = 0; i < (1 << order); i++ )
-                {
-                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
-                    if ( unlikely(rc) )
-                    {
-                        while ( i-- )
-                            /* If statement to satisfy __must_check. */
-                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                                continue;
-
-                        break;
-                    }
-                }
+                rc = iommu_map_pages(d, gfn, mfn_x(mfn), order, iommu_flags);
             else
-                for ( i = 0; i < (1 << order); i++ )
-                {
-                    ret = iommu_unmap_page(d, gfn + i);
-                    if ( !rc )
-                        rc = ret;
-                }
+                rc = iommu_unmap_pages(d, gfn, order);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 06e64b8..b512ee3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -514,7 +514,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 {
     /* XXX -- this might be able to be faster iff current->domain == d */
     void *table;
-    unsigned long i, gfn_remainder = gfn;
+    unsigned long gfn_remainder = gfn;
     l1_pgentry_t *p2m_entry, entry_content;
     /* Intermediate table to free if we're replacing it with a superpage. */
     l1_pgentry_t intermediate_entry = l1e_empty();
@@ -722,28 +722,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
                 amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
         else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                                    iommu_pte_flags);
-                if ( unlikely(rc) )
-                {
-                    while ( i-- )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                            continue;
-
-                    break;
-                }
-            }
+            rc = iommu_map_pages(p2m->domain, gfn, mfn_x(mfn), page_order,
+                                 iommu_pte_flags);
         else
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, gfn + i);
-
-                if ( !rc )
-                    rc = ret;
-            }
+            rc = iommu_unmap_pages(p2m->domain, gfn, page_order);
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ece32ff..18a71f8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -708,20 +708,9 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        int rc = 0;
-
         if ( need_iommu(p2m->domain) )
-        {
-            for ( i = 0; i < (1 << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, mfn + i);
-
-                if ( !rc )
-                    rc = ret;
-            }
-        }
-
-        return rc;
+            return iommu_unmap_pages(p2m->domain, mfn, page_order);
+        return 0;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
@@ -768,23 +757,8 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     if ( !paging_mode_translate(d) )
     {
         if ( need_iommu(d) && t == p2m_ram_rw )
-        {
-            for ( i = 0; i < (1 << page_order); i++ )
-            {
-                rc = iommu_map_page(d, mfn_x(mfn_add(mfn, i)),
-                                    mfn_x(mfn_add(mfn, i)),
-                                    IOMMUF_readable|IOMMUF_writable);
-                if ( rc != 0 )
-                {
-                    while ( i-- > 0 )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(d, mfn_x(mfn_add(mfn, i))) )
-                            continue;
-
-                    return rc;
-                }
-            }
-        }
+            return iommu_map_pages(d, mfn_x(mfn), mfn_x(mfn), page_order,
+                                   IOMMUF_readable|IOMMUF_writable);
         return 0;
     }
 
@@ -1148,7 +1122,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, gfn, gfn, IOMMUF_readable|IOMMUF_writable);
+        return iommu_map_pages(d, gfn, gfn, 0, IOMMUF_readable|IOMMUF_writable);
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1236,7 +1210,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, gfn);
+        return iommu_unmap_pages(d, gfn, 0);
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index aa1b94f..5fd1d4c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1442,13 +1442,14 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     if ( iommu_enabled && !iommu_passthrough && !need_iommu(hardware_domain) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_map_page(hardware_domain, i, i, IOMMUF_readable|IOMMUF_writable) )
+            if ( iommu_map_pages(hardware_domain, i, i, 0,
+                                 IOMMUF_readable|IOMMUF_writable) )
                 break;
         if ( i != epfn )
         {
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_unmap_page(hardware_domain, i) )
+                if ( iommu_unmap_pages(hardware_domain, i, 0) )
                     continue;
 
             goto destroy_m2p;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 03de2be..5399c36 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -987,13 +987,13 @@ __gnttab_map_grant_ref(
              !(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) )
         {
             if ( !(kind & MAPKIND_WRITE) )
-                err = iommu_map_page(ld, frame, frame,
-                                     IOMMUF_readable|IOMMUF_writable);
+                err = iommu_map_pages(ld, frame, frame, 0,
+                                      IOMMUF_readable|IOMMUF_writable);
         }
         else if ( act_pin && !old_pin )
         {
             if ( !kind )
-                err = iommu_map_page(ld, frame, frame, IOMMUF_readable);
+                err = iommu_map_pages(ld, frame, frame, 0, IOMMUF_readable);
         }
         if ( err )
         {
@@ -1248,9 +1248,9 @@ __gnttab_unmap_common(
 
         kind = mapkind(lgt, rd, op->frame);
         if ( !kind )
-            err = iommu_unmap_page(ld, op->frame);
+            err = iommu_unmap_pages(ld, op->frame, 0);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map_page(ld, op->frame, op->frame, IOMMUF_readable);
+            err = iommu_map_pages(ld, op->frame, op->frame, 0, IOMMUF_readable);
 
         double_gt_unlock(lgt, rgt);
 
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index fd2327d..ea3a728 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -631,8 +631,9 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                       unsigned int flags)
+static int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
+                                           unsigned long mfn,
+                                           unsigned int flags)
 {
     bool_t need_flush = 0;
     struct domain_iommu *hd = dom_iommu(d);
@@ -720,7 +721,8 @@ out:
     return 0;
 }
 
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check amd_iommu_unmap_page(struct domain *d,
+                                             unsigned long gfn)
 {
     unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
@@ -771,6 +773,48 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     return 0;
 }
 
+/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
+int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
+                                     unsigned long mfn, unsigned int order,
+                                     unsigned int flags)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        rc = amd_iommu_map_page(d, gfn + i, mfn + i, flags);
+        if ( unlikely(rc) )
+        {
+            while ( i-- )
+                /* If statement to satisfy __must_check. */
+                if ( amd_iommu_unmap_page(d, gfn + i) )
+                    continue;
+
+            break;
+        }
+    }
+
+    return rc;
+}
+
+int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                                       unsigned int order)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        int ret = amd_iommu_unmap_page(d, gfn + i);
+
+        if ( !rc )
+            rc = ret;
+    }
+
+    return rc;
+}
+
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        u64 phys_addr,
                                        unsigned long size, int iw, int ir)
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 8c25110..fe744d2 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -296,8 +296,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              */
             if ( mfn_valid(_mfn(pfn)) )
             {
-                int ret = amd_iommu_map_page(d, pfn, pfn,
-                                             IOMMUF_readable|IOMMUF_writable);
+                int ret = amd_iommu_map_pages(d, pfn, pfn, 0,
+                                              IOMMUF_readable|IOMMUF_writable);
 
                 if ( !rc )
                     rc = ret;
@@ -620,8 +620,8 @@ const struct iommu_ops amd_iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device  = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
-    .map_page = amd_iommu_map_page,
-    .unmap_page = amd_iommu_unmap_page,
+    .map_pages = amd_iommu_map_pages,
+    .unmap_pages = amd_iommu_unmap_pages,
     .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 74c09b0..7c313c0 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2778,6 +2778,43 @@ static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 	return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
 }
 
+/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
+static int __must_check arm_smmu_map_pages(struct domain *d, unsigned long gfn,
+		unsigned long mfn, unsigned int order, unsigned int flags)
+{
+	unsigned long i;
+	int rc = 0;
+
+	for (i = 0; i < (1UL << order); i++) {
+		rc = arm_smmu_map_page(d, gfn + i, mfn + i, flags);
+		if (unlikely(rc)) {
+			while (i--)
+				/* If statement to satisfy __must_check. */
+				if (arm_smmu_unmap_page(d, gfn + i))
+					continue;
+
+			break;
+		}
+	}
+
+	return rc;
+}
+
+static int __must_check arm_smmu_unmap_pages(struct domain *d,
+		unsigned long gfn, unsigned int order)
+{
+	unsigned long i;
+	int rc = 0;
+
+	for (i = 0; i < (1UL << order); i++) {
+		int ret = arm_smmu_unmap_page(d, gfn + i);
+		if (!rc)
+			rc = ret;
+	}
+
+	return rc;
+}
+
 static const struct iommu_ops arm_smmu_iommu_ops = {
     .init = arm_smmu_iommu_domain_init,
     .hwdom_init = arm_smmu_iommu_hwdom_init,
@@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
     .iotlb_flush_all = arm_smmu_iotlb_flush_all,
     .assign_device = arm_smmu_assign_dev,
     .reassign_device = arm_smmu_reassign_dev,
-    .map_page = arm_smmu_map_page,
-    .unmap_page = arm_smmu_unmap_page,
+    .map_pages = arm_smmu_map_pages,
+    .unmap_pages = arm_smmu_unmap_pages,
 };
 
 static __init const struct arm_smmu_device *find_smmu(const struct device *dev)
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 5e81813..3e9e4c3 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -188,7 +188,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
 
-            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
             if ( !rc )
                 rc = ret;
 
@@ -249,8 +249,8 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                   unsigned int flags)
+int iommu_map_pages(struct domain *d, unsigned long gfn, unsigned long mfn,
+                    unsigned int order, unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -258,13 +258,13 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_pages(d, gfn, mfn, order, flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, gfn, mfn, rc);
+                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx order %u failed: %d\n",
+                   d->domain_id, gfn, mfn, order, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -273,7 +273,8 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long gfn)
+int iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                      unsigned int order)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -281,13 +282,13 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_pages(d, gfn, order);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping gfn %#lx failed: %d\n",
-                   d->domain_id, gfn, rc);
+                   "d%d: IOMMU unmapping gfn %#lx order %u failed: %d\n",
+                   d->domain_id, gfn, order, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 19328f6..b4e8c89 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1816,6 +1816,50 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
+/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
+static int __must_check intel_iommu_map_pages(struct domain *d,
+                                              unsigned long gfn,
+                                              unsigned long mfn,
+                                              unsigned int order,
+                                              unsigned int flags)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
+        if ( unlikely(rc) )
+        {
+            while ( i-- )
+                /* If statement to satisfy __must_check. */
+                if ( intel_iommu_unmap_page(d, gfn + i) )
+                    continue;
+
+            break;
+        }
+    }
+
+    return rc;
+}
+
+static int __must_check intel_iommu_unmap_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int order)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        int ret = intel_iommu_unmap_page(d, gfn + i);
+        if ( !rc )
+            rc = ret;
+    }
+
+    return rc;
+}
+
 int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
                     int order, int present)
 {
@@ -2639,8 +2683,8 @@ const struct iommu_ops intel_iommu_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
-    .map_page = intel_iommu_map_page,
-    .unmap_page = intel_iommu_unmap_page,
+    .map_pages = intel_iommu_map_pages,
+    .unmap_pages = intel_iommu_unmap_pages,
     .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index 88a60b3..62a6ee6 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -143,8 +143,8 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
         tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
         for ( j = 0; j < tmp; j++ )
         {
-            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
-                                     IOMMUF_readable|IOMMUF_writable);
+            int ret = iommu_map_pages(d, pfn * tmp + j, pfn * tmp + j, 0,
+                                      IOMMUF_readable|IOMMUF_writable);
 
             if ( !rc )
                rc = ret;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 0253823..973b72f 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -65,9 +65,9 @@ int arch_iommu_populate_page_table(struct domain *d)
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
-                rc = hd->platform_ops->map_page(d, gfn, mfn,
-                                                IOMMUF_readable |
-                                                IOMMUF_writable);
+                rc = hd->platform_ops->map_pages(d, gfn, mfn, 0,
+                                                 IOMMUF_readable |
+                                                 IOMMUF_writable);
             }
             if ( rc )
             {
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 99bc21c..8f44489 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -52,9 +52,11 @@ int amd_iommu_init(void);
 int amd_iommu_update_ivrs_mapping_acpi(void);
 
 /* mapping functions */
-int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
-                                    unsigned long mfn, unsigned int flags);
-int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
+                                     unsigned long mfn, unsigned int order,
+                                     unsigned int flags);
+int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                                       unsigned int order);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 5803e3f..3297998 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -71,14 +71,16 @@ int iommu_construct(struct domain *d);
 /* Function used internally, use iommu_domain_destroy */
 void iommu_teardown(struct domain *d);
 
-/* iommu_map_page() takes flags to direct the mapping operation. */
+/* iommu_map_pages() takes flags to direct the mapping operation. */
 #define _IOMMUF_readable 0
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
-                                unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check iommu_map_pages(struct domain *d, unsigned long gfn,
+                                 unsigned long mfn, unsigned int order,
+                                 unsigned int flags);
+int __must_check iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                                   unsigned int order);
 
 enum iommu_feature
 {
@@ -168,9 +170,11 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
-                                 unsigned long mfn, unsigned int flags);
-    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*map_pages)(struct domain *d, unsigned long gfn,
+                                  unsigned long mfn, unsigned int order,
+                                  unsigned int flags);
+    int __must_check (*unmap_pages)(struct domain *d, unsigned long gfn,
+                                    unsigned int order);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
@@ -213,7 +217,7 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
  * avoid unecessary iotlb_flush in the low level IOMMU code.
  *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
+ * iommu_map_pages/iommu_unmap_pages must flush the iotlb but somethimes
  * this operation can be really expensive. This flag will be set by the
  * caller to notify the low level IOMMU code to avoid the iotlb flushes.
  * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
-- 
2.7.4



* [PATCH v2 03/13] xen/arm: p2m: Add helper to convert p2m type to IOMMU flags
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 01/13] xen/device-tree: Add dt_count_phandle_with_args helper Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 04/13] xen/arm: p2m: Update IOMMU mapping whenever possible if page table is not shared Oleksandr Tyshchenko
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The helper has the same purpose as the existing x86 one: it is used
for choosing the IOMMU mapping attributes according to the p2m
(memory) type.
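
A short sketch of the helper's behavior (return values follow the
switch statement below):

    unsigned int flags;

    flags = p2m_get_iommu_flags(p2m_ram_rw);  /* IOMMUF_readable | IOMMUF_writable */
    flags = p2m_get_iommu_flags(p2m_ram_ro);  /* IOMMUF_readable */
    flags = p2m_get_iommu_flags(p2m_invalid); /* 0 - no access */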

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>

---
   Changes in v1:
      - Add Julien's reviewed-by

   Changes in v2:
      -
---
 xen/include/asm-arm/p2m.h | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 1269052..635cc25 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -5,6 +5,7 @@
 #include <xen/radix-tree.h>
 #include <xen/rwlock.h>
 #include <xen/mem_access.h>
+#include <xen/iommu.h>
 #include <public/vm_event.h> /* for vm_event_response_t */
 #include <public/memory.h>
 #include <xen/p2m-common.h>
@@ -353,6 +354,39 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }
 
+/*
+ * p2m type to IOMMU flags
+ */
+static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt)
+{
+    unsigned int flags;
+
+    switch( p2mt )
+    {
+    case p2m_ram_rw:
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct_dev:
+    case p2m_mmio_direct_nc:
+    case p2m_mmio_direct_c:
+        flags = IOMMUF_readable | IOMMUF_writable;
+        break;
+    case p2m_ram_ro:
+    case p2m_iommu_map_ro:
+    case p2m_grant_map_ro:
+        flags = IOMMUF_readable;
+        break;
+    default:
+        flags = 0;
+        break;
+    }
+
+    /* TODO Do we need to handle access permissions here? */
+
+    return flags;
+}
+
 #endif /* _XEN_P2M_H */
 
 /*
-- 
2.7.4



* [PATCH v2 04/13] xen/arm: p2m: Update IOMMU mapping whenever possible if page table is not shared
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (2 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 03/13] xen/arm: p2m: Add helper to convert p2m type to IOMMU flags Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share Oleksandr Tyshchenko
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Update the IOMMU mapping if the IOMMU doesn't share page tables with the CPU.
The best place to do so on ARM is __p2m_set_entry(). Use the mfn as an
indicator of the required action: if the mfn is valid, call iommu_map_pages(),
otherwise call iommu_unmap_pages().

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Acked-by: Julien Grall <julien.grall@arm.com>

---
   Changes in v1:
      - Update IOMMU mapping in __p2m_set_entry() instead of p2m_set_entry().
      - Pass order argument to IOMMU APIs instead of page_count.

   Changes in v2:
      - Add Julien's acked-by
---
 xen/arch/arm/p2m.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index fc2a106..729ed94 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -984,7 +984,15 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
         p2m_free_entry(p2m, orig_pte, level);
 
     if ( need_iommu(p2m->domain) && (p2m_valid(orig_pte) || p2m_valid(*entry)) )
-        rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
+    {
+        if ( iommu_use_hap_pt(p2m->domain) )
+            rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
+        else if ( !mfn_eq(smfn, INVALID_MFN) )
+            rc = iommu_map_pages(p2m->domain, gfn_x(sgfn), mfn_x(smfn),
+                                 page_order, p2m_get_iommu_flags(t));
+        else
+            rc = iommu_unmap_pages(p2m->domain, gfn_x(sgfn), page_order);
+    }
     else
         rc = 0;
 
-- 
2.7.4



* [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (3 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 04/13] xen/arm: p2m: Update IOMMU mapping whenever possible if page table is not shared Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-03 11:23   ` Julien Grall
  2017-07-25 17:26 ` [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init() Oleksandr Tyshchenko
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Julien Grall

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Not every IOMMU integrated into an ARM SoC can share page tables
with the CPU, and as a result iommu_use_hap_pt(d) must not
always be true.
Reuse x86's iommu_hap_pt_share flag to indicate whether the IOMMU
page tables are shared or not.

As the P2M table must always be shared between the CPU and the SMMU,
print an error message and bail out if this flag was previously unset.
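
A sketch of the resulting pattern in common code (cf. the previous
patch; "flags" and "order" are placeholders):

    if ( iommu_use_hap_pt(d) )
        /* Page tables are shared with the CPU: a TLB flush suffices. */
        rc = iommu_iotlb_flush(d, gfn, 1UL << order);
    else
        /* Separate page tables: update the IOMMU mapping explicitly. */
        rc = iommu_map_pages(d, gfn, mfn, order, flags);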

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
   Changes in v1:
      -

   Changes in v2:
      - Bail out if iommu_hap_pt_share is unset instead of overriding it
      - Clarify comment in code
---
 xen/drivers/passthrough/arm/smmu.c | 6 ++++++
 xen/include/asm-arm/iommu.h        | 4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 7c313c0..e828308 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2856,6 +2856,12 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
 	 */
 	dt_device_set_used_by(dev, DOMID_XEN);
 
+	if (!iommu_hap_pt_share) {
+		dev_err(dt_to_dev(dev),
+			"P2M table must always be shared between the CPU and the SMMU\n");
+		return -EINVAL;
+	}
+
 	rc = arm_smmu_device_dt_probe(dev);
 	if (rc)
 		return rc;
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
index 57d9b1e..2a6bd3d 100644
--- a/xen/include/asm-arm/iommu.h
+++ b/xen/include/asm-arm/iommu.h
@@ -20,8 +20,8 @@ struct arch_iommu
     void *priv;
 };
 
-/* Always share P2M Table between the CPU and the IOMMU */
-#define iommu_use_hap_pt(d) (1)
+/* Not every ARM SoCs IOMMU use the same page-table format as the CPU. */
+#define iommu_use_hap_pt(d) (iommu_hap_pt_share)
 
 const struct iommu_ops *iommu_get_ops(void);
 void __init iommu_set_ops(const struct iommu_ops *ops);
-- 
2.7.4



* [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (4 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-21 16:29   ` Oleksandr Tyshchenko
  2017-12-06 16:51   ` Jan Beulich
  2017-07-25 17:26 ` [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance Oleksandr Tyshchenko
                   ` (7 subsequent siblings)
  13 siblings, 2 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Suravee Suthikulpanit,
	Andrew Cooper, Oleksandr Tyshchenko, Julien Grall, Jan Beulich

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The presence of this flag lets us know that the guest domain has statically
assigned devices which will most likely be used for passthrough,
and as a result the IOMMU is expected to be used for this domain.

Taking this hint into account when dealing with non-shared IOMMUs,
we can populate the IOMMU page tables beforehand and avoid going through
the list of pages when the first device is assigned.
As this flag doesn't cover the hotplug case, we will continue to populate
IOMMU page tables on the fly there.

Extend the corresponding platform callback with an extra argument as well,
pass the incoming flag through to the IOMMU drivers, and then update the
"d->need_iommu" flag for all domains. For hardware domains, however,
additional logic is needed before updating this flag; the next patch
introduces it.

As iommu_domain_init() is called with the "use_iommu" flag forced
to false for now, no functional change is intended for either ARM or x86.
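
As a sketch, a non-shared IOMMU driver (hypothetical, not part of this
series) could honor the hint in its "init" callback like this:

    static int my_iommu_domain_init(struct domain *d, bool use_iommu)
    {
        /* ... usual per-domain setup ... */

        if ( use_iommu )
            /*
             * Devices will be assigned to this domain: allocate and
             * populate its IOMMU page tables now, so that later P2M
             * updates can be mirrored from the very beginning.
             */
            return my_iommu_populate_page_tables(d);

        return 0;
    }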

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien.grall@arm.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

---
   Changes in v1:
      - Clarify patch subject/description.
      - s/bool_t/bool/

   Changes in v2:
      - Extend "init" callback with extra argument too.
      - Clarify patch description.
      - Add maintainers in CC
---
 xen/arch/arm/domain.c                       |  2 +-
 xen/arch/x86/domain.c                       |  2 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c          |  2 +-
 xen/drivers/passthrough/iommu.c             | 10 ++++++++--
 xen/drivers/passthrough/vtd/iommu.c         |  2 +-
 xen/include/xen/iommu.h                     |  4 ++--
 7 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 76310ed..ec19310 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -569,7 +569,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     ASSERT(config != NULL);
 
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
-    if ( (rc = iommu_domain_init(d)) != 0 )
+    if ( (rc = iommu_domain_init(d, false)) != 0 )
         goto fail;
 
     if ( (rc = p2m_init(d)) != 0 )
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index d7e6992..1ffe76c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -641,7 +641,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         if ( (rc = init_domain_irq_mapping(d)) != 0 )
             goto fail;
 
-        if ( (rc = iommu_domain_init(d)) != 0 )
+        if ( (rc = iommu_domain_init(d, false)) != 0 )
             goto fail;
     }
     spin_lock_init(&d->arch.e820_lock);
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index fe744d2..2491e8c 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -261,7 +261,7 @@ static int get_paging_mode(unsigned long entries)
     return level;
 }
 
-static int amd_iommu_domain_init(struct domain *d)
+static int amd_iommu_domain_init(struct domain *d, bool use_iommu)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index e828308..652b58c 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2705,7 +2705,7 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	return 0;
 }
 
-static int arm_smmu_iommu_domain_init(struct domain *d)
+static int arm_smmu_iommu_domain_init(struct domain *d, bool use_iommu)
 {
 	struct arm_smmu_xen_domain *xen_domain;
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 3e9e4c3..19c87d1 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -129,7 +129,7 @@ static void __init parse_iommu_param(char *s)
     } while ( ss );
 }
 
-int iommu_domain_init(struct domain *d)
+int iommu_domain_init(struct domain *d, bool use_iommu)
 {
     struct domain_iommu *hd = dom_iommu(d);
     int ret = 0;
@@ -142,7 +142,13 @@ int iommu_domain_init(struct domain *d)
         return 0;
 
     hd->platform_ops = iommu_get_ops();
-    return hd->platform_ops->init(d);
+    ret = hd->platform_ops->init(d, use_iommu);
+    if ( ret )
+        return ret;
+
+    d->need_iommu = use_iommu;
+
+    return 0;
 }
 
 static void __hwdom_init check_hwdom_reqs(struct domain *d)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index b4e8c89..45d1f36 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1277,7 +1277,7 @@ void __init iommu_free(struct acpi_drhd_unit *drhd)
         agaw = 64;                              \
     agaw; })
 
-static int intel_iommu_domain_init(struct domain *d)
+static int intel_iommu_domain_init(struct domain *d, bool use_iommu)
 {
     dom_iommu(d)->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 3297998..f4d489e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -56,7 +56,7 @@ int iommu_setup(void);
 int iommu_add_device(struct pci_dev *pdev);
 int iommu_enable_device(struct pci_dev *pdev);
 int iommu_remove_device(struct pci_dev *pdev);
-int iommu_domain_init(struct domain *d);
+int iommu_domain_init(struct domain *d, bool use_iommu);
 void iommu_hwdom_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
@@ -155,7 +155,7 @@ struct page_info;
 typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt);
 
 struct iommu_ops {
-    int (*init)(struct domain *d);
+    int (*init)(struct domain *d, bool use_iommu);
     void (*hwdom_init)(struct domain *d);
     int (*add_device)(u8 devfn, device_t *dev);
     int (*enable_device)(device_t *dev);
-- 
2.7.4



* [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (5 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init() Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-21 16:30   ` Oleksandr Tyshchenko
                     ` (2 more replies)
  2017-07-25 17:26 ` [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part Oleksandr Tyshchenko
                   ` (6 subsequent siblings)
  13 siblings, 3 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Julien Grall, Jan Beulich

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hardware domains require the IOMMU to be used in most cases, and
the decision to use it is currently made at hardware domain construction time.
However, this is not the best moment for non-shared IOMMUs, because they then
have to retrieve all the mappings that may have been established between
IOMMU per-domain initialization and that moment.

So, make the decision about needing the IOMMU a bit earlier, in iommu_domain_init().
With the "d->need_iommu" flag set at this early stage we won't skip
any IOMMU mapping updates. As a result, the existing code in iommu_hwdom_init()
that goes through the list of pages and tries to replay the mappings
for non-shared IOMMUs is no longer needed and can simply be dropped.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien.grall@arm.com>

---
Changes in v1:
   -

Changes in v2:
   - This is the result of reworking old patch:
     [PATCH v1 08/10] iommu: Split iommu_hwdom_init() into arch specific parts
---
 xen/drivers/passthrough/iommu.c | 44 ++++++++++-------------------------------
 1 file changed, 10 insertions(+), 34 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 19c87d1..f5e5b7e 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -52,7 +52,7 @@ custom_param("iommu", parse_iommu_param);
 bool_t __initdata iommu_enable = 1;
 bool_t __read_mostly iommu_enabled;
 bool_t __read_mostly force_iommu;
-bool_t __hwdom_initdata iommu_dom0_strict;
+bool_t __read_mostly iommu_dom0_strict;
 bool_t __read_mostly iommu_verbose;
 bool_t __read_mostly iommu_workaround_bios_bug;
 bool_t __read_mostly iommu_igfx = 1;
@@ -141,6 +141,15 @@ int iommu_domain_init(struct domain *d, bool use_iommu)
     if ( !iommu_enabled )
         return 0;
 
+    if ( is_hardware_domain(d) )
+    {
+        if ( (paging_mode_translate(d) && !iommu_passthrough) ||
+              iommu_dom0_strict )
+            use_iommu = 1;
+        else
+            use_iommu = 0;
+    }
+
     hd->platform_ops = iommu_get_ops();
     ret = hd->platform_ops->init(d, use_iommu);
     if ( ret )
@@ -161,8 +170,6 @@ static void __hwdom_init check_hwdom_reqs(struct domain *d)
     if ( iommu_passthrough )
         panic("Dom0 uses paging translated mode, dom0-passthrough must not be "
               "enabled\n");
-
-    iommu_dom0_strict = 1;
 }
 
 void __hwdom_init iommu_hwdom_init(struct domain *d)
@@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
         return;
 
     register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
-    d->need_iommu = !!iommu_dom0_strict;
-    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
-    {
-        struct page_info *page;
-        unsigned int i = 0;
-        int rc = 0;
-
-        page_list_for_each ( page, &d->page_list )
-        {
-            unsigned long mfn = page_to_mfn(page);
-            unsigned long gfn = mfn_to_gmfn(d, mfn);
-            unsigned int mapping = IOMMUF_readable;
-            int ret;
-
-            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
-                 ((page->u.inuse.type_info & PGT_type_mask)
-                  == PGT_writable_page) )
-                mapping |= IOMMUF_writable;
-
-            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
-            if ( !rc )
-                rc = ret;
-
-            if ( !(i++ & 0xfffff) )
-                process_pending_softirqs();
-        }
-
-        if ( rc )
-            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
-                   d->domain_id, rc);
-    }
 
     return hd->platform_ops->hwdom_init(d);
 }
-- 
2.7.4



* [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (6 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-03 11:31   ` Julien Grall
  2017-07-25 17:26 ` [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig Oleksandr Tyshchenko
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Julien Grall

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

1. Add a missing return in case the IOMMU ops have already been set.
2. Add a check for a shared IOMMU before returning an error.
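
For reference, the first fix yields roughly the following function
(reconstructed from the hunk below; previously the warning was printed
but iommu_ops was overwritten anyway):

    void __init iommu_set_ops(const struct iommu_ops *ops)
    {
        BUG_ON(ops == NULL);

        if ( iommu_ops && iommu_ops != ops )
        {
            printk("WARNING: Cannot set IOMMU ops, already set to a different value\n");
            return;
        }

        iommu_ops = ops;
    }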

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
   Changes in V1:
      -

   Changes in V2:
      -
---
 xen/drivers/passthrough/arm/iommu.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
index 95b1abb..6f01c13 100644
--- a/xen/drivers/passthrough/arm/iommu.c
+++ b/xen/drivers/passthrough/arm/iommu.c
@@ -32,7 +32,10 @@ void __init iommu_set_ops(const struct iommu_ops *ops)
     BUG_ON(ops == NULL);
 
     if ( iommu_ops && iommu_ops != ops )
+    {
         printk("WARNING: Cannot set IOMMU ops, already set to a different value\n");
+        return;
+    }
 
     iommu_ops = ops;
 }
@@ -70,6 +73,6 @@ void arch_iommu_domain_destroy(struct domain *d)
 
 int arch_iommu_populate_page_table(struct domain *d)
 {
-    /* The IOMMU shares the p2m with the CPU */
-    return -ENOSYS;
+    /* Return an error if the IOMMU shares the p2m with the CPU */
+    return iommu_use_hap_pt(d) ? -ENOSYS : 0;
 }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (7 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-07-28 16:16   ` Wei Liu
  2017-08-03 11:33   ` Julien Grall
  2017-07-25 17:26 ` [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest Oleksandr Tyshchenko
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel
  Cc: Oleksandr Tyshchenko, Julien Grall, Ian Jackson, Wei Liu, Jan Beulich

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This flag is intended to let Xen know that the guest has devices
which will most likely be used for passthrough and, as a result,
the IOMMU is expected to be used for this domain.

The primary aim of this knowledge is to help IOMMUs that don't
share page tables with the CPU on ARM be ready before the P2M code
starts updating IOMMU mappings.
So, if this flag is set, the non-shared IOMMUs will populate
their page tables at domain creation time and will thereby be able
to handle IOMMU mapping updates from *the very beginning*.

In order to retain the current behavior for x86, iommu_domain_init()
is still called with the use_iommu flag forced to false.
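
To illustrate why the flag must be known this early, a hypothetical
non-shared IOMMU driver could use its init callback as sketched below
(my_iommu_domain_init and my_iommu_alloc_pgtables are invented names
for illustration, not existing Xen functions):

    static int my_iommu_domain_init(struct domain *d, bool use_iommu)
    {
        /* Without the hint, page tables could only be allocated
         * lazily, after the P2M has already started issuing updates. */
        if ( !use_iommu )
            return 0;

        /* Populate the IOMMU page tables at domain creation time. */
        return my_iommu_alloc_pgtables(d);
    }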

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien.grall@arm.com>
CC: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>

---
   Changes in V1:
      - Treat use_iommu flag as the ARM decision only. Don't use
        common domain creation flag for it, use ARM config instead.
      - Clarify patch subject/description.

   Changes in V2:
      - Cosmetic fixes.
---
 tools/libxl/libxl_arm.c       | 8 ++++++++
 xen/arch/arm/domain.c         | 2 +-
 xen/include/public/arch-arm.h | 5 +++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index d842d88..cb9fe05 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -78,6 +78,14 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    if (d_config->num_dtdevs || d_config->num_pcidevs)
+        xc_config->use_iommu = 1;
+    else
+        xc_config->use_iommu = 0;
+
+    LOG(DEBUG, "IOMMU %s expected to be used for this domain",
+        xc_config->use_iommu ? "is" : "isn't");
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ec19310..3079bbe 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -569,7 +569,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     ASSERT(config != NULL);
 
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
-    if ( (rc = iommu_domain_init(d, false)) != 0 )
+    if ( (rc = iommu_domain_init(d, !!config->use_iommu)) != 0 )
         goto fail;
 
     if ( (rc = p2m_init(d)) != 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index bd974fb..b1fae45 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -322,6 +322,11 @@ struct xen_arch_domainconfig {
      *
      */
     uint32_t clock_frequency;
+    /*
+     * IN
+     * IOMMU is expected to be used for this domain.
+     */
+    uint8_t use_iommu;
 };
 #endif /* __XEN__ || __XEN_TOOLS__ */
 
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (8 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-03 11:37   ` Julien Grall
  2017-07-25 17:26 ` [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page Oleksandr Tyshchenko
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Julien Grall

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

We don't pass the IOMMU device through to DOM0 even if it is not
used by Xen. Therefore, exposing the properties that describe the
relationship between master devices and IOMMUs does not make any
sense.

The relevant properties are described in:
1. Documentation/devicetree/bindings/iommu/iommu.txt
2. Documentation/devicetree/bindings/pci/pci-iommu.txt

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
   Changes in v1:
      -

   Changes in v2:
      - Skip optional properties too.
      - Clarify patch description
---
 xen/arch/arm/domain_build.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3abacc0..fadfbbc 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -432,6 +432,16 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
             continue;
         }
 
+        /* Don't expose IOMMU specific properties to the guest */
+        if ( dt_property_name_is_equal(prop, "iommus") )
+            continue;
+
+        if ( dt_property_name_is_equal(prop, "iommu-map") )
+            continue;
+
+        if ( dt_property_name_is_equal(prop, "iommu-map-mask") )
+            continue;
+
         res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
 
         if ( res )
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (9 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-03 12:36   ` Julien Grall
  2017-07-25 17:26 ` [PATCH v2 12/13] [RFC] iommu: VT-d: " Oleksandr Tyshchenko
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Julien Grall

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Eliminate the TODO by squashing the single-page implementation into
the multi-page one.
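
As a reminder of the interface semantics, the order argument encodes
a power-of-two number of contiguous 4K pages, so, assuming suitably
aligned gfn/mfn, a call like the following (illustrative only) maps a
2MB block in one go:

    /* order 0 -> 1 page (4K), order 9 -> 512 pages (2MB). */
    rc = arm_smmu_map_pages(d, gfn, mfn, 9,
                            IOMMUF_readable | IOMMUF_writable);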

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
   Changes in v1:
      -

   Changes in v2:
      -
---
 xen/drivers/passthrough/arm/smmu.c | 48 +++++---------------------------------
 1 file changed, 6 insertions(+), 42 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 652b58c..021031a 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2737,8 +2737,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
-			unsigned long mfn, unsigned int flags)
+static int __must_check arm_smmu_map_pages(struct domain *d, unsigned long gfn,
+			unsigned long mfn, unsigned int order, unsigned int flags)
 {
 	p2m_type_t t;
 
@@ -2763,10 +2763,11 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
+	return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), order, t);
 }
 
-static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check arm_smmu_unmap_pages(struct domain *d,
+			unsigned long gfn, unsigned int order)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
@@ -2775,44 +2776,7 @@ static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
-}
-
-/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
-static int __must_check arm_smmu_map_pages(struct domain *d, unsigned long gfn,
-		unsigned long mfn, unsigned int order, unsigned int flags)
-{
-	unsigned long i;
-	int rc = 0;
-
-	for (i = 0; i < (1UL << order); i++) {
-		rc = arm_smmu_map_page(d, gfn + i, mfn + i, flags);
-		if (unlikely(rc)) {
-			while (i--)
-				/* If statement to satisfy __must_check. */
-				if (arm_smmu_unmap_page(d, gfn + i))
-					continue;
-
-			break;
-		}
-	}
-
-	return rc;
-}
-
-static int __must_check arm_smmu_unmap_pages(struct domain *d,
-		unsigned long gfn, unsigned int order)
-{
-	unsigned long i;
-	int rc = 0;
-
-	for (i = 0; i < (1UL << order); i++) {
-		int ret = arm_smmu_unmap_page(d, gfn + i);
-		if (!rc)
-			rc = ret;
-	}
-
-	return rc;
+	return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), order);
 }
 
 static const struct iommu_ops arm_smmu_iommu_ops = {
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH v2 12/13] [RFC] iommu: VT-d: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (10 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-21 16:44   ` Oleksandr Tyshchenko
  2017-07-25 17:26 ` [PATCH v2 13/13] [RFC] iommu: AMD-Vi: " Oleksandr Tyshchenko
  2017-07-31  5:57 ` [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Tian, Kevin
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Kevin Tian, Jan Beulich

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reduce the scope of the TODO by squashing the single-page
implementation into the multi-page one. The next target is to use
large pages whenever the hardware supports them.
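
A possible follow-up in the same spirit, sketched under the assumption
that the last argument of iommu_flush_iotlb() is a page count (as in
the per-page calls below), would be to batch the IOTLB flush for the
whole range instead of flushing once per page:

    /* Sketch only: flush the whole 2^order range once after all PTEs
     * have been written. Passing 1 for dma_old_pte_present
     * conservatively forces the flush. */
    if ( !this_cpu(iommu_dont_flush_iotlb) )
        rc = iommu_flush_iotlb(d, orig_gfn, 1, 1UL << order);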

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Kevin Tian <kevin.tian@intel.com>

---
   Changes in v1:
      -

   Changes in v2:
      -
---
 xen/drivers/passthrough/vtd/iommu.c | 138 +++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 71 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 45d1f36..d20b2f9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1750,15 +1750,24 @@ static void iommu_domain_teardown(struct domain *d)
     spin_unlock(&hd->arch.mapping_lock);
 }
 
-static int __must_check intel_iommu_map_page(struct domain *d,
-                                             unsigned long gfn,
-                                             unsigned long mfn,
-                                             unsigned int flags)
+static int __must_check intel_iommu_unmap_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int order);
+
+/*
+ * TODO: Optimize by using large pages whenever possible in the case
+ * that hardware supports them.
+ */
+static int __must_check intel_iommu_map_pages(struct domain *d,
+                                              unsigned long gfn,
+                                              unsigned long mfn,
+                                              unsigned int order,
+                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
-    u64 pg_maddr;
     int rc = 0;
+    unsigned long orig_gfn = gfn;
+    unsigned long i;
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -1768,78 +1777,60 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    spin_lock(&hd->arch.mapping_lock);
-
-    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
-    if ( pg_maddr == 0 )
+    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
     {
-        spin_unlock(&hd->arch.mapping_lock);
-        return -ENOMEM;
-    }
-    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (gfn & LEVEL_MASK);
-    old = *pte;
-    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
-    dma_set_pte_prot(new,
-                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
-                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
+        struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
+        u64 pg_maddr;
 
-    /* Set the SNP on leaf page table if Snoop Control available */
-    if ( iommu_snoop )
-        dma_set_pte_snp(new);
+        spin_lock(&hd->arch.mapping_lock);
 
-    if ( old.val == new.val )
-    {
+        pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
+        if ( pg_maddr == 0 )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            rc = -ENOMEM;
+            goto err;
+        }
+        page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
+        pte = page + (gfn & LEVEL_MASK);
+        old = *pte;
+        dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
+        dma_set_pte_prot(new,
+                         ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
+                         ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
+
+        /* Set the SNP on leaf page table if Snoop Control available */
+        if ( iommu_snoop )
+            dma_set_pte_snp(new);
+
+        if ( old.val == new.val )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            unmap_vtd_domain_page(page);
+            continue;
+        }
+        *pte = new;
+
+        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return 0;
-    }
-    *pte = new;
-
-    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->arch.mapping_lock);
-    unmap_vtd_domain_page(page);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
-
-    return rc;
-}
-
-static int __must_check intel_iommu_unmap_page(struct domain *d,
-                                               unsigned long gfn)
-{
-    /* Do nothing if hardware domain and iommu supports pass thru. */
-    if ( iommu_passthrough && is_hardware_domain(d) )
-        return 0;
-
-    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
-}
-
-/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
-static int __must_check intel_iommu_map_pages(struct domain *d,
-                                              unsigned long gfn,
-                                              unsigned long mfn,
-                                              unsigned int order,
-                                              unsigned int flags)
-{
-    unsigned long i;
-    int rc = 0;
-
-    for ( i = 0; i < (1UL << order); i++ )
-    {
-        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
-        if ( unlikely(rc) )
+        if ( !this_cpu(iommu_dont_flush_iotlb) )
         {
-            while ( i-- )
-                /* If statement to satisfy __must_check. */
-                if ( intel_iommu_unmap_page(d, gfn + i) )
-                    continue;
-
-            break;
+            rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
+            if ( rc )
+                goto err;
         }
     }
 
+    return 0;
+
+err:
+    while ( i-- )
+        /* If statement to satisfy __must_check. */
+        if ( intel_iommu_unmap_pages(d, orig_gfn + i, 0) )
+            continue;
+
     return rc;
 }
 
@@ -1847,12 +1838,17 @@ static int __must_check intel_iommu_unmap_pages(struct domain *d,
                                                 unsigned long gfn,
                                                 unsigned int order)
 {
-    unsigned long i;
     int rc = 0;
+    unsigned long i;
+
+    /* Do nothing if hardware domain and iommu supports pass thru. */
+    if ( iommu_passthrough && is_hardware_domain(d) )
+        return 0;
 
-    for ( i = 0; i < (1UL << order); i++ )
+    for ( i = 0; i < (1UL << order); i++, gfn++ )
     {
-        int ret = intel_iommu_unmap_page(d, gfn + i);
+        int ret = dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
+
         if ( !rc )
             rc = ret;
     }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH v2 13/13] [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (11 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 12/13] [RFC] iommu: VT-d: " Oleksandr Tyshchenko
@ 2017-07-25 17:26 ` Oleksandr Tyshchenko
  2017-08-21 16:44   ` Oleksandr Tyshchenko
  2017-07-31  5:57 ` [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Tian, Kevin
  13 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-25 17:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Suravee Suthikulpanit, Jan Beulich

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reduce the scope of the TODO by squashing the single-page
implementation into the multi-page one. The next target is to use
large pages whenever the hardware supports them.
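
A structural note on the map path below: the root-table allocation is
hoisted out of the per-page loop, so the locking pattern becomes
roughly the following (a condensed sketch of the hunk, not new code):

    spin_lock(&hd->arch.mapping_lock);
    rc = amd_iommu_alloc_root(hd);          /* once per call */
    spin_unlock(&hd->arch.mapping_lock);

    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
    {
        spin_lock(&hd->arch.mapping_lock);  /* re-taken per page */
        /* ... update one 4K PTE, possibly merging into larger pages ... */
        spin_unlock(&hd->arch.mapping_lock);
    }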

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

---
   Changes in v1:
      -

   Changes in v2:
      -

---
 xen/drivers/passthrough/amd/iommu_map.c | 250 ++++++++++++++++----------------
 1 file changed, 121 insertions(+), 129 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index ea3a728..22d0cc6 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -631,188 +631,180 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-static int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
-                                           unsigned long mfn,
-                                           unsigned int flags)
+/*
+ * TODO: Optimize by using large pages whenever possible in the case
+ * that hardware supports them.
+ */
+int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
+                                     unsigned long mfn,
+                                     unsigned int order,
+                                     unsigned int flags)
 {
-    bool_t need_flush = 0;
     struct domain_iommu *hd = dom_iommu(d);
     int rc;
-    unsigned long pt_mfn[7];
-    unsigned int merge_level;
+    unsigned long orig_gfn = gfn;
+    unsigned long i;
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
-    memset(pt_mfn, 0, sizeof(pt_mfn));
-
     spin_lock(&hd->arch.mapping_lock);
-
     rc = amd_iommu_alloc_root(hd);
+    spin_unlock(&hd->arch.mapping_lock);
     if ( rc )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Root table alloc failed, gfn = %lx\n", gfn);
         domain_crash(d);
         return rc;
     }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
-    if ( is_hvm_domain(d) )
+    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
     {
-        if ( update_paging_mode(d, gfn) )
+        bool_t need_flush = 0;
+        unsigned long pt_mfn[7];
+        unsigned int merge_level;
+
+        memset(pt_mfn, 0, sizeof(pt_mfn));
+
+        spin_lock(&hd->arch.mapping_lock);
+
+        /* Since HVM domain is initialized with 2 level IO page table,
+         * we might need a deeper page table for larger gfn now */
+        if ( is_hvm_domain(d) )
+        {
+            if ( update_paging_mode(d, gfn) )
+            {
+                spin_unlock(&hd->arch.mapping_lock);
+                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+                domain_crash(d);
+                rc = -EFAULT;
+                goto err;
+            }
+        }
+
+        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
             domain_crash(d);
-            return -EFAULT;
+            rc = -EFAULT;
+            goto err;
         }
-    }
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
-        domain_crash(d);
-        return -EFAULT;
-    }
+        /* Install 4k mapping first */
+        need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
+                                           IOMMU_PAGING_MODE_LEVEL_1,
+                                           !!(flags & IOMMUF_writable),
+                                           !!(flags & IOMMUF_readable));
 
-    /* Install 4k mapping first */
-    need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn, 
-                                       IOMMU_PAGING_MODE_LEVEL_1,
-                                       !!(flags & IOMMUF_writable),
-                                       !!(flags & IOMMUF_readable));
+        /* Do not increase pde count if io mapping has not been changed */
+        if ( !need_flush )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            continue;
+        }
 
-    /* Do not increase pde count if io mapping has not been changed */
-    if ( !need_flush )
-        goto out;
+        /* 4K mapping for PV guests never changes,
+         * no need to flush if we trust non-present bits */
+        if ( is_hvm_domain(d) )
+            amd_iommu_flush_pages(d, gfn, 0);
 
-    /* 4K mapping for PV guests never changes, 
-     * no need to flush if we trust non-present bits */
-    if ( is_hvm_domain(d) )
-        amd_iommu_flush_pages(d, gfn, 0);
-
-    for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
-          merge_level <= hd->arch.paging_mode; merge_level++ )
-    {
-        if ( pt_mfn[merge_level] == 0 )
-            break;
-        if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
-                                     gfn, mfn, merge_level) )
-            break;
-
-        if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
-                               flags, merge_level) )
+        for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
+              merge_level <= hd->arch.paging_mode; merge_level++ )
         {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
-                            "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
-            domain_crash(d);
-            return -EFAULT;
+            if ( pt_mfn[merge_level] == 0 )
+                break;
+            if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
+                                         gfn, mfn, merge_level) )
+                break;
+
+            if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
+                                   flags, merge_level) )
+            {
+                spin_unlock(&hd->arch.mapping_lock);
+                AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
+                                "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
+                domain_crash(d);
+                rc = -EFAULT;
+                goto err;
+            }
+
+            /* Deallocate lower level page table */
+            free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
         }
 
-        /* Deallocate lower level page table */
-        free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
+        spin_unlock(&hd->arch.mapping_lock);
     }
 
-out:
-    spin_unlock(&hd->arch.mapping_lock);
     return 0;
+
+err:
+    while ( i-- )
+        /* If statement to satisfy __must_check. */
+        if ( amd_iommu_unmap_pages(d, orig_gfn + i, 0) )
+            continue;
+
+    return rc;
 }
 
-static int __must_check amd_iommu_unmap_page(struct domain *d,
-                                             unsigned long gfn)
+int __must_check amd_iommu_unmap_pages(struct domain *d,
+                                       unsigned long gfn,
+                                       unsigned int order)
 {
-    unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
+    int rt = 0;
+    unsigned long i;
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
-    memset(pt_mfn, 0, sizeof(pt_mfn));
-
-    spin_lock(&hd->arch.mapping_lock);
-
     if ( !hd->arch.root_table )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
         return 0;
-    }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
-    if ( is_hvm_domain(d) )
+    for ( i = 0; i < (1UL << order); i++, gfn++ )
     {
-        int rc = update_paging_mode(d, gfn);
+        unsigned long pt_mfn[7];
 
-        if ( rc )
-        {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
-            if ( rc != -EADDRNOTAVAIL )
-                domain_crash(d);
-            return rc;
-        }
-    }
+        memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
-        domain_crash(d);
-        return -EFAULT;
-    }
-
-    /* mark PTE as 'page not present' */
-    clear_iommu_pte_present(pt_mfn[1], gfn);
-    spin_unlock(&hd->arch.mapping_lock);
+        spin_lock(&hd->arch.mapping_lock);
 
-    amd_iommu_flush_pages(d, gfn, 0);
-
-    return 0;
-}
-
-/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
-int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
-                                     unsigned long mfn, unsigned int order,
-                                     unsigned int flags)
-{
-    unsigned long i;
-    int rc = 0;
-
-    for ( i = 0; i < (1UL << order); i++ )
-    {
-        rc = amd_iommu_map_page(d, gfn + i, mfn + i, flags);
-        if ( unlikely(rc) )
+        /* Since HVM domain is initialized with 2 level IO page table,
+         * we might need a deeper page table for larger gfn now */
+        if ( is_hvm_domain(d) )
         {
-            while ( i-- )
-                /* If statement to satisfy __must_check. */
-                if ( amd_iommu_unmap_page(d, gfn + i) )
-                    continue;
+            int rc = update_paging_mode(d, gfn);
 
-            break;
+            if ( rc )
+            {
+                spin_unlock(&hd->arch.mapping_lock);
+                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+                if ( rc != -EADDRNOTAVAIL )
+                    domain_crash(d);
+                if ( !rt )
+                    rt = rc;
+                continue;
+            }
         }
-    }
-
-    return rc;
-}
 
-int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
-                                       unsigned int order)
-{
-    unsigned long i;
-    int rc = 0;
+        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
+            domain_crash(d);
+            if ( !rt )
+                rt = -EFAULT;
+            continue;
+        }
 
-    for ( i = 0; i < (1UL << order); i++ )
-    {
-        int ret = amd_iommu_unmap_page(d, gfn + i);
+        /* mark PTE as 'page not present' */
+        clear_iommu_pte_present(pt_mfn[1], gfn);
+        spin_unlock(&hd->arch.mapping_lock);
 
-        if ( !rc )
-            rc = ret;
+        amd_iommu_flush_pages(d, gfn, 0);
     }
 
-    return rc;
+    return rt;
 }
 
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
@@ -831,7 +823,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
     gfn = phys_addr >> PAGE_SHIFT;
     for ( i = 0; i < npages; i++ )
     {
-        rt = amd_iommu_map_page(domain, gfn +i, gfn +i, flags);
+        rt = amd_iommu_map_pages(domain, gfn + i, gfn + i, 0, flags);
         if ( rt != 0 )
             return rt;
     }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig
  2017-07-25 17:26 ` [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig Oleksandr Tyshchenko
@ 2017-07-28 16:16   ` Wei Liu
  2017-07-28 16:30     ` Oleksandr Tyshchenko
  2017-08-03 11:33   ` Julien Grall
  1 sibling, 1 reply; 62+ messages in thread
From: Wei Liu @ 2017-07-28 16:16 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: Wei Liu, Ian Jackson, Oleksandr Tyshchenko, Julien Grall,
	Jan Beulich, xen-devel

On Tue, Jul 25, 2017 at 08:26:51PM +0300, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This flag is intended to let Xen know that the guest has devices
> which will most likely be used for passthrough and as the result
> the IOMMU is expected to be used for this domain.
> 
> The primary aim of this knowledge is to help the IOMMUs that don't
> share page tables with the CPU on ARM be ready before P2M code starts
> updating IOMMU mapping.
> So, if this flag is set the non-shared IOMMUs will populate
> their page tables at the domain creation time and thereby will be able
> to handle IOMMU mapping updates from *the very beginning*.
> 
> In order to retain the current behavior for x86 still call
> iommu_domain_init() with use_iommu flag being forced to false.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Ian Jackson <ian.jackson@eu.citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> 
> ---
>    Changes in V1:
>       - Treat use_iommu flag as the ARM decision only. Don't use
>         common domain creation flag for it, use ARM config instead.
>       - Clarify patch subject/description.
> 
>    Changes in V2:
>       - Cosmetic fixes.
> ---
>  tools/libxl/libxl_arm.c       | 8 ++++++++

Acked-by: Wei Liu <wei.liu2@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig
  2017-07-28 16:16   ` Wei Liu
@ 2017-07-28 16:30     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-28 16:30 UTC (permalink / raw)
  To: Wei Liu
  Cc: xen-devel, Julien Grall, Ian Jackson, Jan Beulich, Oleksandr Tyshchenko

Hi, Wei

On Fri, Jul 28, 2017 at 7:16 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Tue, Jul 25, 2017 at 08:26:51PM +0300, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This flag is intended to let Xen know that the guest has devices
>> which will most likely be used for passthrough and as the result
>> the IOMMU is expected to be used for this domain.
>>
>> The primary aim of this knowledge is to help the IOMMUs that don't
>> share page tables with the CPU on ARM be ready before P2M code starts
>> updating IOMMU mapping.
>> So, if this flag is set the non-shared IOMMUs will populate
>> their page tables at the domain creation time and thereby will be able
>> to handle IOMMU mapping updates from *the very beginning*.
>>
>> In order to retain the current behavior for x86 still call
>> iommu_domain_init() with use_iommu flag being forced to false.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Julien Grall <julien.grall@arm.com>
>> CC: Ian Jackson <ian.jackson@eu.citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>>
>> ---
>>    Changes in V1:
>>       - Treat use_iommu flag as the ARM decision only. Don't use
>>         common domain creation flag for it, use ARM config instead.
>>       - Clarify patch subject/description.
>>
>>    Changes in V2:
>>       - Cosmetic fixes.
>> ---
>>  tools/libxl/libxl_arm.c       | 8 ++++++++
>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Great. Thank you.

-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
                   ` (12 preceding siblings ...)
  2017-07-25 17:26 ` [PATCH v2 13/13] [RFC] iommu: AMD-Vi: " Oleksandr Tyshchenko
@ 2017-07-31  5:57 ` Tian, Kevin
  2017-07-31 11:57   ` Oleksandr Tyshchenko
  2017-08-01 17:56   ` Julien Grall
  13 siblings, 2 replies; 62+ messages in thread
From: Tian, Kevin @ 2017-07-31  5:57 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel; +Cc: Oleksandr Tyshchenko

> From: Oleksandr Tyshchenko
> Sent: Wednesday, July 26, 2017 1:27 AM
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Hi, all.
> 
> The purpose of this patch series is to create a base for porting
> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I
> mean
> the IOMMU that can't share the page table with the CPU.

Is "non-shared" IOMMU a standard terminology in ARM side? I quickly 
searched to find it mostly used in this thread...

On the other hand, all IOMMUs support a basic DMA remapping 
mechanism with page table not shared with CPU. Then some IOMMUs
may optional support Shared Virtual Memory (SVM) through page
sharing with CPU. Then I'm not sure why need to highlight the
"non-shared" manner in this thread, instead of just saying 
IPMMU-VMSA support...

> Primarily, we are interested in IPMMU-VMSA and I hope that it will be the
> first candidate.
> It is VMSA-compatible IOMMU that integrated in the newest Renesas R-Car
> Gen3 SoCs (ARM).
> I am about to push IPMMU-VMSA support in a while.
> 
> With regard to the patch series, it was rebased on Xen 4.9.0 release and
> tested on Renesas R-Car Gen3
> H3/M3 based boards with applied IPMMU-VMSA support:
> - Patches 1 and 3 have Julien's Rb.
> - Patch 2 has Jan's Rb but only for x86 and generic parts.
> - Patch 4 has Julien's Ab.
> - Patches 5,6,9,10 were slightly reworked.
> - Patch 7 was significantly reworked. The previous patch -> iommu: Split
> iommu_hwdom_init() into arch specific parts
> - Patches 8,11,12,13 are new.
> 
> Not really sure about x86-related changes since I had no possibility to check.
> So, compile-tested on x86.
> 
> You can find current patch series here:
> repo: https://github.com/otyshchenko1/xen.git branch:
> non_shared_iommu_v2
> 
> Previous patch series here:
> [PATCH v1 00/10] "Non-shared" IOMMU support on ARM
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg107532.html
> 
> [RFC PATCH 0/9] "Non-shared" IOMMU support on ARM
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg100468.html
> 
> Thank you.
> 
> Oleksandr Tyshchenko (13):
>   xen/device-tree: Add dt_count_phandle_with_args helper
>   iommu: Add extra order argument to the IOMMU APIs and platform
>     callbacks
>   xen/arm: p2m: Add helper to convert p2m type to IOMMU flags
>   xen/arm: p2m: Update IOMMU mapping whenever possible if page table is
>     not shared
>   iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
>   iommu: Add extra use_iommu argument to iommu_domain_init()
>   iommu: Make decision about needing IOMMU for hardware domains in
>     advance
>   iommu/arm: Misc fixes for arch specific part
>   xen/arm: Add use_iommu flag to xen_arch_domainconfig
>   xen/arm: domain_build: Don't expose IOMMU specific properties to the
>     guest
>   iommu/arm: smmu: Squash map_pages/unmap_pages with
> map_page/unmap_page
>   [RFC] iommu: VT-d: Squash map_pages/unmap_pages with
>     map_page/unmap_page
>   [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with
>     map_page/unmap_page
> 
>  tools/libxl/libxl_arm.c                       |   8 +
>  xen/arch/arm/domain.c                         |   2 +-
>  xen/arch/arm/domain_build.c                   |  10 ++
>  xen/arch/arm/p2m.c                            |  10 +-
>  xen/arch/x86/domain.c                         |   2 +-
>  xen/arch/x86/mm.c                             |  11 +-
>  xen/arch/x86/mm/p2m-ept.c                     |  21 +--
>  xen/arch/x86/mm/p2m-pt.c                      |  26 +---
>  xen/arch/x86/mm/p2m.c                         |  38 +----
>  xen/arch/x86/x86_64/mm.c                      |   5 +-
>  xen/common/device_tree.c                      |   7 +
>  xen/common/grant_table.c                      |  10 +-
>  xen/drivers/passthrough/amd/iommu_map.c       | 212 +++++++++++++++----
> -------
>  xen/drivers/passthrough/amd/pci_amd_iommu.c   |  10 +-
>  xen/drivers/passthrough/arm/iommu.c           |   7 +-
>  xen/drivers/passthrough/arm/smmu.c            |  23 ++-
>  xen/drivers/passthrough/iommu.c               |  73 ++++-----
>  xen/drivers/passthrough/vtd/iommu.c           | 116 +++++++++-----
>  xen/drivers/passthrough/vtd/x86/vtd.c         |   4 +-
>  xen/drivers/passthrough/x86/iommu.c           |   6 +-
>  xen/include/asm-arm/iommu.h                   |   4 +-
>  xen/include/asm-arm/p2m.h                     |  34 +++++
>  xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
>  xen/include/public/arch-arm.h                 |   5 +
>  xen/include/xen/device_tree.h                 |  19 +++
>  xen/include/xen/iommu.h                       |  24 +--
>  26 files changed, 402 insertions(+), 293 deletions(-)
> 
> --
> 2.7.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-07-31  5:57 ` [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Tian, Kevin
@ 2017-07-31 11:57   ` Oleksandr Tyshchenko
  2017-08-01  3:06     ` Tian, Kevin
  2017-08-01 17:56   ` Julien Grall
  1 sibling, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-07-31 11:57 UTC (permalink / raw)
  To: Tian, Kevin; +Cc: xen-devel, Oleksandr Tyshchenko

Hi, Kevin

On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
>> From: Oleksandr Tyshchenko
>> Sent: Wednesday, July 26, 2017 1:27 AM
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Hi, all.
>>
>> The purpose of this patch series is to create a base for porting
>> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I
>> mean
>> the IOMMU that can't share the page table with the CPU.
>
> Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
> searched to find it mostly used in this thread...
I don't think it is standard terminology.

>
> On the other hand, all IOMMUs support a basic DMA remapping
> mechanism with page table not shared with CPU. Then some IOMMUs
> may optional support Shared Virtual Memory (SVM) through page
> sharing with CPU. Then I'm not sure why need to highlight the
> "non-shared" manner in this thread, instead of just saying
> IPMMU-VMSA support...
I wouldn't use "IPMMU-VMSA support" in this thread since it may be
any other IOMMU which can't share the page table with the CPU
because of format incompatibilities.
I needed something short to describe such IOMMUs, but if the title
"non-shared" IOMMU sounds confusing, I won't use it anymore. Do you
have something in mind?

>
>> Primarily, we are interested in IPMMU-VMSA and I hope that it will be the
>> first candidate.
>> It is VMSA-compatible IOMMU that integrated in the newest Renesas R-Car
>> Gen3 SoCs (ARM).
>> I am about to push IPMMU-VMSA support in a while.
>>
>> With regard to the patch series, it was rebased on Xen 4.9.0 release and
>> tested on Renesas R-Car Gen3
>> H3/M3 based boards with applied IPMMU-VMSA support:
>> - Patches 1 and 3 have Julien's Rb.
>> - Patch 2 has Jan's Rb but only for x86 and generic parts.
>> - Patch 4 has Julien's Ab.
>> - Patches 5,6,9,10 were slightly reworked.
>> - Patch 7 was significantly reworked. The previous patch -> iommu: Split
>> iommu_hwdom_init() into arch specific parts
>> - Patches 8,11,12,13 are new.
>>
>> Not really sure about x86-related changes since I had no possibility to check.
>> So, compile-tested on x86.
>>
>> You can find current patch series here:
>> repo: https://github.com/otyshchenko1/xen.git branch:
>> non_shared_iommu_v2
>>
>> Previous patch series here:
>> [PATCH v1 00/10] "Non-shared" IOMMU support on ARM
>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg107532.html
>>
>> [RFC PATCH 0/9] "Non-shared" IOMMU support on ARM
>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg100468.html
>>
>> Thank you.
>>
>> Oleksandr Tyshchenko (13):
>>   xen/device-tree: Add dt_count_phandle_with_args helper
>>   iommu: Add extra order argument to the IOMMU APIs and platform
>>     callbacks
>>   xen/arm: p2m: Add helper to convert p2m type to IOMMU flags
>>   xen/arm: p2m: Update IOMMU mapping whenever possible if page table is
>>     not shared
>>   iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
>>   iommu: Add extra use_iommu argument to iommu_domain_init()
>>   iommu: Make decision about needing IOMMU for hardware domains in
>>     advance
>>   iommu/arm: Misc fixes for arch specific part
>>   xen/arm: Add use_iommu flag to xen_arch_domainconfig
>>   xen/arm: domain_build: Don't expose IOMMU specific properties to the
>>     guest
>>   iommu/arm: smmu: Squash map_pages/unmap_pages with
>> map_page/unmap_page
>>   [RFC] iommu: VT-d: Squash map_pages/unmap_pages with
>>     map_page/unmap_page
>>   [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with
>>     map_page/unmap_page
>>
>>  tools/libxl/libxl_arm.c                       |   8 +
>>  xen/arch/arm/domain.c                         |   2 +-
>>  xen/arch/arm/domain_build.c                   |  10 ++
>>  xen/arch/arm/p2m.c                            |  10 +-
>>  xen/arch/x86/domain.c                         |   2 +-
>>  xen/arch/x86/mm.c                             |  11 +-
>>  xen/arch/x86/mm/p2m-ept.c                     |  21 +--
>>  xen/arch/x86/mm/p2m-pt.c                      |  26 +---
>>  xen/arch/x86/mm/p2m.c                         |  38 +----
>>  xen/arch/x86/x86_64/mm.c                      |   5 +-
>>  xen/common/device_tree.c                      |   7 +
>>  xen/common/grant_table.c                      |  10 +-
>>  xen/drivers/passthrough/amd/iommu_map.c       | 212 +++++++++++++++----
>> -------
>>  xen/drivers/passthrough/amd/pci_amd_iommu.c   |  10 +-
>>  xen/drivers/passthrough/arm/iommu.c           |   7 +-
>>  xen/drivers/passthrough/arm/smmu.c            |  23 ++-
>>  xen/drivers/passthrough/iommu.c               |  73 ++++-----
>>  xen/drivers/passthrough/vtd/iommu.c           | 116 +++++++++-----
>>  xen/drivers/passthrough/vtd/x86/vtd.c         |   4 +-
>>  xen/drivers/passthrough/x86/iommu.c           |   6 +-
>>  xen/include/asm-arm/iommu.h                   |   4 +-
>>  xen/include/asm-arm/p2m.h                     |  34 +++++
>>  xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
>>  xen/include/public/arch-arm.h                 |   5 +
>>  xen/include/xen/device_tree.h                 |  19 +++
>>  xen/include/xen/iommu.h                       |  24 +--
>>  26 files changed, 402 insertions(+), 293 deletions(-)
>>
>> --
>> 2.7.4
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> https://lists.xen.org/xen-devel



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-07-31 11:57   ` Oleksandr Tyshchenko
@ 2017-08-01  3:06     ` Tian, Kevin
  2017-08-01 11:08       ` Oleksandr Tyshchenko
  2017-08-01 18:09       ` Julien Grall
  0 siblings, 2 replies; 62+ messages in thread
From: Tian, Kevin @ 2017-08-01  3:06 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: xen-devel, Oleksandr Tyshchenko

> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
> Sent: Monday, July 31, 2017 7:58 PM
> 
> Hi, Kevin
> 
> On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
> >> From: Oleksandr Tyshchenko
> >> Sent: Wednesday, July 26, 2017 1:27 AM
> >>
> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >>
> >> Hi, all.
> >>
> >> The purpose of this patch series is to create a base for porting
> >> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU
> I
> >> mean
> >> the IOMMU that can't share the page table with the CPU.
> >
> > Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
> > searched to find it mostly used in this thread...
> I don't think that it is a standard terminology.
> 
> >
> > On the other hand, all IOMMUs support a basic DMA remapping
> > mechanism with page table not shared with CPU. Then some IOMMUs
> > may optional support Shared Virtual Memory (SVM) through page
> > sharing with CPU. Then I'm not sure why need to highlight the
> > "non-shared" manner in this thread, instead of just saying
> > IPMMU-VMSA support...
> I wouldn't use "IPMMU-VMSA support" in this thread since it may be any
> other IOMMUs which can't share page table
> with CPU because of format incompatibilities.

As I commented, you can assume that all IOMMUs cannot share a page
table with the CPU as the starting point. It's not good to name an
IOMMU driver based on such a fact.

> I needed something short to describe such IOMMUs, but, If title
> "non-shared" IOMMU sounds confusing
> I won't use it anymore. Do you have something in mind?

An IOMMU driver needs to be vendor-specific. Does your driver work
for all IPMMU-VMSA compatible IOMMUs or only for Renesas'?
If the latter, you may make the name explicit for that purpose.

Btw, since you're porting a Linux driver to Xen, what's the name
used on the Linux side? That should be a good reference for you.

> 
> >
> >> Primarily, we are interested in IPMMU-VMSA and I hope that it will be the
> >> first candidate.
> >> It is VMSA-compatible IOMMU that integrated in the newest Renesas R-
> Car
> >> Gen3 SoCs (ARM).
> >> I am about to push IPMMU-VMSA support in a while.
> >>
> >> With regard to the patch series, it was rebased on Xen 4.9.0 release and
> >> tested on Renesas R-Car Gen3
> >> H3/M3 based boards with applied IPMMU-VMSA support:
> >> - Patches 1 and 3 have Julien's Rb.
> >> - Patch 2 has Jan's Rb but only for x86 and generic parts.
> >> - Patch 4 has Julien's Ab.
> >> - Patches 5,6,9,10 were slightly reworked.
> >> - Patch 7 was significantly reworked. The previous patch -> iommu: Split
> >> iommu_hwdom_init() into arch specific parts
> >> - Patches 8,11,12,13 are new.
> >>
> >> Not really sure about x86-related changes since I had no possibility to
> check.
> >> So, compile-tested on x86.
> >>
> >> You can find current patch series here:
> >> repo: https://github.com/otyshchenko1/xen.git branch:
> >> non_shared_iommu_v2
> >>
> >> Previous patch series here:
> >> [PATCH v1 00/10] "Non-shared" IOMMU support on ARM
> >> https://www.mail-archive.com/xen-devel@lists.xen.org/msg107532.html
> >>
> >> [RFC PATCH 0/9] "Non-shared" IOMMU support on ARM
> >> https://www.mail-archive.com/xen-devel@lists.xen.org/msg100468.html
> >>
> >> Thank you.
> >>
> >> Oleksandr Tyshchenko (13):
> >>   xen/device-tree: Add dt_count_phandle_with_args helper
> >>   iommu: Add extra order argument to the IOMMU APIs and platform
> >>     callbacks
> >>   xen/arm: p2m: Add helper to convert p2m type to IOMMU flags
> >>   xen/arm: p2m: Update IOMMU mapping whenever possible if page table
> is
> >>     not shared
> >>   iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
> >>   iommu: Add extra use_iommu argument to iommu_domain_init()
> >>   iommu: Make decision about needing IOMMU for hardware domains in
> >>     advance
> >>   iommu/arm: Misc fixes for arch specific part
> >>   xen/arm: Add use_iommu flag to xen_arch_domainconfig
> >>   xen/arm: domain_build: Don't expose IOMMU specific properties to the
> >>     guest
> >>   iommu/arm: smmu: Squash map_pages/unmap_pages with
> >> map_page/unmap_page
> >>   [RFC] iommu: VT-d: Squash map_pages/unmap_pages with
> >>     map_page/unmap_page
> >>   [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with
> >>     map_page/unmap_page
> >>
> >>  tools/libxl/libxl_arm.c                       |   8 +
> >>  xen/arch/arm/domain.c                         |   2 +-
> >>  xen/arch/arm/domain_build.c                   |  10 ++
> >>  xen/arch/arm/p2m.c                            |  10 +-
> >>  xen/arch/x86/domain.c                         |   2 +-
> >>  xen/arch/x86/mm.c                             |  11 +-
> >>  xen/arch/x86/mm/p2m-ept.c                     |  21 +--
> >>  xen/arch/x86/mm/p2m-pt.c                      |  26 +---
> >>  xen/arch/x86/mm/p2m.c                         |  38 +----
> >>  xen/arch/x86/x86_64/mm.c                      |   5 +-
> >>  xen/common/device_tree.c                      |   7 +
> >>  xen/common/grant_table.c                      |  10 +-
> >>  xen/drivers/passthrough/amd/iommu_map.c       | 212
> +++++++++++++++----
> >> -------
> >>  xen/drivers/passthrough/amd/pci_amd_iommu.c   |  10 +-
> >>  xen/drivers/passthrough/arm/iommu.c           |   7 +-
> >>  xen/drivers/passthrough/arm/smmu.c            |  23 ++-
> >>  xen/drivers/passthrough/iommu.c               |  73 ++++-----
> >>  xen/drivers/passthrough/vtd/iommu.c           | 116 +++++++++-----
> >>  xen/drivers/passthrough/vtd/x86/vtd.c         |   4 +-
> >>  xen/drivers/passthrough/x86/iommu.c           |   6 +-
> >>  xen/include/asm-arm/iommu.h                   |   4 +-
> >>  xen/include/asm-arm/p2m.h                     |  34 +++++
> >>  xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
> >>  xen/include/public/arch-arm.h                 |   5 +
> >>  xen/include/xen/device_tree.h                 |  19 +++
> >>  xen/include/xen/iommu.h                       |  24 +--
> >>  26 files changed, 402 insertions(+), 293 deletions(-)
> >>
> >> --
> >> 2.7.4
> >>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> https://lists.xen.org/xen-devel
> 
> 
> 
> --
> Regards,
> 
> Oleksandr Tyshchenko
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-08-01  3:06     ` Tian, Kevin
@ 2017-08-01 11:08       ` Oleksandr Tyshchenko
  2017-08-02  6:12         ` Tian, Kevin
  2017-08-01 18:09       ` Julien Grall
  1 sibling, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-01 11:08 UTC (permalink / raw)
  To: Tian, Kevin; +Cc: xen-devel, Oleksandr Tyshchenko

Hi, Kevin

On Tue, Aug 1, 2017 at 6:06 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
>> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
>> Sent: Monday, July 31, 2017 7:58 PM
>>
>> Hi, Kevin
>>
>> On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
>> >> From: Oleksandr Tyshchenko
>> >> Sent: Wednesday, July 26, 2017 1:27 AM
>> >>
>> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> >>
>> >> Hi, all.
>> >>
>> >> The purpose of this patch series is to create a base for porting
>> >> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU
>> I
>> >> mean
>> >> the IOMMU that can't share the page table with the CPU.
>> >
>> > Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
>> > searched to find it mostly used in this thread...
>> I don't think that it is a standard terminology.
>>
>> >
>> > On the other hand, all IOMMUs support a basic DMA remapping
>> > mechanism with page table not shared with CPU. Then some IOMMUs
>> > may optional support Shared Virtual Memory (SVM) through page
>> > sharing with CPU. Then I'm not sure why need to highlight the
>> > "non-shared" manner in this thread, instead of just saying
>> > IPMMU-VMSA support...
>> I wouldn't use "IPMMU-VMSA support" in this thread since it may be any
>> other IOMMUs which can't share page table
>> with CPU because of format incompatibilities.
>
> As I commented, you can assume all IOMMUs cannot share the page
> table with the CPU as the starting point. It's not good to name an IOMMU
> driver based on such a fact.
>
>> I needed something short to describe such IOMMUs, but if the title
>> "non-shared" IOMMU sounds confusing
>> I won't use it anymore. Do you have something in mind?
>
> An IOMMU driver needs to be vendor-specific. Is your driver working
> for all IPMMU-VMSA compatible IOMMUs or only for Renesas?
> If the latter, you may make the name explicit for that purpose.
>
> BTW, since you're porting a Linux driver to Xen, what's the name
> used on the Linux side? That should be a good reference for you.

I am afraid a misunderstanding took place. Let me elaborate a bit more
about this.

The IOMMU driver I am porting to Xen is IPMMU-VMSA [1]. This name is
used in Linux and was retained during porting to Xen. This driver is
intended to work with Renesas VMSA-compatible IPMMUs.
But IPMMU-VMSA support is not the target of the current thread; there
is another thread for adding it [2].

The purpose of the current thread is to create the groundwork for
IPMMU-VMSA IOMMUs (as well as other IOMMUs which can't share the page
table with the CPU because of format incompatibilities) to be
functional inside Xen on ARM.
The only IOMMU supported today in Xen on ARM is the ARM SMMU (which
uses the same page table format as the CPU and can share the page
table with it).
And the ARM-specific code assumes that the P2M table is *always*
shared and acts accordingly. So, this patch series is trying to,
let's say, break this assumption and create an environment to handle
such IOMMUs as well.
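
Roughly, the idea in the P2M update path is as follows (a sketch only,
not the actual patch; the helper names are illustrative):

    if ( iommu_enabled && need_iommu(d) )
    {
        if ( iommu_use_hap_pt(d) )
            /* Shared: the IOMMU walks the P2M itself, so only its TLB
             * needs to be flushed. */
            rc = iommu_iotlb_flush(d, gfn, 1UL << order);
        else if ( lpae_valid(pte) )
            /* Non-shared: mirror the new mapping into the IOMMU's own
             * page table. */
            rc = iommu_map_pages(d, gfn, mfn, order,
                                 p2m_get_iommu_flags(t));
        else
            rc = iommu_unmap_pages(d, gfn, order);
    }
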
So, I may use the whole sentence as a patch series title in order not
to confuse people:
"Support IOMMUs which don't share page table with the CPU on ARM"
No objections?

P.S. This patch series also touches x86-specific code, because I
had to modify common code.

[1] http://elixir.free-electrons.com/linux/latest/source/drivers/iommu/ipmmu-vmsa.c
[2] https://lists.xen.org/archives/html/xen-devel/2017-07/msg02679.html

>
>>
>> >
>> >> Primarily, we are interested in IPMMU-VMSA and I hope that it will be the
>> >> first candidate.
>> >> It is VMSA-compatible IOMMU that integrated in the newest Renesas R-Car
>> >> Gen3 SoCs (ARM).
>> >> I am about to push IPMMU-VMSA support in a while.
>> >>
>> >> With regard to the patch series, it was rebased on Xen 4.9.0 release and
>> >> tested on Renesas R-Car Gen3
>> >> H3/M3 based boards with applied IPMMU-VMSA support:
>> >> - Patches 1 and 3 have Julien's Rb.
>> >> - Patch 2 has Jan's Rb but only for x86 and generic parts.
>> >> - Patch 4 has Julien's Ab.
>> >> - Patches 5,6,9,10 were slightly reworked.
>> >> - Patch 7 was significantly reworked. The previous patch -> iommu: Split
>> >> iommu_hwdom_init() into arch specific parts
>> >> - Patches 8,11,12,13 are new.
>> >>
>> >> Not really sure about x86-related changes since I had no possibility to check.
>> >> So, compile-tested on x86.
>> >>
>> >> You can find current patch series here:
>> >> repo: https://github.com/otyshchenko1/xen.git branch:
>> >> non_shared_iommu_v2
>> >>
>> >> Previous patch series here:
>> >> [PATCH v1 00/10] "Non-shared" IOMMU support on ARM
>> >> https://www.mail-archive.com/xen-devel@lists.xen.org/msg107532.html
>> >>
>> >> [RFC PATCH 0/9] "Non-shared" IOMMU support on ARM
>> >> https://www.mail-archive.com/xen-devel@lists.xen.org/msg100468.html
>> >>
>> >> Thank you.
>> >>
>> >> Oleksandr Tyshchenko (13):
>> >>   xen/device-tree: Add dt_count_phandle_with_args helper
>> >>   iommu: Add extra order argument to the IOMMU APIs and platform
>> >>     callbacks
>> >>   xen/arm: p2m: Add helper to convert p2m type to IOMMU flags
>> >>   xen/arm: p2m: Update IOMMU mapping whenever possible if page table is
>> >>     not shared
>> >>   iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
>> >>   iommu: Add extra use_iommu argument to iommu_domain_init()
>> >>   iommu: Make decision about needing IOMMU for hardware domains in
>> >>     advance
>> >>   iommu/arm: Misc fixes for arch specific part
>> >>   xen/arm: Add use_iommu flag to xen_arch_domainconfig
>> >>   xen/arm: domain_build: Don't expose IOMMU specific properties to the
>> >>     guest
>> >>   iommu/arm: smmu: Squash map_pages/unmap_pages with
>> >> map_page/unmap_page
>> >>   [RFC] iommu: VT-d: Squash map_pages/unmap_pages with
>> >>     map_page/unmap_page
>> >>   [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with
>> >>     map_page/unmap_page
>> >>
>> >>  tools/libxl/libxl_arm.c                       |   8 +
>> >>  xen/arch/arm/domain.c                         |   2 +-
>> >>  xen/arch/arm/domain_build.c                   |  10 ++
>> >>  xen/arch/arm/p2m.c                            |  10 +-
>> >>  xen/arch/x86/domain.c                         |   2 +-
>> >>  xen/arch/x86/mm.c                             |  11 +-
>> >>  xen/arch/x86/mm/p2m-ept.c                     |  21 +--
>> >>  xen/arch/x86/mm/p2m-pt.c                      |  26 +---
>> >>  xen/arch/x86/mm/p2m.c                         |  38 +----
>> >>  xen/arch/x86/x86_64/mm.c                      |   5 +-
>> >>  xen/common/device_tree.c                      |   7 +
>> >>  xen/common/grant_table.c                      |  10 +-
>> >>  xen/drivers/passthrough/amd/iommu_map.c       | 212 +++++++++++++++-----------
>> >>  xen/drivers/passthrough/amd/pci_amd_iommu.c   |  10 +-
>> >>  xen/drivers/passthrough/arm/iommu.c           |   7 +-
>> >>  xen/drivers/passthrough/arm/smmu.c            |  23 ++-
>> >>  xen/drivers/passthrough/iommu.c               |  73 ++++-----
>> >>  xen/drivers/passthrough/vtd/iommu.c           | 116 +++++++++-----
>> >>  xen/drivers/passthrough/vtd/x86/vtd.c         |   4 +-
>> >>  xen/drivers/passthrough/x86/iommu.c           |   6 +-
>> >>  xen/include/asm-arm/iommu.h                   |   4 +-
>> >>  xen/include/asm-arm/p2m.h                     |  34 +++++
>> >>  xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   8 +-
>> >>  xen/include/public/arch-arm.h                 |   5 +
>> >>  xen/include/xen/device_tree.h                 |  19 +++
>> >>  xen/include/xen/iommu.h                       |  24 +--
>> >>  26 files changed, 402 insertions(+), 293 deletions(-)
>> >>
>> >> --
>> >> 2.7.4
>> >>
>> >>
>>
>>
>>
>> --
>> Regards,
>>
>> Oleksandr Tyshchenko



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-07-31  5:57 ` [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Tian, Kevin
  2017-07-31 11:57   ` Oleksandr Tyshchenko
@ 2017-08-01 17:56   ` Julien Grall
  1 sibling, 0 replies; 62+ messages in thread
From: Julien Grall @ 2017-08-01 17:56 UTC (permalink / raw)
  To: Tian, Kevin, Oleksandr Tyshchenko, xen-devel; +Cc: Oleksandr Tyshchenko



On 31/07/17 06:57, Tian, Kevin wrote:
>> From: Oleksandr Tyshchenko
>> Sent: Wednesday, July 26, 2017 1:27 AM
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Hi, all.
>>
>> The purpose of this patch series is to create a base for porting
>> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I
>> mean
>> the IOMMU that can't share the page table with the CPU.
>
> Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
> searched to find it mostly used in this thread...
>
> On the other hand, all IOMMUs support a basic DMA remapping
> mechanism with page table not shared with CPU. Then some IOMMUs
> may optional support Shared Virtual Memory (SVM) through page
> sharing with CPU. Then I'm not sure why need to highlight the
> "non-shared" manner in this thread, instead of just saying
> IPMMU-VMSA support...

This is not entirely true. You can share the page-table with the IOMMU
as long as the page-table format is the same. This may still involve
some TLB flush on the IOMMU side, but you don't have to allocate the
page-table twice. Therefore you save memory.

This is actually the case for SMMUv{1,2}: we share the page table but
still have to do the TLB flush and potentially clean the page table
entries.
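
To illustrate (a sketch only; the structure and field names below are
made up, not the real driver code):

    static void share_p2m_with_smmu(struct domain *d,
                                    struct arm_smmu_domain *smmu_domain)
    {
        /* "Sharing" = point the SMMU's stage-2 context at the same root
         * table the CPU's stage-2 translation uses, so no second set of
         * tables is ever allocated. */
        smmu_domain->cfg.vttbr = page_to_maddr(p2m_get_hostp2m(d)->root);

        /* The TLBs stay separate though: after any P2M update the
         * SMMU's TLB still has to be invalidated explicitly. */
        arm_smmu_tlb_inv_context(smmu_domain);
    }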

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-08-01  3:06     ` Tian, Kevin
  2017-08-01 11:08       ` Oleksandr Tyshchenko
@ 2017-08-01 18:09       ` Julien Grall
  2017-08-01 18:20         ` Oleksandr Tyshchenko
  1 sibling, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-01 18:09 UTC (permalink / raw)
  To: Tian, Kevin, Oleksandr Tyshchenko; +Cc: xen-devel, Oleksandr Tyshchenko



On 01/08/17 04:06, Tian, Kevin wrote:
>> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
>> Sent: Monday, July 31, 2017 7:58 PM
>>
>> Hi, Kevin
>>
>> On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
>>>> From: Oleksandr Tyshchenko
>>>> Sent: Wednesday, July 26, 2017 1:27 AM
>>>>
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> Hi, all.
>>>>
>>>> The purpose of this patch series is to create a base for porting
>>>> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I mean
>>>> the IOMMU that can't share the page table with the CPU.
>>>
>>> Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
>>> searched to find it mostly used in this thread...
>> I don't think that it is a standard terminology.
>>
>>>
>>> On the other hand, all IOMMUs support a basic DMA remapping
>>> mechanism with page table not shared with CPU. Then some IOMMUs
>>> may optional support Shared Virtual Memory (SVM) through page
>>> sharing with CPU. Then I'm not sure why need to highlight the
>>> "non-shared" manner in this thread, instead of just saying
>>> IPMMU-VMSA support...
>> I wouldn't use "IPMMU-VMSA support" in this thread since it may be any
>> other IOMMUs which can't share page table
>> with CPU because of format incompatibilities.
>
> As I commented you can assume all IOMMUs cannot share page
> table with CPU as the starting point. It's not good to name an IOMMU
> driver based on such fact.

As said in a previous reply, this is a wrong assumption: you can share
page-tables with the IOMMU if the format is the same. You may still need
to do the TLB flush manually on the IOMMU, but you still save memory.

x86 already supports that, also calls it "sharing", and does it by
default (see iommu_hap_pt_share). If the naming is wrong, then feel free
to send a patch.

But I don't think you should complain about Oleksandr using this name.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-08-01 18:09       ` Julien Grall
@ 2017-08-01 18:20         ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-01 18:20 UTC (permalink / raw)
  To: xen-devel
  Cc: Tian, Kevin, Stefano Stabellini, Wei Liu, Jan Beulich,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Oleksandr Tyshchenko, Julien Grall, Suravee Suthikulpanit

CC'ing people from all the patches in the current series on the cover letter.

On Tue, Aug 1, 2017 at 9:09 PM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 01/08/17 04:06, Tian, Kevin wrote:
>>>
>>> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
>>> Sent: Monday, July 31, 2017 7:58 PM
>>>
>>> Hi, Kevin
>>>
>>> On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com>
>>> wrote:
>>>>>
>>>>> From: Oleksandr Tyshchenko
>>>>> Sent: Wednesday, July 26, 2017 1:27 AM
>>>>>
>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>
>>>>> Hi, all.
>>>>>
>>>>> The purpose of this patch series is to create a base for porting
>>>>> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I mean
>>>>> the IOMMU that can't share the page table with the CPU.
>>>>
>>>>
>>>> Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
>>>> searched to find it mostly used in this thread...
>>>
>>> I don't think that it is a standard terminology.
>>>
>>>>
>>>> On the other hand, all IOMMUs support a basic DMA remapping
>>>> mechanism with page table not shared with CPU. Then some IOMMUs
>>>> may optional support Shared Virtual Memory (SVM) through page
>>>> sharing with CPU. Then I'm not sure why need to highlight the
>>>> "non-shared" manner in this thread, instead of just saying
>>>> IPMMU-VMSA support...
>>>
>>> I wouldn't use "IPMMU-VMSA support" in this thread since it may be any
>>> other IOMMUs which can't share page table
>>> with CPU because of format incompatibilities.
>>
>>
>> As I commented you can assume all IOMMUs cannot share page
>> table with CPU as the starting point. It's not good to name an IOMMU
>> driver based on such fact.
>
>
> As said in a previous reply, this is a wrong assumption: you can share
> page-tables with the IOMMU if the format is the same. You may still need
> to do the TLB flush manually on the IOMMU, but you still save memory.
>
> x86 already supports that, also calls it "sharing", and does it by
> default (see iommu_hap_pt_share). If the naming is wrong, then feel free
> to send a patch.
>
> But I don't think you should complain about Oleksandr using this name.
>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-08-01 11:08       ` Oleksandr Tyshchenko
@ 2017-08-02  6:12         ` Tian, Kevin
  2017-08-02 17:47           ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Tian, Kevin @ 2017-08-02  6:12 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: xen-devel, Oleksandr Tyshchenko

> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
> Sent: Tuesday, August 1, 2017 7:08 PM
> 
> Hi, Kevin
> 
> On Tue, Aug 1, 2017 at 6:06 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
> >> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
> >> Sent: Monday, July 31, 2017 7:58 PM
> >>
> >> Hi, Kevin
> >>
> >> On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com>
> wrote:
> >> >> From: Oleksandr Tyshchenko
> >> >> Sent: Wednesday, July 26, 2017 1:27 AM
> >> >>
> >> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >> >>
> >> >> Hi, all.
> >> >>
> >> >> The purpose of this patch series is to create a base for porting
> >> >> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I mean
> >> >> the IOMMU that can't share the page table with the CPU.
> >> >
> >> > Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
> >> > searched to find it mostly used in this thread...
> >> I don't think that it is a standard terminology.
> >>
> >> >
> >> > On the other hand, all IOMMUs support a basic DMA remapping
> >> > mechanism with page table not shared with CPU. Then some IOMMUs
> >> > may optional support Shared Virtual Memory (SVM) through page
> >> > sharing with CPU. Then I'm not sure why need to highlight the
> >> > "non-shared" manner in this thread, instead of just saying
> >> > IPMMU-VMSA support...
> >> I wouldn't use "IPMMU-VMSA support" in this thread since it may be any
> >> other IOMMUs which can't share page table
> >> with CPU because of format incompatibilities.
> >
> > As I commented you can assume all IOMMUs cannot share page
> > table with CPU as the starting point. It's not good to name an IOMMU
> > driver based on such fact.
> >
> >> I needed something short to describe such IOMMUs, but, If title
> >> "non-shared" IOMMU sounds confusing
> >> I won't use it anymore. Do you have something in mind?
> >
> > IOMMU driver needs to be vendor specific. Is your driver working
> > for all IPMMU-VMSA compatible IOMMUs or only for Renesas?
> > If the latter, you may make the name explicit for such purpose.
> >
> > btw since you're porting Linux driver to Xen. What 's the name
> > used in Linux side? that should be a good reference to you.
> 
> I am afraid a misunderstanding took place. Let me elaborate a bit more
> about this.
> 
> The IOMMU driver I am porting to Xen is IPMMU-VMSA [1]. This name is
> used in Linux and was retained during porting to Xen. This driver is
> intended to work with Renesas VMSA-compatible IPMMUs.
> But IPMMU-VMSA support is not the target of the current thread; there
> is another thread for adding it [2].
>
> The purpose of the current thread is to create the groundwork for
> IPMMU-VMSA IOMMUs (as well as other IOMMUs which can't share the page
> table with the CPU because of format incompatibilities) to be
> functional inside Xen on ARM.
> The only IOMMU supported today in Xen on ARM is the ARM SMMU (which
> uses the same page table format as the CPU and can share the page
> table with it).
> And the ARM-specific code assumes that the P2M table is *always*
> shared and acts accordingly. So, this patch series is trying to,
> let's say, break this assumption and create an environment to handle
> such IOMMUs as well.
> So, I may use the whole sentence as a patch series title in order not
> to confuse people:
> "Support IOMMUs which don't share page table with the CPU on ARM"
> No objections?

Well, I see where the disconnect comes from. My context when reviewing
this message was Shared Virtual Memory (SVM), which is a feature that
allows the IOMMU to share the same CPU page table for VA->PA
translation, so a user application can directly hand a VA to a device
to do DMA. In virtualization it means the IOMMU supports two-level
translation, with the 1st level for GVA -> GPA and the 2nd level for
GPA -> HPA, and the 1st level directly uses the guest CPU page table.
That is an optional feature not supported by all IOMMUs.

Your work, meanwhile, is really about the IOMMU sharing the CPU's EPT
page table (GPA->HPA) (sorry, I don't know ARM's term for EPT), and
for this usage some ARM SMMUs have a compatible format while others
may not, which is why you introduced the "non-shared" model here.
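
In code terms (a toy illustration only; nothing below is real Xen or
hardware state):

    typedef unsigned long addr_t;
    addr_t walk(const void *table, addr_t addr);   /* assumed helper */

    /* SVM: the device is handed a GVA, so both walks apply. */
    addr_t gpa = walk(guest_cpu_page_table, gva);  /* 1st: GVA -> GPA */
    addr_t hpa = walk(stage2_table, gpa);          /* 2nd: GPA -> HPA */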

I'm not sure whether it's worth differentiating the two usages
in your subject line, since SVM support is not in Xen today. And
from Julien's comments it looks like people aren't confused by
its meaning. So I'm fine with your original description; it's
Julien's call. :-)

Thanks
Kevin

* Re: [PATCH v2 00/13] "Non-shared" IOMMU support on ARM
  2017-08-02  6:12         ` Tian, Kevin
@ 2017-08-02 17:47           ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-02 17:47 UTC (permalink / raw)
  To: Tian, Kevin; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko

Hi, Kevin

On Wed, Aug 2, 2017 at 9:12 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
>> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
>> Sent: Tuesday, August 1, 2017 7:08 PM
>>
>> Hi, Kevin
>>
>> On Tue, Aug 1, 2017 at 6:06 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
>> >> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
>> >> Sent: Monday, July 31, 2017 7:58 PM
>> >>
>> >> Hi, Kevin
>> >>
>> >> On Mon, Jul 31, 2017 at 8:57 AM, Tian, Kevin <kevin.tian@intel.com>
>> wrote:
>> >> >> From: Oleksandr Tyshchenko
>> >> >> Sent: Wednesday, July 26, 2017 1:27 AM
>> >> >>
>> >> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> >> >>
>> >> >> Hi, all.
>> >> >>
>> >> >> The purpose of this patch series is to create a base for porting
>> >> >> any "Non-shared" IOMMUs to Xen on ARM. Saying "Non-shared" IOMMU I mean
>> >> >> the IOMMU that can't share the page table with the CPU.
>> >> >
>> >> > Is "non-shared" IOMMU a standard terminology in ARM side? I quickly
>> >> > searched to find it mostly used in this thread...
>> >> I don't think that it is a standard terminology.
>> >>
>> >> >
>> >> > On the other hand, all IOMMUs support a basic DMA remapping
>> >> > mechanism with page table not shared with CPU. Then some IOMMUs
>> >> > may optional support Shared Virtual Memory (SVM) through page
>> >> > sharing with CPU. Then I'm not sure why need to highlight the
>> >> > "non-shared" manner in this thread, instead of just saying
>> >> > IPMMU-VMSA support...
>> >> I wouldn't use "IPMMU-VMSA support" in this thread since it may be any
>> >> other IOMMUs which can't share page table
>> >> with CPU because of format incompatibilities.
>> >
>> > As I commented you can assume all IOMMUs cannot share page
>> > table with CPU as the starting point. It's not good to name an IOMMU
>> > driver based on such fact.
>> >
>> >> I needed something short to describe such IOMMUs, but, If title
>> >> "non-shared" IOMMU sounds confusing
>> >> I won't use it anymore. Do you have something in mind?
>> >
>> > IOMMU driver needs to be vendor specific. Is your driver working
>> > for all IPMMU-VMSA compatible IOMMUs or only for Renesas?
>> > If the latter, you may make the name explicit for such purpose.
>> >
>> > btw since you're porting Linux driver to Xen. What 's the name
>> > used in Linux side? that should be a good reference to you.
>>
>> I am afraid a misunderstanding took place. Let me elaborate a bit more
>> about this.
>>
>> The IOMMU driver I am porting to Xen is IPMMU-VMSA [1]. This name is
>> used in Linux and was retained during porting to Xen. This driver is
>> intended to work with Renesas VMSA-compatible IPMMUs.
>> But IPMMU-VMSA support is not the target of the current thread; there
>> is another thread for adding it [2].
>>
>> The purpose of the current thread is to create the groundwork for
>> IPMMU-VMSA IOMMUs (as well as other IOMMUs which can't share the page
>> table with the CPU because of format incompatibilities) to be
>> functional inside Xen on ARM.
>> The only IOMMU supported today in Xen on ARM is the ARM SMMU (which
>> uses the same page table format as the CPU and can share the page
>> table with it).
>> And the ARM-specific code assumes that the P2M table is *always*
>> shared and acts accordingly. So, this patch series is trying to,
>> let's say, break this assumption and create an environment to handle
>> such IOMMUs as well.
>> So, I may use the whole sentence as a patch series title in order not
>> to confuse people:
>> "Support IOMMUs which don't share page table with the CPU on ARM"
>> No objections?
>
> Well, I see where the disconnect comes from. My context when reviewing
> this message was Shared Virtual Memory (SVM), which is a feature that
> allows the IOMMU to share the same CPU page table for VA->PA
> translation, so a user application can directly hand a VA to a device
> to do DMA. In virtualization it means the IOMMU supports two-level
> translation, with the 1st level for GVA -> GPA and the 2nd level for
> GPA -> HPA, and the 1st level directly uses the guest CPU page table.
> That is an optional feature not supported by all IOMMUs.
>
> Your work, meanwhile, is really about the IOMMU sharing the CPU's EPT
> page table (GPA->HPA) (sorry, I don't know ARM's term for EPT), and
> for this usage some ARM SMMUs have a compatible format while others
> may not, which is why you introduced the "non-shared" model here.
Yes, exactly. It was about sharing the *stage-2* page table (which
handles IPA->PA mappings).
Sorry if I was unclear.

>
> I'm not sure whether it's worth differentiating the two usages
> in your subject line, since SVM support is not in Xen today. And
> from Julien's comments it looks like people aren't confused by
> its meaning. So I'm fine with your original description; it's
> Julien's call. :-)
>
> Thanks
> Kevin



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks
  2017-07-25 17:26 ` [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks Oleksandr Tyshchenko
@ 2017-08-03 11:21   ` Julien Grall
  2017-08-03 12:32     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-03 11:21 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Oleksandr Tyshchenko,
	Suravee Suthikulpanit

Hi Oleksandr,

On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 74c09b0..7c313c0 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c

[...]

> +static int __must_check arm_smmu_unmap_pages(struct domain *d,
> +		unsigned long gfn, unsigned int order)
> +{
> +	unsigned long i;
> +	int rc = 0;
> +
> +	for (i = 0; i < (1UL << order); i++) {
> +		int ret = arm_smmu_unmap_page(d, gfn + i);


Missing blank line between declaration(s) and statement(s).
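
I.e. something like this, with the blank line added (keeping the file's
Linux coding style):

	for (i = 0; i < (1UL << order); i++) {
		int ret = arm_smmu_unmap_page(d, gfn + i);

		if (!rc)
			rc = ret;
	}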

> +		if (!rc)
> +			rc = ret;
> +	}
> +
> +	return rc;
> +}
> +
>  static const struct iommu_ops arm_smmu_iommu_ops = {
>      .init = arm_smmu_iommu_domain_init,
>      .hwdom_init = arm_smmu_iommu_hwdom_init,
> @@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>      .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>      .assign_device = arm_smmu_assign_dev,
>      .reassign_device = arm_smmu_reassign_dev,
> -    .map_page = arm_smmu_map_page,
> -    .unmap_page = arm_smmu_unmap_page,
> +    .map_pages = arm_smmu_map_pages,
> +    .unmap_pages = arm_smmu_unmap_pages,
>  };
>
>  static __init const struct arm_smmu_device *find_smmu(const struct device *dev)

[...]

> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 19328f6..b4e8c89 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c

[...]

> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
> +                                                unsigned long gfn,
> +                                                unsigned int order)
> +{
> +    unsigned long i;
> +    int rc = 0;
> +
> +    for ( i = 0; i < (1UL << order); i++ )
> +    {
> +        int ret = intel_iommu_unmap_page(d, gfn + i);

Missing blank line between declaration(s) and statement(s).
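
I.e. something like this, with the blank line added (this file uses the
Xen coding style):

    for ( i = 0; i < (1UL << order); i++ )
    {
        int ret = intel_iommu_unmap_page(d, gfn + i);

        if ( !rc )
            rc = ret;
    }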

> +        if ( !rc )
> +            rc = ret;
> +    }
> +
> +    return rc;
> +}
> +

Cheers,


-- 
Julien Grall


* Re: [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
  2017-07-25 17:26 ` [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share Oleksandr Tyshchenko
@ 2017-08-03 11:23   ` Julien Grall
  2017-08-03 12:33     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-03 11:23 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel; +Cc: Oleksandr Tyshchenko

Hi Oleksandr,

On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Not every IOMMU integrated into an ARM SoC can share page tables
> with the CPU, and as a result iommu_use_hap_pt(d) mustn't
> always be true.
> Reuse x86's iommu_hap_pt_share flag to indicate whether the IOMMU
> page table is shared or not.
>
> As the P2M table must always be shared between the CPU and the SMMU,
> print an error message and bail out if this flag was previously unset.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part
  2017-07-25 17:26 ` [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part Oleksandr Tyshchenko
@ 2017-08-03 11:31   ` Julien Grall
  2017-08-03 12:34     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-03 11:31 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel; +Cc: Oleksandr Tyshchenko

Hi Oleksandr,

On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> 1. Add a missing return in case the IOMMU ops have already been set.
> 2. Add a check for a shared IOMMU before returning an error.

Technically 1. is a fix and 2. is a new feature, as sharing the IOMMU is
not supported today. I am OK with you keeping both in the same patch this
time, but in the future please avoid mixing fixes and new features.

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig
  2017-07-25 17:26 ` [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig Oleksandr Tyshchenko
  2017-07-28 16:16   ` Wei Liu
@ 2017-08-03 11:33   ` Julien Grall
  2017-08-03 12:31     ` Oleksandr Tyshchenko
  1 sibling, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-03 11:33 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel
  Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Jan Beulich

Hi Oleksandr,

On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index ec19310..3079bbe 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -569,7 +569,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
>      ASSERT(config != NULL);
>
>      /* p2m_init relies on some value initialized by the IOMMU subsystem */
> -    if ( (rc = iommu_domain_init(d, false)) != 0 )
> +    if ( (rc = iommu_domain_init(d, !!config->use_iommu)) != 0 )

NIT: !! is not necessary as the parameter is a bool :).
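
I.e. simply:

    if ( (rc = iommu_domain_init(d, config->use_iommu)) != 0 )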

Acked-by: Julien Grall <julien.grall@arm.com>

>          goto fail;
>
>      if ( (rc = p2m_init(d)) != 0 )

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest
  2017-07-25 17:26 ` [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest Oleksandr Tyshchenko
@ 2017-08-03 11:37   ` Julien Grall
  2017-08-03 13:24     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-03 11:37 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel; +Cc: Oleksandr Tyshchenko

Hi Oleksandr,

On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> We don't pass the IOMMU device through to DOM0 even if it is not used
> by Xen. Therefore exposing the properties that describe the
> relationship between master devices and IOMMUs does not make any sense.
>
> According to the:
> 1. Documentation/devicetree/bindings/iommu/iommu.txt
> 2. Documentation/devicetree/bindings/pci/pci-iommu.txt
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
>
> ---
>    Changes in v1:
>       -
>
>    Changes in v2:
>       - Skip optional properties too.
>       - Clarify patch description
> ---
>  xen/arch/arm/domain_build.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 3abacc0..fadfbbc 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -432,6 +432,16 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
>              continue;
>          }
>
> +        /* Don't expose IOMMU specific properties to the guest */
> +        if ( dt_property_name_is_equal(prop, "iommus") )
> +            continue;
> +
> +        if ( dt_property_name_is_equal(prop, "iommu-map") )
> +            continue;
> +
> +        if ( dt_property_name_is_equal(prop, "iommu-map-mask") )
> +            continue;
> +

Sadly we don't have some sort of array to blacklist properties. This could
be a good improvement if you have time to look at it.
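
Something along these lines, perhaps (an untested sketch):

    static const char *const iommu_props[] = {
        "iommus", "iommu-map", "iommu-map-mask",
    };
    unsigned int i;

    /* Don't expose IOMMU specific properties to the guest */
    for ( i = 0; i < ARRAY_SIZE(iommu_props); i++ )
        if ( dt_property_name_is_equal(prop, iommu_props[i]) )
            break;
    if ( i < ARRAY_SIZE(iommu_props) )
        continue;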

In any case:

Acked-by: Julien Grall <julien.grall@arm.com>

>          res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
>
>          if ( res )
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig
  2017-08-03 11:33   ` Julien Grall
@ 2017-08-03 12:31     ` Oleksandr Tyshchenko
  2017-08-03 12:35       ` Julien Grall
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-03 12:31 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Ian Jackson, Wei Liu, Jan Beulich, Oleksandr Tyshchenko

Hi, Julien

On Thu, Aug 3, 2017 at 2:33 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Oleksandr,
>
> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index ec19310..3079bbe 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -569,7 +569,7 @@ int arch_domain_create(struct domain *d, unsigned int
>> domcr_flags,
>>      ASSERT(config != NULL);
>>
>>      /* p2m_init relies on some value initialized by the IOMMU subsystem
>> */
>> -    if ( (rc = iommu_domain_init(d, false)) != 0 )
>> +    if ( (rc = iommu_domain_init(d, !!config->use_iommu)) != 0 )
>
>
> NIT: !! is not necessary as the parameter is a bool :).
Shall I drop "!!"?

>
> Acked-by: Julien Grall <julien.grall@arm.com>
Thank you!

>
>>          goto fail;
>>
>>      if ( (rc = p2m_init(d)) != 0 )
>
>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks
  2017-08-03 11:21   ` Julien Grall
@ 2017-08-03 12:32     ` Oleksandr Tyshchenko
  2017-08-21 16:20       ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-03 12:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Oleksandr Tyshchenko,
	Suravee Suthikulpanit, xen-devel

Hi, Julien

On Thu, Aug 3, 2017 at 2:21 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Oleksandr,
>
> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu.c
>> b/xen/drivers/passthrough/arm/smmu.c
>> index 74c09b0..7c313c0 100644
>> --- a/xen/drivers/passthrough/arm/smmu.c
>> +++ b/xen/drivers/passthrough/arm/smmu.c
>
>
> [...]
>
>> +static int __must_check arm_smmu_unmap_pages(struct domain *d,
>> +               unsigned long gfn, unsigned int order)
>> +{
>> +       unsigned long i;
>> +       int rc = 0;
>> +
>> +       for (i = 0; i < (1UL << order); i++) {
>> +               int ret = arm_smmu_unmap_page(d, gfn + i);
>
>
>
> Missing blank line between declaration(s) and statement(s).
Will add.

>
>> +               if (!rc)
>> +                       rc = ret;
>> +       }
>> +
>> +       return rc;
>> +}
>> +
>>  static const struct iommu_ops arm_smmu_iommu_ops = {
>>      .init = arm_smmu_iommu_domain_init,
>>      .hwdom_init = arm_smmu_iommu_hwdom_init,
>> @@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>>      .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>>      .assign_device = arm_smmu_assign_dev,
>>      .reassign_device = arm_smmu_reassign_dev,
>> -    .map_page = arm_smmu_map_page,
>> -    .unmap_page = arm_smmu_unmap_page,
>> +    .map_pages = arm_smmu_map_pages,
>> +    .unmap_pages = arm_smmu_unmap_pages,
>>  };
>>
>>  static __init const struct arm_smmu_device *find_smmu(const struct device
>> *dev)
>
>
> [...]
>
>> diff --git a/xen/drivers/passthrough/vtd/iommu.c
>> b/xen/drivers/passthrough/vtd/iommu.c
>> index 19328f6..b4e8c89 100644
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>
>
> [...]
>
>> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
>> +                                                unsigned long gfn,
>> +                                                unsigned int order)
>> +{
>> +    unsigned long i;
>> +    int rc = 0;
>> +
>> +    for ( i = 0; i < (1UL << order); i++ )
>> +    {
>> +        int ret = intel_iommu_unmap_page(d, gfn + i);
>
>
> Missing blank line between declaration(s) and statement(s).
Will add.

>
>> +        if ( !rc )
>> +            rc = ret;
>> +    }
>> +
>> +    return rc;
>> +}
>> +
>
>
> Cheers,
>
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share
  2017-08-03 11:23   ` Julien Grall
@ 2017-08-03 12:33     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-03 12:33 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Oleksandr Tyshchenko

Hi, Julien

On Thu, Aug 3, 2017 at 2:23 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Oleksandr,
>
> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Not every IOMMU integrated into an ARM SoC can share page tables
>> with the CPU, and as a result iommu_use_hap_pt(d) mustn't
>> always be true.
>> Reuse x86's iommu_hap_pt_share flag to indicate whether the IOMMU
>> page table is shared or not.
>>
>> As the P2M table must always be shared between the CPU and the SMMU,
>> print an error message and bail out if this flag was previously unset.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>
>
> Reviewed-by: Julien Grall <julien.grall@arm.com>
Thank you!

>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part
  2017-08-03 11:31   ` Julien Grall
@ 2017-08-03 12:34     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-03 12:34 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Oleksandr Tyshchenko

Hi, Julien

On Thu, Aug 3, 2017 at 2:31 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Oleksandr,
>
> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> 1. Add a missing return in case the IOMMU ops have already been set.
>> 2. Add a check for a shared IOMMU before returning an error.
>
>
> Technically 1. is a fix and 2. is a new feature, as sharing the IOMMU is
> not supported today. I am OK with you keeping both in the same patch this
> time, but in the future please avoid mixing fixes and new features.
ok

>
> Reviewed-by: Julien Grall <julien.grall@arm.com>
Thank you!

>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig
  2017-08-03 12:31     ` Oleksandr Tyshchenko
@ 2017-08-03 12:35       ` Julien Grall
  0 siblings, 0 replies; 62+ messages in thread
From: Julien Grall @ 2017-08-03 12:35 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: xen-devel, Ian Jackson, Wei Liu, Jan Beulich, Oleksandr Tyshchenko



On 03/08/17 13:31, Oleksandr Tyshchenko wrote:
> Hi, Julien
>
> On Thu, Aug 3, 2017 at 2:33 PM, Julien Grall <julien.grall@arm.com> wrote:
>> Hi Oleksandr,
>>
>> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>>
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index ec19310..3079bbe 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -569,7 +569,7 @@ int arch_domain_create(struct domain *d, unsigned int
>>> domcr_flags,
>>>      ASSERT(config != NULL);
>>>
>>>      /* p2m_init relies on some value initialized by the IOMMU subsystem
>>> */
>>> -    if ( (rc = iommu_domain_init(d, false)) != 0 )
>>> +    if ( (rc = iommu_domain_init(d, !!config->use_iommu)) != 0 )
>>
>>
>> NIT: !! is not necessary as the parameter is a bool :).
> Shall I drop "!!"?

Yes please.

>
>>
>> Acked-by: Julien Grall <julien.grall@arm.com>
> Thank you!
>
>>
>>>          goto fail;
>>>
>>>      if ( (rc = p2m_init(d)) != 0 )
>>
>>
>> Cheers,
>>
>> --
>> Julien Grall
>
>
>

-- 
Julien Grall


* Re: [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-07-25 17:26 ` [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page Oleksandr Tyshchenko
@ 2017-08-03 12:36   ` Julien Grall
  2017-08-03 13:26     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-08-03 12:36 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, xen-devel; +Cc: Oleksandr Tyshchenko

Hi,

On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Eliminate TODO by squashing single-page stuff with multi-page one.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest
  2017-08-03 11:37   ` Julien Grall
@ 2017-08-03 13:24     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-03 13:24 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Oleksandr Tyshchenko

Hi, Julien

On Thu, Aug 3, 2017 at 2:37 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Oleksandr,
>
>
> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> We don't pass the IOMMU device through to DOM0 even if it is not used
>> by Xen. Therefore exposing the properties that describe the
>> relationship between master devices and IOMMUs does not make any sense.
>>
>> According to the:
>> 1. Documentation/devicetree/bindings/iommu/iommu.txt
>> 2. Documentation/devicetree/bindings/pci/pci-iommu.txt
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>>    Changes in v1:
>>       -
>>
>>    Changes in v2:
>>       - Skip optional properties too.
>>       - Clarify patch description
>> ---
>>  xen/arch/arm/domain_build.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 3abacc0..fadfbbc 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -432,6 +432,16 @@ static int write_properties(struct domain *d, struct
>> kernel_info *kinfo,
>>              continue;
>>          }
>>
>> +        /* Don't expose IOMMU specific properties to the guest */
>> +        if ( dt_property_name_is_equal(prop, "iommus") )
>> +            continue;
>> +
>> +        if ( dt_property_name_is_equal(prop, "iommu-map") )
>> +            continue;
>> +
>> +        if ( dt_property_name_is_equal(prop, "iommu-map-mask") )
>> +            continue;
>> +
>
>
> Sadly we don't have some sort of array to blacklist properties. This could
> be a good improvement if you have time to look at it.
I think, yes. I will have a look when I have free time.

>
> In any case:
>
> Acked-by: Julien Grall <julien.grall@arm.com>
Thank you!

>
>>          res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
>>
>>          if ( res )
>>
>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-08-03 12:36   ` Julien Grall
@ 2017-08-03 13:26     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-03 13:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Oleksandr Tyshchenko

Hi, Julien

On Thu, Aug 3, 2017 at 3:36 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi,
>
> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Eliminate TODO by squashing single-page stuff with multi-page one.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>
>
> Acked-by: Julien Grall <julien.grall@arm.com>
Thank you!

>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks
  2017-08-03 12:32     ` Oleksandr Tyshchenko
@ 2017-08-21 16:20       ` Oleksandr Tyshchenko
  2017-08-22  7:21         ` Jan Beulich
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-21 16:20 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Oleksandr Tyshchenko,
	Julien Grall, Suravee Suthikulpanit

Hi, all.

Any comments?

On Thu, Aug 3, 2017 at 3:32 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> Hi, Julien
>
> On Thu, Aug 3, 2017 at 2:21 PM, Julien Grall <julien.grall@arm.com> wrote:
>> Hi Oleksandr,
>>
>> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu.c
>>> b/xen/drivers/passthrough/arm/smmu.c
>>> index 74c09b0..7c313c0 100644
>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>
>>
>> [...]
>>
>>> +static int __must_check arm_smmu_unmap_pages(struct domain *d,
>>> +               unsigned long gfn, unsigned int order)
>>> +{
>>> +       unsigned long i;
>>> +       int rc = 0;
>>> +
>>> +       for (i = 0; i < (1UL << order); i++) {
>>> +               int ret = arm_smmu_unmap_page(d, gfn + i);
>>
>>
>>
>> Missing blank line between declaration(s) and statement(s).
> Will add.
>
>>
>>> +               if (!rc)
>>> +                       rc = ret;
>>> +       }
>>> +
>>> +       return rc;
>>> +}
>>> +
>>>  static const struct iommu_ops arm_smmu_iommu_ops = {
>>>      .init = arm_smmu_iommu_domain_init,
>>>      .hwdom_init = arm_smmu_iommu_hwdom_init,
>>> @@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>>>      .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>>>      .assign_device = arm_smmu_assign_dev,
>>>      .reassign_device = arm_smmu_reassign_dev,
>>> -    .map_page = arm_smmu_map_page,
>>> -    .unmap_page = arm_smmu_unmap_page,
>>> +    .map_pages = arm_smmu_map_pages,
>>> +    .unmap_pages = arm_smmu_unmap_pages,
>>>  };
>>>
>>>  static __init const struct arm_smmu_device *find_smmu(const struct device
>>> *dev)
>>
>>
>> [...]
>>
>>> diff --git a/xen/drivers/passthrough/vtd/iommu.c
>>> b/xen/drivers/passthrough/vtd/iommu.c
>>> index 19328f6..b4e8c89 100644
>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>
>>
>> [...]
>>
>>> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
>>> +                                                unsigned long gfn,
>>> +                                                unsigned int order)
>>> +{
>>> +    unsigned long i;
>>> +    int rc = 0;
>>> +
>>> +    for ( i = 0; i < (1UL << order); i++ )
>>> +    {
>>> +        int ret = intel_iommu_unmap_page(d, gfn + i);
>>
>>
>> Missing blank line between declaration(s) and statement(s).
> Will add.
>
>>
>>> +        if ( !rc )
>>> +            rc = ret;
>>> +    }
>>> +
>>> +    return rc;
>>> +}
>>> +
>>
>>
>> Cheers,
>>
>>
>> --
>> Julien Grall
>
>
>
> --
> Regards,
>
> Oleksandr Tyshchenko



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-07-25 17:26 ` [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init() Oleksandr Tyshchenko
@ 2017-08-21 16:29   ` Oleksandr Tyshchenko
  2017-12-06 16:51   ` Jan Beulich
  1 sibling, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-21 16:29 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Suravee Suthikulpanit,
	Andrew Cooper, Oleksandr Tyshchenko, Julien Grall, Jan Beulich

Hi, all.

Any comments?

On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> The presence of this flag lets us know that the guest domain has statically
> assigned devices which will most likely be used for passthrough,
> and as a result the IOMMU is expected to be used for this domain.
>
> Taking this hint into account when dealing with non-shared IOMMUs,
> we can populate the IOMMU page tables beforehand and avoid going through
> the list of pages at the first assigned device.
> As this flag doesn't cover the hotplug case, we will continue to populate
> IOMMU page tables on the fly in that case.
>
> Extend the corresponding platform callback with an extra argument as well,
> and pass the incoming flag through to the IOMMU drivers, followed by
> updating the "d->need_iommu" flag for any domain. But additional logic is
> needed before updating this flag for hardware domains, which the next
> patch introduces.
>
> As iommu_domain_init() is called with the "use_iommu" flag forced
> to false for now, no functional change is intended for either ARM or x86.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>
> ---
>    Changes in v1:
>       - Clarify patch subject/description.
>       - s/bool_t/bool/
>
>    Changes in v2:
>       - Extend "init" callback with extra argument too.
>       - Clarify patch description.
>       - Add maintainers in CC
> ---
>  xen/arch/arm/domain.c                       |  2 +-
>  xen/arch/x86/domain.c                       |  2 +-
>  xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
>  xen/drivers/passthrough/arm/smmu.c          |  2 +-
>  xen/drivers/passthrough/iommu.c             | 10 ++++++++--
>  xen/drivers/passthrough/vtd/iommu.c         |  2 +-
>  xen/include/xen/iommu.h                     |  4 ++--
>  7 files changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 76310ed..ec19310 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -569,7 +569,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
>      ASSERT(config != NULL);
>
>      /* p2m_init relies on some value initialized by the IOMMU subsystem */
> -    if ( (rc = iommu_domain_init(d)) != 0 )
> +    if ( (rc = iommu_domain_init(d, false)) != 0 )
>          goto fail;
>
>      if ( (rc = p2m_init(d)) != 0 )
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index d7e6992..1ffe76c 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -641,7 +641,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
>          if ( (rc = init_domain_irq_mapping(d)) != 0 )
>              goto fail;
>
> -        if ( (rc = iommu_domain_init(d)) != 0 )
> +        if ( (rc = iommu_domain_init(d, false)) != 0 )
>              goto fail;
>      }
>      spin_lock_init(&d->arch.e820_lock);
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index fe744d2..2491e8c 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -261,7 +261,7 @@ static int get_paging_mode(unsigned long entries)
>      return level;
>  }
>
> -static int amd_iommu_domain_init(struct domain *d)
> +static int amd_iommu_domain_init(struct domain *d, bool use_iommu)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index e828308..652b58c 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -2705,7 +2705,7 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
>         return 0;
>  }
>
> -static int arm_smmu_iommu_domain_init(struct domain *d)
> +static int arm_smmu_iommu_domain_init(struct domain *d, bool use_iommu)
>  {
>         struct arm_smmu_xen_domain *xen_domain;
>
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 3e9e4c3..19c87d1 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -129,7 +129,7 @@ static void __init parse_iommu_param(char *s)
>      } while ( ss );
>  }
>
> -int iommu_domain_init(struct domain *d)
> +int iommu_domain_init(struct domain *d, bool use_iommu)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
>      int ret = 0;
> @@ -142,7 +142,13 @@ int iommu_domain_init(struct domain *d)
>          return 0;
>
>      hd->platform_ops = iommu_get_ops();
> -    return hd->platform_ops->init(d);
> +    ret = hd->platform_ops->init(d, use_iommu);
> +    if ( ret )
> +        return ret;
> +
> +    d->need_iommu = use_iommu;
> +
> +    return 0;
>  }
>
>  static void __hwdom_init check_hwdom_reqs(struct domain *d)
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index b4e8c89..45d1f36 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1277,7 +1277,7 @@ void __init iommu_free(struct acpi_drhd_unit *drhd)
>          agaw = 64;                              \
>      agaw; })
>
> -static int intel_iommu_domain_init(struct domain *d)
> +static int intel_iommu_domain_init(struct domain *d, bool use_iommu)
>  {
>      dom_iommu(d)->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
>
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 3297998..f4d489e 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -56,7 +56,7 @@ int iommu_setup(void);
>  int iommu_add_device(struct pci_dev *pdev);
>  int iommu_enable_device(struct pci_dev *pdev);
>  int iommu_remove_device(struct pci_dev *pdev);
> -int iommu_domain_init(struct domain *d);
> +int iommu_domain_init(struct domain *d, bool use_iommu);
>  void iommu_hwdom_init(struct domain *d);
>  void iommu_domain_destroy(struct domain *d);
>  int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
> @@ -155,7 +155,7 @@ struct page_info;
>  typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt);
>
>  struct iommu_ops {
> -    int (*init)(struct domain *d);
> +    int (*init)(struct domain *d, bool use_iommu);
>      void (*hwdom_init)(struct domain *d);
>      int (*add_device)(u8 devfn, device_t *dev);
>      int (*enable_device)(device_t *dev);
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread
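
For orientation, the reworked initialization path in the hunks above condenses
to the sketch below (a simplified rendering, not the verbatim Xen code;
dom_iommu(), iommu_get_ops() and the iommu_enabled flag are the existing Xen
ones). The point is the ordering: the caller's request is handed to the
platform driver first, and d->need_iommu is committed only once the driver's
init has succeeded.

    int iommu_domain_init(struct domain *d, bool use_iommu)
    {
        struct domain_iommu *hd = dom_iommu(d);
        int ret;

        if ( !iommu_enabled )
            return 0;

        hd->platform_ops = iommu_get_ops();

        /* Hand the caller's request to the platform driver first... */
        ret = hd->platform_ops->init(d, use_iommu);
        if ( ret )
            return ret;

        /* ...and record the decision only after the driver succeeded. */
        d->need_iommu = use_iommu;

        return 0;
    }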

* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-07-25 17:26 ` [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance Oleksandr Tyshchenko
@ 2017-08-21 16:30   ` Oleksandr Tyshchenko
  2017-12-06 17:01   ` Jan Beulich
  2018-01-18 12:09   ` Roger Pau Monné
  2 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-21 16:30 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Julien Grall, Jan Beulich

Hi, all.

Any comments?

On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> The hardware domain requires the IOMMU in most cases, and the decision
> to use it is currently made at hardware domain construction time. That
> is too late for non-shared IOMMUs: all the mappings established between
> IOMMU per-domain initialization and that point would then have to be
> retrieved and replayed.
>
> So, make the decision about needing the IOMMU a bit earlier, in
> iommu_domain_init(). With the "d->need_iommu" flag set at this early
> stage we won't skip any IOMMU mapping updates. As a result, the code in
> iommu_hwdom_init() that walks the page list and tries to retrieve the
> mappings for non-shared IOMMUs is no longer needed and can simply be
> dropped.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien.grall@arm.com>
>
> ---
> Changes in v1:
>    -
>
> Changes in v2:
>    - This is the result of reworking the old patch:
>      [PATCH v1 08/10] iommu: Split iommu_hwdom_init() into arch specific parts
> ---
>  xen/drivers/passthrough/iommu.c | 44 ++++++++++-------------------------------
>  1 file changed, 10 insertions(+), 34 deletions(-)
>
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 19c87d1..f5e5b7e 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -52,7 +52,7 @@ custom_param("iommu", parse_iommu_param);
>  bool_t __initdata iommu_enable = 1;
>  bool_t __read_mostly iommu_enabled;
>  bool_t __read_mostly force_iommu;
> -bool_t __hwdom_initdata iommu_dom0_strict;
> +bool_t __read_mostly iommu_dom0_strict;
>  bool_t __read_mostly iommu_verbose;
>  bool_t __read_mostly iommu_workaround_bios_bug;
>  bool_t __read_mostly iommu_igfx = 1;
> @@ -141,6 +141,15 @@ int iommu_domain_init(struct domain *d, bool use_iommu)
>      if ( !iommu_enabled )
>          return 0;
>
> +    if ( is_hardware_domain(d) )
> +    {
> +        if ( (paging_mode_translate(d) && !iommu_passthrough) ||
> +              iommu_dom0_strict )
> +            use_iommu = 1;
> +        else
> +            use_iommu = 0;
> +    }
> +
>      hd->platform_ops = iommu_get_ops();
>      ret = hd->platform_ops->init(d, use_iommu);
>      if ( ret )
> @@ -161,8 +170,6 @@ static void __hwdom_init check_hwdom_reqs(struct domain *d)
>      if ( iommu_passthrough )
>          panic("Dom0 uses paging translated mode, dom0-passthrough must not be "
>                "enabled\n");
> -
> -    iommu_dom0_strict = 1;
>  }
>
>  void __hwdom_init iommu_hwdom_init(struct domain *d)
> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>          return;
>
>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
> -    d->need_iommu = !!iommu_dom0_strict;
> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
> -    {
> -        struct page_info *page;
> -        unsigned int i = 0;
> -        int rc = 0;
> -
> -        page_list_for_each ( page, &d->page_list )
> -        {
> -            unsigned long mfn = page_to_mfn(page);
> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
> -            unsigned int mapping = IOMMUF_readable;
> -            int ret;
> -
> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
> -                 ((page->u.inuse.type_info & PGT_type_mask)
> -                  == PGT_writable_page) )
> -                mapping |= IOMMUF_writable;
> -
> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
> -            if ( !rc )
> -                rc = ret;
> -
> -            if ( !(i++ & 0xfffff) )
> -                process_pending_softirqs();
> -        }
> -
> -        if ( rc )
> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
> -                   d->domain_id, rc);
> -    }
>
>      return hd->platform_ops->hwdom_init(d);
>  }
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread
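
In short, the policy hunk above reduces the hardware-domain case to a single
assignment evaluated before the platform driver is initialized (a condensed
sketch; paging_mode_translate() and the iommu_* booleans are the existing Xen
ones):

    /* For the hardware domain, override the caller-supplied value. */
    if ( is_hardware_domain(d) )
        use_iommu = (paging_mode_translate(d) && !iommu_passthrough) ||
                    iommu_dom0_strict;

So a translated hardware domain without passthrough, or a strict dom0, gets
d->need_iommu set before any page is assigned to it, which is exactly what a
non-shared IOMMU needs.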

* Re: [PATCH v2 13/13] [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-07-25 17:26 ` [PATCH v2 13/13] [RFC] iommu: AMD-Vi: " Oleksandr Tyshchenko
@ 2017-08-21 16:44   ` Oleksandr Tyshchenko
  2017-09-12 14:45     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-21 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Suravee Suthikulpanit, Jan Beulich

Hi, all.

Any comments?

On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Reduce the scope of the TODO by squashing the single-page handling into
> the multi-page one. The next target is to use large pages whenever the
> hardware supports them.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>
> ---
>    Changes in v1:
>       -
>
>    Changes in v2:
>       -
>
> ---
>  xen/drivers/passthrough/amd/iommu_map.c | 250 ++++++++++++++++----------------
>  1 file changed, 121 insertions(+), 129 deletions(-)
>
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> index ea3a728..22d0cc6 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -631,188 +631,180 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
>      return 0;
>  }
>
> -static int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
> -                                           unsigned long mfn,
> -                                           unsigned int flags)
> +/*
> + * TODO: Optimize by using large pages whenever possible in the case
> + * that hardware supports them.
> + */
> +int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
> +                                     unsigned long mfn,
> +                                     unsigned int order,
> +                                     unsigned int flags)
>  {
> -    bool_t need_flush = 0;
>      struct domain_iommu *hd = dom_iommu(d);
>      int rc;
> -    unsigned long pt_mfn[7];
> -    unsigned int merge_level;
> +    unsigned long orig_gfn = gfn;
> +    unsigned long i;
>
>      if ( iommu_use_hap_pt(d) )
>          return 0;
>
> -    memset(pt_mfn, 0, sizeof(pt_mfn));
> -
>      spin_lock(&hd->arch.mapping_lock);
> -
>      rc = amd_iommu_alloc_root(hd);
> +    spin_unlock(&hd->arch.mapping_lock);
>      if ( rc )
>      {
> -        spin_unlock(&hd->arch.mapping_lock);
>          AMD_IOMMU_DEBUG("Root table alloc failed, gfn = %lx\n", gfn);
>          domain_crash(d);
>          return rc;
>      }
>
> -    /* Since HVM domain is initialized with 2 level IO page table,
> -     * we might need a deeper page table for lager gfn now */
> -    if ( is_hvm_domain(d) )
> +    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
>      {
> -        if ( update_paging_mode(d, gfn) )
> +        bool_t need_flush = 0;
> +        unsigned long pt_mfn[7];
> +        unsigned int merge_level;
> +
> +        memset(pt_mfn, 0, sizeof(pt_mfn));
> +
> +        spin_lock(&hd->arch.mapping_lock);
> +
> +        /* Since HVM domain is initialized with 2 level IO page table,
> +         * we might need a deeper page table for larger gfn now */
> +        if ( is_hvm_domain(d) )
> +        {
> +            if ( update_paging_mode(d, gfn) )
> +            {
> +                spin_unlock(&hd->arch.mapping_lock);
> +                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
> +                domain_crash(d);
> +                rc = -EFAULT;
> +                goto err;
> +            }
> +        }
> +
> +        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
>          {
>              spin_unlock(&hd->arch.mapping_lock);
> -            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
> +            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
>              domain_crash(d);
> -            return -EFAULT;
> +            rc = -EFAULT;
> +            goto err;
>          }
> -    }
>
> -    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
> -    {
> -        spin_unlock(&hd->arch.mapping_lock);
> -        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
> -        domain_crash(d);
> -        return -EFAULT;
> -    }
> +        /* Install 4k mapping first */
> +        need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
> +                                           IOMMU_PAGING_MODE_LEVEL_1,
> +                                           !!(flags & IOMMUF_writable),
> +                                           !!(flags & IOMMUF_readable));
>
> -    /* Install 4k mapping first */
> -    need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
> -                                       IOMMU_PAGING_MODE_LEVEL_1,
> -                                       !!(flags & IOMMUF_writable),
> -                                       !!(flags & IOMMUF_readable));
> +        /* Do not increase pde count if io mapping has not been changed */
> +        if ( !need_flush )
> +        {
> +            spin_unlock(&hd->arch.mapping_lock);
> +            continue;
> +        }
>
> -    /* Do not increase pde count if io mapping has not been changed */
> -    if ( !need_flush )
> -        goto out;
> +        /* 4K mapping for PV guests never changes,
> +         * no need to flush if we trust non-present bits */
> +        if ( is_hvm_domain(d) )
> +            amd_iommu_flush_pages(d, gfn, 0);
>
> -    /* 4K mapping for PV guests never changes,
> -     * no need to flush if we trust non-present bits */
> -    if ( is_hvm_domain(d) )
> -        amd_iommu_flush_pages(d, gfn, 0);
> -
> -    for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
> -          merge_level <= hd->arch.paging_mode; merge_level++ )
> -    {
> -        if ( pt_mfn[merge_level] == 0 )
> -            break;
> -        if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
> -                                     gfn, mfn, merge_level) )
> -            break;
> -
> -        if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
> -                               flags, merge_level) )
> +        for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
> +              merge_level <= hd->arch.paging_mode; merge_level++ )
>          {
> -            spin_unlock(&hd->arch.mapping_lock);
> -            AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
> -                            "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
> -            domain_crash(d);
> -            return -EFAULT;
> +            if ( pt_mfn[merge_level] == 0 )
> +                break;
> +            if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
> +                                         gfn, mfn, merge_level) )
> +                break;
> +
> +            if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
> +                                   flags, merge_level) )
> +            {
> +                spin_unlock(&hd->arch.mapping_lock);
> +                AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
> +                                "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
> +                domain_crash(d);
> +                rc = -EFAULT;
> +                goto err;
> +            }
> +
> +            /* Deallocate lower level page table */
> +            free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
>          }
>
> -        /* Deallocate lower level page table */
> -        free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
> +        spin_unlock(&hd->arch.mapping_lock);
>      }
>
> -out:
> -    spin_unlock(&hd->arch.mapping_lock);
>      return 0;
> +
> +err:
> +    while ( i-- )
> +        /* If statement to satisfy __must_check. */
> +        if ( amd_iommu_unmap_pages(d, orig_gfn + i, 0) )
> +            continue;
> +
> +    return rc;
>  }
>
> -static int __must_check amd_iommu_unmap_page(struct domain *d,
> -                                             unsigned long gfn)
> +int __must_check amd_iommu_unmap_pages(struct domain *d,
> +                                       unsigned long gfn,
> +                                       unsigned int order)
>  {
> -    unsigned long pt_mfn[7];
>      struct domain_iommu *hd = dom_iommu(d);
> +    int rt = 0;
> +    unsigned long i;
>
>      if ( iommu_use_hap_pt(d) )
>          return 0;
>
> -    memset(pt_mfn, 0, sizeof(pt_mfn));
> -
> -    spin_lock(&hd->arch.mapping_lock);
> -
>      if ( !hd->arch.root_table )
> -    {
> -        spin_unlock(&hd->arch.mapping_lock);
>          return 0;
> -    }
>
> -    /* Since HVM domain is initialized with 2 level IO page table,
> -     * we might need a deeper page table for lager gfn now */
> -    if ( is_hvm_domain(d) )
> +    for ( i = 0; i < (1UL << order); i++, gfn++ )
>      {
> -        int rc = update_paging_mode(d, gfn);
> +        unsigned long pt_mfn[7];
>
> -        if ( rc )
> -        {
> -            spin_unlock(&hd->arch.mapping_lock);
> -            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
> -            if ( rc != -EADDRNOTAVAIL )
> -                domain_crash(d);
> -            return rc;
> -        }
> -    }
> +        memset(pt_mfn, 0, sizeof(pt_mfn));
>
> -    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
> -    {
> -        spin_unlock(&hd->arch.mapping_lock);
> -        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
> -        domain_crash(d);
> -        return -EFAULT;
> -    }
> -
> -    /* mark PTE as 'page not present' */
> -    clear_iommu_pte_present(pt_mfn[1], gfn);
> -    spin_unlock(&hd->arch.mapping_lock);
> +        spin_lock(&hd->arch.mapping_lock);
>
> -    amd_iommu_flush_pages(d, gfn, 0);
> -
> -    return 0;
> -}
> -
> -/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
> -int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
> -                                     unsigned long mfn, unsigned int order,
> -                                     unsigned int flags)
> -{
> -    unsigned long i;
> -    int rc = 0;
> -
> -    for ( i = 0; i < (1UL << order); i++ )
> -    {
> -        rc = amd_iommu_map_page(d, gfn + i, mfn + i, flags);
> -        if ( unlikely(rc) )
> +        /* Since HVM domain is initialized with 2 level IO page table,
> +         * we might need a deeper page table for larger gfn now */
> +        if ( is_hvm_domain(d) )
>          {
> -            while ( i-- )
> -                /* If statement to satisfy __must_check. */
> -                if ( amd_iommu_unmap_page(d, gfn + i) )
> -                    continue;
> +            int rc = update_paging_mode(d, gfn);
>
> -            break;
> +            if ( rc )
> +            {
> +                spin_unlock(&hd->arch.mapping_lock);
> +                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
> +                if ( rc != -EADDRNOTAVAIL )
> +                    domain_crash(d);
> +                if ( !rt )
> +                    rt = rc;
> +                continue;
> +            }
>          }
> -    }
> -
> -    return rc;
> -}
>
> -int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
> -                                       unsigned int order)
> -{
> -    unsigned long i;
> -    int rc = 0;
> +        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
> +        {
> +            spin_unlock(&hd->arch.mapping_lock);
> +            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
> +            domain_crash(d);
> +            if ( !rt )
> +                rt = -EFAULT;
> +            continue;
> +        }
>
> -    for ( i = 0; i < (1UL << order); i++ )
> -    {
> -        int ret = amd_iommu_unmap_page(d, gfn + i);
> +        /* mark PTE as 'page not present' */
> +        clear_iommu_pte_present(pt_mfn[1], gfn);
> +        spin_unlock(&hd->arch.mapping_lock);
>
> -        if ( !rc )
> -            rc = ret;
> +        amd_iommu_flush_pages(d, gfn, 0);
>      }
>
> -    return rc;
> +    return rt;
>  }
>
>  int amd_iommu_reserve_domain_unity_map(struct domain *domain,
> @@ -831,7 +823,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
>      gfn = phys_addr >> PAGE_SHIFT;
>      for ( i = 0; i < npages; i++ )
>      {
> -        rt = amd_iommu_map_page(domain, gfn +i, gfn +i, flags);
> +        rt = amd_iommu_map_pages(domain, gfn + i, gfn + i, 0, flags);
>          if ( rt != 0 )
>              return rt;
>      }
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread
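
The error path above follows the usual map-with-rollback idiom. Below is a
self-contained sketch of that pattern; map_one() and unmap_one() are
hypothetical stand-ins for the per-page helpers, not Xen functions, and
__must_check expands as it does in Xen:

    #define __must_check __attribute__((__warn_unused_result__))

    int map_one(unsigned long gfn, unsigned long mfn);
    int __must_check unmap_one(unsigned long gfn);

    static int map_range(unsigned long gfn, unsigned long mfn,
                         unsigned int order)
    {
        unsigned long i;
        int rc = 0;

        for ( i = 0; i < (1UL << order); i++ )
        {
            rc = map_one(gfn + i, mfn + i);
            if ( rc )
                break;
        }

        if ( rc )
            /* Undo what was already mapped; the "if ... continue"
             * consumes the __must_check result on a path that is
             * failing anyway. */
            while ( i-- )
                if ( unmap_one(gfn + i) )
                    continue;

        return rc;
    }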

* Re: [PATCH v2 12/13] [RFC] iommu: VT-d: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-07-25 17:26 ` [PATCH v2 12/13] [RFC] iommu: VT-d: " Oleksandr Tyshchenko
@ 2017-08-21 16:44   ` Oleksandr Tyshchenko
  2017-09-12 14:44     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-21 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Oleksandr Tyshchenko, Kevin Tian, Jan Beulich

Hi, all.

Any comments?

On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Reduce the scope of the TODO by squashing the single-page handling into
> the multi-page one. The next target is to use large pages whenever the
> hardware supports them.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Kevin Tian <kevin.tian@intel.com>
>
> ---
>    Changes in v1:
>       -
>
>    Changes in v2:
>       -
> ---
>  xen/drivers/passthrough/vtd/iommu.c | 138 +++++++++++++++++-------------------
>  1 file changed, 67 insertions(+), 71 deletions(-)
>
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 45d1f36..d20b2f9 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1750,15 +1750,24 @@ static void iommu_domain_teardown(struct domain *d)
>      spin_unlock(&hd->arch.mapping_lock);
>  }
>
> -static int __must_check intel_iommu_map_page(struct domain *d,
> -                                             unsigned long gfn,
> -                                             unsigned long mfn,
> -                                             unsigned int flags)
> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
> +                                                unsigned long gfn,
> +                                                unsigned int order);
> +
> +/*
> + * TODO: Optimize by using large pages whenever possible in the case
> + * that hardware supports them.
> + */
> +static int __must_check intel_iommu_map_pages(struct domain *d,
> +                                              unsigned long gfn,
> +                                              unsigned long mfn,
> +                                              unsigned int order,
> +                                              unsigned int flags)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> -    u64 pg_maddr;
>      int rc = 0;
> +    unsigned long orig_gfn = gfn;
> +    unsigned long i;
>
>      /* Do nothing if VT-d shares EPT page table */
>      if ( iommu_use_hap_pt(d) )
> @@ -1768,78 +1777,60 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>      if ( iommu_passthrough && is_hardware_domain(d) )
>          return 0;
>
> -    spin_lock(&hd->arch.mapping_lock);
> -
> -    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
> -    if ( pg_maddr == 0 )
> +    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
>      {
> -        spin_unlock(&hd->arch.mapping_lock);
> -        return -ENOMEM;
> -    }
> -    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> -    pte = page + (gfn & LEVEL_MASK);
> -    old = *pte;
> -    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
> -    dma_set_pte_prot(new,
> -                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> -                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> +        struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> +        u64 pg_maddr;
>
> -    /* Set the SNP on leaf page table if Snoop Control available */
> -    if ( iommu_snoop )
> -        dma_set_pte_snp(new);
> +        spin_lock(&hd->arch.mapping_lock);
>
> -    if ( old.val == new.val )
> -    {
> +        pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
> +        if ( pg_maddr == 0 )
> +        {
> +            spin_unlock(&hd->arch.mapping_lock);
> +            rc = -ENOMEM;
> +            goto err;
> +        }
> +        page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> +        pte = page + (gfn & LEVEL_MASK);
> +        old = *pte;
> +        dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
> +        dma_set_pte_prot(new,
> +                         ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> +                         ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> +
> +        /* Set the SNP on leaf page table if Snoop Control available */
> +        if ( iommu_snoop )
> +            dma_set_pte_snp(new);
> +
> +        if ( old.val == new.val )
> +        {
> +            spin_unlock(&hd->arch.mapping_lock);
> +            unmap_vtd_domain_page(page);
> +            continue;
> +        }
> +        *pte = new;
> +
> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>          spin_unlock(&hd->arch.mapping_lock);
>          unmap_vtd_domain_page(page);
> -        return 0;
> -    }
> -    *pte = new;
> -
> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> -    spin_unlock(&hd->arch.mapping_lock);
> -    unmap_vtd_domain_page(page);
>
> -    if ( !this_cpu(iommu_dont_flush_iotlb) )
> -        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
> -
> -    return rc;
> -}
> -
> -static int __must_check intel_iommu_unmap_page(struct domain *d,
> -                                               unsigned long gfn)
> -{
> -    /* Do nothing if hardware domain and iommu supports pass thru. */
> -    if ( iommu_passthrough && is_hardware_domain(d) )
> -        return 0;
> -
> -    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
> -}
> -
> -/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
> -static int __must_check intel_iommu_map_pages(struct domain *d,
> -                                              unsigned long gfn,
> -                                              unsigned long mfn,
> -                                              unsigned int order,
> -                                              unsigned int flags)
> -{
> -    unsigned long i;
> -    int rc = 0;
> -
> -    for ( i = 0; i < (1UL << order); i++ )
> -    {
> -        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
> -        if ( unlikely(rc) )
> +        if ( !this_cpu(iommu_dont_flush_iotlb) )
>          {
> -            while ( i-- )
> -                /* If statement to satisfy __must_check. */
> -                if ( intel_iommu_unmap_page(d, gfn + i) )
> -                    continue;
> -
> -            break;
> +            rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
> +            if ( rc )
> +                goto err;
>          }
>      }
>
> +    return 0;
> +
> +err:
> +    while ( i-- )
> +        /* If statement to satisfy __must_check. */
> +        if ( intel_iommu_unmap_pages(d, orig_gfn + i, 0) )
> +            continue;
> +
>      return rc;
>  }
>
> @@ -1847,12 +1838,17 @@ static int __must_check intel_iommu_unmap_pages(struct domain *d,
>                                                  unsigned long gfn,
>                                                  unsigned int order)
>  {
> -    unsigned long i;
>      int rc = 0;
> +    unsigned long i;
> +
> +    /* Do nothing if hardware domain and iommu supports pass thru. */
> +    if ( iommu_passthrough && is_hardware_domain(d) )
> +        return 0;
>
> -    for ( i = 0; i < (1UL << order); i++ )
> +    for ( i = 0; i < (1UL << order); i++, gfn++ )
>      {
> -        int ret = intel_iommu_unmap_page(d, gfn + i);
> +        int ret = dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
> +
>          if ( !rc )
>              rc = ret;
>      }
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread
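
For reference, the leaf PTE the loop above installs is just the machine frame
shifted into place plus the permission bits (the snoop-control bit, set
separately when available, is left out here). A standalone sketch with the
relevant constants inlined; the values mirror Xen's VT-d and IOMMU headers at
the time of this series and should be treated as assumptions:

    #include <stdint.h>

    #define PAGE_SHIFT_4K   12
    #define DMA_PTE_READ    1           /* bit 0 of a VT-d leaf PTE */
    #define DMA_PTE_WRITE   2           /* bit 1 of a VT-d leaf PTE */
    #define IOMMUF_readable (1u << 0)
    #define IOMMUF_writable (1u << 1)

    static uint64_t make_leaf_pte(unsigned long mfn, unsigned int flags)
    {
        uint64_t val = (uint64_t)mfn << PAGE_SHIFT_4K;

        if ( flags & IOMMUF_readable )
            val |= DMA_PTE_READ;
        if ( flags & IOMMUF_writable )
            val |= DMA_PTE_WRITE;

        return val;
    }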

* Re: [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks
  2017-08-21 16:20       ` Oleksandr Tyshchenko
@ 2017-08-22  7:21         ` Jan Beulich
  2017-08-22 10:28           ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2017-08-22  7:21 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Oleksandr Tyshchenko,
	Julien Grall, Suravee Suthikulpanit, xen-devel

>>> On 21.08.17 at 18:20, <olekstysh@gmail.com> wrote:
> Hi, all.
> 
> Any comments?

Excuse me, but comments on what? The quoted text below just has
two "will add" comments of yours. I don't think you expect any
further comments on those. As to the series as a whole - I still have
it on my to-be-reviewed list, but there's no way I can predict when
I would get to it.

Jan

> On Thu, Aug 3, 2017 at 3:32 PM, Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
>> Hi, Julien
>>
>> On Thu, Aug 3, 2017 at 2:21 PM, Julien Grall <julien.grall@arm.com> wrote:
>>> Hi Oleksandr,
>>>
>>> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>>>
>>>> diff --git a/xen/drivers/passthrough/arm/smmu.c
>>>> b/xen/drivers/passthrough/arm/smmu.c
>>>> index 74c09b0..7c313c0 100644
>>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>>
>>>
>>> [...]
>>>
>>>> +static int __must_check arm_smmu_unmap_pages(struct domain *d,
>>>> +               unsigned long gfn, unsigned int order)
>>>> +{
>>>> +       unsigned long i;
>>>> +       int rc = 0;
>>>> +
>>>> +       for (i = 0; i < (1UL << order); i++) {
>>>> +               int ret = arm_smmu_unmap_page(d, gfn + i);
>>>
>>>
>>>
>>> Missing blank line between declaration(s) and statement(s).
>> Will add.
>>
>>>
>>>> +               if (!rc)
>>>> +                       rc = ret;
>>>> +       }
>>>> +
>>>> +       return rc;
>>>> +}
>>>> +
>>>>  static const struct iommu_ops arm_smmu_iommu_ops = {
>>>>      .init = arm_smmu_iommu_domain_init,
>>>>      .hwdom_init = arm_smmu_iommu_hwdom_init,
>>>> @@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>>>>      .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>>>>      .assign_device = arm_smmu_assign_dev,
>>>>      .reassign_device = arm_smmu_reassign_dev,
>>>> -    .map_page = arm_smmu_map_page,
>>>> -    .unmap_page = arm_smmu_unmap_page,
>>>> +    .map_pages = arm_smmu_map_pages,
>>>> +    .unmap_pages = arm_smmu_unmap_pages,
>>>>  };
>>>>
>>>>  static __init const struct arm_smmu_device *find_smmu(const struct device
>>>> *dev)
>>>
>>>
>>> [...]
>>>
>>>> diff --git a/xen/drivers/passthrough/vtd/iommu.c
>>>> b/xen/drivers/passthrough/vtd/iommu.c
>>>> index 19328f6..b4e8c89 100644
>>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>>
>>>
>>> [...]
>>>
>>>> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
>>>> +                                                unsigned long gfn,
>>>> +                                                unsigned int order)
>>>> +{
>>>> +    unsigned long i;
>>>> +    int rc = 0;
>>>> +
>>>> +    for ( i = 0; i < (1UL << order); i++ )
>>>> +    {
>>>> +        int ret = intel_iommu_unmap_page(d, gfn + i);
>>>
>>>
>>> Missing blank line between declaration(s) and statement(s).
>> Will add.
>>
>>>
>>>> +        if ( !rc )
>>>> +            rc = ret;
>>>> +    }
>>>> +
>>>> +    return rc;
>>>> +}
>>>> +
>>>
>>>
>>> Cheers,
>>>
>>>
>>> --
>>> Julien Grall
>>
>>
>>
>> --
>> Regards,
>>
>> Oleksandr Tyshchenko
> 
> 
> 
> -- 
> Regards,
> 
> Oleksandr Tyshchenko
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> https://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread
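
For completeness, the style nit Julien raised in the quoted hunks resolves to
a one-line change: Xen wants a blank line between the declaration and the
statements that follow it, i.e.

    for ( i = 0; i < (1UL << order); i++ )
    {
        int ret = intel_iommu_unmap_page(d, gfn + i);

        if ( !rc )
            rc = ret;
    }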

* Re: [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks
  2017-08-22  7:21         ` Jan Beulich
@ 2017-08-22 10:28           ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-08-22 10:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Oleksandr Tyshchenko,
	Julien Grall, Suravee Suthikulpanit, xen-devel

Hi, Jan

On Tue, Aug 22, 2017 at 10:21 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 21.08.17 at 18:20, <olekstysh@gmail.com> wrote:
>> Hi, all.
>>
>> Any comments?
>
> Excuse me, but comments on what? The quoted text below just has
> two "will add" comments of yours. I don't think you expect any
> further comments on those. As to the series as a whole - I still have
> it on my to-be-reviewed list, but there's no way I can predict when
> I would get to it.
I got it. No problem, will wait.

>
> Jan
>
>> On Thu, Aug 3, 2017 at 3:32 PM, Oleksandr Tyshchenko
>> <olekstysh@gmail.com> wrote:
>>> Hi, Julien
>>>
>>> On Thu, Aug 3, 2017 at 2:21 PM, Julien Grall <julien.grall@arm.com> wrote:
>>>> Hi Oleksandr,
>>>>
>>>> On 25/07/17 18:26, Oleksandr Tyshchenko wrote:
>>>>>
>>>>> diff --git a/xen/drivers/passthrough/arm/smmu.c
>>>>> b/xen/drivers/passthrough/arm/smmu.c
>>>>> index 74c09b0..7c313c0 100644
>>>>> --- a/xen/drivers/passthrough/arm/smmu.c
>>>>> +++ b/xen/drivers/passthrough/arm/smmu.c
>>>>
>>>>
>>>> [...]
>>>>
>>>>> +static int __must_check arm_smmu_unmap_pages(struct domain *d,
>>>>> +               unsigned long gfn, unsigned int order)
>>>>> +{
>>>>> +       unsigned long i;
>>>>> +       int rc = 0;
>>>>> +
>>>>> +       for (i = 0; i < (1UL << order); i++) {
>>>>> +               int ret = arm_smmu_unmap_page(d, gfn + i);
>>>>
>>>>
>>>>
>>>> Missing blank line between declaration(s) and statement(s).
>>> Will add.
>>>
>>>>
>>>>> +               if (!rc)
>>>>> +                       rc = ret;
>>>>> +       }
>>>>> +
>>>>> +       return rc;
>>>>> +}
>>>>> +
>>>>>  static const struct iommu_ops arm_smmu_iommu_ops = {
>>>>>      .init = arm_smmu_iommu_domain_init,
>>>>>      .hwdom_init = arm_smmu_iommu_hwdom_init,
>>>>> @@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>>>>>      .iotlb_flush_all = arm_smmu_iotlb_flush_all,
>>>>>      .assign_device = arm_smmu_assign_dev,
>>>>>      .reassign_device = arm_smmu_reassign_dev,
>>>>> -    .map_page = arm_smmu_map_page,
>>>>> -    .unmap_page = arm_smmu_unmap_page,
>>>>> +    .map_pages = arm_smmu_map_pages,
>>>>> +    .unmap_pages = arm_smmu_unmap_pages,
>>>>>  };
>>>>>
>>>>>  static __init const struct arm_smmu_device *find_smmu(const struct device
>>>>> *dev)
>>>>
>>>>
>>>> [...]
>>>>
>>>>> diff --git a/xen/drivers/passthrough/vtd/iommu.c
>>>>> b/xen/drivers/passthrough/vtd/iommu.c
>>>>> index 19328f6..b4e8c89 100644
>>>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>>>
>>>>
>>>> [...]
>>>>
>>>>> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
>>>>> +                                                unsigned long gfn,
>>>>> +                                                unsigned int order)
>>>>> +{
>>>>> +    unsigned long i;
>>>>> +    int rc = 0;
>>>>> +
>>>>> +    for ( i = 0; i < (1UL << order); i++ )
>>>>> +    {
>>>>> +        int ret = intel_iommu_unmap_page(d, gfn + i);
>>>>
>>>>
>>>> Missing blank line between declaration(s) and statement(s).
>>> Will add.
>>>
>>>>
>>>>> +        if ( !rc )
>>>>> +            rc = ret;
>>>>> +    }
>>>>> +
>>>>> +    return rc;
>>>>> +}
>>>>> +
>>>>
>>>>
>>>> Cheers,
>>>>
>>>>
>>>> --
>>>> Julien Grall
>>>
>>>
>>>
>>> --
>>> Regards,
>>>
>>> Oleksandr Tyshchenko
>>
>>
>>
>> --
>> Regards,
>>
>> Oleksandr Tyshchenko
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> https://lists.xen.org/xen-devel
>
>
>



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 12/13] [RFC] iommu: VT-d: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-08-21 16:44   ` Oleksandr Tyshchenko
@ 2017-09-12 14:44     ` Oleksandr Tyshchenko
  2017-09-20  8:54       ` Tian, Kevin
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-09-12 14:44 UTC (permalink / raw)
  To: Kevin Tian; +Cc: Oleksandr Tyshchenko, Jan Beulich, xen-devel

Hi.

Gentle reminder.

On Mon, Aug 21, 2017 at 7:44 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> Hi, all.
>
> Any comments?
>
> On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Reduce the scope of the TODO by squashing the single-page handling into
>> the multi-page one. The next target is to use large pages whenever the
>> hardware supports them.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Kevin Tian <kevin.tian@intel.com>
>>
>> ---
>>    Changes in v1:
>>       -
>>
>>    Changes in v2:
>>       -
>> ---
>>  xen/drivers/passthrough/vtd/iommu.c | 138 +++++++++++++++++-------------------
>>  1 file changed, 67 insertions(+), 71 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
>> index 45d1f36..d20b2f9 100644
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -1750,15 +1750,24 @@ static void iommu_domain_teardown(struct domain *d)
>>      spin_unlock(&hd->arch.mapping_lock);
>>  }
>>
>> -static int __must_check intel_iommu_map_page(struct domain *d,
>> -                                             unsigned long gfn,
>> -                                             unsigned long mfn,
>> -                                             unsigned int flags)
>> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
>> +                                                unsigned long gfn,
>> +                                                unsigned int order);
>> +
>> +/*
>> + * TODO: Optimize by using large pages whenever possible in the case
>> + * that hardware supports them.
>> + */
>> +static int __must_check intel_iommu_map_pages(struct domain *d,
>> +                                              unsigned long gfn,
>> +                                              unsigned long mfn,
>> +                                              unsigned int order,
>> +                                              unsigned int flags)
>>  {
>>      struct domain_iommu *hd = dom_iommu(d);
>> -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
>> -    u64 pg_maddr;
>>      int rc = 0;
>> +    unsigned long orig_gfn = gfn;
>> +    unsigned long i;
>>
>>      /* Do nothing if VT-d shares EPT page table */
>>      if ( iommu_use_hap_pt(d) )
>> @@ -1768,78 +1777,60 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>>      if ( iommu_passthrough && is_hardware_domain(d) )
>>          return 0;
>>
>> -    spin_lock(&hd->arch.mapping_lock);
>> -
>> -    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
>> -    if ( pg_maddr == 0 )
>> +    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
>>      {
>> -        spin_unlock(&hd->arch.mapping_lock);
>> -        return -ENOMEM;
>> -    }
>> -    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
>> -    pte = page + (gfn & LEVEL_MASK);
>> -    old = *pte;
>> -    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
>> -    dma_set_pte_prot(new,
>> -                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
>> -                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
>> +        struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
>> +        u64 pg_maddr;
>>
>> -    /* Set the SNP on leaf page table if Snoop Control available */
>> -    if ( iommu_snoop )
>> -        dma_set_pte_snp(new);
>> +        spin_lock(&hd->arch.mapping_lock);
>>
>> -    if ( old.val == new.val )
>> -    {
>> +        pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
>> +        if ( pg_maddr == 0 )
>> +        {
>> +            spin_unlock(&hd->arch.mapping_lock);
>> +            rc = -ENOMEM;
>> +            goto err;
>> +        }
>> +        page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
>> +        pte = page + (gfn & LEVEL_MASK);
>> +        old = *pte;
>> +        dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
>> +        dma_set_pte_prot(new,
>> +                         ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
>> +                         ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
>> +
>> +        /* Set the SNP on leaf page table if Snoop Control available */
>> +        if ( iommu_snoop )
>> +            dma_set_pte_snp(new);
>> +
>> +        if ( old.val == new.val )
>> +        {
>> +            spin_unlock(&hd->arch.mapping_lock);
>> +            unmap_vtd_domain_page(page);
>> +            continue;
>> +        }
>> +        *pte = new;
>> +
>> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>>          spin_unlock(&hd->arch.mapping_lock);
>>          unmap_vtd_domain_page(page);
>> -        return 0;
>> -    }
>> -    *pte = new;
>> -
>> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>> -    spin_unlock(&hd->arch.mapping_lock);
>> -    unmap_vtd_domain_page(page);
>>
>> -    if ( !this_cpu(iommu_dont_flush_iotlb) )
>> -        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
>> -
>> -    return rc;
>> -}
>> -
>> -static int __must_check intel_iommu_unmap_page(struct domain *d,
>> -                                               unsigned long gfn)
>> -{
>> -    /* Do nothing if hardware domain and iommu supports pass thru. */
>> -    if ( iommu_passthrough && is_hardware_domain(d) )
>> -        return 0;
>> -
>> -    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
>> -}
>> -
>> -/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
>> -static int __must_check intel_iommu_map_pages(struct domain *d,
>> -                                              unsigned long gfn,
>> -                                              unsigned long mfn,
>> -                                              unsigned int order,
>> -                                              unsigned int flags)
>> -{
>> -    unsigned long i;
>> -    int rc = 0;
>> -
>> -    for ( i = 0; i < (1UL << order); i++ )
>> -    {
>> -        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
>> -        if ( unlikely(rc) )
>> +        if ( !this_cpu(iommu_dont_flush_iotlb) )
>>          {
>> -            while ( i-- )
>> -                /* If statement to satisfy __must_check. */
>> -                if ( intel_iommu_unmap_page(d, gfn + i) )
>> -                    continue;
>> -
>> -            break;
>> +            rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
>> +            if ( rc )
>> +                goto err;
>>          }
>>      }
>>
>> +    return 0;
>> +
>> +err:
>> +    while ( i-- )
>> +        /* If statement to satisfy __must_check. */
>> +        if ( intel_iommu_unmap_pages(d, orig_gfn + i, 0) )
>> +            continue;
>> +
>>      return rc;
>>  }
>>
>> @@ -1847,12 +1838,17 @@ static int __must_check intel_iommu_unmap_pages(struct domain *d,
>>                                                  unsigned long gfn,
>>                                                  unsigned int order)
>>  {
>> -    unsigned long i;
>>      int rc = 0;
>> +    unsigned long i;
>> +
>> +    /* Do nothing if hardware domain and iommu supports pass thru. */
>> +    if ( iommu_passthrough && is_hardware_domain(d) )
>> +        return 0;
>>
>> -    for ( i = 0; i < (1UL << order); i++ )
>> +    for ( i = 0; i < (1UL << order); i++, gfn++ )
>>      {
>> -        int ret = intel_iommu_unmap_page(d, gfn + i);
>> +        int ret = dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
>> +
>>          if ( !rc )
>>              rc = ret;
>>      }
>> --
>> 2.7.4
>>
>
>
>
> --
> Regards,
>
> Oleksandr Tyshchenko



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH v2 13/13] [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-08-21 16:44   ` Oleksandr Tyshchenko
@ 2017-09-12 14:45     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-09-12 14:45 UTC (permalink / raw)
  To: Suravee Suthikulpanit; +Cc: Oleksandr Tyshchenko, Jan Beulich, xen-devel

Hi.

Gentle reminder.

On Mon, Aug 21, 2017 at 7:44 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> Hi, all.
>
> Any comments?
>
> On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Reduce the scope of the TODO by squashing the single-page handling into
>> the multi-page one. The next target is to use large pages whenever the
>> hardware supports them.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>>
>> ---
>>    Changes in v1:
>>       -
>>
>>    Changes in v2:
>>       -
>>
>> ---
>>  xen/drivers/passthrough/amd/iommu_map.c | 250 ++++++++++++++++----------------
>>  1 file changed, 121 insertions(+), 129 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
>> index ea3a728..22d0cc6 100644
>> --- a/xen/drivers/passthrough/amd/iommu_map.c
>> +++ b/xen/drivers/passthrough/amd/iommu_map.c
>> @@ -631,188 +631,180 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
>>      return 0;
>>  }
>>
>> -static int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
>> -                                           unsigned long mfn,
>> -                                           unsigned int flags)
>> +/*
>> + * TODO: Optimize by using large pages whenever possible in the case
>> + * that hardware supports them.
>> + */
>> +int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
>> +                                     unsigned long mfn,
>> +                                     unsigned int order,
>> +                                     unsigned int flags)
>>  {
>> -    bool_t need_flush = 0;
>>      struct domain_iommu *hd = dom_iommu(d);
>>      int rc;
>> -    unsigned long pt_mfn[7];
>> -    unsigned int merge_level;
>> +    unsigned long orig_gfn = gfn;
>> +    unsigned long i;
>>
>>      if ( iommu_use_hap_pt(d) )
>>          return 0;
>>
>> -    memset(pt_mfn, 0, sizeof(pt_mfn));
>> -
>>      spin_lock(&hd->arch.mapping_lock);
>> -
>>      rc = amd_iommu_alloc_root(hd);
>> +    spin_unlock(&hd->arch.mapping_lock);
>>      if ( rc )
>>      {
>> -        spin_unlock(&hd->arch.mapping_lock);
>>          AMD_IOMMU_DEBUG("Root table alloc failed, gfn = %lx\n", gfn);
>>          domain_crash(d);
>>          return rc;
>>      }
>>
>> -    /* Since HVM domain is initialized with 2 level IO page table,
>> -     * we might need a deeper page table for lager gfn now */
>> -    if ( is_hvm_domain(d) )
>> +    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
>>      {
>> -        if ( update_paging_mode(d, gfn) )
>> +        bool_t need_flush = 0;
>> +        unsigned long pt_mfn[7];
>> +        unsigned int merge_level;
>> +
>> +        memset(pt_mfn, 0, sizeof(pt_mfn));
>> +
>> +        spin_lock(&hd->arch.mapping_lock);
>> +
>> +        /* Since HVM domain is initialized with 2 level IO page table,
>> +         * we might need a deeper page table for larger gfn now */
>> +        if ( is_hvm_domain(d) )
>> +        {
>> +            if ( update_paging_mode(d, gfn) )
>> +            {
>> +                spin_unlock(&hd->arch.mapping_lock);
>> +                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
>> +                domain_crash(d);
>> +                rc = -EFAULT;
>> +                goto err;
>> +            }
>> +        }
>> +
>> +        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
>>          {
>>              spin_unlock(&hd->arch.mapping_lock);
>> -            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
>> +            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
>>              domain_crash(d);
>> -            return -EFAULT;
>> +            rc = -EFAULT;
>> +            goto err;
>>          }
>> -    }
>>
>> -    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
>> -    {
>> -        spin_unlock(&hd->arch.mapping_lock);
>> -        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
>> -        domain_crash(d);
>> -        return -EFAULT;
>> -    }
>> +        /* Install 4k mapping first */
>> +        need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
>> +                                           IOMMU_PAGING_MODE_LEVEL_1,
>> +                                           !!(flags & IOMMUF_writable),
>> +                                           !!(flags & IOMMUF_readable));
>>
>> -    /* Install 4k mapping first */
>> -    need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
>> -                                       IOMMU_PAGING_MODE_LEVEL_1,
>> -                                       !!(flags & IOMMUF_writable),
>> -                                       !!(flags & IOMMUF_readable));
>> +        /* Do not increase pde count if io mapping has not been changed */
>> +        if ( !need_flush )
>> +        {
>> +            spin_unlock(&hd->arch.mapping_lock);
>> +            continue;
>> +        }
>>
>> -    /* Do not increase pde count if io mapping has not been changed */
>> -    if ( !need_flush )
>> -        goto out;
>> +        /* 4K mapping for PV guests never changes,
>> +         * no need to flush if we trust non-present bits */
>> +        if ( is_hvm_domain(d) )
>> +            amd_iommu_flush_pages(d, gfn, 0);
>>
>> -    /* 4K mapping for PV guests never changes,
>> -     * no need to flush if we trust non-present bits */
>> -    if ( is_hvm_domain(d) )
>> -        amd_iommu_flush_pages(d, gfn, 0);
>> -
>> -    for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
>> -          merge_level <= hd->arch.paging_mode; merge_level++ )
>> -    {
>> -        if ( pt_mfn[merge_level] == 0 )
>> -            break;
>> -        if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
>> -                                     gfn, mfn, merge_level) )
>> -            break;
>> -
>> -        if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
>> -                               flags, merge_level) )
>> +        for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
>> +              merge_level <= hd->arch.paging_mode; merge_level++ )
>>          {
>> -            spin_unlock(&hd->arch.mapping_lock);
>> -            AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
>> -                            "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
>> -            domain_crash(d);
>> -            return -EFAULT;
>> +            if ( pt_mfn[merge_level] == 0 )
>> +                break;
>> +            if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
>> +                                         gfn, mfn, merge_level) )
>> +                break;
>> +
>> +            if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
>> +                                   flags, merge_level) )
>> +            {
>> +                spin_unlock(&hd->arch.mapping_lock);
>> +                AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
>> +                                "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
>> +                domain_crash(d);
>> +                rc = -EFAULT;
>> +                goto err;
>> +            }
>> +
>> +            /* Deallocate lower level page table */
>> +            free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
>>          }
>>
>> -        /* Deallocate lower level page table */
>> -        free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
>> +        spin_unlock(&hd->arch.mapping_lock);
>>      }
>>
>> -out:
>> -    spin_unlock(&hd->arch.mapping_lock);
>>      return 0;
>> +
>> +err:
>> +    while ( i-- )
>> +        /* If statement to satisfy __must_check. */
>> +        if ( amd_iommu_unmap_pages(d, orig_gfn + i, 0) )
>> +            continue;
>> +
>> +    return rc;
>>  }
>>
>> -static int __must_check amd_iommu_unmap_page(struct domain *d,
>> -                                             unsigned long gfn)
>> +int __must_check amd_iommu_unmap_pages(struct domain *d,
>> +                                       unsigned long gfn,
>> +                                       unsigned int order)
>>  {
>> -    unsigned long pt_mfn[7];
>>      struct domain_iommu *hd = dom_iommu(d);
>> +    int rt = 0;
>> +    unsigned long i;
>>
>>      if ( iommu_use_hap_pt(d) )
>>          return 0;
>>
>> -    memset(pt_mfn, 0, sizeof(pt_mfn));
>> -
>> -    spin_lock(&hd->arch.mapping_lock);
>> -
>>      if ( !hd->arch.root_table )
>> -    {
>> -        spin_unlock(&hd->arch.mapping_lock);
>>          return 0;
>> -    }
>>
>> -    /* Since HVM domain is initialized with 2 level IO page table,
>> -     * we might need a deeper page table for lager gfn now */
>> -    if ( is_hvm_domain(d) )
>> +    for ( i = 0; i < (1UL << order); i++, gfn++ )
>>      {
>> -        int rc = update_paging_mode(d, gfn);
>> +        unsigned long pt_mfn[7];
>>
>> -        if ( rc )
>> -        {
>> -            spin_unlock(&hd->arch.mapping_lock);
>> -            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
>> -            if ( rc != -EADDRNOTAVAIL )
>> -                domain_crash(d);
>> -            return rc;
>> -        }
>> -    }
>> +        memset(pt_mfn, 0, sizeof(pt_mfn));
>>
>> -    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
>> -    {
>> -        spin_unlock(&hd->arch.mapping_lock);
>> -        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
>> -        domain_crash(d);
>> -        return -EFAULT;
>> -    }
>> -
>> -    /* mark PTE as 'page not present' */
>> -    clear_iommu_pte_present(pt_mfn[1], gfn);
>> -    spin_unlock(&hd->arch.mapping_lock);
>> +        spin_lock(&hd->arch.mapping_lock);
>>
>> -    amd_iommu_flush_pages(d, gfn, 0);
>> -
>> -    return 0;
>> -}
>> -
>> -/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
>> -int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
>> -                                     unsigned long mfn, unsigned int order,
>> -                                     unsigned int flags)
>> -{
>> -    unsigned long i;
>> -    int rc = 0;
>> -
>> -    for ( i = 0; i < (1UL << order); i++ )
>> -    {
>> -        rc = amd_iommu_map_page(d, gfn + i, mfn + i, flags);
>> -        if ( unlikely(rc) )
>> +        /* Since HVM domain is initialized with 2 level IO page table,
>> +         * we might need a deeper page table for larger gfn now */
>> +        if ( is_hvm_domain(d) )
>>          {
>> -            while ( i-- )
>> -                /* If statement to satisfy __must_check. */
>> -                if ( amd_iommu_unmap_page(d, gfn + i) )
>> -                    continue;
>> +            int rc = update_paging_mode(d, gfn);
>>
>> -            break;
>> +            if ( rc )
>> +            {
>> +                spin_unlock(&hd->arch.mapping_lock);
>> +                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
>> +                if ( rc != -EADDRNOTAVAIL )
>> +                    domain_crash(d);
>> +                if ( !rt )
>> +                    rt = rc;
>> +                continue;
>> +            }
>>          }
>> -    }
>> -
>> -    return rc;
>> -}
>>
>> -int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
>> -                                       unsigned int order)
>> -{
>> -    unsigned long i;
>> -    int rc = 0;
>> +        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
>> +        {
>> +            spin_unlock(&hd->arch.mapping_lock);
>> +            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
>> +            domain_crash(d);
>> +            if ( !rt )
>> +                rt = -EFAULT;
>> +            continue;
>> +        }
>>
>> -    for ( i = 0; i < (1UL << order); i++ )
>> -    {
>> -        int ret = amd_iommu_unmap_page(d, gfn + i);
>> +        /* mark PTE as 'page not present' */
>> +        clear_iommu_pte_present(pt_mfn[1], gfn);
>> +        spin_unlock(&hd->arch.mapping_lock);
>>
>> -        if ( !rc )
>> -            rc = ret;
>> +        amd_iommu_flush_pages(d, gfn, 0);
>>      }
>>
>> -    return rc;
>> +    return rt;
>>  }
>>
>>  int amd_iommu_reserve_domain_unity_map(struct domain *domain,
>> @@ -831,7 +823,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
>>      gfn = phys_addr >> PAGE_SHIFT;
>>      for ( i = 0; i < npages; i++ )
>>      {
>> -        rt = amd_iommu_map_page(domain, gfn +i, gfn +i, flags);
>> +        rt = amd_iommu_map_pages(domain, gfn +i, gfn +i, flags, 0);
>>          if ( rt != 0 )
>>              return rt;
>>      }
>> --
>> 2.7.4
>>
>
>
>
> --
> Regards,
>
> Oleksandr Tyshchenko



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 12/13] [RFC] iommu: VT-d: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-09-12 14:44     ` Oleksandr Tyshchenko
@ 2017-09-20  8:54       ` Tian, Kevin
  2017-09-20 18:23         ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Tian, Kevin @ 2017-09-20  8:54 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: Oleksandr Tyshchenko, Jan Beulich, xen-devel

This patch alone looks OK to me, but I haven't found time to review
the whole series to judge whether the change below is necessary.

> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
> Sent: Tuesday, September 12, 2017 10:44 PM
> 
> Hi.
> 
> Gentle reminder.
> 
> On Mon, Aug 21, 2017 at 7:44 PM, Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
> > Hi, all.
> >
> > Any comments?
> >
> > On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
> > <olekstysh@gmail.com> wrote:
> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >>
> >> Reduce the scope of the TODO by squashing single-page stuff with
> >> multi-page one. Next target is to use large pages whenever possible
> >> in the case that hardware supports them.
> >>
> >> Signed-off-by: Oleksandr Tyshchenko
> <oleksandr_tyshchenko@epam.com>
> >> CC: Jan Beulich <jbeulich@suse.com>
> >> CC: Kevin Tian <kevin.tian@intel.com>
> >>
> >> ---
> >>    Changes in v1:
> >>       -
> >>
> >>    Changes in v2:
> >>       -
> >> ---
> >>  xen/drivers/passthrough/vtd/iommu.c | 138 +++++++++++++++++-------------------
> >>  1 file changed, 67 insertions(+), 71 deletions(-)
> >>
> >> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> >> index 45d1f36..d20b2f9 100644
> >> --- a/xen/drivers/passthrough/vtd/iommu.c
> >> +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> @@ -1750,15 +1750,24 @@ static void iommu_domain_teardown(struct domain *d)
> >>      spin_unlock(&hd->arch.mapping_lock);
> >>  }
> >>
> >> -static int __must_check intel_iommu_map_page(struct domain *d,
> >> -                                             unsigned long gfn,
> >> -                                             unsigned long mfn,
> >> -                                             unsigned int flags)
> >> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
> >> +                                                unsigned long gfn,
> >> +                                                unsigned int order);
> >> +
> >> +/*
> >> + * TODO: Optimize by using large pages whenever possible in the case
> >> + * that hardware supports them.
> >> + */
> >> +static int __must_check intel_iommu_map_pages(struct domain *d,
> >> +                                              unsigned long gfn,
> >> +                                              unsigned long mfn,
> >> +                                              unsigned int order,
> >> +                                              unsigned int flags)
> >>  {
> >>      struct domain_iommu *hd = dom_iommu(d);
> >> -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> >> -    u64 pg_maddr;
> >>      int rc = 0;
> >> +    unsigned long orig_gfn = gfn;
> >> +    unsigned long i;
> >>
> >>      /* Do nothing if VT-d shares EPT page table */
> >>      if ( iommu_use_hap_pt(d) )
> >> @@ -1768,78 +1777,60 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> >>      if ( iommu_passthrough && is_hardware_domain(d) )
> >>          return 0;
> >>
> >> -    spin_lock(&hd->arch.mapping_lock);
> >> -
> >> -    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
> >> -    if ( pg_maddr == 0 )
> >> +    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
> >>      {
> >> -        spin_unlock(&hd->arch.mapping_lock);
> >> -        return -ENOMEM;
> >> -    }
> >> -    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> >> -    pte = page + (gfn & LEVEL_MASK);
> >> -    old = *pte;
> >> -    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
> >> -    dma_set_pte_prot(new,
> >> -                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> >> -                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> >> +        struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> >> +        u64 pg_maddr;
> >>
> >> -    /* Set the SNP on leaf page table if Snoop Control available */
> >> -    if ( iommu_snoop )
> >> -        dma_set_pte_snp(new);
> >> +        spin_lock(&hd->arch.mapping_lock);
> >>
> >> -    if ( old.val == new.val )
> >> -    {
> >> +        pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
> >> +        if ( pg_maddr == 0 )
> >> +        {
> >> +            spin_unlock(&hd->arch.mapping_lock);
> >> +            rc = -ENOMEM;
> >> +            goto err;
> >> +        }
> >> +        page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> >> +        pte = page + (gfn & LEVEL_MASK);
> >> +        old = *pte;
> >> +        dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
> >> +        dma_set_pte_prot(new,
> >> +                         ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> >> +                         ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> >> +
> >> +        /* Set the SNP on leaf page table if Snoop Control available */
> >> +        if ( iommu_snoop )
> >> +            dma_set_pte_snp(new);
> >> +
> >> +        if ( old.val == new.val )
> >> +        {
> >> +            spin_unlock(&hd->arch.mapping_lock);
> >> +            unmap_vtd_domain_page(page);
> >> +            continue;
> >> +        }
> >> +        *pte = new;
> >> +
> >> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> >>          spin_unlock(&hd->arch.mapping_lock);
> >>          unmap_vtd_domain_page(page);
> >> -        return 0;
> >> -    }
> >> -    *pte = new;
> >> -
> >> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> >> -    spin_unlock(&hd->arch.mapping_lock);
> >> -    unmap_vtd_domain_page(page);
> >>
> >> -    if ( !this_cpu(iommu_dont_flush_iotlb) )
> >> -        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
> >> -
> >> -    return rc;
> >> -}
> >> -
> >> -static int __must_check intel_iommu_unmap_page(struct domain *d,
> >> -                                               unsigned long gfn)
> >> -{
> >> -    /* Do nothing if hardware domain and iommu supports pass thru. */
> >> -    if ( iommu_passthrough && is_hardware_domain(d) )
> >> -        return 0;
> >> -
> >> -    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
> >> -}
> >> -
> >> -/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
> >> -static int __must_check intel_iommu_map_pages(struct domain *d,
> >> -                                              unsigned long gfn,
> >> -                                              unsigned long mfn,
> >> -                                              unsigned int order,
> >> -                                              unsigned int flags)
> >> -{
> >> -    unsigned long i;
> >> -    int rc = 0;
> >> -
> >> -    for ( i = 0; i < (1UL << order); i++ )
> >> -    {
> >> -        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
> >> -        if ( unlikely(rc) )
> >> +        if ( !this_cpu(iommu_dont_flush_iotlb) )
> >>          {
> >> -            while ( i-- )
> >> -                /* If statement to satisfy __must_check. */
> >> -                if ( intel_iommu_unmap_page(d, gfn + i) )
> >> -                    continue;
> >> -
> >> -            break;
> >> +            rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
> >> +            if ( rc )
> >> +                goto err;
> >>          }
> >>      }
> >>
> >> +    return 0;
> >> +
> >> +err:
> >> +    while ( i-- )
> >> +        /* If statement to satisfy __must_check. */
> >> +        if ( intel_iommu_unmap_pages(d, orig_gfn + i, 0) )
> >> +            continue;
> >> +
> >>      return rc;
> >>  }
> >>
> >> @@ -1847,12 +1838,17 @@ static int __must_check intel_iommu_unmap_pages(struct domain *d,
> >>                                                  unsigned long gfn,
> >>                                                  unsigned int order)
> >>  {
> >> -    unsigned long i;
> >>      int rc = 0;
> >> +    unsigned long i;
> >> +
> >> +    /* Do nothing if hardware domain and iommu supports pass thru. */
> >> +    if ( iommu_passthrough && is_hardware_domain(d) )
> >> +        return 0;
> >>
> >> -    for ( i = 0; i < (1UL << order); i++ )
> >> +    for ( i = 0; i < (1UL << order); i++, gfn++ )
> >>      {
> >> -        int ret = intel_iommu_unmap_page(d, gfn + i);
> >> +        int ret = dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
> >> +
> >>          if ( !rc )
> >>              rc = ret;
> >>      }
> >> --
> >> 2.7.4
> >>
> >
> >
> >
> > --
> > Regards,
> >
> > Oleksandr Tyshchenko
> 
> 
> 
> --
> Regards,
> 
> Oleksandr Tyshchenko

* Re: [PATCH v2 12/13] [RFC] iommu: VT-d: Squash map_pages/unmap_pages with map_page/unmap_page
  2017-09-20  8:54       ` Tian, Kevin
@ 2017-09-20 18:23         ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-09-20 18:23 UTC (permalink / raw)
  To: Tian, Kevin; +Cc: Oleksandr Tyshchenko, Jan Beulich, xen-devel

Hi, Kevin

On Wed, Sep 20, 2017 at 11:54 AM, Tian, Kevin <kevin.tian@intel.com> wrote:
> this patch alone looks OK to me. but I haven't found time to review
> the whole series to judge whether below change is necessary.

Let me explain.

In [1] I touched the common IOMMU code, and as a result I had to
modify all existing IOMMU platform drivers and leave a TODO regarding
future optimization.
I did this patch because I was interested in adding the IPMMU-VMSA
IOMMU on ARM, which supports super-pages.

The current patch, as well as the following one [2], is an attempt to
*reduce* the scope of that TODO for the x86-related IOMMUs.
This is the maximum I could do without deeper knowledge of the subject.

But the ideal patch would be to use large pages whenever possible in
the case that the hardware supports them.
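Just to illustrate the direction (a rough sketch only, with made-up
helper names, not even compile-tested):

    /* Map using the largest order that both gfn and mfn are aligned to,
     * that the hardware supports (hw_max_order() is hypothetical) and
     * that still fits in the remaining count. */
    while ( count )
    {
        unsigned int order = min(hw_max_order(),
                                 max_aligned_order(gfn, mfn, count));
        int rc = map_one_superpage(d, gfn, mfn, order, flags);

        if ( rc )
            return rc;

        gfn += 1UL << order;
        mfn += 1UL << order;
        count -= 1UL << order;
    }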

[1] [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs
and platform callbacks
https://www.mail-archive.com/xen-devel@lists.xen.org/msg115910.html

[2] [PATCH v2 13/13] [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages
with map_page/unmap_page
https://patchwork.kernel.org/patch/9862571/

>
>> From: Oleksandr Tyshchenko [mailto:olekstysh@gmail.com]
>> Sent: Tuesday, September 12, 2017 10:44 PM
>>
>> Hi.
>>
>> Gentle reminder.
>>
>> On Mon, Aug 21, 2017 at 7:44 PM, Oleksandr Tyshchenko
>> <olekstysh@gmail.com> wrote:
>> > Hi, all.
>> >
>> > Any comments?
>> >
>> > On Tue, Jul 25, 2017 at 8:26 PM, Oleksandr Tyshchenko
>> > <olekstysh@gmail.com> wrote:
>> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> >>
>> >> Reduce the scope of the TODO by squashing single-page stuff with
>> >> multi-page one. Next target is to use large pages whenever possible
>> >> in the case that hardware supports them.
>> >>
>> >> Signed-off-by: Oleksandr Tyshchenko
>> <oleksandr_tyshchenko@epam.com>
>> >> CC: Jan Beulich <jbeulich@suse.com>
>> >> CC: Kevin Tian <kevin.tian@intel.com>
>> >>
>> >> ---
>> >>    Changes in v1:
>> >>       -
>> >>
>> >>    Changes in v2:
>> >>       -
>> >> ---
>> >>  xen/drivers/passthrough/vtd/iommu.c | 138 +++++++++++++++++-------------------
>> >>  1 file changed, 67 insertions(+), 71 deletions(-)
>> >>
>> >> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
>> >> index 45d1f36..d20b2f9 100644
>> >> --- a/xen/drivers/passthrough/vtd/iommu.c
>> >> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> >> @@ -1750,15 +1750,24 @@ static void iommu_domain_teardown(struct domain *d)
>> >>      spin_unlock(&hd->arch.mapping_lock);
>> >>  }
>> >>
>> >> -static int __must_check intel_iommu_map_page(struct domain *d,
>> >> -                                             unsigned long gfn,
>> >> -                                             unsigned long mfn,
>> >> -                                             unsigned int flags)
>> >> +static int __must_check intel_iommu_unmap_pages(struct domain *d,
>> >> +                                                unsigned long gfn,
>> >> +                                                unsigned int order);
>> >> +
>> >> +/*
>> >> + * TODO: Optimize by using large pages whenever possible in the case
>> >> + * that hardware supports them.
>> >> + */
>> >> +static int __must_check intel_iommu_map_pages(struct domain *d,
>> >> +                                              unsigned long gfn,
>> >> +                                              unsigned long mfn,
>> >> +                                              unsigned int order,
>> >> +                                              unsigned int flags)
>> >>  {
>> >>      struct domain_iommu *hd = dom_iommu(d);
>> >> -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
>> >> -    u64 pg_maddr;
>> >>      int rc = 0;
>> >> +    unsigned long orig_gfn = gfn;
>> >> +    unsigned long i;
>> >>
>> >>      /* Do nothing if VT-d shares EPT page table */
>> >>      if ( iommu_use_hap_pt(d) )
>> >> @@ -1768,78 +1777,60 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>> >>      if ( iommu_passthrough && is_hardware_domain(d) )
>> >>          return 0;
>> >>
>> >> -    spin_lock(&hd->arch.mapping_lock);
>> >> -
>> >> -    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
>> >> -    if ( pg_maddr == 0 )
>> >> +    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
>> >>      {
>> >> -        spin_unlock(&hd->arch.mapping_lock);
>> >> -        return -ENOMEM;
>> >> -    }
>> >> -    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
>> >> -    pte = page + (gfn & LEVEL_MASK);
>> >> -    old = *pte;
>> >> -    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
>> >> -    dma_set_pte_prot(new,
>> >> -                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
>> >> -                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
>> >> +        struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
>> >> +        u64 pg_maddr;
>> >>
>> >> -    /* Set the SNP on leaf page table if Snoop Control available */
>> >> -    if ( iommu_snoop )
>> >> -        dma_set_pte_snp(new);
>> >> +        spin_lock(&hd->arch.mapping_lock);
>> >>
>> >> -    if ( old.val == new.val )
>> >> -    {
>> >> +        pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
>> >> +        if ( pg_maddr == 0 )
>> >> +        {
>> >> +            spin_unlock(&hd->arch.mapping_lock);
>> >> +            rc = -ENOMEM;
>> >> +            goto err;
>> >> +        }
>> >> +        page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
>> >> +        pte = page + (gfn & LEVEL_MASK);
>> >> +        old = *pte;
>> >> +        dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
>> >> +        dma_set_pte_prot(new,
>> >> +                         ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
>> >> +                         ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
>> >> +
>> >> +        /* Set the SNP on leaf page table if Snoop Control available */
>> >> +        if ( iommu_snoop )
>> >> +            dma_set_pte_snp(new);
>> >> +
>> >> +        if ( old.val == new.val )
>> >> +        {
>> >> +            spin_unlock(&hd->arch.mapping_lock);
>> >> +            unmap_vtd_domain_page(page);
>> >> +            continue;
>> >> +        }
>> >> +        *pte = new;
>> >> +
>> >> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>> >>          spin_unlock(&hd->arch.mapping_lock);
>> >>          unmap_vtd_domain_page(page);
>> >> -        return 0;
>> >> -    }
>> >> -    *pte = new;
>> >> -
>> >> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>> >> -    spin_unlock(&hd->arch.mapping_lock);
>> >> -    unmap_vtd_domain_page(page);
>> >>
>> >> -    if ( !this_cpu(iommu_dont_flush_iotlb) )
>> >> -        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
>> >> -
>> >> -    return rc;
>> >> -}
>> >> -
>> >> -static int __must_check intel_iommu_unmap_page(struct domain *d,
>> >> -                                               unsigned long gfn)
>> >> -{
>> >> -    /* Do nothing if hardware domain and iommu supports pass thru. */
>> >> -    if ( iommu_passthrough && is_hardware_domain(d) )
>> >> -        return 0;
>> >> -
>> >> -    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
>> >> -}
>> >> -
>> >> -/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
>> >> -static int __must_check intel_iommu_map_pages(struct domain *d,
>> >> -                                              unsigned long gfn,
>> >> -                                              unsigned long mfn,
>> >> -                                              unsigned int order,
>> >> -                                              unsigned int flags)
>> >> -{
>> >> -    unsigned long i;
>> >> -    int rc = 0;
>> >> -
>> >> -    for ( i = 0; i < (1UL << order); i++ )
>> >> -    {
>> >> -        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
>> >> -        if ( unlikely(rc) )
>> >> +        if ( !this_cpu(iommu_dont_flush_iotlb) )
>> >>          {
>> >> -            while ( i-- )
>> >> -                /* If statement to satisfy __must_check. */
>> >> -                if ( intel_iommu_unmap_page(d, gfn + i) )
>> >> -                    continue;
>> >> -
>> >> -            break;
>> >> +            rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
>> >> +            if ( rc )
>> >> +                goto err;
>> >>          }
>> >>      }
>> >>
>> >> +    return 0;
>> >> +
>> >> +err:
>> >> +    while ( i-- )
>> >> +        /* If statement to satisfy __must_check. */
>> >> +        if ( intel_iommu_unmap_pages(d, orig_gfn + i, 0) )
>> >> +            continue;
>> >> +
>> >>      return rc;
>> >>  }
>> >>
>> >> @@ -1847,12 +1838,17 @@ static int __must_check intel_iommu_unmap_pages(struct domain *d,
>> >>                                                  unsigned long gfn,
>> >>                                                  unsigned int order)
>> >>  {
>> >> -    unsigned long i;
>> >>      int rc = 0;
>> >> +    unsigned long i;
>> >> +
>> >> +    /* Do nothing if hardware domain and iommu supports pass thru. */
>> >> +    if ( iommu_passthrough && is_hardware_domain(d) )
>> >> +        return 0;
>> >>
>> >> -    for ( i = 0; i < (1UL << order); i++ )
>> >> +    for ( i = 0; i < (1UL << order); i++, gfn++ )
>> >>      {
>> >> -        int ret = intel_iommu_unmap_page(d, gfn + i);
>> >> +        int ret = dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
>> >> +
>> >>          if ( !rc )
>> >>              rc = ret;
>> >>      }
>> >> --
>> >> 2.7.4
>> >>
>> >
>> >
>> >
>> > --
>> > Regards,
>> >
>> > Oleksandr Tyshchenko
>>
>>
>>
>> --
>> Regards,
>>
>> Oleksandr Tyshchenko



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-07-25 17:26 ` [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init() Oleksandr Tyshchenko
  2017-08-21 16:29   ` Oleksandr Tyshchenko
@ 2017-12-06 16:51   ` Jan Beulich
  2017-12-06 19:53     ` Oleksandr Tyshchenko
  1 sibling, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2017-12-06 16:51 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: Kevin Tian, Stefano Stabellini, Andrew Cooper,
	Oleksandr Tyshchenko, Julien Grall, Suravee Suthikulpanit,
	xen-devel

>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
> The presence of this flag lets us know that the guest domain has
> statically assigned devices which will most likely be used for
> passthrough, and as a result the IOMMU is expected to be used for
> this domain.
> 
> Taking this hint into account when dealing with non-shared IOMMUs,
> we can populate the IOMMU page tables beforehand, avoiding a walk
> through the list of pages when the first device is assigned.
> As this flag doesn't cover the hotplug case, we will continue to
> populate IOMMU page tables on the fly.

While of course it would have been nice if I had found time earlier
to look at this patch (and hence closer to when the discussion
happened), I still don't see it being made sufficiently clear here why
the current behavior (without a need for such a flag) is a problem for
the non-shared IOMMU case on ARM, when it isn't on x86.

The patch itself looks mechanical enough that it could get my ack,
but I really want to understand the background without having to
dig out old discussions (which would be even more difficult for
future archaeologists running into this change in a few years' time).

Jan



* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-07-25 17:26 ` [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance Oleksandr Tyshchenko
  2017-08-21 16:30   ` Oleksandr Tyshchenko
@ 2017-12-06 17:01   ` Jan Beulich
  2017-12-06 19:23     ` Oleksandr Tyshchenko
  2018-01-18 12:09   ` Roger Pau Monné
  2 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2017-12-06 17:01 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: Oleksandr Tyshchenko, Julien Grall, xen-devel

>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The hardware domains require the IOMMU to be used in most cases, and
> a decision to use it is made at hardware domain construction time.
> But this is not the best moment for non-shared IOMMUs, due to the
> necessity of retrieving all the mappings which could have been
> established between IOMMU per-domain initialization and this moment.

Which mappings are you talking about here? Just like with the earlier
patch - the reason for the change needs to be clear to someone
reading just this commit message.

> @@ -141,6 +141,15 @@ int iommu_domain_init(struct domain *d, bool use_iommu)
>      if ( !iommu_enabled )
>          return 0;
>  
> +    if ( is_hardware_domain(d) )
> +    {
> +        if ( (paging_mode_translate(d) && !iommu_passthrough) ||
> +              iommu_dom0_strict )
> +            use_iommu = 1;
> +        else
> +            use_iommu = 0;

I'd prefer if you used a simple assignment here, rather than if/else.

> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>          return;
>  
>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
> -    d->need_iommu = !!iommu_dom0_strict;
> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
> -    {
> -        struct page_info *page;
> -        unsigned int i = 0;
> -        int rc = 0;
> -
> -        page_list_for_each ( page, &d->page_list )
> -        {
> -            unsigned long mfn = page_to_mfn(page);
> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
> -            unsigned int mapping = IOMMUF_readable;
> -            int ret;
> -
> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
> -                 ((page->u.inuse.type_info & PGT_type_mask)
> -                  == PGT_writable_page) )
> -                mapping |= IOMMUF_writable;
> -
> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
> -            if ( !rc )
> -                rc = ret;
> -
> -            if ( !(i++ & 0xfffff) )
> -                process_pending_softirqs();
> -        }
> -
> -        if ( rc )
> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
> -                   d->domain_id, rc);
> -    }
>  
>      return hd->platform_ops->hwdom_init(d);
>  }

Just to double check - this change was tested on x86 Dom0, at
least PV (for PVH I'd at least expect that you've did some static
code analysis to make sure this doesn't put in further roadblocks)?

Jan



* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-12-06 17:01   ` Jan Beulich
@ 2017-12-06 19:23     ` Oleksandr Tyshchenko
  2017-12-07  8:57       ` Jan Beulich
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-12-06 19:23 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Oleksandr Tyshchenko, Julien Grall, xen-devel

Hi, Jan.

On Wed, Dec 6, 2017 at 7:01 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The hardware domains require the IOMMU to be used in most cases, and
>> a decision to use it is made at hardware domain construction time.
>> But this is not the best moment for non-shared IOMMUs, due to the
>> necessity of retrieving all the mappings which could have been
>> established between IOMMU per-domain initialization and this moment.
>
> Which mappings are you talking about here? Just like with the earlier
> patch - the reason for the change needs to be clear to someone
> reading just this commit message.

I am talking about the IOMMU mappings (gfn <-> mfn) which were skipped
and, as a result, never reached the IOMMU page table.
The P2M code didn't invoke iommu_map_pages() since the "need_iommu"
flag wasn't set.
So, iterating through the list of pages, we had to re-create the lost
IOMMU mappings page by page.
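
Schematically, the guard in the P2M code looks like this (a simplified
sketch, not the exact hunk):

    /* Simplified P2M update path: the new mapping reaches the IOMMU
     * page table only if need_iommu(d) is already set at this point. */
    if ( need_iommu(d) )
        rc = iommu_map_pages(d, gfn, mfn, order, flags);

Setting the flag already at domain initialization time means no update
gets skipped in the first place.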

I will clarify the description.

>
>> @@ -141,6 +141,15 @@ int iommu_domain_init(struct domain *d, bool use_iommu)
>>      if ( !iommu_enabled )
>>          return 0;
>>
>> +    if ( is_hardware_domain(d) )
>> +    {
>> +        if ( (paging_mode_translate(d) && !iommu_passthrough) ||
>> +              iommu_dom0_strict )
>> +            use_iommu = 1;
>> +        else
>> +            use_iommu = 0;
>
> I'd prefer if you used a simple assignment here, rather than if/else.
ok
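Just to confirm, I assume you mean something like:

    use_iommu = (paging_mode_translate(d) && !iommu_passthrough) ||
                iommu_dom0_strict;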

>
>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>          return;
>>
>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>> -    d->need_iommu = !!iommu_dom0_strict;
>> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>> -    {
>> -        struct page_info *page;
>> -        unsigned int i = 0;
>> -        int rc = 0;
>> -
>> -        page_list_for_each ( page, &d->page_list )
>> -        {
>> -            unsigned long mfn = page_to_mfn(page);
>> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
>> -            unsigned int mapping = IOMMUF_readable;
>> -            int ret;
>> -
>> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>> -                 ((page->u.inuse.type_info & PGT_type_mask)
>> -                  == PGT_writable_page) )
>> -                mapping |= IOMMUF_writable;
>> -
>> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
>> -            if ( !rc )
>> -                rc = ret;
>> -
>> -            if ( !(i++ & 0xfffff) )
>> -                process_pending_softirqs();
>> -        }
>> -
>> -        if ( rc )
>> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
>> -                   d->domain_id, rc);
>> -    }
>>
>>      return hd->platform_ops->hwdom_init(d);
>>  }
>
> Just to double check - this change was tested on x86 Dom0, at
> least PV (for PVH I'd at least expect that you've did some static
> code analysis to make sure this doesn't put in further roadblocks)?

I am afraid I didn't get the second part of this sentence.

>
> Jan
>



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-12-06 16:51   ` Jan Beulich
@ 2017-12-06 19:53     ` Oleksandr Tyshchenko
  2017-12-06 22:49       ` Julien Grall
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-12-06 19:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Andrew Cooper,
	Oleksandr Tyshchenko, Julien Grall, Suravee Suthikulpanit,
	xen-devel

Hi Jan.

On Wed, Dec 6, 2017 at 6:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>> The presence of this flag lets us know that the guest domain has statically
>> assigned devices which will most likely be used for passthrough
>> and as the result the IOMMU is expected to be used for this domain.
>>
>> Taking into the account this hint when dealing with non-shared IOMMUs
>> we can populate IOMMU page tables before hand avoid going through
>> the list of pages at the first assigned device.
>> As this flag doesn't cover hotplug case, we will continue to populate
>> IOMMU page tables on the fly.
>
> While of course it would have been nice if I would have found time
> earlier to look at this patch (and hence closer to when the discussion
> happened), I still don't see it being made sufficiently clear here why
> current behavior (without a need for such a flag) is a problem for the
> non-shared IOMMU case on ARM, when it isn't on x86.

The answer is the lack of an M2P table on ARM. When the first device
is assigned to a domain, we populate the non-shared IOMMU page table.
What does that mean? On x86 we iterate through the list of pages
(d->page_list) and retrieve an mfn <-> gfn pair for each page.
We can't do the same on ARM, since there is no M2P table; the
mfn_to_gmfn macro is just a stub:
#define mfn_to_gmfn(_d, mfn)  (mfn)
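
So a d->page_list walk on ARM would degenerate into identity mappings,
roughly (illustrative only):

    unsigned long gfn = mfn_to_gmfn(d, mfn); /* real M2P lookup on x86;
                                                just (mfn) on ARM */

i.e. we would end up mapping gfn == mfn, which is not the guest's view
of that page for a translated domain.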

To be honest, I haven't played with the non-shared IOMMU on ARM since
the end of this summer, so I can't be 100% sure that it is still an
issue. But it seems to be.

>
> The patch itself looks mechanical enough that it could get my ack,
> but I really want to understand the background without having to
> dig out old discussions (which would be even more difficult for
> future archaeologists running into this change in a few years time).
>
> Jan
>

-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-12-06 19:53     ` Oleksandr Tyshchenko
@ 2017-12-06 22:49       ` Julien Grall
  2017-12-07 12:08         ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Julien Grall @ 2017-12-06 22:49 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Andrew Cooper,
	Oleksandr Tyshchenko, Julien Grall, Suravee Suthikulpanit,
	xen-devel

Hi,

On 12/06/2017 07:53 PM, Oleksandr Tyshchenko wrote:
> On Wed, Dec 6, 2017 at 6:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>> The presence of this flag lets us know that the guest domain has statically
>>> assigned devices which will most likely be used for passthrough
>>> and as the result the IOMMU is expected to be used for this domain.
>>>
>>> Taking into the account this hint when dealing with non-shared IOMMUs
>>> we can populate IOMMU page tables before hand avoid going through
>>> the list of pages at the first assigned device.
>>> As this flag doesn't cover hotplug case, we will continue to populate
>>> IOMMU page tables on the fly.
>>
>> While of course it would have been nice if I would have found time
>> earlier to look at this patch (and hence closer to when the discussion
>> happened), I still don't see it being made sufficiently clear here why
>> current behavior (without a need for such a flag) is a problem for the
>> non-shared IOMMU case on ARM, when it isn't on x86.
> 
> The answer is the lack of M2P on ARM. When the first device is being
> assigned to domain we are populating non-shared IOMMU page-table.
> What does it mean? We are iterating through the list of the pages
> (d->page_list) and retrieving a pair of mfn <-> gfn for each page on
> x86.
> We can't do the same on ARM, since there is no M2P table. The
> mfn_to_gmfn macros is just a stub:
> #define mfn_to_gmfn(_d, mfn)  (mfn)
> 
> To be honest I haven't played with non-shared IOMMU on ARM
> since the end of this summer to be 100% sure that it is still an
> issue. But, it seems to be.

The situation has not changed. I still see no point in wasting memory
on the M2P (see the full discussion here [1]).

However, I agree with Jan that we need a summary of the discussion in 
the commit message.

>>
>> The patch itself looks mechanical enough that it could get my ack,
>> but I really want to understand the background without having to
>> dig out old discussions (which would be even more difficult for
>> future archaeologists running into this change in a few years time).

Cheers,

[1] https://lists.xen.org/archives/html/xen-devel/2017-05/msg01737.html

-- 
Julien Grall


* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-12-06 19:23     ` Oleksandr Tyshchenko
@ 2017-12-07  8:57       ` Jan Beulich
  2017-12-07 13:50         ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2017-12-07  8:57 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: Oleksandr Tyshchenko, Julien Grall, xen-devel, Roger Pau Monne

>>> On 06.12.17 at 20:23, <olekstysh@gmail.com> wrote:
> On Wed, Dec 6, 2017 at 7:01 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>>          return;
>>>
>>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>>> -    d->need_iommu = !!iommu_dom0_strict;
>>> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>>> -    {
>>> -        struct page_info *page;
>>> -        unsigned int i = 0;
>>> -        int rc = 0;
>>> -
>>> -        page_list_for_each ( page, &d->page_list )
>>> -        {
>>> -            unsigned long mfn = page_to_mfn(page);
>>> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
>>> -            unsigned int mapping = IOMMUF_readable;
>>> -            int ret;
>>> -
>>> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>>> -                 ((page->u.inuse.type_info & PGT_type_mask)
>>> -                  == PGT_writable_page) )
>>> -                mapping |= IOMMUF_writable;
>>> -
>>> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
>>> -            if ( !rc )
>>> -                rc = ret;
>>> -
>>> -            if ( !(i++ & 0xfffff) )
>>> -                process_pending_softirqs();
>>> -        }
>>> -
>>> -        if ( rc )
>>> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
>>> -                   d->domain_id, rc);
>>> -    }
>>>
>>>      return hd->platform_ops->hwdom_init(d);
>>>  }
>>
>> Just to double check - this change was tested on x86 Dom0, at
>> least PV (for PVH I'd at least expect that you've did some static
>> code analysis to make sure this doesn't put in further roadblocks)?
> 
> I am afraid I didn't get the second part of this sentence.

Understandably, since I've broken grammar in the course of
re-phrasing a number of times before sending. Dom0 PVH isn't
complete at this point, so I can't ask you to actually test it. But
I want to be reasonably certain that the change you make won't
further complicate this enabling work (you may want to also Cc
Roger on future versions of the patch for this very reason), the
more that on AMD we've been unconditionally using non-shared
page tables for quite some time. (In fact I see chances that the
change might actually help the time it takes to set up PVH Dom0,
especially when a sufficiently large chunk of memory is being
handed to it.)

Jan



* Re: [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-12-06 22:49       ` Julien Grall
@ 2017-12-07 12:08         ` Oleksandr Tyshchenko
  2017-12-07 12:51           ` Jan Beulich
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-12-07 12:08 UTC (permalink / raw)
  To: Julien Grall, Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Andrew Cooper,
	Oleksandr Tyshchenko, Julien Grall, Suravee Suthikulpanit,
	xen-devel

On Thu, Dec 7, 2017 at 12:49 AM, Julien Grall <julien.grall@linaro.org> wrote:
> Hi,
Hi Julien, Jan

>
>
> On 12/06/2017 07:53 PM, Oleksandr Tyshchenko wrote:
>>
>> On Wed, Dec 6, 2017 at 6:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>
>>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>>>
>>>> The presence of this flag lets us know that the guest domain has
>>>> statically
>>>> assigned devices which will most likely be used for passthrough
>>>> and as the result the IOMMU is expected to be used for this domain.
>>>>
>>>> Taking into the account this hint when dealing with non-shared IOMMUs
>>>> we can populate IOMMU page tables before hand avoid going through
>>>> the list of pages at the first assigned device.
>>>> As this flag doesn't cover hotplug case, we will continue to populate
>>>> IOMMU page tables on the fly.
>>>
>>>
>>> While of course it would have been nice if I would have found time
>>> earlier to look at this patch (and hence closer to when the discussion
>>> happened), I still don't see it being made sufficiently clear here why
>>> current behavior (without a need for such a flag) is a problem for the
>>> non-shared IOMMU case on ARM, when it isn't on x86.
>>
>>
>> The answer is the lack of M2P on ARM. When the first device is being
>> assigned to domain we are populating non-shared IOMMU page-table.
>> What does it mean? We are iterating through the list of the pages
>> (d->page_list) and retrieving a pair of mfn <-> gfn for each page on
>> x86.
>> We can't do the same on ARM, since there is no M2P table. The
>> mfn_to_gmfn macros is just a stub:
>> #define mfn_to_gmfn(_d, mfn)  (mfn)
>>
>> To be honest I haven't played with non-shared IOMMU on ARM
>> since the end of this summer to be 100% sure that it is still an
>> issue. But, it seems to be.
>
>
> The situation has not changed. I still see no point of waste memory for the
> M2P (see the full discussion here [1]).
>
> However, I agree with Jan that we need a summary of the discussion in the
> commit message.
Sure.

Jan, is the clarification I have provided (why it is a problem for the
non-shared IOMMU case on ARM, when it isn't on x86) sufficiently clear,
and will the current patch with an updated commit message be ready to
get your ack?

>
>>>
>>> The patch itself looks mechanical enough that it could get my ack,
>>> but I really want to understand the background without having to
>>> dig out old discussions (which would be even more difficult for
>>> future archaeologists running into this change in a few years time).
>
>
> Cheers,
>
> [1] https://lists.xen.org/archives/html/xen-devel/2017-05/msg01737.html
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
  2017-12-07 12:08         ` Oleksandr Tyshchenko
@ 2017-12-07 12:51           ` Jan Beulich
  0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2017-12-07 12:51 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, Julien Grall
  Cc: Kevin Tian, Stefano Stabellini, Andrew Cooper,
	Oleksandr Tyshchenko, Julien Grall, Suravee Suthikulpanit,
	xen-devel

>>> On 07.12.17 at 13:08, <olekstysh@gmail.com> wrote:
> On Thu, Dec 7, 2017 at 12:49 AM, Julien Grall <julien.grall@linaro.org> wrote:
>> On 12/06/2017 07:53 PM, Oleksandr Tyshchenko wrote:
>>> On Wed, Dec 6, 2017 at 6:51 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>
>>>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>>>>
>>>>> The presence of this flag lets us know that the guest domain has
>>>>> statically
>>>>> assigned devices which will most likely be used for passthrough
>>>>> and as the result the IOMMU is expected to be used for this domain.
>>>>>
>>>>> Taking into the account this hint when dealing with non-shared IOMMUs
>>>>> we can populate IOMMU page tables before hand avoid going through
>>>>> the list of pages at the first assigned device.
>>>>> As this flag doesn't cover hotplug case, we will continue to populate
>>>>> IOMMU page tables on the fly.
>>>>
>>>>
>>>> While of course it would have been nice if I would have found time
>>>> earlier to look at this patch (and hence closer to when the discussion
>>>> happened), I still don't see it being made sufficiently clear here why
>>>> current behavior (without a need for such a flag) is a problem for the
>>>> non-shared IOMMU case on ARM, when it isn't on x86.
>>>
>>>
>>> The answer is the lack of M2P on ARM. When the first device is being
>>> assigned to domain we are populating non-shared IOMMU page-table.
>>> What does it mean? We are iterating through the list of the pages
>>> (d->page_list) and retrieving a pair of mfn <-> gfn for each page on
>>> x86.
>>> We can't do the same on ARM, since there is no M2P table. The
>>> mfn_to_gmfn macros is just a stub:
>>> #define mfn_to_gmfn(_d, mfn)  (mfn)
>>>
>>> To be honest I haven't played with non-shared IOMMU on ARM
>>> since the end of this summer to be 100% sure that it is still an
>>> issue. But, it seems to be.
>>
>>
>> The situation has not changed. I still see no point of waste memory for the
>> M2P (see the full discussion here [1]).
>>
>> However, I agree with Jan that we need a summary of the discussion in the
>> commit message.
> Sure.
> 
> Jan, is the clarification I have provided (why it is a problem for the
> non-shared IOMMU case on ARM, when it isn't on x86) sufficiently clear and

Yes, mentioning the lack of M2P is going to be sufficient rationale.

> the current patch with "updated" commit message will be ready to get your 
> ack?

I think so, albeit I haven't fully settled yet whether to push back
on the ARM folks not wanting to introduce M2P.

Jan



* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-12-07  8:57       ` Jan Beulich
@ 2017-12-07 13:50         ` Oleksandr Tyshchenko
  2017-12-07 13:57           ` Jan Beulich
  0 siblings, 1 reply; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-12-07 13:50 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Oleksandr Tyshchenko, Julien Grall, xen-devel, Roger Pau Monne

Hi, Jan

On Thu, Dec 7, 2017 at 10:57 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 06.12.17 at 20:23, <olekstysh@gmail.com> wrote:
>> On Wed, Dec 6, 2017 at 7:01 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>>>          return;
>>>>
>>>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>>>> -    d->need_iommu = !!iommu_dom0_strict;
>>>> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>>>> -    {
>>>> -        struct page_info *page;
>>>> -        unsigned int i = 0;
>>>> -        int rc = 0;
>>>> -
>>>> -        page_list_for_each ( page, &d->page_list )
>>>> -        {
>>>> -            unsigned long mfn = page_to_mfn(page);
>>>> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
>>>> -            unsigned int mapping = IOMMUF_readable;
>>>> -            int ret;
>>>> -
>>>> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>>>> -                 ((page->u.inuse.type_info & PGT_type_mask)
>>>> -                  == PGT_writable_page) )
>>>> -                mapping |= IOMMUF_writable;
>>>> -
>>>> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
>>>> -            if ( !rc )
>>>> -                rc = ret;
>>>> -
>>>> -            if ( !(i++ & 0xfffff) )
>>>> -                process_pending_softirqs();
>>>> -        }
>>>> -
>>>> -        if ( rc )
>>>> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
>>>> -                   d->domain_id, rc);
>>>> -    }
>>>>
>>>>      return hd->platform_ops->hwdom_init(d);
>>>>  }
>>>
>>> Just to double check - this change was tested on x86 Dom0, at
>>> least PV (for PVH I'd at least expect that you've did some static
>>> code analysis to make sure this doesn't put in further roadblocks)?
>>
>> I am afraid I didn't get the second part of this sentence.
>
> Understandably, since I've broken grammar in the course of
> re-phrasing a number of times before sending. Dom0 PVH isn't
> complete at this point, so I can't ask you to actually test it. But
> I want to be reasonably certain that the change you make won't
> further complicate this enabling work (you may want to also Cc
> Roger on future versions of the patch for this very reason), the
> more that on AMD we've been unconditionally using non-shared
> page tables for quite some time. (In fact I see chances that the
> change might actually help the time it takes to set up PVH Dom0,
> especially when a sufficiently large chunk of memory is being
> handed to it.)

As I understand it, the current patch was tested on x86 with PV dom0
(thanks for doing that), but wasn't tested with PVH dom0 since the
latter wasn't ready. And there is some ongoing activity to bring up
PVH dom0 which the current patch may affect in a negative way
(complicate, break, etc).

What pending patch(es) or parts of already existing x86 code should I
pay special attention to?
I haven't worked with the "non-shared IOMMU support on ARM" patch
series for the last 3-4 months, so I may have lost the context.

Sorry for the maybe naive question, but what should be done on my side
for this patch to be accepted, apart from addressing comments regarding
the patch itself?

Sure, I will CC Roger on future versions of this patch.

>
> Jan
>



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-12-07 13:50         ` Oleksandr Tyshchenko
@ 2017-12-07 13:57           ` Jan Beulich
  2017-12-08 12:28             ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2017-12-07 13:57 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: Oleksandr Tyshchenko, Julien Grall, xen-devel, Roger Pau Monne

>>> On 07.12.17 at 14:50, <olekstysh@gmail.com> wrote:
> On Thu, Dec 7, 2017 at 10:57 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 06.12.17 at 20:23, <olekstysh@gmail.com> wrote:
>>> On Wed, Dec 6, 2017 at 7:01 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>>>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>>>>          return;
>>>>>
>>>>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>>>>> -    d->need_iommu = !!iommu_dom0_strict;
>>>>> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>>>>> -    {
>>>>> -        struct page_info *page;
>>>>> -        unsigned int i = 0;
>>>>> -        int rc = 0;
>>>>> -
>>>>> -        page_list_for_each ( page, &d->page_list )
>>>>> -        {
>>>>> -            unsigned long mfn = page_to_mfn(page);
>>>>> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
>>>>> -            unsigned int mapping = IOMMUF_readable;
>>>>> -            int ret;
>>>>> -
>>>>> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>>>>> -                 ((page->u.inuse.type_info & PGT_type_mask)
>>>>> -                  == PGT_writable_page) )
>>>>> -                mapping |= IOMMUF_writable;
>>>>> -
>>>>> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
>>>>> -            if ( !rc )
>>>>> -                rc = ret;
>>>>> -
>>>>> -            if ( !(i++ & 0xfffff) )
>>>>> -                process_pending_softirqs();
>>>>> -        }
>>>>> -
>>>>> -        if ( rc )
>>>>> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
>>>>> -                   d->domain_id, rc);
>>>>> -    }
>>>>>
>>>>>      return hd->platform_ops->hwdom_init(d);
>>>>>  }
>>>>
>>>> Just to double check - this change was tested on x86 Dom0, at
>>>> least PV (for PVH I'd at least expect that you've did some static
>>>> code analysis to make sure this doesn't put in further roadblocks)?
>>>
>>> I am afraid I didn't get the second part of this sentence.
>>
>> Understandably, since I've broken grammar in the course of
>> re-phrasing a number of times before sending. Dom0 PVH isn't
>> complete at this point, so I can't ask you to actually test it. But
>> I want to be reasonably certain that the change you make won't
>> further complicate this enabling work (you may want to also Cc
>> Roger on future versions of the patch for this very reason), the
>> more that on AMD we've been unconditionally using non-shared
>> page tables for quite some time. (In fact I see chances that the
>> change might actually help the time it takes to set up PVH Dom0,
>> especially when a sufficiently large chunk of memory is being
>> handed to it.)
> 
> As I understand, the current patch was tested on x86 with PV dom0
> (thanks for doing that),

This sounds as if you believe I would have tested anything. I
certainly didn't (or at least I don't recall), and never meant to.

> but wasn't
> tested with PVH dom0 since the latter wasn't ready. And there is some
> activity for bringing PVH dom0
> which the current patch may affect in a negative way (complicate, break,
> etc).
> 
> What pending patch(es) or a part of already existing code on x86
> should I pay special attention to?

The question is not so much pending patches, but making sure your
changes don't adversely affect what's already in the tree. Beyond
that I'll defer to Roger.

> Sorry for the maybe naive question, but what should be done from my
> side for this patch to be accepted,
> except addressing comments regarding the patch itself?

You will want (need) to assess the impact of your changes on
code paths you can't possibly test.

Jan



* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-12-07 13:57           ` Jan Beulich
@ 2017-12-08 12:28             ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2017-12-08 12:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Oleksandr Tyshchenko, Julien Grall, xen-devel, Roger Pau Monne

Hi, Jan

On Thu, Dec 7, 2017 at 3:57 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 07.12.17 at 14:50, <olekstysh@gmail.com> wrote:
>> On Thu, Dec 7, 2017 at 10:57 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 06.12.17 at 20:23, <olekstysh@gmail.com> wrote:
>>>> On Wed, Dec 6, 2017 at 7:01 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>> On 25.07.17 at 19:26, <olekstysh@gmail.com> wrote:
>>>>>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>>>>>          return;
>>>>>>
>>>>>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>>>>>> -    d->need_iommu = !!iommu_dom0_strict;
>>>>>> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>>>>>> -    {
>>>>>> -        struct page_info *page;
>>>>>> -        unsigned int i = 0;
>>>>>> -        int rc = 0;
>>>>>> -
>>>>>> -        page_list_for_each ( page, &d->page_list )
>>>>>> -        {
>>>>>> -            unsigned long mfn = page_to_mfn(page);
>>>>>> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
>>>>>> -            unsigned int mapping = IOMMUF_readable;
>>>>>> -            int ret;
>>>>>> -
>>>>>> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>>>>>> -                 ((page->u.inuse.type_info & PGT_type_mask)
>>>>>> -                  == PGT_writable_page) )
>>>>>> -                mapping |= IOMMUF_writable;
>>>>>> -
>>>>>> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
>>>>>> -            if ( !rc )
>>>>>> -                rc = ret;
>>>>>> -
>>>>>> -            if ( !(i++ & 0xfffff) )
>>>>>> -                process_pending_softirqs();
>>>>>> -        }
>>>>>> -
>>>>>> -        if ( rc )
>>>>>> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
>>>>>> -                   d->domain_id, rc);
>>>>>> -    }
>>>>>>
>>>>>>      return hd->platform_ops->hwdom_init(d);
>>>>>>  }
>>>>>
>>>>> Just to double check - this change was tested on x86 Dom0, at
>>>>> least PV (for PVH I'd at least expect that you've did some static
>>>>> code analysis to make sure this doesn't put in further roadblocks)?
>>>>
>>>> I am afraid I didn't get the second part of this sentence.
>>>
>>> Understandably, since I've broken grammar in the course of
>>> re-phrasing a number of times before sending. Dom0 PVH isn't
>>> complete at this point, so I can't ask you to actually test it. But
>>> I want to be reasonably certain that the change you make won't
>>> further complicate this enabling work (you may want to also Cc
>>> Roger on future versions of the patch for this very reason), the
>>> more that on AMD we've been unconditionally using non-shared
>>> page tables for quite some time. (In fact I see chances that the
>>> change might actually help the time it takes to set up PVH Dom0,
>>> especially when a sufficiently large chunk of memory is being
>>> handed to it.)
>>
>> As I understand, the current patch was tested on x86 with PV dom0
>> (thanks for doing that),
>
> This sounds as if you believe I would have tested anything. I
> certainly didn't (or at least I don't recall), and never meant to.
>
>> but wasn't
>> tested with PVH dom0 since the latter wasn't ready. And there is some
>> activity for bringing PVH dom0
>> which the current patch may affect in a negative way (complicate, break,
>> etc).
>>
>> What pending patch(es) or a part of already existing code on x86
>> should I pay special attention to?
>
> The question is not so much pending patches, but making sure your
> changes don't adversely affect what's already in the tree.
Sure.

> Beyond  that I'll defer to Roger.

>
>> Sorry for the maybe naive question, but what should be done from my
>> side for this patch to be accepted,
>> except addressing comments regarding the patch itself?
>
> You will want (need) to assess the impact of your changes on
> code paths you can't possibly test.
Sure.

I would like to clarify: I haven't tested this patch on x86. Only on ARM.

>
> Jan
>



-- 
Regards,

Oleksandr Tyshchenko


* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2017-07-25 17:26 ` [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance Oleksandr Tyshchenko
  2017-08-21 16:30   ` Oleksandr Tyshchenko
  2017-12-06 17:01   ` Jan Beulich
@ 2018-01-18 12:09   ` Roger Pau Monné
  2018-01-18 14:50     ` Oleksandr Tyshchenko
  2 siblings, 1 reply; 62+ messages in thread
From: Roger Pau Monné @ 2018-01-18 12:09 UTC (permalink / raw)
  To: Oleksandr Tyshchenko
  Cc: xen-devel, Julien Grall, Jan Beulich, Oleksandr Tyshchenko

Sorry for the delay in the reply.

On Tue, Jul 25, 2017 at 08:26:49PM +0300, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The hardware domains require an IOMMU in most cases, and the decision
> to use one is currently made at hardware domain construction time.
> But that is not the best moment for non-shared IOMMUs, because of the
> need to then retrieve all the mappings that could have been created
> between IOMMU per-domain initialization and that moment.
> 
> So, make the decision about needing an IOMMU a bit earlier, in
> iommu_domain_init(). With the "d->need_iommu" flag set at this early
> stage we won't skip any IOMMU mapping updates. As a result, the code
> in iommu_hwdom_init() that goes through the list of pages and tries
> to retrieve the mappings for non-shared IOMMUs is no longer needed
> and can simply be dropped.

If I understand this correctly the approach looks fine to me, and it's
in line with what I'm doing for PVHv2 Dom0. I.e., the IOMMU is
initialized _before_ populating the memory map, so that when pages are
added to the p2m they are also added to the IOMMU page tables if
required.

This avoids having to iterate over the list of domain pages in
iommu_hwdom_init, because that list is still empty at the point
iommu_hwdom_init is called for a PVHv2 Dom0.
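
To illustrate the ordering I mean - a rough sketch only, with made-up
helper names (construct_hwdom/setup_hwdom_memory are illustrative,
not the real construction code):

    /* Hypothetical hardware domain construction flow. The point is
     * only the ordering: the IOMMU decision and initialization
     * happen before any memory is assigned to the domain. */
    static int __init construct_hwdom(struct domain *d)
    {
        int rc;

        /* d->need_iommu gets latched here, before any p2m changes. */
        rc = iommu_domain_init(d, false);
        if ( rc )
            return rc;

        /* Every p2m insertion from now on can mirror the mapping
         * into the IOMMU page tables, since need_iommu is known. */
        rc = setup_hwdom_memory(d);
        if ( rc )
            return rc;

        /* Nothing left for iommu_hwdom_init() to iterate over. */
        iommu_hwdom_init(d);

        return 0;
    }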

> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Changes in v1:
>    -
> 
> Changes in v2:
>    - This is the result of reworking old patch:
>      [PATCH v1 08/10] iommu: Split iommu_hwdom_init() into arch specific parts
> ---
>  xen/drivers/passthrough/iommu.c | 44 ++++++++++-------------------------------
>  1 file changed, 10 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 19c87d1..f5e5b7e 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -52,7 +52,7 @@ custom_param("iommu", parse_iommu_param);
>  bool_t __initdata iommu_enable = 1;
>  bool_t __read_mostly iommu_enabled;
>  bool_t __read_mostly force_iommu;
> -bool_t __hwdom_initdata iommu_dom0_strict;
> +bool_t __read_mostly iommu_dom0_strict;
>  bool_t __read_mostly iommu_verbose;
>  bool_t __read_mostly iommu_workaround_bios_bug;
>  bool_t __read_mostly iommu_igfx = 1;
> @@ -141,6 +141,15 @@ int iommu_domain_init(struct domain *d, bool use_iommu)
>      if ( !iommu_enabled )
>          return 0;
>  
> +    if ( is_hardware_domain(d) )
> +    {
> +        if ( (paging_mode_translate(d) && !iommu_passthrough) ||
> +              iommu_dom0_strict )
> +            use_iommu = 1;
> +        else
> +            use_iommu = 0;
> +    }
> +
>      hd->platform_ops = iommu_get_ops();
>      ret = hd->platform_ops->init(d, use_iommu);
>      if ( ret )
> @@ -161,8 +170,6 @@ static void __hwdom_init check_hwdom_reqs(struct domain *d)
>      if ( iommu_passthrough )
>          panic("Dom0 uses paging translated mode, dom0-passthrough must not be "
>                "enabled\n");
> -
> -    iommu_dom0_strict = 1;
>  }
>  
>  void __hwdom_init iommu_hwdom_init(struct domain *d)
> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>          return;
>  
>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
> -    d->need_iommu = !!iommu_dom0_strict;

Where is this set now? You seem to remove setting need_iommu here, but
I don't see it being set anywhere else. Am I missing something?

Thanks, Roger.



* Re: [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance
  2018-01-18 12:09   ` Roger Pau Monné
@ 2018-01-18 14:50     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 62+ messages in thread
From: Oleksandr Tyshchenko @ 2018-01-18 14:50 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Julien Grall, Jan Beulich, Oleksandr Tyshchenko

Hi, Roger

On Thu, Jan 18, 2018 at 2:09 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> Sorry for the delay in the reply.
No problem.

>
> On Tue, Jul 25, 2017 at 08:26:49PM +0300, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The hardware domains require an IOMMU in most cases, and the decision
>> to use one is currently made at hardware domain construction time.
>> But that is not the best moment for non-shared IOMMUs, because of the
>> need to then retrieve all the mappings that could have been created
>> between IOMMU per-domain initialization and that moment.
>>
>> So, make the decision about needing an IOMMU a bit earlier, in
>> iommu_domain_init(). With the "d->need_iommu" flag set at this early
>> stage we won't skip any IOMMU mapping updates. As a result, the code
>> in iommu_hwdom_init() that goes through the list of pages and tries
>> to retrieve the mappings for non-shared IOMMUs is no longer needed
>> and can simply be dropped.
>
> If I understand this correctly the approach looks fine to me, and it's
> in line with what I'm doing for PVHv2 Dom0. I.e., the IOMMU is
> initialized _before_ populating the memory map, so that when pages are
> added to the p2m they are also added to the IOMMU page tables if
> required.
>
> This avoids having to iterate over the list of domain pages in
> iommu_hwdom_init, because that list is still empty at the point
> iommu_hwdom_init is called for a PVHv2 Dom0.
>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Changes in v1:
>>    -
>>
>> Changes in v2:
>>    - This is the result of reworking old patch:
>>      [PATCH v1 08/10] iommu: Split iommu_hwdom_init() into arch specific parts
>> ---
>>  xen/drivers/passthrough/iommu.c | 44 ++++++++++-------------------------------
>>  1 file changed, 10 insertions(+), 34 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
>> index 19c87d1..f5e5b7e 100644
>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -52,7 +52,7 @@ custom_param("iommu", parse_iommu_param);
>>  bool_t __initdata iommu_enable = 1;
>>  bool_t __read_mostly iommu_enabled;
>>  bool_t __read_mostly force_iommu;
>> -bool_t __hwdom_initdata iommu_dom0_strict;
>> +bool_t __read_mostly iommu_dom0_strict;
>>  bool_t __read_mostly iommu_verbose;
>>  bool_t __read_mostly iommu_workaround_bios_bug;
>>  bool_t __read_mostly iommu_igfx = 1;
>> @@ -141,6 +141,15 @@ int iommu_domain_init(struct domain *d, bool use_iommu)
>>      if ( !iommu_enabled )
>>          return 0;
>>
>> +    if ( is_hardware_domain(d) )
>> +    {
>> +        if ( (paging_mode_translate(d) && !iommu_passthrough) ||
>> +              iommu_dom0_strict )
>> +            use_iommu = 1;
>> +        else
>> +            use_iommu = 0;
>> +    }
>> +
>>      hd->platform_ops = iommu_get_ops();
>>      ret = hd->platform_ops->init(d, use_iommu);
>>      if ( ret )
>> @@ -161,8 +170,6 @@ static void __hwdom_init check_hwdom_reqs(struct domain *d)
>>      if ( iommu_passthrough )
>>          panic("Dom0 uses paging translated mode, dom0-passthrough must not be "
>>                "enabled\n");
>> -
>> -    iommu_dom0_strict = 1;
>>  }
>>
>>  void __hwdom_init iommu_hwdom_init(struct domain *d)
>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>          return;
>>
>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>> -    d->need_iommu = !!iommu_dom0_strict;
>
> Where is this set now? You seem to remove setting need_iommu here, but
> I don't see it being set anywhere else. Am I missing something?

d->need_iommu is set in iommu_domain_init(); that was done by the
previous patch [1].
For your convenience, you can see what the whole function looks like
at [2].

[1]
[PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init()
https://marc.info/?l=xen-devel&m=150100368126600&w=2

[2] https://github.com/otyshchenko1/xen/blob/non_shared_iommu_v2/xen/drivers/passthrough/iommu.c#L158
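
In short, with patches 06 and 07 applied the function is roughly the
following (hand-trimmed and partly reconstructed from the hunks in
this thread, so please treat it as a sketch rather than the exact
code at [2]):

    int iommu_domain_init(struct domain *d, bool use_iommu)
    {
        struct hvm_iommu *hd = domain_hvm_iommu(d);
        int ret;

        ret = arch_iommu_domain_init(d);
        if ( ret )
            return ret;

        if ( !iommu_enabled )
            return 0;

        /* Patch 07: the decision for the hardware domain is taken here. */
        if ( is_hardware_domain(d) )
        {
            if ( (paging_mode_translate(d) && !iommu_passthrough) ||
                  iommu_dom0_strict )
                use_iommu = 1;
            else
                use_iommu = 0;
        }

        hd->platform_ops = iommu_get_ops();
        ret = hd->platform_ops->init(d, use_iommu);
        if ( ret )
            return ret;

        /* Patch 06: latch the flag before any mappings are created. */
        d->need_iommu = use_iommu;

        return 0;
    }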

>
> Thanks, Roger.

-- 
Regards,

Oleksandr Tyshchenko



end of thread, other threads:[~2018-01-18 14:50 UTC | newest]

Thread overview: 62+ messages
2017-07-25 17:26 [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 01/13] xen/device-tree: Add dt_count_phandle_with_args helper Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks Oleksandr Tyshchenko
2017-08-03 11:21   ` Julien Grall
2017-08-03 12:32     ` Oleksandr Tyshchenko
2017-08-21 16:20       ` Oleksandr Tyshchenko
2017-08-22  7:21         ` Jan Beulich
2017-08-22 10:28           ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 03/13] xen/arm: p2m: Add helper to convert p2m type to IOMMU flags Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 04/13] xen/arm: p2m: Update IOMMU mapping whenever possible if page table is not shared Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 05/13] iommu/arm: Re-define iommu_use_hap_pt(d) as iommu_hap_pt_share Oleksandr Tyshchenko
2017-08-03 11:23   ` Julien Grall
2017-08-03 12:33     ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 06/13] iommu: Add extra use_iommu argument to iommu_domain_init() Oleksandr Tyshchenko
2017-08-21 16:29   ` Oleksandr Tyshchenko
2017-12-06 16:51   ` Jan Beulich
2017-12-06 19:53     ` Oleksandr Tyshchenko
2017-12-06 22:49       ` Julien Grall
2017-12-07 12:08         ` Oleksandr Tyshchenko
2017-12-07 12:51           ` Jan Beulich
2017-07-25 17:26 ` [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance Oleksandr Tyshchenko
2017-08-21 16:30   ` Oleksandr Tyshchenko
2017-12-06 17:01   ` Jan Beulich
2017-12-06 19:23     ` Oleksandr Tyshchenko
2017-12-07  8:57       ` Jan Beulich
2017-12-07 13:50         ` Oleksandr Tyshchenko
2017-12-07 13:57           ` Jan Beulich
2017-12-08 12:28             ` Oleksandr Tyshchenko
2018-01-18 12:09   ` Roger Pau Monné
2018-01-18 14:50     ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 08/13] iommu/arm: Misc fixes for arch specific part Oleksandr Tyshchenko
2017-08-03 11:31   ` Julien Grall
2017-08-03 12:34     ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 09/13] xen/arm: Add use_iommu flag to xen_arch_domainconfig Oleksandr Tyshchenko
2017-07-28 16:16   ` Wei Liu
2017-07-28 16:30     ` Oleksandr Tyshchenko
2017-08-03 11:33   ` Julien Grall
2017-08-03 12:31     ` Oleksandr Tyshchenko
2017-08-03 12:35       ` Julien Grall
2017-07-25 17:26 ` [PATCH v2 10/13] xen/arm: domain_build: Don't expose IOMMU specific properties to the guest Oleksandr Tyshchenko
2017-08-03 11:37   ` Julien Grall
2017-08-03 13:24     ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 11/13] iommu/arm: smmu: Squash map_pages/unmap_pages with map_page/unmap_page Oleksandr Tyshchenko
2017-08-03 12:36   ` Julien Grall
2017-08-03 13:26     ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 12/13] [RFC] iommu: VT-d: " Oleksandr Tyshchenko
2017-08-21 16:44   ` Oleksandr Tyshchenko
2017-09-12 14:44     ` Oleksandr Tyshchenko
2017-09-20  8:54       ` Tian, Kevin
2017-09-20 18:23         ` Oleksandr Tyshchenko
2017-07-25 17:26 ` [PATCH v2 13/13] [RFC] iommu: AMD-Vi: " Oleksandr Tyshchenko
2017-08-21 16:44   ` Oleksandr Tyshchenko
2017-09-12 14:45     ` Oleksandr Tyshchenko
2017-07-31  5:57 ` [PATCH v2 00/13] "Non-shared" IOMMU support on ARM Tian, Kevin
2017-07-31 11:57   ` Oleksandr Tyshchenko
2017-08-01  3:06     ` Tian, Kevin
2017-08-01 11:08       ` Oleksandr Tyshchenko
2017-08-02  6:12         ` Tian, Kevin
2017-08-02 17:47           ` Oleksandr Tyshchenko
2017-08-01 18:09       ` Julien Grall
2017-08-01 18:20         ` Oleksandr Tyshchenko
2017-08-01 17:56   ` Julien Grall
