* [PATCH v9 0/7] paravirtual IOMMU pre-requisites and clean-up
@ 2018-09-13 15:21 Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 1/7] iommu: introduce the concept of DFN Paul Durrant
                   ` (8 more replies)
  0 siblings, 9 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	Konrad Rzeszutek Wilk, Andrew Cooper, Ian Jackson, George Dunlap,
	Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich,
	Suravee Suthikulpanit

This series contains pre-requisites and clean-up needed for paravirtual
IOMMU support.

I have separated out these patches to avoid further delaying their
application while I re-work the paravirtual IOMMU implementation in
response to review of v6 of the series. Several of them already have all
the necessary acks.

Paul Durrant (7):
  iommu: introduce the concept of DFN...
  iommu: make use of type-safe DFN and MFN in exported functions
  iommu: push use of type-safe DFN and MFN into iommu_ops
  iommu: don't domain_crash() inside iommu_map/unmap_page()
  memory: add check_get_page_from_gfn() as a wrapper...
  vtd: add missing check for shared EPT...
  vtd: add lookup_page method to iommu_ops

 xen/arch/arm/p2m.c                            |  7 ++-
 xen/arch/x86/hvm/emulate.c                    | 25 ++++----
 xen/arch/x86/hvm/hvm.c                        | 14 +----
 xen/arch/x86/mm.c                             | 13 ++--
 xen/arch/x86/mm/p2m-ept.c                     | 13 +++-
 xen/arch/x86/mm/p2m-pt.c                      | 48 ++++++++------
 xen/arch/x86/mm/p2m.c                         | 32 +++++++---
 xen/arch/x86/x86_64/mm.c                      |  5 +-
 xen/common/grant_table.c                      | 48 +++++++-------
 xen/common/memory.c                           | 56 +++++++++++++----
 xen/drivers/passthrough/amd/iommu_cmd.c       | 18 +++---
 xen/drivers/passthrough/amd/iommu_map.c       | 88 ++++++++++++++------------
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  4 +-
 xen/drivers/passthrough/arm/smmu.c            | 20 +++---
 xen/drivers/passthrough/iommu.c               | 53 ++++++++--------
 xen/drivers/passthrough/vtd/iommu.c           | 91 ++++++++++++++++++++-------
 xen/drivers/passthrough/vtd/iommu.h           |  3 +
 xen/drivers/passthrough/vtd/x86/vtd.c         |  1 -
 xen/drivers/passthrough/x86/iommu.c           |  9 ++-
 xen/include/asm-arm/p2m.h                     |  4 ++
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 +--
 xen/include/asm-x86/iommu.h                   | 12 ++++
 xen/include/asm-x86/p2m.h                     |  3 +
 xen/include/xen/iommu.h                       | 51 ++++++++++++---
 xen/include/xen/mm.h                          |  5 ++
 xen/include/xen/sched.h                       |  5 ++
 26 files changed, 414 insertions(+), 222 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v9 1/7] iommu: introduce the concept of DFN...
  2018-09-13 15:21 [PATCH v9 0/7] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-14  2:27   ` Tian, Kevin
  2018-09-13 15:21 ` [PATCH v9 2/7] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Julien Grall,
	Paul Durrant, Suravee Suthikulpanit

...meaning 'device DMA frame number' i.e. a frame number mapped in the IOMMU
(rather than the MMU) and hence used for DMA address translation.

This patch is a largely cosmetic change that substitutes the terms 'gfn'
and 'gaddr' for 'dfn' and 'daddr' in all the places where the frame number
or address relate to a device rather than the CPU.

The parts that are not purely cosmetic are:

 - the introduction of a type-safe declaration of dfn_t and definition of
   INVALID_DFN to make the substitution of gfn_x(INVALID_GFN) mechanical.
 - the introduction of __dfn_to_daddr and __daddr_to_dfn (and type-safe
   variants without the leading __) with some use of the former.

Subsequent patches will convert code to make use of type-safe DFNs.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v9:
 - Re-word comment in mm.h.
 - Move definitions relating to daddr into asm-x86/iommu.h since these are
   not used by any ARM IOMMU implementation.
 - Fix __daddr_to_dfn() to properly parenthesize and remove cast.

v8:
 - Correct definition of INVALID_DFN.
 - Don't use _AC in definition of IOMMU_PAGE_SIZE.

v7:
 - Re-name BFN -> DFN as requested by Jan.
 - Dropped Wei's R-b because of name change.

v6:
 - Dropped changes to 'mfn' section in xen/mm.h as suggested by Kevin.

v3:
 - Get rid of intermediate 'frame' variables again.

v2:
 - Addressed comments from Jan.
---
 xen/drivers/passthrough/amd/iommu_cmd.c     | 18 +++----
 xen/drivers/passthrough/amd/iommu_map.c     | 78 ++++++++++++++---------------
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c          | 16 +++---
 xen/drivers/passthrough/iommu.c             | 28 +++++------
 xen/drivers/passthrough/vtd/iommu.c         | 30 +++++------
 xen/include/asm-x86/iommu.h                 | 12 +++++
 xen/include/xen/iommu.h                     | 26 +++++++---
 xen/include/xen/mm.h                        |  5 ++
 9 files changed, 123 insertions(+), 92 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index 08247fa354..d4d071e53e 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -284,7 +284,7 @@ void invalidate_iommu_all(struct amd_iommu *iommu)
 }
 
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
-                           uint64_t gaddr, unsigned int order)
+                           daddr_t daddr, unsigned int order)
 {
     unsigned long flags;
     struct amd_iommu *iommu;
@@ -315,12 +315,12 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
 
     /* send INVALIDATE_IOTLB_PAGES command */
     spin_lock_irqsave(&iommu->lock, flags);
-    invalidate_iotlb_pages(iommu, maxpend, 0, queueid, gaddr, req_id, order);
+    invalidate_iotlb_pages(iommu, maxpend, 0, queueid, daddr, req_id, order);
     flush_command_buffer(iommu);
     spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
-static void amd_iommu_flush_all_iotlbs(struct domain *d, uint64_t gaddr,
+static void amd_iommu_flush_all_iotlbs(struct domain *d, daddr_t daddr,
                                        unsigned int order)
 {
     struct pci_dev *pdev;
@@ -333,7 +333,7 @@ static void amd_iommu_flush_all_iotlbs(struct domain *d, uint64_t gaddr,
         u8 devfn = pdev->devfn;
 
         do {
-            amd_iommu_flush_iotlb(devfn, pdev, gaddr, order);
+            amd_iommu_flush_iotlb(devfn, pdev, daddr, order);
             devfn += pdev->phantom_stride;
         } while ( devfn != pdev->devfn &&
                   PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
@@ -342,7 +342,7 @@ static void amd_iommu_flush_all_iotlbs(struct domain *d, uint64_t gaddr,
 
 /* Flush iommu cache after p2m changes. */
 static void _amd_iommu_flush_pages(struct domain *d,
-                                   uint64_t gaddr, unsigned int order)
+                                   daddr_t daddr, unsigned int order)
 {
     unsigned long flags;
     struct amd_iommu *iommu;
@@ -352,13 +352,13 @@ static void _amd_iommu_flush_pages(struct domain *d,
     for_each_amd_iommu ( iommu )
     {
         spin_lock_irqsave(&iommu->lock, flags);
-        invalidate_iommu_pages(iommu, gaddr, dom_id, order);
+        invalidate_iommu_pages(iommu, daddr, dom_id, order);
         flush_command_buffer(iommu);
         spin_unlock_irqrestore(&iommu->lock, flags);
     }
 
     if ( ats_enabled )
-        amd_iommu_flush_all_iotlbs(d, gaddr, order);
+        amd_iommu_flush_all_iotlbs(d, daddr, order);
 }
 
 void amd_iommu_flush_all_pages(struct domain *d)
@@ -367,9 +367,9 @@ void amd_iommu_flush_all_pages(struct domain *d)
 }
 
 void amd_iommu_flush_pages(struct domain *d,
-                           unsigned long gfn, unsigned int order)
+                           unsigned long dfn, unsigned int order)
 {
-    _amd_iommu_flush_pages(d, (uint64_t) gfn << PAGE_SHIFT, order);
+    _amd_iommu_flush_pages(d, __dfn_to_daddr(dfn), order);
 }
 
 void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 70b4345b37..61ade71850 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -35,12 +35,12 @@ static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
     return idx;
 }
 
-void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long gfn)
+void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long dfn)
 {
     u64 *table, *pte;
 
     table = map_domain_page(_mfn(l1_mfn));
-    pte = table + pfn_to_pde_idx(gfn, IOMMU_PAGING_MODE_LEVEL_1);
+    pte = table + pfn_to_pde_idx(dfn, IOMMU_PAGING_MODE_LEVEL_1);
     *pte = 0;
     unmap_domain_page(table);
 }
@@ -104,7 +104,7 @@ static bool_t set_iommu_pde_present(u32 *pde, unsigned long next_mfn,
     return need_flush;
 }
 
-static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long gfn, 
+static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long dfn,
                                     unsigned long next_mfn, int pde_level, 
                                     bool_t iw, bool_t ir)
 {
@@ -114,7 +114,7 @@ static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long gfn,
 
     table = map_domain_page(_mfn(pt_mfn));
 
-    pde = (u32*)(table + pfn_to_pde_idx(gfn, pde_level));
+    pde = (u32*)(table + pfn_to_pde_idx(dfn, pde_level));
 
     need_flush = set_iommu_pde_present(pde, next_mfn, 
                                        IOMMU_PAGING_MODE_LEVEL_0, iw, ir);
@@ -331,7 +331,7 @@ static void set_pde_count(u64 *pde, unsigned int count)
  * otherwise increase pde count if mfn is contigous with mfn - 1
  */
 static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
-                                  unsigned long gfn, unsigned long mfn,
+                                  unsigned long dfn, unsigned long mfn,
                                   unsigned int merge_level)
 {
     unsigned int pde_count, next_level;
@@ -347,7 +347,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
 
     /* get pde at merge level */
     table = map_domain_page(_mfn(pt_mfn));
-    pde = table + pfn_to_pde_idx(gfn, merge_level);
+    pde = table + pfn_to_pde_idx(dfn, merge_level);
 
     /* get page table of next level */
     ntable_maddr = amd_iommu_get_next_table_from_pte((u32*)pde);
@@ -362,7 +362,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     mask = (1ULL<< (PTE_PER_TABLE_SHIFT * next_level)) - 1;
 
     if ( ((first_mfn & mask) == 0) &&
-         (((gfn & mask) | first_mfn) == mfn) )
+         (((dfn & mask) | first_mfn) == mfn) )
     {
         pde_count = get_pde_count(*pde);
 
@@ -387,7 +387,7 @@ out:
 }
 
 static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
-                             unsigned long gfn, unsigned int flags,
+                             unsigned long dfn, unsigned int flags,
                              unsigned int merge_level)
 {
     u64 *table, *pde, *ntable;
@@ -398,7 +398,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(_mfn(pt_mfn));
-    pde = table + pfn_to_pde_idx(gfn, merge_level);
+    pde = table + pfn_to_pde_idx(dfn, merge_level);
 
     /* get first mfn */
     ntable_mfn = amd_iommu_get_next_table_from_pte((u32*)pde) >> PAGE_SHIFT;
@@ -436,7 +436,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
  * {Re, un}mapping super page frames causes re-allocation of io
  * page tables.
  */
-static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn, 
+static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
                               unsigned long pt_mfn[])
 {
     u64 *pde, *next_table_vaddr;
@@ -465,7 +465,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
         pt_mfn[level] = next_table_mfn;
 
         next_table_vaddr = map_domain_page(_mfn(next_table_mfn));
-        pde = next_table_vaddr + pfn_to_pde_idx(pfn, level);
+        pde = next_table_vaddr + pfn_to_pde_idx(dfn, level);
 
         /* Here might be a super page frame */
         next_table_mfn = amd_iommu_get_next_table_from_pte((uint32_t*)pde) 
@@ -477,11 +477,11 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
              next_table_mfn != 0 )
         {
             int i;
-            unsigned long mfn, gfn;
+            unsigned long mfn, pfn;
             unsigned int page_sz;
 
             page_sz = 1 << (PTE_PER_TABLE_SHIFT * (next_level - 1));
-            gfn =  pfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
+            pfn =  dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
@@ -499,10 +499,10 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
 
             for ( i = 0; i < PTE_PER_TABLE_SIZE; i++ )
             {
-                set_iommu_pte_present(next_table_mfn, gfn, mfn, next_level,
+                set_iommu_pte_present(next_table_mfn, pfn, mfn, next_level,
                                       !!IOMMUF_writable, !!IOMMUF_readable);
                 mfn += page_sz;
-                gfn += page_sz;
+                pfn += page_sz;
              }
 
             amd_iommu_flush_all_pages(d);
@@ -540,7 +540,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     return 0;
 }
 
-static int update_paging_mode(struct domain *d, unsigned long gfn)
+static int update_paging_mode(struct domain *d, unsigned long dfn)
 {
     u16 bdf;
     void *device_entry;
@@ -554,13 +554,13 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
-    if ( gfn == gfn_x(INVALID_GFN) )
+    if ( dfn == dfn_x(INVALID_DFN) )
         return -EADDRNOTAVAIL;
-    ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
+    ASSERT(!(dfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
 
     level = hd->arch.paging_mode;
     old_root = hd->arch.root_table;
-    offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
+    offset = dfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
 
     ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
 
@@ -631,7 +631,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
+int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
                        unsigned int flags)
 {
     bool_t need_flush = 0;
@@ -651,34 +651,34 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Root table alloc failed, gfn = %lx\n", gfn);
+        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %lx\n", dfn);
         domain_crash(d);
         return rc;
     }
 
     /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
+     * we might need a deeper page table for wider dfn now */
     if ( is_hvm_domain(d) )
     {
-        if ( update_paging_mode(d, gfn) )
+        if ( update_paging_mode(d, dfn) )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
             domain_crash(d);
             return -EFAULT;
         }
     }
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
         domain_crash(d);
         return -EFAULT;
     }
 
     /* Install 4k mapping first */
-    need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn, 
+    need_flush = set_iommu_pte_present(pt_mfn[1], dfn, mfn,
                                        IOMMU_PAGING_MODE_LEVEL_1,
                                        !!(flags & IOMMUF_writable),
                                        !!(flags & IOMMUF_readable));
@@ -690,7 +690,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     /* 4K mapping for PV guests never changes, 
      * no need to flush if we trust non-present bits */
     if ( is_hvm_domain(d) )
-        amd_iommu_flush_pages(d, gfn, 0);
+        amd_iommu_flush_pages(d, dfn, 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
           merge_level <= hd->arch.paging_mode; merge_level++ )
@@ -698,15 +698,15 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         if ( pt_mfn[merge_level] == 0 )
             break;
         if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
-                                     gfn, mfn, merge_level) )
+                                     dfn, mfn, merge_level) )
             break;
 
-        if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
+        if ( iommu_merge_pages(d, pt_mfn[merge_level], dfn,
                                flags, merge_level) )
         {
             spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
-                            "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
+                            "dfn = %lx mfn = %lx\n", merge_level, dfn, mfn);
             domain_crash(d);
             return -EFAULT;
         }
@@ -720,7 +720,7 @@ out:
     return 0;
 }
 
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
+int amd_iommu_unmap_page(struct domain *d, unsigned long dfn)
 {
     unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
@@ -739,34 +739,34 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     }
 
     /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
+     * we might need a deeper page table for wider dfn now */
     if ( is_hvm_domain(d) )
     {
-        int rc = update_paging_mode(d, gfn);
+        int rc = update_paging_mode(d, dfn);
 
         if ( rc )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
             if ( rc != -EADDRNOTAVAIL )
                 domain_crash(d);
             return rc;
         }
     }
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
         domain_crash(d);
         return -EFAULT;
     }
 
     /* mark PTE as 'page not present' */
-    clear_iommu_pte_present(pt_mfn[1], gfn);
+    clear_iommu_pte_present(pt_mfn[1], dfn);
     spin_unlock(&hd->arch.mapping_lock);
 
-    amd_iommu_flush_pages(d, gfn, 0);
+    amd_iommu_flush_pages(d, dfn, 0);
 
     return 0;
 }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 330f9ce386..bcc47eb96c 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -585,7 +585,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
                 maddr_to_page(next_table_maddr), next_level,
                 address, indent + 1);
         else
-            printk("%*sgfn: %08lx  mfn: %08lx\n",
+            printk("%*sdfn: %08lx  mfn: %08lx\n",
                    indent, "",
                    (unsigned long)PFN_DOWN(address),
                    (unsigned long)PFN_DOWN(next_table_maddr));
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 43ece42a50..1ff09bbb02 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2551,7 +2551,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check arm_smmu_iotlb_flush(struct domain *d,
-                                             unsigned long gfn,
+                                             unsigned long dfn,
                                              unsigned int page_count)
 {
 	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
@@ -2746,7 +2746,7 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
+static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
 			unsigned long mfn, unsigned int flags)
 {
 	p2m_type_t t;
@@ -2757,10 +2757,10 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
 	 * p2m to allow DMA request to work.
 	 * This is only valid when the domain is directed mapped. Hence this
-	 * function should only be used by gnttab code with gfn == mfn.
+	 * function should only be used by gnttab code with gfn == mfn == dfn.
 	 */
 	BUG_ON(!is_domain_direct_mapped(d));
-	BUG_ON(mfn != gfn);
+	BUG_ON(mfn != dfn);
 
 	/* We only support readable and writable flags */
 	if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
@@ -2772,19 +2772,19 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
+	return guest_physmap_add_entry(d, _gfn(dfn), _mfn(dfn), 0, t);
 }
 
-static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long dfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
-	 * is direct mapped
+	 * is direct mapped (i.e. gfn == mfn == dfn).
 	 */
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
+	return guest_physmap_remove_page(d, _gfn(dfn), _mfn(dfn), 0);
 }
 
 static const struct iommu_ops arm_smmu_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ee3f523fdf..ca8262be7c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -215,7 +215,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = mfn_x(page_to_mfn(page));
-            unsigned long gfn = mfn_to_gmfn(d, mfn);
+            unsigned long dfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
             int ret;
 
@@ -224,7 +224,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
 
-            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            ret = hd->platform_ops->map_page(d, dfn, mfn, mapping);
             if ( !rc )
                 rc = ret;
 
@@ -295,7 +295,7 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
+int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -304,13 +304,13 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, gfn, mfn, rc);
+                   "d%d: IOMMU mapping dfn %#lx to mfn %#lx failed: %d\n",
+                   d->domain_id, dfn, mfn, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -319,7 +319,7 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long gfn)
+int iommu_unmap_page(struct domain *d, unsigned long dfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -327,13 +327,13 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_page(d, dfn);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping gfn %#lx failed: %d\n",
-                   d->domain_id, gfn, rc);
+                   "d%d: IOMMU unmapping dfn %#lx failed: %d\n",
+                   d->domain_id, dfn, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -359,7 +359,7 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+int iommu_iotlb_flush(struct domain *d, unsigned long dfn,
                       unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -368,13 +368,13 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, gfn, page_count);
+    rc = hd->platform_ops->iotlb_flush(d, dfn, page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, gfn %#lx, page count %u\n",
-                   d->domain_id, rc, gfn, page_count);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %#lx, page count %u\n",
+                   d->domain_id, rc, dfn, page_count);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index adc70f205a..88720d4406 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -585,7 +585,7 @@ static int __must_check iommu_flush_all(void)
 }
 
 static int __must_check iommu_flush_iotlb(struct domain *d,
-                                          unsigned long gfn,
+                                          unsigned long dfn,
                                           bool_t dma_old_pte_present,
                                           unsigned int page_count)
 {
@@ -612,12 +612,12 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( iommu_domid == -1 )
             continue;
 
-        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
+        if ( page_count != 1 || dfn == dfn_x(INVALID_DFN) )
             rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
                                        0, flush_dev_iotlb);
         else
             rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       __dfn_to_daddr(dfn),
                                        PAGE_ORDER_4K,
                                        !dma_old_pte_present,
                                        flush_dev_iotlb);
@@ -633,15 +633,15 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
-                                                unsigned long gfn,
+                                                unsigned long dfn,
                                                 unsigned int page_count)
 {
-    return iommu_flush_iotlb(d, gfn, 1, page_count);
+    return iommu_flush_iotlb(d, dfn, 1, page_count);
 }
 
 static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
+    return iommu_flush_iotlb(d, dfn_x(INVALID_DFN), 0, 0);
 }
 
 /* clear one page's page table */
@@ -1768,7 +1768,7 @@ static void iommu_domain_teardown(struct domain *d)
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d,
-                                             unsigned long gfn,
+                                             unsigned long dfn,
                                              unsigned long mfn,
                                              unsigned int flags)
 {
@@ -1787,14 +1787,14 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
+    pg_maddr = addr_to_dma_page_maddr(d, __dfn_to_daddr(dfn), 1);
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (gfn & LEVEL_MASK);
+    pte = page + (dfn & LEVEL_MASK);
     old = *pte;
     dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
     dma_set_pte_prot(new,
@@ -1818,22 +1818,22 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
+        rc = iommu_flush_iotlb(d, dfn, dma_pte_present(old), 1);
 
     return rc;
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
-                                               unsigned long gfn)
+                                               unsigned long dfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
+    return dma_pte_clear_one(d, __dfn_to_daddr(dfn));
 }
 
-int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
                     int order, int present)
 {
     struct acpi_drhd_unit *drhd;
@@ -1857,7 +1857,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
             continue;
 
         rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                   (paddr_t)gfn << PAGE_SHIFT_4K,
+                                   __dfn_to_daddr(dfn),
                                    order, !present, flush_dev_iotlb);
         if ( rc > 0 )
         {
@@ -2627,7 +2627,7 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
             vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
                                      address, indent + 1);
         else
-            printk("%*sgfn: %08lx mfn: %08lx\n",
+            printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
                    (unsigned long)(address >> PAGE_SHIFT_4K),
                    (unsigned long)(dma_pte_addr(*pte) >> PAGE_SHIFT_4K));
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 14ad0489a6..0ed4a9e86d 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -30,6 +30,18 @@ struct g2m_ioport {
     unsigned int np;
 };
 
+#define IOMMU_PAGE_SHIFT 12
+#define IOMMU_PAGE_SIZE  (1 << IOMMU_PAGE_SHIFT)
+#define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
+
+typedef uint64_t daddr_t;
+
+#define __dfn_to_daddr(dfn) ((daddr_t)(dfn) << IOMMU_PAGE_SHIFT)
+#define __daddr_to_dfn(daddr) ((daddr) >> IOMMU_PAGE_SHIFT)
+
+#define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
+#define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
+
 struct arch_iommu
 {
     u64 pgd_maddr;                 /* io page directory machine address */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 57c4e81ec6..290e0aada6 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -23,11 +23,25 @@
 #include <xen/page-defs.h>
 #include <xen/spinlock.h>
 #include <xen/pci.h>
+#include <xen/typesafe.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
 #include <asm/device.h>
 #include <asm/iommu.h>
 
+TYPE_SAFE(uint64_t, dfn);
+#define PRI_dfn     PRIx64
+#define INVALID_DFN _dfn(~0ULL)
+
+#ifndef dfn_t
+#define dfn_t /* Grep fodder: dfn_t, _dfn() and dfn_x() are defined above */
+#define _dfn
+#define dfn_x
+#undef dfn_t
+#undef _dfn
+#undef dfn_x
+#endif
+
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
 extern bool_t iommu_workaround_bios_bug, iommu_igfx;
@@ -64,9 +78,9 @@ void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
+int __must_check iommu_map_page(struct domain *d, unsigned long dfn,
                                 unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check iommu_unmap_page(struct domain *d, unsigned long dfn);
 
 enum iommu_feature
 {
@@ -154,9 +168,9 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
+    int __must_check (*map_page)(struct domain *d, unsigned long dfn,
                                  unsigned long mfn, unsigned int flags);
-    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*unmap_page)(struct domain *d, unsigned long dfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
@@ -167,7 +181,7 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long dfn,
                                     unsigned int page_count);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
@@ -189,7 +203,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long dfn,
                                    unsigned int page_count);
 int __must_check iommu_iotlb_flush_all(struct domain *d);
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index b3d46ab56b..720aa49b05 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -26,6 +26,11 @@
  *   A linear idea of a guest physical address space. For an auto-translated
  *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
  *
+ * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
+ *   The linear frame numbers of device DMA address space. All initiators for
+ *   (i.e. all devices assigned to) a guest share a single DMA address space
+ *   and, by default, Xen will ensure dfn == pfn.
+ *
  * WARNING: Some of these terms have changed over time while others have been
  * used inconsistently, meaning that a lot of existing code does not match the
  * definitions above.  New code should use these terms as described here, and
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v9 2/7] iommu: make use of type-safe DFN and MFN in exported functions
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 1/7] iommu: introduce the concept of DFN Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 3/7] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Konrad Rzeszutek Wilk, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
	Paul Durrant, Jun Nakajima

This patch modifies the declaration of the entry points to the IOMMU
sub-system to use dfn_t and mfn_t in place of unsigned long. A subsequent
patch will similarly modify the methods in the iommu_ops structure.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monne <roger.pau@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>

v9:
 - Re-base.

v7:
 - Re-base and re-name BFN -> DFN.
 - Added Jan's A-b since re-naming was purely mechanical.

v6:
 - Re-base.

v3:
 - Removed most of the uses of an intermediate 'frame' variable.

v2:
 - Addressed comments from Jan.
 - Use intermediate 'frame' variable to avoid directly encapsulating
   mfn or gfn values as dfns.
---
 xen/arch/arm/p2m.c                    |  3 ++-
 xen/arch/x86/mm.c                     | 10 ++++----
 xen/arch/x86/mm/p2m-ept.c             | 10 +++++---
 xen/arch/x86/mm/p2m-pt.c              | 45 ++++++++++++++++++++---------------
 xen/arch/x86/mm/p2m.c                 | 16 ++++++++-----
 xen/arch/x86/x86_64/mm.c              |  5 ++--
 xen/common/grant_table.c              | 12 +++++-----
 xen/common/memory.c                   |  4 ++--
 xen/drivers/passthrough/iommu.c       | 25 ++++++++++---------
 xen/drivers/passthrough/vtd/x86/vtd.c |  1 -
 xen/drivers/passthrough/x86/iommu.c   |  3 ++-
 xen/include/xen/iommu.h               | 14 +++++++----
 12 files changed, 85 insertions(+), 63 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1364e5960a..0db12b01f1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -957,7 +957,8 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 
     if ( need_iommu(p2m->domain) &&
          (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
-        rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
+        rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
+                               1UL << page_order);
     else
         rc = 0;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d37eea53d1..f308c97ec9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2799,14 +2799,14 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         struct domain *d = page_get_owner(page);
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
-            gfn_t gfn = _gfn(mfn_to_gmfn(d, mfn_x(page_to_mfn(page))));
+            mfn_t mfn = page_to_mfn(page);
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_ret = iommu_unmap_page(d, gfn_x(gfn));
+                iommu_ret = iommu_unmap_page(d, _dfn(mfn_x(mfn)));
             else if ( type == PGT_writable_page )
-                iommu_ret = iommu_map_page(d, gfn_x(gfn),
-                                           mfn_x(page_to_mfn(page)),
-                                           IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_page(d, _dfn(mfn_x(mfn)), mfn,
+                                           IOMMUF_readable |
+                                           IOMMUF_writable);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ff4f14ae4..9a3a90e9e6 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -870,15 +870,19 @@ out:
             rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
         else
         {
+            dfn_t dfn = _dfn(gfn);
+
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
                 {
-                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                    rc = iommu_map_page(d, dfn_add(dfn, i),
+                                        mfn_add(mfn, i), iommu_flags);
                     if ( unlikely(rc) )
                     {
                         while ( i-- )
                             /* If statement to satisfy __must_check. */
-                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                            if ( iommu_unmap_page(p2m->domain,
+                                                  dfn_add(dfn, i)) )
                                 continue;
 
                         break;
@@ -887,7 +891,7 @@ out:
             else
                 for ( i = 0; i < (1 << order); i++ )
                 {
-                    ret = iommu_unmap_page(d, gfn + i);
+                    ret = iommu_unmap_page(d, dfn_add(dfn, i));
                     if ( !rc )
                         rc = ret;
                 }
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index b8c5d2ed26..881e9e87b8 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -687,29 +687,36 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             if ( iommu_old_flags )
                 amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
-        else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                                    iommu_pte_flags);
-                if ( unlikely(rc) )
+        else
+        {
+            dfn_t dfn = _dfn(gfn);
+
+            if ( iommu_pte_flags )
+                for ( i = 0; i < (1UL << page_order); i++ )
                 {
-                    while ( i-- )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                            continue;
+                    rc = iommu_map_page(p2m->domain, dfn_add(dfn, i),
+                                        mfn_add(mfn, i), iommu_pte_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            /* If statement to satisfy __must_check. */
+                            if ( iommu_unmap_page(p2m->domain,
+                                                  dfn_add(dfn, i)) )
+                                continue;
 
-                    break;
+                        break;
+                    }
                 }
-            }
-        else
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, gfn + i);
+            else
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    int ret = iommu_unmap_page(p2m->domain,
+                                               dfn_add(dfn, i));
 
-                if ( !rc )
-                    rc = ret;
-            }
+                    if ( !rc )
+                        rc = ret;
+                }
+        }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6020553c17..801b629b95 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -715,9 +715,11 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
 
         if ( need_iommu(p2m->domain) )
         {
+            dfn_t dfn = _dfn(mfn);
+
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                int ret = iommu_unmap_page(p2m->domain, mfn + i);
+                int ret = iommu_unmap_page(p2m->domain, dfn_add(dfn, i));
 
                 if ( !rc )
                     rc = ret;
@@ -774,16 +776,17 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     {
         if ( need_iommu(d) && t == p2m_ram_rw )
         {
+            dfn_t dfn = _dfn(mfn_x(mfn));
+
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                rc = iommu_map_page(d, mfn_x(mfn_add(mfn, i)),
-                                    mfn_x(mfn_add(mfn, i)),
+                rc = iommu_map_page(d, dfn_add(dfn, i), mfn_add(mfn, i),
                                     IOMMUF_readable|IOMMUF_writable);
                 if ( rc != 0 )
                 {
                     while ( i-- > 0 )
                         /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(d, mfn_x(mfn_add(mfn, i))) )
+                        if ( iommu_unmap_page(d, dfn_add(dfn, i)) )
                             continue;
 
                     return rc;
@@ -1158,7 +1161,8 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, gfn_l, gfn_l, IOMMUF_readable|IOMMUF_writable);
+        return iommu_map_page(d, _dfn(gfn_l), _mfn(gfn_l),
+                              IOMMUF_readable | IOMMUF_writable);
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1248,7 +1252,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, gfn_l);
+        return iommu_unmap_page(d, _dfn(gfn_l));
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 90a648c956..d1fce57432 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1430,13 +1430,14 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
          !need_iommu(hardware_domain) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_map_page(hardware_domain, i, i, IOMMUF_readable|IOMMUF_writable) )
+            if ( iommu_map_page(hardware_domain, _dfn(i), _mfn(i),
+                                IOMMUF_readable | IOMMUF_writable) )
                 break;
         if ( i != epfn )
         {
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_unmap_page(hardware_domain, i) )
+                if ( iommu_unmap_page(hardware_domain, _dfn(i)) )
                     continue;
 
             goto destroy_m2p;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 73d3ed3701..2d01cad176 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1143,13 +1143,13 @@ map_grant_ref(
              !(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) )
         {
             if ( !(kind & MAPKIND_WRITE) )
-                err = iommu_map_page(ld, mfn_x(mfn), mfn_x(mfn),
-                                     IOMMUF_readable|IOMMUF_writable);
+                err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
+                                     IOMMUF_readable | IOMMUF_writable);
         }
         else if ( act_pin && !old_pin )
         {
             if ( !kind )
-                err = iommu_map_page(ld, mfn_x(mfn), mfn_x(mfn),
+                err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
                                      IOMMUF_readable);
         }
         if ( err )
@@ -1398,10 +1398,10 @@ unmap_common(
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_unmap_page(ld, mfn_x(op->mfn));
+            err = iommu_unmap_page(ld, _dfn(mfn_x(op->mfn)));
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map_page(ld, mfn_x(op->mfn),
-                                 mfn_x(op->mfn), IOMMUF_readable);
+            err = iommu_map_page(ld, _dfn(mfn_x(op->mfn)), op->mfn,
+                                 IOMMUF_readable);
 
         double_gt_unlock(lgt, rgt);
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 996f94b103..89b211d6aa 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -834,11 +834,11 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
-        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
-        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ca8262be7c..9390b1366f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -295,7 +295,7 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
+int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -304,13 +304,13 @@ int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, dfn_x(dfn), mfn_x(mfn), flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping dfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, dfn, mfn, rc);
+                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -319,7 +319,7 @@ int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long dfn)
+int iommu_unmap_page(struct domain *d, dfn_t dfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -327,13 +327,13 @@ int iommu_unmap_page(struct domain *d, unsigned long dfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, dfn);
+    rc = hd->platform_ops->unmap_page(d, dfn_x(dfn));
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping dfn %#lx failed: %d\n",
-                   d->domain_id, dfn, rc);
+                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -359,8 +359,7 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-int iommu_iotlb_flush(struct domain *d, unsigned long dfn,
-                      unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -368,13 +367,13 @@ int iommu_iotlb_flush(struct domain *d, unsigned long dfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, dfn, page_count);
+    rc = hd->platform_ops->iotlb_flush(d, dfn_x(dfn), page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %#lx, page count %u\n",
-                   d->domain_id, rc, dfn, page_count);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u\n",
+                   d->domain_id, rc, dfn_x(dfn), page_count);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index 48e16f956b..ff456e1e70 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -60,4 +60,3 @@ void flush_all_cache()
 {
     wbinvd();
 }
-
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 47a078272a..d6cbeae5c9 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -227,7 +227,8 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         if ( paging_mode_translate(d) )
             rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
         else
-            rc = iommu_map_page(d, pfn, pfn, IOMMUF_readable|IOMMUF_writable);
+            rc = iommu_map_page(d, _dfn(pfn), _mfn(pfn),
+                                IOMMUF_readable | IOMMUF_writable);
         if ( rc )
             printk(XENLOG_WARNING " d%d: IOMMU mapping failed: %d\n",
                    d->domain_id, rc);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 290e0aada6..f9d86fc816 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -24,6 +24,7 @@
 #include <xen/spinlock.h>
 #include <xen/pci.h>
 #include <xen/typesafe.h>
+#include <xen/mm.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
 #include <asm/device.h>
@@ -42,6 +43,11 @@ TYPE_SAFE(uint64_t, dfn);
 #undef dfn_x
 #endif
 
+static inline dfn_t dfn_add(dfn_t dfn, unsigned long i)
+{
+    return _dfn(dfn_x(dfn) + i);
+}
+
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
 extern bool_t iommu_workaround_bios_bug, iommu_igfx;
@@ -78,9 +84,9 @@ void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long dfn,
-                                unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long dfn);
+int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
+                                mfn_t mfn, unsigned int flags);
+int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
 
 enum iommu_feature
 {
@@ -203,7 +209,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-int __must_check iommu_iotlb_flush(struct domain *d, unsigned long dfn,
+int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned int page_count);
 int __must_check iommu_iotlb_flush_all(struct domain *d);
 
-- 
2.11.0



* [PATCH v9 3/7] iommu: push use of type-safe DFN and MFN into iommu_ops
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 1/7] iommu: introduce the concept of DFN Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 2/7] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 4/7] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, George Dunlap, Suravee Suthikulpanit

This patch modifies the methods in struct iommu_ops to use type-safe DFN
and MFN. This follows on from the prior patch that modified the functions
exported in xen/iommu.h.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monne <roger.pau@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v9:
 - Re-base.

v7:
 - Re-base and re-name BFN -> DFN.
 - Added Jan's A-b since re-naming was purely mechanical.

v6:
 - Re-base.

v3:
 - Remove some use of intermediate 'frame' variables.

v2:
 - Addressed comments from Jan.
 - Extend use of intermediate 'frame' variable to avoid directly
   encapsulating gfn values as bfns (now dfns) or vice versa.
---
 xen/drivers/passthrough/amd/iommu_map.c       | 46 ++++++++++++++++-----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  2 +-
 xen/drivers/passthrough/arm/smmu.c            | 16 +++++-----
 xen/drivers/passthrough/iommu.c               |  9 +++---
 xen/drivers/passthrough/vtd/iommu.c           | 26 +++++++--------
 xen/drivers/passthrough/x86/iommu.c           |  2 +-
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 ++---
 xen/include/xen/iommu.h                       | 13 +++++---
 8 files changed, 67 insertions(+), 55 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 61ade71850..c89c54fdb6 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -631,7 +631,7 @@ static int update_paging_mode(struct domain *d, unsigned long dfn)
     return 0;
 }
 
-int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
+int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
                        unsigned int flags)
 {
     bool_t need_flush = 0;
@@ -651,7 +651,8 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %lx\n", dfn);
+        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %"PRI_dfn"\n",
+                        dfn_x(dfn));
         domain_crash(d);
         return rc;
     }
@@ -660,25 +661,27 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
      * we might need a deeper page table for wider dfn now */
     if ( is_hvm_domain(d) )
     {
-        if ( update_paging_mode(d, dfn) )
+        if ( update_paging_mode(d, dfn_x(dfn)) )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
+                            dfn_x(dfn));
             domain_crash(d);
             return -EFAULT;
         }
     }
 
-    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
+                        dfn_x(dfn));
         domain_crash(d);
         return -EFAULT;
     }
 
     /* Install 4k mapping first */
-    need_flush = set_iommu_pte_present(pt_mfn[1], dfn, mfn,
+    need_flush = set_iommu_pte_present(pt_mfn[1], dfn_x(dfn), mfn_x(mfn),
                                        IOMMU_PAGING_MODE_LEVEL_1,
                                        !!(flags & IOMMUF_writable),
                                        !!(flags & IOMMUF_readable));
@@ -690,7 +693,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     /* 4K mapping for PV guests never changes, 
      * no need to flush if we trust non-present bits */
     if ( is_hvm_domain(d) )
-        amd_iommu_flush_pages(d, dfn, 0);
+        amd_iommu_flush_pages(d, dfn_x(dfn), 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
           merge_level <= hd->arch.paging_mode; merge_level++ )
@@ -698,15 +701,16 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
         if ( pt_mfn[merge_level] == 0 )
             break;
         if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
-                                     dfn, mfn, merge_level) )
+                                     dfn_x(dfn), mfn_x(mfn), merge_level) )
             break;
 
-        if ( iommu_merge_pages(d, pt_mfn[merge_level], dfn,
+        if ( iommu_merge_pages(d, pt_mfn[merge_level], dfn_x(dfn),
                                flags, merge_level) )
         {
             spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
-                            "dfn = %lx mfn = %lx\n", merge_level, dfn, mfn);
+                            "dfn = %"PRI_dfn" mfn = %"PRI_mfn"\n",
+                            merge_level, dfn_x(dfn), mfn_x(mfn));
             domain_crash(d);
             return -EFAULT;
         }
@@ -720,7 +724,7 @@ out:
     return 0;
 }
 
-int amd_iommu_unmap_page(struct domain *d, unsigned long dfn)
+int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
 {
     unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
@@ -742,31 +746,33 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long dfn)
      * we might need a deeper page table for larger dfn now */
     if ( is_hvm_domain(d) )
     {
-        int rc = update_paging_mode(d, dfn);
+        int rc = update_paging_mode(d, dfn_x(dfn));
 
         if ( rc )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
+                            dfn_x(dfn));
             if ( rc != -EADDRNOTAVAIL )
                 domain_crash(d);
             return rc;
         }
     }
 
-    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
+                        dfn_x(dfn));
         domain_crash(d);
         return -EFAULT;
     }
 
     /* mark PTE as 'page not present' */
-    clear_iommu_pte_present(pt_mfn[1], dfn);
+    clear_iommu_pte_present(pt_mfn[1], dfn_x(dfn));
     spin_unlock(&hd->arch.mapping_lock);
 
-    amd_iommu_flush_pages(d, dfn, 0);
+    amd_iommu_flush_pages(d, dfn_x(dfn), 0);
 
     return 0;
 }
@@ -787,7 +793,9 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
     gfn = phys_addr >> PAGE_SHIFT;
     for ( i = 0; i < npages; i++ )
     {
-        rt = amd_iommu_map_page(domain, gfn +i, gfn +i, flags);
+        unsigned long frame = gfn + i;
+
+        rt = amd_iommu_map_page(domain, _dfn(frame), _mfn(frame), flags);
         if ( rt != 0 )
             return rt;
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index bcc47eb96c..2f38752ebe 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -278,7 +278,7 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              */
             if ( mfn_valid(_mfn(pfn)) )
             {
-                int ret = amd_iommu_map_page(d, pfn, pfn,
+                int ret = amd_iommu_map_page(d, _dfn(pfn), _mfn(pfn),
                                              IOMMUF_readable|IOMMUF_writable);
 
                 if ( !rc )
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 1ff09bbb02..d66601b62e 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2550,8 +2550,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 	return 0;
 }
 
-static int __must_check arm_smmu_iotlb_flush(struct domain *d,
-                                             unsigned long dfn,
+static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
                                              unsigned int page_count)
 {
 	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
@@ -2746,8 +2745,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
-			unsigned long mfn, unsigned int flags)
+static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn,
+					  mfn_t mfn, unsigned int flags)
 {
 	p2m_type_t t;
 
@@ -2760,7 +2759,7 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
 	 * function should only be used by gnttab code with gfn == mfn == dfn.
 	 */
 	BUG_ON(!is_domain_direct_mapped(d));
-	BUG_ON(mfn != dfn);
+	BUG_ON(mfn_x(mfn) != dfn_x(dfn));
 
 	/* We only support readable and writable flags */
 	if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
@@ -2772,10 +2771,11 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, _gfn(dfn), _mfn(dfn), 0, t);
+	return guest_physmap_add_entry(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)),
+				       0, t);
 }
 
-static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long dfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, dfn_t dfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
@@ -2784,7 +2784,7 @@ static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long dfn)
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	return guest_physmap_remove_page(d, _gfn(dfn), _mfn(dfn), 0);
+	return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0);
 }
 
 static const struct iommu_ops arm_smmu_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9390b1366f..ac32a7022b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -224,7 +224,8 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
 
-            ret = hd->platform_ops->map_page(d, dfn, mfn, mapping);
+            ret = hd->platform_ops->map_page(d, _dfn(dfn), _mfn(mfn),
+                                             mapping);
             if ( !rc )
                 rc = ret;
 
@@ -304,7 +305,7 @@ int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, dfn_x(dfn), mfn_x(mfn), flags);
+    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
@@ -327,7 +328,7 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, dfn_x(dfn));
+    rc = hd->platform_ops->unmap_page(d, dfn);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
@@ -367,7 +368,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count)
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, dfn_x(dfn), page_count);
+    rc = hd->platform_ops->iotlb_flush(d, dfn, page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 88720d4406..63eabb3c1b 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -584,8 +584,7 @@ static int __must_check iommu_flush_all(void)
     return rc;
 }
 
-static int __must_check iommu_flush_iotlb(struct domain *d,
-                                          unsigned long dfn,
+static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
                                           bool_t dma_old_pte_present,
                                           unsigned int page_count)
 {
@@ -612,12 +611,12 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( iommu_domid == -1 )
             continue;
 
-        if ( page_count != 1 || dfn == dfn_x(INVALID_DFN) )
+        if ( page_count != 1 || dfn_eq(dfn, INVALID_DFN) )
             rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
                                        0, flush_dev_iotlb);
         else
             rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                       __dfn_to_daddr(dfn),
+                                       dfn_to_daddr(dfn),
                                        PAGE_ORDER_4K,
                                        !dma_old_pte_present,
                                        flush_dev_iotlb);
@@ -633,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
-                                                unsigned long dfn,
+                                                dfn_t dfn,
                                                 unsigned int page_count)
 {
     return iommu_flush_iotlb(d, dfn, 1, page_count);
@@ -641,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
 
 static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    return iommu_flush_iotlb(d, dfn_x(INVALID_DFN), 0, 0);
+    return iommu_flush_iotlb(d, INVALID_DFN, 0, 0);
 }
 
 /* clear one page's page table */
@@ -676,7 +675,7 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = iommu_flush_iotlb_pages(domain, addr >> PAGE_SHIFT_4K, 1);
+        rc = iommu_flush_iotlb_pages(domain, daddr_to_dfn(addr), 1);
 
     unmap_vtd_domain_page(page);
 
@@ -1768,8 +1767,7 @@ static void iommu_domain_teardown(struct domain *d)
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d,
-                                             unsigned long dfn,
-                                             unsigned long mfn,
+                                             dfn_t dfn, mfn_t mfn,
                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1787,16 +1785,16 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    pg_maddr = addr_to_dma_page_maddr(d, __dfn_to_daddr(dfn), 1);
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (dfn & LEVEL_MASK);
+    pte = page + (dfn_x(dfn) & LEVEL_MASK);
     old = *pte;
-    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
+    dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
                      ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
@@ -1824,13 +1822,13 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
-                                               unsigned long dfn)
+                                               dfn_t dfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    return dma_pte_clear_one(d, __dfn_to_daddr(dfn));
+    return dma_pte_clear_one(d, dfn_to_daddr(dfn));
 }
 
 int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index d6cbeae5c9..70e79c20f7 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -68,7 +68,7 @@ int arch_iommu_populate_page_table(struct domain *d)
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
-                rc = hd->platform_ops->map_page(d, gfn, mfn,
+                rc = hd->platform_ops->map_page(d, _dfn(gfn), _mfn(mfn),
                                                 IOMMUF_readable |
                                                 IOMMUF_writable);
             }
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 99bc21c7b3..3083d625bd 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -52,9 +52,9 @@ int amd_iommu_init(void);
 int amd_iommu_update_ivrs_mapping_acpi(void);
 
 /* mapping functions */
-int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
-                                    unsigned long mfn, unsigned int flags);
-int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_map_page(struct domain *d, dfn_t dfn,
+                                    mfn_t mfn, unsigned int flags);
+int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
@@ -77,7 +77,7 @@ void iommu_dte_set_guest_cr3(u32 *dte, u16 dom_id, u64 gcr3,
 
 /* send cmd to iommu */
 void amd_iommu_flush_all_pages(struct domain *d);
-void amd_iommu_flush_pages(struct domain *d, unsigned long gfn,
+void amd_iommu_flush_pages(struct domain *d, unsigned long dfn,
                            unsigned int order);
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
                            uint64_t gaddr, unsigned int order);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index f9d86fc816..7313957c81 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -48,6 +48,11 @@ static inline dfn_t dfn_add(dfn_t dfn, unsigned long i)
     return _dfn(dfn_x(dfn) + i);
 }
 
+static inline bool_t dfn_eq(dfn_t x, dfn_t y)
+{
+    return dfn_x(x) == dfn_x(y);
+}
+
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
 extern bool_t iommu_workaround_bios_bug, iommu_igfx;
@@ -174,9 +179,9 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int __must_check (*map_page)(struct domain *d, unsigned long dfn,
-                                 unsigned long mfn, unsigned int flags);
-    int __must_check (*unmap_page)(struct domain *d, unsigned long dfn);
+    int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
+                                 unsigned int flags);
+    int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
@@ -187,7 +192,7 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    int __must_check (*iotlb_flush)(struct domain *d, unsigned long dfn,
+    int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned int page_count);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v9 4/7] iommu: don't domain_crash() inside iommu_map/unmap_page()
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (2 preceding siblings ...)
  2018-09-13 15:21 ` [PATCH v9 3/7] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jun Nakajima

This patch removes the implicit domain_crash() from iommu_map_page(),
iommu_unmap_page() and iommu_iotlb_flush(), turning them into
straightforward wrappers that check the existence of the relevant
iommu_op and call through to it. This makes them usable by PV IOMMU code
to be delivered in future patches.

This patch also adds a helper macro, domu_crash(), that will only invoke
domain_crash() if the domain is not the hardware domain, and modifies
callers of iommu_map_page(), iommu_unmap_page() and iommu_iotlb_flush()
to use it should an operation fail.

NOTE: This patch includes one bit of clean-up in set_identity_p2m_entry()
      replacing use of p2m->domain with the domain pointer passed into the
      function.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monne <roger.pau@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>

v7:
 - Re-base and re-name BFN -> DFN.
 - Move domu_crash() outside double locked region in grant_table.c.
 - Added Jan's R-b.

v6:
 - Introduce domu_crash() (idea suggested by Kevin, name suggested by Jan)
   to crash non-hardware domains.
 - Dropped Wei's and George's R-b because of change.

v2:
 - New in v2.
---
 xen/arch/arm/p2m.c                  |  4 ++++
 xen/arch/x86/mm.c                   |  3 +++
 xen/arch/x86/mm/p2m-ept.c           |  3 +++
 xen/arch/x86/mm/p2m-pt.c            |  3 +++
 xen/arch/x86/mm/p2m.c               | 22 ++++++++++++++++++----
 xen/common/grant_table.c            |  4 ++++
 xen/common/memory.c                 |  3 +++
 xen/drivers/passthrough/iommu.c     | 12 ------------
 xen/drivers/passthrough/x86/iommu.c |  4 ++++
 xen/include/xen/sched.h             |  5 +++++
 10 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0db12b01f1..1c79ff7ade 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -957,8 +957,12 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 
     if ( need_iommu(p2m->domain) &&
          (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
+    {
         rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
                                1UL << page_order);
+        if ( unlikely(rc) )
+            domu_crash(p2m->domain);
+    }
     else
         rc = 0;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f308c97ec9..c2f20991bb 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2807,6 +2807,9 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                 iommu_ret = iommu_map_page(d, _dfn(mfn_x(mfn)), mfn,
                                            IOMMUF_readable |
                                            IOMMUF_writable);
+
+            if ( unlikely(iommu_ret) )
+                domu_crash(d);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 9a3a90e9e6..af7674f7e1 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -895,6 +895,9 @@ out:
                     if ( !rc )
                         rc = ret;
                 }
+
+            if ( unlikely(rc) )
+                domu_crash(d);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 881e9e87b8..607046f31b 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -717,6 +717,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
                         rc = ret;
                 }
         }
+
+        if ( unlikely(rc) )
+            domu_crash(p2m->domain);
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 801b629b95..537add65bb 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -724,6 +724,9 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
                 if ( !rc )
                     rc = ret;
             }
+
+            if ( unlikely(rc) )
+                domu_crash(p2m->domain);
         }
 
         return rc;
@@ -789,6 +792,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                         if ( iommu_unmap_page(d, dfn_add(dfn, i)) )
                             continue;
 
+                    domu_crash(d);
                     return rc;
                 }
             }
@@ -1157,12 +1161,17 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
 
-    if ( !paging_mode_translate(p2m->domain) )
+    if ( !paging_mode_translate(d) )
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, _dfn(gfn_l), _mfn(gfn_l),
-                              IOMMUF_readable | IOMMUF_writable);
+
+        ret = iommu_map_page(d, _dfn(gfn_l), _mfn(gfn_l),
+                             IOMMUF_readable | IOMMUF_writable);
+        if ( unlikely(ret) )
+            domu_crash(d);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1252,7 +1261,12 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, _dfn(gfn_l));
+
+        ret = iommu_unmap_page(d, _dfn(gfn_l));
+        if ( unlikely(ret) )
+            domu_crash(d);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 2d01cad176..0f0b7b1a49 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1155,6 +1155,7 @@ map_grant_ref(
         if ( err )
         {
             double_gt_unlock(lgt, rgt);
+            domu_crash(ld);
             rc = GNTST_general_error;
             goto undo_out;
         }
@@ -1406,7 +1407,10 @@ unmap_common(
         double_gt_unlock(lgt, rgt);
 
         if ( err )
+        {
+            domu_crash(ld);
             rc = GNTST_general_error;
+        }
     }
 
     /* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 89b211d6aa..85611ddae4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -841,6 +841,9 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
+
+        if ( unlikely(rc < 0) )
+            domu_crash(d);
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ac32a7022b..24f2a2a460 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -312,9 +312,6 @@ int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
             printk(XENLOG_ERR
                    "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
                    d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
@@ -335,9 +332,6 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
             printk(XENLOG_ERR
                    "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
                    d->domain_id, dfn_x(dfn), rc);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
@@ -375,9 +369,6 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count)
             printk(XENLOG_ERR
                    "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u\n",
                    d->domain_id, rc, dfn_x(dfn), page_count);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
@@ -398,9 +389,6 @@ int iommu_iotlb_flush_all(struct domain *d)
             printk(XENLOG_ERR
                    "d%d: IOMMU IOTLB flush all failed: %d\n",
                    d->domain_id, rc);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 70e79c20f7..1b47bcba3f 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -107,7 +107,11 @@ int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
+    {
         rc = iommu_iotlb_flush_all(d);
+        if ( unlikely(rc) )
+            domain_crash(d);
+    }
 
     if ( rc && rc != -ERESTART )
         iommu_teardown(d);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 0ba80cb1a8..f2c594d197 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -616,6 +616,11 @@ void __domain_crash(struct domain *d);
     __domain_crash(d);                                                    \
 } while (0)
 
+#define domu_crash(d) do {                \
+    if ( !is_hardware_domain(d) )         \
+        domain_crash(d);                  \
+} while (false)
+
 /*
  * Called from assembly code, with an optional address to help indicate why
  * the crash occured.  If addr is 0, look up address from last extable
-- 
2.11.0



* [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (3 preceding siblings ...)
  2018-09-13 15:21 ` [PATCH v9 4/7] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-18 13:16   ` Jan Beulich
  2018-09-13 15:21 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich

...for some uses of get_page_from_gfn().

There are many occurrences of the following pattern in the code:

    q = <readonly look-up> ? P2M_ALLOC : P2M_UNSHARE;
    page = get_page_from_gfn(d, gfn, &p2mt, q);

    if ( p2m_is_paging(p2mt) )
    {
        if ( page )
            put_page(page);

        p2m_mem_paging_populate(d, gfn);
        return <-EAGAIN or equivalent>;
    }

    if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
    {
        if ( page )
            put_page(page);

        return <-EAGAIN or equivalent>;
    }

    if ( !page )
        return <-EINVAL or equivalent>;

There are some small differences in exactly how the occurrences are
coded, but the desired semantics are the same.

This patch introduces a new common implementation of this code in
check_get_page_from_gfn() and then converts the various open-coded patterns
into calls to this new function.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Roger Pau Monne <roger.pau@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

v9:
 - Defer P2M type checks (beyond shared or paging) to the caller.

v7:
 - Fix ARM build by introducing p2m_is_readonly() predicate.
 - Re-name get_paged_frame() -> check_get_page_from_gfn().
 - Adjust default cases of callers switch-ing on return value.

v3:
 - Addressed comments from George.

v2:
 - New in v2.
---
 xen/arch/x86/hvm/emulate.c | 25 ++++++++++++-----------
 xen/arch/x86/hvm/hvm.c     | 14 +------------
 xen/common/grant_table.c   | 32 ++++++++++++++----------------
 xen/common/memory.c        | 49 ++++++++++++++++++++++++++++++++++++----------
 xen/include/asm-arm/p2m.h  |  4 ++++
 xen/include/asm-x86/p2m.h  |  3 +++
 6 files changed, 74 insertions(+), 53 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index a577685dc6..480840b202 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -356,22 +356,21 @@ static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
     struct domain *curr_d = current->domain;
     p2m_type_t p2mt;
 
-    *page = get_page_from_gfn(curr_d, gmfn, &p2mt, P2M_UNSHARE);
-
-    if ( *page == NULL )
-        return X86EMUL_UNHANDLEABLE;
-
-    if ( p2m_is_paging(p2mt) )
+    switch ( check_get_page_from_gfn(curr_d, _gfn(gmfn), false, &p2mt,
+                                     page) )
     {
-        put_page(*page);
-        p2m_mem_paging_populate(curr_d, gmfn);
-        return X86EMUL_RETRY;
-    }
+    case 0:
+        break;
 
-    if ( p2m_is_shared(p2mt) )
-    {
-        put_page(*page);
+    case -EAGAIN:
         return X86EMUL_RETRY;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+
+    case -EINVAL:
+        return X86EMUL_UNHANDLEABLE;
     }
 
     /* This code should not be reached if the gmfn is not RAM */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index fe6c9c592f..6bb1da07eb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2536,20 +2536,8 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
     struct page_info *page;
     struct domain *d = current->domain;
 
-    page = get_page_from_gfn(d, gfn, &p2mt,
-                             writable ? P2M_UNSHARE : P2M_ALLOC);
-    if ( (p2m_is_shared(p2mt) && writable) || !page )
-    {
-        if ( page )
-            put_page(page);
-        return NULL;
-    }
-    if ( p2m_is_paging(p2mt) )
-    {
-        put_page(page);
-        p2m_mem_paging_populate(d, gfn);
+    if ( check_get_page_from_gfn(d, _gfn(gfn), !writable, &p2mt, &page) )
         return NULL;
-    }
 
     if ( writable )
     {
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 0f0b7b1a49..3604a8812c 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -374,25 +374,23 @@ static int get_paged_frame(unsigned long gfn, mfn_t *mfn,
                            struct page_info **page, bool readonly,
                            struct domain *rd)
 {
-    int rc = GNTST_okay;
     p2m_type_t p2mt;
+    int rc;
 
-    *mfn = INVALID_MFN;
-    *page = get_page_from_gfn(rd, gfn, &p2mt,
-                              readonly ? P2M_ALLOC : P2M_UNSHARE);
-    if ( !*page )
+    rc = check_get_page_from_gfn(rd, _gfn(gfn), readonly, &p2mt, page);
+    switch ( rc )
     {
-#ifdef P2M_SHARED_TYPES
-        if ( p2m_is_shared(p2mt) )
-            return GNTST_eagain;
-#endif
-#ifdef P2M_PAGES_TYPES
-        if ( p2m_is_paging(p2mt) )
-        {
-            p2m_mem_paging_populate(rd, gfn);
-            return GNTST_eagain;
-        }
-#endif
+    case 0:
+        break;
+
+    case -EAGAIN:
+        return GNTST_eagain;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+
+    case -EINVAL:
         return GNTST_bad_page;
     }
 
@@ -406,7 +404,7 @@ static int get_paged_frame(unsigned long gfn, mfn_t *mfn,
 
     *mfn = page_to_mfn(*page);
 
-    return rc;
+    return GNTST_okay;
 }
 
 static inline void
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 85611ddae4..9b592d4f66 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1625,37 +1625,66 @@ void destroy_ring_for_helper(
     }
 }
 
-int prepare_ring_for_helper(
-    struct domain *d, unsigned long gmfn, struct page_info **_page,
-    void **_va)
+/*
+ * Acquire a pointer to struct page_info for a specified domain and GFN,
+ * checking whether the page has been paged out, or needs unsharing.
+ * If the function succeeds then zero is returned, page_p is written
+ * with a pointer to the struct page_info with a reference taken, and
+ * p2mt_p is written with the P2M type of the page. The caller is
+ * responsible for dropping the reference.
+ * If the function fails then an appropriate errno is returned and the
+ * values referenced by page_p and p2mt_p are undefined.
+ */
+int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
+                            p2m_type_t *p2mt_p, struct page_info **page_p)
 {
-    struct page_info *page;
+    p2m_query_t q = readonly ? P2M_ALLOC : P2M_UNSHARE;
     p2m_type_t p2mt;
-    void *va;
+    struct page_info *page;
 
-    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q);
 
 #ifdef CONFIG_HAS_MEM_PAGING
     if ( p2m_is_paging(p2mt) )
     {
         if ( page )
             put_page(page);
-        p2m_mem_paging_populate(d, gmfn);
-        return -ENOENT;
+
+        p2m_mem_paging_populate(d, gfn_x(gfn));
+        return -EAGAIN;
     }
 #endif
 #ifdef CONFIG_HAS_MEM_SHARING
-    if ( p2m_is_shared(p2mt) )
+    if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
     {
         if ( page )
             put_page(page);
-        return -ENOENT;
+
+        return -EAGAIN;
     }
 #endif
 
     if ( !page )
         return -EINVAL;
 
+    *p2mt_p = p2mt;
+    *page_p = page;
+    return 0;
+}
+
+int prepare_ring_for_helper(
+    struct domain *d, unsigned long gmfn, struct page_info **_page,
+    void **_va)
+{
+    p2m_type_t p2mt;
+    struct page_info *page;
+    void *va;
+    int rc;
+
+    rc = check_get_page_from_gfn(d, _gfn(gmfn), false, &p2mt, &page);
+    if ( rc )
+        return (rc == -EAGAIN) ? -ENOENT : rc;
+
     if ( !get_page_type(page, PGT_writable_page) )
     {
         put_page(page);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8823707c17..377b591bc6 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -303,6 +303,10 @@ static inline struct page_info *get_page_from_gfn(
     return page;
 }
 
+int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                         bool readonly, p2m_type_t *p2mt_p,
+                                         struct page_info **page_p);
+
 int get_page_type(struct page_info *page, unsigned long type);
 bool is_iomem_page(mfn_t mfn);
 static inline int get_page_and_type(struct page_info *page,
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index d4b3cfcb6e..8613df42ce 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -492,6 +492,9 @@ static inline struct page_info *get_page_from_gfn(
     return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
 }
 
+int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                         bool readonly, p2m_type_t *p2mt_p,
+                                         struct page_info **page_p);
 
 /* General conversion function from mfn to gfn */
 static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)
-- 
2.11.0
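The error-handling rework above gives check_get_page_from_gfn() a plain errno contract (0 / -EAGAIN / -EINVAL), which get_paged_frame() then folds into grant-table status codes. The translation can be sketched stand-alone; the GNTST_* values below are stand-ins mirroring xen/include/public/grant_table.h and are illustrative only:

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical stand-ins for Xen's grant-table status codes; the real
 * definitions live in xen/include/public/grant_table.h.
 */
#define GNTST_okay       0
#define GNTST_bad_page  (-9)
#define GNTST_eagain   (-12)

/*
 * Sketch of the errno -> GNTST translation used by the reworked
 * get_paged_frame(): -EAGAIN (page paged out or being unshared) maps to
 * the retryable GNTST_eagain; -EINVAL, and anything unexpected, collapses
 * to GNTST_bad_page.
 */
static int gnttab_status_from_errno(int rc)
{
    switch ( rc )
    {
    case 0:
        return GNTST_okay;

    case -EAGAIN:
        return GNTST_eagain;

    default: /* -EINVAL and anything unexpected */
        return GNTST_bad_page;
    }
}
```

This keeps the GNTST_* vocabulary confined to the grant-table code while the generic wrapper speaks errno, which is the point of the split in the patch.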


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (4 preceding siblings ...)
  2018-09-13 15:21 ` [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-13 15:22   ` Paul Durrant
  2018-09-13 15:21 ` [PATCH v9 6/7] vtd: add missing check for shared EPT Paul Durrant
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, Paul Durrant, Wei Liu, George Dunlap, Jan Beulich

This patch adds a new method to the VT-d IOMMU implementation to find the
MFN currently mapped by the specified DFN along with a wrapper function
in generic IOMMU code to call the implementation if it exists.

This patch also cleans up the initializers in intel_iommu_map_page() and
uses array-style dereference there, for consistency. A missing check for
shared EPT is also added to intel_iommu_unmap_page().

NOTE: This patch only adds a Xen-internal interface. This will be used by
      a subsequent patch.
      Another subsequent patch will add similar functionality for AMD
      IOMMUs.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v7:
 - Re-base and re-name BFN -> DFN.
 - Add missing checks for shared EPT and iommu_passthrough.
 - Remove unnecessary initializers and use array-style dereference.
 - Drop Wei's R-b because of code churn.

v3:
 - Addressed comments from George.

v2:
 - Addressed some comments from Jan.
---
 xen/drivers/passthrough/iommu.c     | 11 ++++++++
 xen/drivers/passthrough/vtd/iommu.c | 52 +++++++++++++++++++++++++++++++++++--
 xen/drivers/passthrough/vtd/iommu.h |  3 +++
 xen/include/xen/iommu.h             |  4 +++
 4 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index a16f1a0c66..52e3f500c7 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
     return rc;
 }
 
+int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                      unsigned int *flags)
+{
+    const struct domain_iommu *hd = dom_iommu(d);
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EOPNOTSUPP;
+
+    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
+}
+
 static void iommu_free_pagetables(unsigned long unused)
 {
     do {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 0163bb949b..6622c2dd4c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1770,7 +1770,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
+    struct dma_pte *page, *pte, old, new = {};
     u64 pg_maddr;
     int rc = 0;
 
@@ -1790,9 +1790,11 @@ static int __must_check intel_iommu_map_page(struct domain *d,
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
+
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (dfn_x(dfn) & LEVEL_MASK);
+    pte = &page[dfn_x(dfn) & LEVEL_MASK];
     old = *pte;
+
     dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
@@ -1808,6 +1810,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
         unmap_vtd_domain_page(page);
         return 0;
     }
+
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
@@ -1823,6 +1826,10 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 static int __must_check intel_iommu_unmap_page(struct domain *d,
                                                dfn_t dfn)
 {
+    /* Do nothing if VT-d shares EPT page table */
+    if ( iommu_use_hap_pt(d) )
+        return 0;
+
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
@@ -1830,6 +1837,46 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, dfn_to_daddr(dfn));
 }
 
+static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                   unsigned int *flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    struct dma_pte *page, val;
+    u64 pg_maddr;
+
+    /* Fail if VT-d shares EPT page table */
+    if ( iommu_use_hap_pt(d) )
+        return -ENOENT;
+
+    /* Fail if hardware domain and iommu supports pass thru. */
+    if ( iommu_passthrough && is_hardware_domain(d) )
+        return -ENOENT;
+
+    spin_lock(&hd->arch.mapping_lock);
+
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
+    if ( pg_maddr == 0 )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return -ENOMEM;
+    }
+
+    page = map_vtd_domain_page(pg_maddr);
+    val = page[dfn_x(dfn) & LEVEL_MASK];
+
+    unmap_vtd_domain_page(page);
+    spin_unlock(&hd->arch.mapping_lock);
+
+    if ( !dma_pte_present(val) )
+        return -ENOENT;
+
+    *mfn = maddr_to_mfn(dma_pte_addr(val));
+    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
+    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
+
+    return 0;
+}
+
 int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
                     int order, int present)
 {
@@ -2655,6 +2702,7 @@ const struct iommu_ops intel_iommu_ops = {
     .teardown = iommu_domain_teardown,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
+    .lookup_page = intel_iommu_lookup_page,
     .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 72c1a2e3cd..47bdfcb5ea 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -272,6 +272,9 @@ struct dma_pte {
 #define dma_set_pte_prot(p, prot) do { \
         (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
     } while (0)
+#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
+#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
+#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
 #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 9e0b4e8638..bebddc2db4 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -100,6 +100,8 @@ void iommu_teardown(struct domain *d);
 int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
                                 mfn_t mfn, unsigned int flags);
 int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
+int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                   unsigned int *flags);
 
 enum iommu_feature
 {
@@ -190,6 +192,8 @@ struct iommu_ops {
     int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
                                  unsigned int flags);
     int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
+    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                    unsigned int *flags);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
-- 
2.11.0



* [PATCH v9 6/7] vtd: add missing check for shared EPT...
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (5 preceding siblings ...)
  2018-09-13 15:21 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-14  2:29   ` Tian, Kevin
  2018-09-13 15:21 ` [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops Paul Durrant
  2018-09-13 15:24 ` [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
  8 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, Paul Durrant, Wei Liu, George Dunlap, Jan Beulich

...in intel_iommu_unmap_page().

This patch also includes some non-functional modifications in
intel_iommu_map_page().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v8:
 - New in v8. (Split from the next patch in the series as requested by
   Kevin).
---
 xen/drivers/passthrough/vtd/iommu.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 63eabb3c1b..0eac755ff3 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1771,7 +1771,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
+    struct dma_pte *page, *pte, old, new = {};
     u64 pg_maddr;
     int rc = 0;
 
@@ -1786,14 +1786,16 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     spin_lock(&hd->arch.mapping_lock);
 
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
-    if ( pg_maddr == 0 )
+    if ( !pg_maddr )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
+
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (dfn_x(dfn) & LEVEL_MASK);
+    pte = &page[dfn_x(dfn) & LEVEL_MASK];
     old = *pte;
+
     dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
@@ -1809,6 +1811,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
         unmap_vtd_domain_page(page);
         return 0;
     }
+
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
@@ -1824,6 +1827,10 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 static int __must_check intel_iommu_unmap_page(struct domain *d,
                                                dfn_t dfn)
 {
+    /* Do nothing if VT-d shares EPT page table */
+    if ( iommu_use_hap_pt(d) )
+        return 0;
+
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
-- 
2.11.0
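The two early exits this patch gives intel_iommu_unmap_page() can be shown in isolation. The predicates below are stand-ins for iommu_use_hap_pt(), iommu_hwdom_passthrough and is_hardware_domain(); this is a sketch of the guard ordering, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch: the unmap is a no-op when VT-d shares the EPT page table
 * (there is no separate IOMMU page table to clear), or when the domain
 * is the hardware domain and the IOMMU runs in passthrough mode (the
 * mapping is identity and never materialised per-page).
 */
static bool unmap_is_nop(bool shared_ept, bool hwdom_passthrough,
                         bool is_hwdom)
{
    if ( shared_ept ) /* EPT is the IOMMU page table */
        return true;

    if ( hwdom_passthrough && is_hwdom ) /* identity-mapped hwdom */
        return true;

    return false;
}
```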



* [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (6 preceding siblings ...)
  2018-09-13 15:21 ` [PATCH v9 6/7] vtd: add missing check for shared EPT Paul Durrant
@ 2018-09-13 15:21 ` Paul Durrant
  2018-09-14  2:30   ` Tian, Kevin
  2018-09-18 13:20   ` Jan Beulich
  2018-09-13 15:24 ` [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
  8 siblings, 2 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, Paul Durrant, Wei Liu, George Dunlap, Jan Beulich

This patch adds a new method to the VT-d IOMMU implementation to find the
MFN currently mapped by the specified DFN along with a wrapper function
in generic IOMMU code to call the implementation if it exists.

NOTE: This patch only adds a Xen-internal interface. This will be used by
      a subsequent patch.
      Another subsequent patch will add similar functionality for AMD
      IOMMUs.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v9:
 - Add comment about locking in xen/iommu.h.

v8:
 - Remove clean-up as this is now done by a prior patch.
 - Make intel_iommu_lookup_page() return dfn value if using shared EPT
   or iommu_passthrough is set, as requested by Kevin.

v7:
 - Re-base and re-name BFN -> DFN.
 - Add missing checks for shared EPT and iommu_passthrough.
 - Remove unnecessary initializers and use array-style dereference.
 - Drop Wei's R-b because of code churn.

v3:
 - Addressed comments from George.

v2:
 - Addressed some comments from Jan.
---
 xen/drivers/passthrough/iommu.c     | 11 ++++++++++
 xen/drivers/passthrough/vtd/iommu.c | 44 +++++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.h |  3 +++
 xen/include/xen/iommu.h             | 10 +++++++++
 4 files changed, 68 insertions(+)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 24f2a2a460..853a14438e 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -337,6 +337,17 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
     return rc;
 }
 
+int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                      unsigned int *flags)
+{
+    const struct domain_iommu *hd = dom_iommu(d);
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EOPNOTSUPP;
+
+    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
+}
+
 static void iommu_free_pagetables(unsigned long unused)
 {
     do {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 0eac755ff3..de057d228a 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1838,6 +1838,49 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, dfn_to_daddr(dfn));
 }
 
+static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                   unsigned int *flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    struct dma_pte *page, val;
+    u64 pg_maddr;
+
+    /*
+     * If VT-d shares EPT page table or if the domain is the hardware
+     * domain and iommu_passthrough is set then pass back the dfn.
+     */
+    if ( iommu_use_hap_pt(d) ||
+         (iommu_hwdom_passthrough && is_hardware_domain(d)) )
+    {
+        *mfn = _mfn(dfn_x(dfn));
+        return 0;
+    }
+
+    spin_lock(&hd->arch.mapping_lock);
+
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
+    if ( !pg_maddr )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return -ENOMEM;
+    }
+
+    page = map_vtd_domain_page(pg_maddr);
+    val = page[dfn_x(dfn) & LEVEL_MASK];
+
+    unmap_vtd_domain_page(page);
+    spin_unlock(&hd->arch.mapping_lock);
+
+    if ( !dma_pte_present(val) )
+        return -ENOENT;
+
+    *mfn = maddr_to_mfn(dma_pte_addr(val));
+    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
+    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
+
+    return 0;
+}
+
 int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
                     int order, int present)
 {
@@ -2663,6 +2706,7 @@ const struct iommu_ops intel_iommu_ops = {
     .teardown = iommu_domain_teardown,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
+    .lookup_page = intel_iommu_lookup_page,
     .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 72c1a2e3cd..47bdfcb5ea 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -272,6 +272,9 @@ struct dma_pte {
 #define dma_set_pte_prot(p, prot) do { \
         (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
     } while (0)
+#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
+#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
+#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
 #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 7313957c81..f94b606f6b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -92,6 +92,8 @@ void iommu_teardown(struct domain *d);
 int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
                                 mfn_t mfn, unsigned int flags);
 int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
+int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                   unsigned int *flags);
 
 enum iommu_feature
 {
@@ -179,9 +181,17 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
+
+    /*
+     * This block of operations must be appropriately locked against each
+     * other to have meaningful results.
+     */
     int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
                                  unsigned int flags);
     int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
+    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                    unsigned int *flags);
+
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
-- 
2.11.0
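The dma_pte_prot()/dma_pte_read()/dma_pte_write() macros added to vtd/iommu.h, and the flag derivation at the end of intel_iommu_lookup_page(), can be exercised stand-alone. The bit values below mirror Xen's headers but should be treated as illustrative; pte_to_iommuf() is a hypothetical helper wrapping the two-line flag computation from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* VT-d second-level PTE permission bits, as used by the patch's macros. */
#define DMA_PTE_READ     (1)
#define DMA_PTE_WRITE    (1 << 1)
#define DMA_PTE_PROT     (DMA_PTE_READ | DMA_PTE_WRITE)

/* Stand-ins for the IOMMUF_* flags from xen/include/xen/iommu.h. */
#define IOMMUF_readable  (1u << 0)
#define IOMMUF_writable  (1u << 1)

struct dma_pte { uint64_t val; };

#define dma_pte_prot(p)  ((p).val & DMA_PTE_PROT)
#define dma_pte_read(p)  (dma_pte_prot(p) & DMA_PTE_READ)
#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)

/* Mirrors the *flags computation at the end of intel_iommu_lookup_page(). */
static unsigned int pte_to_iommuf(struct dma_pte pte)
{
    unsigned int flags = dma_pte_read(pte) ? IOMMUF_readable : 0;

    flags |= dma_pte_write(pte) ? IOMMUF_writable : 0;

    return flags;
}
```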



* Re: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-13 15:21 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
@ 2018-09-13 15:22   ` Paul Durrant
  0 siblings, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:22 UTC (permalink / raw)
  To: Paul Durrant, xen-devel; +Cc: Kevin Tian, Wei Liu, George Dunlap, Jan Beulich

This patch is stale. Sorry for the noise.

  Paul

> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: 13 September 2018 16:21
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <Paul.Durrant@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> Kevin Tian <kevin.tian@intel.com>; Jan Beulich <jbeulich@suse.com>; George
> Dunlap <George.Dunlap@citrix.com>
> Subject: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
> [full v7 6/6 patch quoted here; trimmed -- identical to the stale posting above]



* Re: [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up
  2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (7 preceding siblings ...)
  2018-09-13 15:21 ` [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops Paul Durrant
@ 2018-09-13 15:24 ` Paul Durrant
  8 siblings, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-13 15:24 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jan Beulich,
	Konrad Rzeszutek Wilk, Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jun Nakajima, Ian Jackson,
	Suravee Suthikulpanit

This should clearly say 0/7 rather than 0/6. Apologies.

  Paul

> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: 13 September 2018 16:21
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <Paul.Durrant@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Jan Beulich <jbeulich@suse.com>; Julien
> Grall <julien.grall@arm.com>; Jun Nakajima <jun.nakajima@intel.com>; Kevin
> Tian <kevin.tian@intel.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>; Tim (Xen.org)
> <tim@xen.org>; Wei Liu <wei.liu2@citrix.com>
> Subject: [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up
> 
> This series contains pre-requisites and clean-up needed for paravirtual
> IOMMU support.
> 
> I have separated these patches to avoid further delaying their application
> whilst I re-work the implementation of paravirtual IOMMU after review of
> v6 of the series. Several of them already have all necessary acks.
> 
> Paul Durrant (7):
>   iommu: introduce the concept of DFN...
>   iommu: make use of type-safe DFN and MFN in exported functions
>   iommu: push use of type-safe DFN and MFN into iommu_ops
>   iommu: don't domain_crash() inside iommu_map/unmap_page()
>   memory: add check_get_page_from_gfn() as a wrapper...
>   vtd: add missing check for shared EPT...
>   vtd: add lookup_page method to iommu_ops
> 
>  xen/arch/arm/p2m.c                            |  7 ++-
>  xen/arch/x86/hvm/emulate.c                    | 25 ++++----
>  xen/arch/x86/hvm/hvm.c                        | 14 +----
>  xen/arch/x86/mm.c                             | 13 ++--
>  xen/arch/x86/mm/p2m-ept.c                     | 13 +++-
>  xen/arch/x86/mm/p2m-pt.c                      | 48 ++++++++------
>  xen/arch/x86/mm/p2m.c                         | 32 +++++++---
>  xen/arch/x86/x86_64/mm.c                      |  5 +-
>  xen/common/grant_table.c                      | 48 +++++++-------
>  xen/common/memory.c                           | 56 +++++++++++++----
>  xen/drivers/passthrough/amd/iommu_cmd.c       | 18 +++---
>  xen/drivers/passthrough/amd/iommu_map.c       | 88 ++++++++++++++------------
>  xen/drivers/passthrough/amd/pci_amd_iommu.c   |  4 +-
>  xen/drivers/passthrough/arm/smmu.c            | 20 +++---
>  xen/drivers/passthrough/iommu.c               | 53 ++++++++--------
>  xen/drivers/passthrough/vtd/iommu.c           | 91 ++++++++++++++++++++-------
>  xen/drivers/passthrough/vtd/iommu.h           |  3 +
>  xen/drivers/passthrough/vtd/x86/vtd.c         |  1 -
>  xen/drivers/passthrough/x86/iommu.c           |  9 ++-
>  xen/include/asm-arm/p2m.h                     |  4 ++
>  xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 +--
>  xen/include/asm-x86/iommu.h                   | 12 ++++
>  xen/include/asm-x86/p2m.h                     |  3 +
>  xen/include/xen/iommu.h                       | 51 ++++++++++++---
>  xen/include/xen/mm.h                          |  5 ++
>  xen/include/xen/sched.h                       |  5 ++
>  26 files changed, 414 insertions(+), 222 deletions(-)
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Wei Liu <wei.liu2@citrix.com>
> --
> 2.11.0



* Re: [PATCH v9 1/7] iommu: introduce the concept of DFN...
  2018-09-13 15:21 ` [PATCH v9 1/7] iommu: introduce the concept of DFN Paul Durrant
@ 2018-09-14  2:27   ` Tian, Kevin
  0 siblings, 0 replies; 34+ messages in thread
From: Tian, Kevin @ 2018-09-14  2:27 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Thursday, September 13, 2018 11:21 PM
> 
> ...meaning 'device DMA frame number' i.e. a frame number mapped in the
> IOMMU
> (rather than the MMU) and hence used for DMA address translation.
> 
> This patch is a largely cosmetic change that substitutes the terms 'gfn'
> and 'gaddr' for 'dfn' and 'daddr' in all the places where the frame number
> or address relate to a device rather than the CPU.
> 
> The parts that are not purely cosmetic are:
> 
>  - the introduction of a type-safe declaration of dfn_t and definition of
>    INVALID_DFN to make the substitution of gfn_x(INVALID_GFN)
> mechanical.
>  - the introduction of __dfn_to_daddr and __daddr_to_dfn (and type-safe
>    variants without the leading __) with some use of the former.
> 
> Subsequent patches will convert code to make use of type-safe DFNs.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
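
The quoted commit message describes dfn_t, INVALID_DFN and the __dfn_to_daddr/__daddr_to_dfn pair. A minimal self-contained sketch of that typesafe pattern follows; the wrapper-struct approach, PAGE_SHIFT value and uint64_t width are assumptions here (Xen's real definitions come from its own typesafe macro machinery and differ in detail):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* A wrapper struct gives dfn_t a distinct type, so a raw gfn or mfn
 * cannot be passed where a dfn is expected without a compile error. */
typedef struct { uint64_t dfn; } dfn_t;

static inline dfn_t _dfn(uint64_t n) { return (dfn_t){ n }; }
static inline uint64_t dfn_x(dfn_t d) { return d.dfn; }

/* Mirrors the mechanical substitution of gfn_x(INVALID_GFN). */
#define INVALID_DFN _dfn(~0ULL)

/* Raw-value conversions, with type-safe variants layered on top. */
#define __dfn_to_daddr(dfn) ((uint64_t)(dfn) << PAGE_SHIFT)
#define __daddr_to_dfn(addr) ((uint64_t)((addr) >> PAGE_SHIFT))

static inline uint64_t dfn_to_daddr(dfn_t d) { return __dfn_to_daddr(dfn_x(d)); }
static inline dfn_t daddr_to_dfn(uint64_t a) { return _dfn(__daddr_to_dfn(a)); }
```

The point of the struct wrapper is that the conversion helpers become the only way to move between frame numbers and device addresses, which is what makes the substitution in the patch safe to do mechanically.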


* Re: [PATCH v9 6/7] vtd: add missing check for shared EPT...
  2018-09-13 15:21 ` [PATCH v9 6/7] vtd: add missing check for shared EPT Paul Durrant
@ 2018-09-14  2:29   ` Tian, Kevin
  0 siblings, 0 replies; 34+ messages in thread
From: Tian, Kevin @ 2018-09-14  2:29 UTC (permalink / raw)
  To: Paul Durrant, xen-devel; +Cc: Wei Liu, George Dunlap, Jan Beulich

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Thursday, September 13, 2018 11:21 PM
> 
> ...in intel_iommu_unmap_page().
> 
> This patch also includes some non-functional modifications in
> intel_iommu_map_page().
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>
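
The kind of guard being acked here can be sketched with simplified stand-ins; when page tables are shared with EPT the IOMMU has no separate tables to edit, so unmap must return early. The predicate name iommu_use_hap_pt() and all structure below are mock assumptions, not VT-d code:

```c
#include <assert.h>
#include <stdbool.h>

static bool shared_ept;   /* stand-in for the per-domain shared-EPT state */
static int pt_edits;      /* counts stand-in IOMMU page-table modifications */

static bool iommu_use_hap_pt(void) { return shared_ept; }

static int intel_iommu_unmap_page(unsigned long dfn)
{
    (void)dfn;

    /* The missing check the patch adds: with shared EPT there are no
     * separate IOMMU page tables, so there is nothing to clear. */
    if ( iommu_use_hap_pt() )
        return 0;

    pt_edits++;   /* stand-in for clearing the IOMMU PTE and flushing */
    return 0;
}
```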


* Re: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
  2018-09-13 15:21 ` [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops Paul Durrant
@ 2018-09-14  2:30   ` Tian, Kevin
  2018-09-18 13:20   ` Jan Beulich
  1 sibling, 0 replies; 34+ messages in thread
From: Tian, Kevin @ 2018-09-14  2:30 UTC (permalink / raw)
  To: Paul Durrant, xen-devel; +Cc: Wei Liu, George Dunlap, Jan Beulich

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Thursday, September 13, 2018 11:21 PM
> 
> This patch adds a new method to the VT-d IOMMU implementation to find
> the
> MFN currently mapped by the specified DFN along with a wrapper function
> in generic IOMMU code to call the implementation if it exists.
> 
> NOTE: This patch only adds a Xen-internal interface. This will be used by
>       a subsequent patch.
>       Another subsequent patch will add similar functionality for AMD
>       IOMMUs.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-13 15:21 ` [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
@ 2018-09-18 13:16   ` Jan Beulich
  2018-09-18 14:03     ` Paul Durrant
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-18 13:16 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel

>>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -303,6 +303,10 @@ static inline struct page_info *get_page_from_gfn(
>      return page;
>  }
>  
> +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
> +                                         bool readonly, p2m_type_t *p2mt_p,
> +                                         struct page_info **page_p);
> +
>  int get_page_type(struct page_info *page, unsigned long type);
>  bool is_iomem_page(mfn_t mfn);
>  static inline int get_page_and_type(struct page_info *page,
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -492,6 +492,9 @@ static inline struct page_info *get_page_from_gfn(
>      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
>  }
>  
> +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
> +                                         bool readonly, p2m_type_t *p2mt_p,
> +                                         struct page_info **page_p);
>  
>  /* General conversion function from mfn to gfn */
>  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)

Why twice the same declaration? Can't this be put in xen/p2m-common.h?
Or some other suitable common header? With that
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
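
The contract being reviewed, a wrapper that fetches a page by gfn and fails cleanly so callers need not repeat p2m-type checks, can be sketched with mock types. Everything below (the p2m enum values, refcounting, and the specific error codes) is an illustrative stand-in, not Xen's real definitions:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

typedef enum { p2m_ram_rw, p2m_ram_ro, p2m_ram_paged } p2m_type_t;

struct page_info { int refcount; };
struct domain { struct page_info pages[4]; p2m_type_t types[4]; };

/* Mock of the underlying lookup: takes a reference on success. */
static struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
                                           p2m_type_t *t)
{
    if ( gfn >= 4 )
        return NULL;
    *t = d->types[gfn];
    d->pages[gfn].refcount++;
    return &d->pages[gfn];
}

static void put_page(struct page_info *pg) { pg->refcount--; }

static int check_get_page_from_gfn(struct domain *d, unsigned long gfn,
                                   bool readonly, p2m_type_t *p2mt_p,
                                   struct page_info **page_p)
{
    p2m_type_t t;
    struct page_info *page = get_page_from_gfn(d, gfn, &t);

    if ( !page )
        return -EINVAL;

    if ( t == p2m_ram_paged )        /* still paged out: drop ref, retry */
    {
        put_page(page);
        return -EAGAIN;
    }

    if ( !readonly && t == p2m_ram_ro )   /* write access to a r/o page */
    {
        put_page(page);
        return -EPERM;
    }

    *p2mt_p = t;
    *page_p = page;
    return 0;
}
```

The design point is that every failure path drops the reference it took, so callers only ever own a reference on a zero return.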




* Re: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
  2018-09-13 15:21 ` [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops Paul Durrant
  2018-09-14  2:30   ` Tian, Kevin
@ 2018-09-18 13:20   ` Jan Beulich
  2018-09-18 14:00     ` Paul Durrant
  1 sibling, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-18 13:20 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Kevin Tian, Wei Liu, george.dunlap

>>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> @@ -179,9 +181,17 @@ struct iommu_ops {
>  #endif /* HAS_PCI */
>  
>      void (*teardown)(struct domain *d);
> +
> +    /*
> +     * This block of operations must be appropriately locked against each
> +     * other to have meaningful results.
> +     */
>      int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
>                                   unsigned int flags);
>      int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
> +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
> +                                    unsigned int *flags);

I'm afraid the comment is ambiguous: It may mean the implementations
of the hooks have to have suitable locking in place, or callers have to
take care of locking. I think the latter is meant, which I think needs to
be made explicit. With that
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
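
The caller-side locking this clarification settles on can be sketched as follows: the hooks themselves take no lock, and the wrappers standing in for the generic IOMMU code serialize the whole block of operations so a lookup cannot race a concurrent map or unmap. All names and types below are mock stand-ins for illustration:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define TABLE_SIZE 16
#define INVALID_MFN UINT64_MAX

/* dfn -> mfn table standing in for the IOMMU page tables. */
static uint64_t mapping[TABLE_SIZE];
static pthread_mutex_t iommu_lock = PTHREAD_MUTEX_INITIALIZER;

static void iommu_init(void)
{
    for ( unsigned int i = 0; i < TABLE_SIZE; i++ )
        mapping[i] = INVALID_MFN;
}

/* Hook implementations: no internal locking, per the clarified comment. */
static int op_map_page(uint64_t dfn, uint64_t mfn)
{
    mapping[dfn] = mfn;
    return 0;
}

static int op_unmap_page(uint64_t dfn)
{
    mapping[dfn] = INVALID_MFN;
    return 0;
}

static int op_lookup_page(uint64_t dfn, uint64_t *mfn)
{
    *mfn = mapping[dfn];
    return *mfn == INVALID_MFN ? -1 : 0;
}

/* Callers hold a common lock so the three hooks give meaningful results. */
static int iommu_map_page(uint64_t dfn, uint64_t mfn)
{
    pthread_mutex_lock(&iommu_lock);
    int rc = op_map_page(dfn, mfn);
    pthread_mutex_unlock(&iommu_lock);
    return rc;
}

static int iommu_unmap_page(uint64_t dfn)
{
    pthread_mutex_lock(&iommu_lock);
    int rc = op_unmap_page(dfn);
    pthread_mutex_unlock(&iommu_lock);
    return rc;
}

static int iommu_lookup_page(uint64_t dfn, uint64_t *mfn)
{
    pthread_mutex_lock(&iommu_lock);
    int rc = op_lookup_page(dfn, mfn);
    pthread_mutex_unlock(&iommu_lock);
    return rc;
}
```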




* Re: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
  2018-09-18 13:20   ` Jan Beulich
@ 2018-09-18 14:00     ` Paul Durrant
  2018-09-18 14:50       ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-18 14:00 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: xen-devel, Kevin Tian, Wei Liu, George Dunlap

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 18 September 2018 14:20
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: George Dunlap <George.Dunlap@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Kevin Tian <kevin.tian@intel.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: Re: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
> 
> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> > @@ -179,9 +181,17 @@ struct iommu_ops {
> >  #endif /* HAS_PCI */
> >
> >      void (*teardown)(struct domain *d);
> > +
> > +    /*
> > +     * This block of operations must be appropriately locked against each
> > +     * other to have meaningful results.
> > +     */
> >      int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
> >                                   unsigned int flags);
> >      int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
> > +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                                    unsigned int *flags);
> 
> I'm afraid the comment is ambiguous: It may mean the implementations
> of the hooks have to have suitable locking in place, or callers have to
> take care of locking. I think the latter is meant, which I think needs to
> be made explicit. With that
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

Thanks. Yes, the latter is what I meant. Can you fix the wording as you see fit during commit?

  Paul

> Jan
> 



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-18 13:16   ` Jan Beulich
@ 2018-09-18 14:03     ` Paul Durrant
  2018-09-18 14:51       ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-18 14:03 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Jan Beulich
> Sent: 18 September 2018 14:17
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>;
> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim
> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; xen-devel
> <xen-devel@lists.xenproject.org>
> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> > --- a/xen/include/asm-arm/p2m.h
> > +++ b/xen/include/asm-arm/p2m.h
> > @@ -303,6 +303,10 @@ static inline struct page_info *get_page_from_gfn(
> >      return page;
> >  }
> >
> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
> > +                                         bool readonly, p2m_type_t *p2mt_p,
> > +                                         struct page_info **page_p);
> > +
> >  int get_page_type(struct page_info *page, unsigned long type);
> >  bool is_iomem_page(mfn_t mfn);
> >  static inline int get_page_and_type(struct page_info *page,
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -492,6 +492,9 @@ static inline struct page_info *get_page_from_gfn(
> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
> >  }
> >
> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
> > +                                         bool readonly, p2m_type_t *p2mt_p,
> > +                                         struct page_info **page_p);
> >
> >  /* General conversion function from mfn to gfn */
> >  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)
> 
> Why twice the same declaration? Can't this be put in xen/p2m-common.h?
> Or some other suitable common header? With that
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

I was not aware of the common header so I placed it near the declarations of get_page_from_gfn(). Do you want me to submit a v10 with this fixed or are you happy to move it to wherever you think is appropriate during commit?

  Paul

> Jan
> 
> 
> 

* Re: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
  2018-09-18 14:00     ` Paul Durrant
@ 2018-09-18 14:50       ` Jan Beulich
  2018-09-18 16:03         ` Paul Durrant
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-18 14:50 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Kevin Tian, Wei Liu, george.dunlap

>>> On 18.09.18 at 16:00, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 18 September 2018 14:20
>> 
>> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
>> > @@ -179,9 +181,17 @@ struct iommu_ops {
>> >  #endif /* HAS_PCI */
>> >
>> >      void (*teardown)(struct domain *d);
>> > +
>> > +    /*
>> > +     * This block of operations must be appropriately locked against each
>> > +     * other to have meaningful results.
>> > +     */
>> >      int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
>> >                                   unsigned int flags);
>> >      int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
>> > +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
>> > +                                    unsigned int *flags);
>> 
>> I'm afraid the comment is ambiguous: It may mean the implementations
>> of the hooks have to have suitable locking in place, or callers have to
>> take care of locking. I think the latter is meant, which I think needs to
>> be made explicit. With that
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> 
> 
> Thanks. Yes, the latter is what I meant. Can you fix the wording as you see 
> fit during commit?

Well, I didn't offer doing so because it seemed to me that small adjustments
to at least one of the earlier patches would still be needed. Was that wrong?

Jan




* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-18 14:03     ` Paul Durrant
@ 2018-09-18 14:51       ` Jan Beulich
  2018-09-18 16:37         ` Paul Durrant
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-18 14:51 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson

>>> On 18.09.18 at 16:03, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Jan Beulich
>> Sent: 18 September 2018 14:17
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
>> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>;
>> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
>> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim
>> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; xen-devel
>> <xen-devel@lists.xenproject.org>
>> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
>> check_get_page_from_gfn() as a wrapper...
>> 
>> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
>> > --- a/xen/include/asm-arm/p2m.h
>> > +++ b/xen/include/asm-arm/p2m.h
>> > @@ -303,6 +303,10 @@ static inline struct page_info *get_page_from_gfn(
>> >      return page;
>> >  }
>> >
>> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
>> > +                                         bool readonly, p2m_type_t *p2mt_p,
>> > +                                         struct page_info **page_p);
>> > +
>> >  int get_page_type(struct page_info *page, unsigned long type);
>> >  bool is_iomem_page(mfn_t mfn);
>> >  static inline int get_page_and_type(struct page_info *page,
>> > --- a/xen/include/asm-x86/p2m.h
>> > +++ b/xen/include/asm-x86/p2m.h
>> > @@ -492,6 +492,9 @@ static inline struct page_info *get_page_from_gfn(
>> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
>> >  }
>> >
>> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
>> > +                                         bool readonly, p2m_type_t *p2mt_p,
>> > +                                         struct page_info **page_p);
>> >
>> >  /* General conversion function from mfn to gfn */
>> >  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)
>> 
>> Why twice the same declaration? Can't this be put in xen/p2m-common.h?
>> Or some other suitable common header? With that
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> 
> 
> I was not aware of the common header so I placed it near the declarations of 
> > get_page_from_gfn(). Do you want me to submit a v10 with this fixed or are 
> you happy to move it to wherever you think is appropriate during commit?

I'd prefer a v10, to be sure things compile fine ahead of trying to commit
this series.

Jan




* Re: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
  2018-09-18 14:50       ` Jan Beulich
@ 2018-09-18 16:03         ` Paul Durrant
  0 siblings, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-18 16:03 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: xen-devel, Kevin Tian, Wei Liu, George Dunlap

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 18 September 2018 15:51
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: George Dunlap <George.Dunlap@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Kevin Tian <kevin.tian@intel.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops
> 
> >>> On 18.09.18 at 16:00, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 18 September 2018 14:20
> >>
> >> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> >> > @@ -179,9 +181,17 @@ struct iommu_ops {
> >> >  #endif /* HAS_PCI */
> >> >
> >> >      void (*teardown)(struct domain *d);
> >> > +
> >> > +    /*
> >> > +     * This block of operations must be appropriately locked against each
> >> > +     * other to have meaningful results.
> >> > +     */
> >> >      int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
> >> >                                   unsigned int flags);
> >> >      int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
> >> > +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
> >> > +                                    unsigned int *flags);
> >>
> >> I'm afraid the comment is ambiguous: It may mean the implementations
> >> of the hooks have to have suitable locking in place, or callers have to
>> take care of locking. I think the latter is meant, which I think needs to
>> be made explicit. With that
> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>
> >
> > Thanks. Yes, the latter is what I meant. Can you fix the wording as you see
> > fit during commit?
> 
> Well, I didn't offer doing so because it seemed to me that small adjustments
> to at least one of the earlier patches would still be needed. Was that wrong?
> 

No, I just read the emails in reverse order. I'll send v10.

  Paul

> Jan
> 



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-18 14:51       ` Jan Beulich
@ 2018-09-18 16:37         ` Paul Durrant
  2018-09-19  6:03           ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-18 16:37 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 18 September 2018 15:51
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> >>> On 18.09.18 at 16:03, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> Behalf
> >> Of Jan Beulich
> >> Sent: 18 September 2018 14:17
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> >> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>;
> >> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> >> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim
> >> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; xen-devel
> >> <xen-devel@lists.xenproject.org>
> >> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
> >> check_get_page_from_gfn() as a wrapper...
> >>
> >> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> >> > --- a/xen/include/asm-arm/p2m.h
> >> > +++ b/xen/include/asm-arm/p2m.h
> >> > @@ -303,6 +303,10 @@ static inline struct page_info *get_page_from_gfn(
> >> >      return page;
> >> >  }
> >> >
> >> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
> >> > +                                         bool readonly, p2m_type_t *p2mt_p,
> >> > +                                         struct page_info **page_p);
> >> > +
> >> >  int get_page_type(struct page_info *page, unsigned long type);
> >> >  bool is_iomem_page(mfn_t mfn);
> >> >  static inline int get_page_and_type(struct page_info *page,
> >> > --- a/xen/include/asm-x86/p2m.h
> >> > +++ b/xen/include/asm-x86/p2m.h
> >> > @@ -492,6 +492,9 @@ static inline struct page_info *get_page_from_gfn(
> >> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
> >> >  }
> >> >
> >> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
> >> > +                                         bool readonly, p2m_type_t *p2mt_p,
> >> > +                                         struct page_info **page_p);
> >> >
> >> >  /* General conversion function from mfn to gfn */
> >> >  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)
> >>
> >> Why twice the same declaration? Can't this be put in xen/p2m-common.h?
> >> Or some other suitable common header? With that
> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>
> >
> > I was not aware of the common header so I placed it near the declarations of
> > get_page_from_gfn(). Do you want me to submit a v10 with this fixed or are
> > you happy to move it to wherever you think is appropriate during commit?
> 
> I'd prefer a v10, to be sure things compile fine ahead of trying to commit
> this series.
> 

Moving to p2m-common won't work. The function declaration involves a p2m_type_t argument and that enum is defined in the arm and x86 specific headers. I propose therefore to leave this patch as-is.

  Paul

> Jan
> 



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-18 16:37         ` Paul Durrant
@ 2018-09-19  6:03           ` Jan Beulich
  2018-09-19  7:56             ` Paul Durrant
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-19  6:03 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson

>>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 18 September 2018 15:51
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
>> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
>> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
>> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
>> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
>> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
>> check_get_page_from_gfn() as a wrapper...
>> 
>> >>> On 18.09.18 at 16:03, <Paul.Durrant@citrix.com> wrote:
>> >>  -----Original Message-----
>> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
>> Behalf
>> >> Of Jan Beulich
>> >> Sent: 18 September 2018 14:17
>> >> To: Paul Durrant <Paul.Durrant@citrix.com>
>> >> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
>> >> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>;
>> >> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
>> >> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim
>> >> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; xen-devel
>> >> <xen-devel@lists.xenproject.org>
>> >> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
>> >> check_get_page_from_gfn() as a wrapper...
>> >>
>> >> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
>> >> > --- a/xen/include/asm-arm/p2m.h
>> >> > +++ b/xen/include/asm-arm/p2m.h
>> >> > @@ -303,6 +303,10 @@ static inline struct page_info *get_page_from_gfn(
>> >> >      return page;
>> >> >  }
>> >> >
>> >> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
>> >> > +                                         bool readonly, p2m_type_t *p2mt_p,
>> >> > +                                         struct page_info **page_p);
>> >> > +
>> >> >  int get_page_type(struct page_info *page, unsigned long type);
>> >> >  bool is_iomem_page(mfn_t mfn);
>> >> >  static inline int get_page_and_type(struct page_info *page,
>> >> > --- a/xen/include/asm-x86/p2m.h
>> >> > +++ b/xen/include/asm-x86/p2m.h
>> >> > @@ -492,6 +492,9 @@ static inline struct page_info *get_page_from_gfn(
>> >> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
>> >> >  }
>> >> >
>> >> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
>> >> > +                                         bool readonly, p2m_type_t *p2mt_p,
>> >> > +                                         struct page_info **page_p);
>> >> >
>> >> >  /* General conversion function from mfn to gfn */
>> >> >  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)
>> >>
>> >> Why twice the same declaration? Can't this be put in xen/p2m-common.h?
>> >> Or some other suitable common header? With that
>> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> >>
>> >
>> > I was not aware of the common header so I placed it near the
>> declarations of
>> > get_page_from_gfn(). Do you want me to submit a v10 with this fixed or
>> are
>> > you happy to move it to wherever you think is appropriate during commit?
>> 
>> I'd prefer a v10, to be sure things compile fine ahead of trying to commit
>> this series.
>> 
> 
> Moving to p2m-common won't work. The function declaration involves a 
> p2m_type_t argument and that enum is defined in the arm and x86 specific 
> headers. I propose therefore to leave this patch as-is.

Leaving the duplication in place is just the last resort imo. Does xen/mm.h
not work either?

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  6:03           ` Jan Beulich
@ 2018-09-19  7:56             ` Paul Durrant
  2018-09-19  8:01               ` Paul Durrant
  2018-09-19  8:34               ` Jan Beulich
  0 siblings, 2 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-19  7:56 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 19 September 2018 07:03
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 18 September 2018 15:51
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> >> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>;
> Ian
> >> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> Stefano
> >> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> >> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> >> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> >> check_get_page_from_gfn() as a wrapper...
> >>
> >> >>> On 18.09.18 at 16:03, <Paul.Durrant@citrix.com> wrote:
> >> >>  -----Original Message-----
> >> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> >> Behalf
> >> >> Of Jan Beulich
> >> >> Sent: 18 September 2018 14:17
> >> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> >> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> >> >> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>;
> >> >> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> >> >> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>;
> Tim
> >> >> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; xen-
> devel
> >> >> <xen-devel@lists.xenproject.org>
> >> >> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
> >> >> check_get_page_from_gfn() as a wrapper...
> >> >>
> >> >> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> >> >> > --- a/xen/include/asm-arm/p2m.h
> >> >> > +++ b/xen/include/asm-arm/p2m.h
> >> >> > @@ -303,6 +303,10 @@ static inline struct page_info
> >> *get_page_from_gfn(
> >> >> >      return page;
> >> >> >  }
> >> >> >
> >> >> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t
> >> gfn,
> >> >> > +                                         bool readonly,
> p2m_type_t
> >> >> *p2mt_p,
> >> >> > +                                         struct page_info
> **page_p);
> >> >> > +
> >> >> >  int get_page_type(struct page_info *page, unsigned long type);
> >> >> >  bool is_iomem_page(mfn_t mfn);
> >> >> >  static inline int get_page_and_type(struct page_info *page,
> >> >> > --- a/xen/include/asm-x86/p2m.h
> >> >> > +++ b/xen/include/asm-x86/p2m.h
> >> >> > @@ -492,6 +492,9 @@ static inline struct page_info
> >> *get_page_from_gfn(
> >> >> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page :
> NULL;
> >> >> >  }
> >> >> >
> >> >> > +int __must_check check_get_page_from_gfn(struct domain *d, gfn_t
> >> gfn,
> >> >> > +                                         bool readonly,
> p2m_type_t
> >> >> *p2mt_p,
> >> >> > +                                         struct page_info
> **page_p);
> >> >> >
> >> >> >  /* General conversion function from mfn to gfn */
> >> >> >  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t
> mfn)
> >> >>
> >> >> Why twice the same declaration? Can't this be put in xen/p2m-
> common.h?
> >> >> Or some other suitable common header? With that
> >> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >> >>
> >> >
> >> > I was not aware of the common header so I placed it near the
> >> declarations of
> >> > get_page_from_gfn(). Do you want me to submit a v10 with this fixed
> or
> >> are
> >> > you happy to move it to wherever you think is appropriate during
> commit?
> >>
> >> I'd prefer a v10, to be sure things compile fine ahead of trying to
> commit
> >> this series.
> >>
> >
> > Moving to p2m-common won't work. The function declaration involves a
> > p2m_type_t argument and that enum is defined in the arm and x86 specific
> > headers. I propose therefore to leave this patch as-is.
> 
> Leaving the duplication in place is just the last resort imo. Does
> xen/mm.h
> not work either?

No, it won't. It has to be something *after* the definition of the
p2m_type_t enum. I could, as Julien suggested, move the inclusion of
p2m-common after that point, but it would mean the header guards would
cease to DTRT of course. I'm not sure whether C will allow me to forward
declare the enum (not something I've tried) but I could give that a go.
Any other suggestions?

  Paul

> 
> Jan
> 



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  7:56             ` Paul Durrant
@ 2018-09-19  8:01               ` Paul Durrant
  2018-09-19  8:31                 ` Jan Beulich
  2018-09-19  8:34               ` Jan Beulich
  1 sibling, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-19  8:01 UTC (permalink / raw)
  To: Paul Durrant, 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Ian Jackson, xen-devel

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Paul Durrant
> Sent: 19 September 2018 08:56
> To: 'Jan Beulich' <JBeulich@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>;
> Andrew Cooper <Andrew.Cooper3@citrix.com>; Tim (Xen.org) <tim@xen.org>;
> George Dunlap <George.Dunlap@citrix.com>; Julien Grall
> <julien.grall@arm.com>; xen-devel <xen-devel@lists.xenproject.org>; Ian
> Jackson <Ian.Jackson@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: 19 September 2018 07:03
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> > <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>;
> Ian
> > Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
> > Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> > devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> > Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> > check_get_page_from_gfn() as a wrapper...
> >
> > >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> > >>  -----Original Message-----
> > >> From: Jan Beulich [mailto:JBeulich@suse.com]
> > >> Sent: 18 September 2018 15:51
> > >> To: Paul Durrant <Paul.Durrant@citrix.com>
> > >> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> > >> <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>;
> > Ian
> > >> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> > Stefano
> > >> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> > >> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> > >> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> > >> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> > >> check_get_page_from_gfn() as a wrapper...
> > >>
> > >> >>> On 18.09.18 at 16:03, <Paul.Durrant@citrix.com> wrote:
> > >> >>  -----Original Message-----
> > >> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> > >> Behalf
> > >> >> Of Jan Beulich
> > >> >> Sent: 18 September 2018 14:17
> > >> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> > >> >> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> > >> >> <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com>;
> > >> >> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> > >> >> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>;
> > Tim
> > >> >> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; xen-
> > devel
> > >> >> <xen-devel@lists.xenproject.org>
> > >> >> Subject: Re: [Xen-devel] [PATCH v9 5/7] memory: add
> > >> >> check_get_page_from_gfn() as a wrapper...
> > >> >>
> > >> >> >>> On 13.09.18 at 17:21, <paul.durrant@citrix.com> wrote:
> > >> >> > --- a/xen/include/asm-arm/p2m.h
> > >> >> > +++ b/xen/include/asm-arm/p2m.h
> > >> >> > @@ -303,6 +303,10 @@ static inline struct page_info
> > >> *get_page_from_gfn(
> > >> >> >      return page;
> > >> >> >  }
> > >> >> >
> > >> >> > +int __must_check check_get_page_from_gfn(struct domain *d,
> gfn_t
> > >> gfn,
> > >> >> > +                                         bool readonly,
> > p2m_type_t
> > >> >> *p2mt_p,
> > >> >> > +                                         struct page_info
> > **page_p);
> > >> >> > +
> > >> >> >  int get_page_type(struct page_info *page, unsigned long type);
> > >> >> >  bool is_iomem_page(mfn_t mfn);
> > >> >> >  static inline int get_page_and_type(struct page_info *page,
> > >> >> > --- a/xen/include/asm-x86/p2m.h
> > >> >> > +++ b/xen/include/asm-x86/p2m.h
> > >> >> > @@ -492,6 +492,9 @@ static inline struct page_info
> > >> *get_page_from_gfn(
> > >> >> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page :
> > NULL;
> > >> >> >  }
> > >> >> >
> > >> >> > +int __must_check check_get_page_from_gfn(struct domain *d,
> gfn_t
> > >> gfn,
> > >> >> > +                                         bool readonly,
> > p2m_type_t
> > >> >> *p2mt_p,
> > >> >> > +                                         struct page_info
> > **page_p);
> > >> >> >
> > >> >> >  /* General conversion function from mfn to gfn */
> > >> >> >  static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t
> > mfn)
> > >> >>
> > >> >> Why twice the same declaration? Can't this be put in xen/p2m-
> > common.h?
> > >> >> Or some other suitable common header? With that
> > >> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > >> >>
> > >> >
> > >> > I was not aware of the common header so I placed it near the
> > >> declarations of
> > >> > get_page_from_gfn(). Do you want me to submit a v10 with this fixed
> > or
> > >> are
> > >> > you happy to move it to wherever you think is appropriate during
> > commit?
> > >>
> > >> I'd prefer a v10, to be sure things compile fine ahead of trying to
> > commit
> > >> this series.
> > >>
> > >
> > > Moving to p2m-common won't work. The function declaration involves a
> > > p2m_type_t argument and that enum is defined in the arm and x86
> specific
> > > headers. I propose therefore to leave this patch as-is.
> >
> > Leaving the duplication in place is just the last resort imo. Does
> > xen/mm.h
> > not work either?
> 
> No, it won't. It has to be something *after* the definition of the
> p2m_type_t enum. I could, as Julien suggested, move the inclusion of p2m-
> common after that point, but it would mean the header guards would cease
> to DTRT of course. I'm not sure whether C will allow me to forward declare
> the enum (not something I've tried) but I could give that a go. Any other
> suggestions?
> 

Forward declaration of the enum does indeed appear to work, so I'll go with that.
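A minimal standalone sketch of what this forward declaration amounts to (the function and enumerator names below are illustrative stand-ins, not the actual Xen code; note that an incomplete enum type is a GNU C / clang extension rather than ISO C, which is why compiler support is the open question here):

```c
#include <stddef.h>

/* Forward declaration of the enum tag. ISO C forbids forward
 * references to enum types, but GCC and clang accept this as an
 * extension. */
enum p2m_type;

/* With only the forward declaration in scope, a prototype may still
 * take a pointer to the incomplete enum type, which is all the common
 * check_get_page_from_gfn() declaration needs. lookup_type() is a
 * hypothetical stand-in for illustration. */
int lookup_type(enum p2m_type *out);

/* The full definition appears later (in Xen: in the per-arch p2m.h). */
enum p2m_type { p2m_invalid, p2m_ram_rw, p2m_ram_ro };

int lookup_type(enum p2m_type *out)
{
    if ( out == NULL )
        return -1;
    *out = p2m_ram_rw;
    return 0;
}
```

The key point is that the declaration only mentions `enum p2m_type *`, so the enum need not be complete at the point where the common header is parsed.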

  Paul

>   Paul
> 
> >
> > Jan
> >
> 
> 

* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  8:01               ` Paul Durrant
@ 2018-09-19  8:31                 ` Jan Beulich
  2018-09-19  8:59                   ` Roger Pau Monné
  0 siblings, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-19  8:31 UTC (permalink / raw)
  To: Paul Durrant, Roger Pau Monne
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson

>>> On 19.09.18 at 10:01, <Paul.Durrant@citrix.com> wrote:
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Paul Durrant
>> Sent: 19 September 2018 08:56
>> 
>> > From: Jan Beulich [mailto:JBeulich@suse.com]
>> > Sent: 19 September 2018 07:03
>> >
>> > >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
>> > > Moving to p2m-common won't work. The function declaration involves a
>> > > p2m_type_t argument and that enum is defined in the arm and x86
>> specific
>> > > headers. I propose therefore to leave this patch as-is.
>> >
>> > Leaving the duplication in place is just the last resort imo. Does
>> > xen/mm.h
>> > not work either?
>> 
>> No, it won't. It has to be something *after* the definition of the
>> p2m_type_t enum. I could, as Julien suggested, move the inclusion of p2m-
>> common after that point, but it would mean the header guards would cease
>> to DTRT of course. I'm not sure whether C will allow me to forward declare
>> the enum (not something I've tried) but I could give that a go. Any other
>> suggestions?
>> 
> 
> Forward declaration of the enum does indeed appear to work, so I'll go with 
> that.

That's an extension I'm not even sure all gcc versions support (I've checked
4.3 just now, where it works). Roger, any chance you know whether clang
supports this?

Jan




* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  7:56             ` Paul Durrant
  2018-09-19  8:01               ` Paul Durrant
@ 2018-09-19  8:34               ` Jan Beulich
  2018-09-19  8:54                 ` Paul Durrant
  1 sibling, 1 reply; 34+ messages in thread
From: Jan Beulich @ 2018-09-19  8:34 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson

>>> On 19.09.18 at 09:56, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 19 September 2018 07:03
>> 
>> >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
>> > Moving to p2m-common won't work. The function declaration involves a
>> > p2m_type_t argument and that enum is defined in the arm and x86 specific
>> > headers. I propose therefore to leave this patch as-is.
>> 
>> Leaving the duplication in place is just the last resort imo. Does
>> xen/mm.h
>> not work either?
> 
> No, it won't. It has to be something *after* the definition of the 
> p2m_type_t enum. I could, as Julien suggested, move the inclusion of 
> p2m-common after that point, but it would mean the header guards would cease 
> to DTRT of course. I'm not sure whether C will allow me to forward declare 
> the enum (not something I've tried) but I could give that a go. Any other 
> suggestions?

I'm afraid I don't understand your concern wrt include guards. Each header
has its own. I'm actually surprised the current inclusion point is at the top
of the file, rather than after at least the basic type definitions each arch
has to supply.

Jan




* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  8:34               ` Jan Beulich
@ 2018-09-19  8:54                 ` Paul Durrant
  2018-09-19  9:22                   ` Jan Beulich
  0 siblings, 1 reply; 34+ messages in thread
From: Paul Durrant @ 2018-09-19  8:54 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 19 September 2018 09:34
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> >>> On 19.09.18 at 09:56, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 19 September 2018 07:03
> >>
> >> >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> >> > Moving to p2m-common won't work. The function declaration involves a
> >> > p2m_type_t argument and that enum is defined in the arm and x86
> specific
> >> > headers. I propose therefore to leave this patch as-is.
> >>
> >> Leaving the duplication in place is just the last resort imo. Does
> >> xen/mm.h
> >> not work either?
> >
> > No, it won't. It has to be something *after* the definition of the
> > p2m_type_t enum. I could, as Julien suggested, move the inclusion of
> > p2m-common after that point, but it would mean the header guards would
> cease
> > to DTRT of course. I'm not sure whether C will allow me to forward
> declare
> > the enum (not something I've tried) but I could give that a go. Any
> other
> > suggestions?
> 
> I'm afraid I don't understand your concern wrt include guards. Each header
> has its own. I'm actually surprised the current inclusion point is at the
> top
> of the file, rather than after at least the basic type definitions each
> arch
> has to supply.
> 

Ok, if we include p2m-common in the middle of p2m, and then add stuff
dependent on the declaration of p2m_type_t, what happens when a source
module explicitly includes p2m-common prior to p2m?

  Paul

> Jan
> 



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  8:31                 ` Jan Beulich
@ 2018-09-19  8:59                   ` Roger Pau Monné
  2018-09-19 16:13                     ` Wei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Roger Pau Monné @ 2018-09-19  8:59 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	Paul Durrant, Ian Jackson, xen-devel

On Wed, Sep 19, 2018 at 02:31:47AM -0600, Jan Beulich wrote:
> >>> On 19.09.18 at 10:01, <Paul.Durrant@citrix.com> wrote:
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> >> Of Paul Durrant
> >> Sent: 19 September 2018 08:56
> >> 
> >> > From: Jan Beulich [mailto:JBeulich@suse.com]
> >> > Sent: 19 September 2018 07:03
> >> >
> >> > >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> >> > > Moving to p2m-common won't work. The function declaration involves a
> >> > > p2m_type_t argument and that enum is defined in the arm and x86
> >> specific
> >> > > headers. I propose therefore to leave this patch as-is.
> >> >
> >> > Leaving the duplication in place is just the last resort imo. Does
> >> > xen/mm.h
> >> > not work either?
> >> 
> >> No, it won't. It has to be something *after* the definition of the
> >> p2m_type_t enum. I could, as Julien suggested, move the inclusion of p2m-
> >> common after that point, but it would mean the header guards would cease
> >> to DTRT of course. I'm not sure whether C will allow me to forward declare
> >> the enum (not something I've tried) but I could give that a go. Any other
> >> suggestions?
> >> 
> > 
> > Forward declaration of the enum does indeed appear to work, so I'll go with 
> > that.
> 
> That's an extension I'm not even sure all gcc versions support (I've checked
> 4.3 just now, where it works). Roger, any chance you know whether clang
> supports this?

I've just tested the following with clang 6 (which is the version used
by osstest):

enum foo;
enum foo { a, b, c };

And it works fine. I think travis or gitlab tested older versions of
clang (the ones on the Linux distros).

Thanks, Roger.


* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  8:54                 ` Paul Durrant
@ 2018-09-19  9:22                   ` Jan Beulich
  2018-09-19  9:24                     ` Paul Durrant
  2018-09-19  9:29                     ` Roger Pau Monné
  0 siblings, 2 replies; 34+ messages in thread
From: Jan Beulich @ 2018-09-19  9:22 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson

>>> On 19.09.18 at 10:54, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 19 September 2018 09:34
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
>> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
>> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
>> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
>> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
>> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
>> check_get_page_from_gfn() as a wrapper...
>> 
>> >>> On 19.09.18 at 09:56, <Paul.Durrant@citrix.com> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: 19 September 2018 07:03
>> >>
>> >> >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
>> >> > Moving to p2m-common won't work. The function declaration involves a
>> >> > p2m_type_t argument and that enum is defined in the arm and x86
>> specific
>> >> > headers. I propose therefore to leave this patch as-is.
>> >>
>> >> Leaving the duplication in place is just the last resort imo. Does
>> >> xen/mm.h
>> >> not work either?
>> >
>> > No, it won't. It has to be something *after* the definition of the
>> > p2m_type_t enum. I could, as Julien suggested, move the inclusion of
>> > p2m-common after that point, but it would mean the header guards would
>> cease
>> > to DTRT of course. I'm not sure whether C will allow me to forward
>> declare
>> > the enum (not something I've tried) but I could give that a go. Any
>> other
>> > suggestions?
>> 
>> I'm afraid I don't understand your concern wrt include guards. Each header
>> has its own. I'm actually surprised the current inclusion point is at the
>> top
>> of the file, rather than after at least the basic type definitions each
>> arch
>> has to supply.
>> 
> 
> Ok, if we include p2m-common in the middle of p2m, and then add stuff 
> dependent on the declaration of p2m_type_t, what happens when a source module 
> explicitly includes p2m-common prior to p2m?

This is not supposed to happen, as this header is supposed to only be
included from the per-arch p2m.h; we should put an #error there. I
see there's exactly one violation of this (drivers/vpci/header.c). But
your series can go in as is; I'll try to remember to clean this up.
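The #error guard described here could look like the following single-file simulation (the guard macro, type, and helper names are illustrative, not the actual Xen headers):

```c
/* --- simulated per-arch asm/p2m.h --- */
#define _ASM_P2M_H              /* marker the common header checks for */

typedef enum {
    p2m_invalid,
    p2m_ram_rw,
} p2m_type_t;

/* --- simulated xen/p2m-common.h, pulled in from the middle of the
 * per-arch header, i.e. after p2m_type_t has been defined --- */
#ifndef _ASM_P2M_H
#error "p2m-common.h must only be included from the per-arch p2m.h"
#endif

/* Anything below may now rely on p2m_type_t being complete. */
static inline int p2m_is_ram(p2m_type_t t)
{
    return t == p2m_ram_rw;
}
/* --- end simulated p2m-common.h --- */
```

A source module that included p2m-common.h directly would hit the #error at preprocessing time, which is exactly the misuse (as in drivers/vpci/header.c) that the guard is meant to catch.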

Jan




* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  9:22                   ` Jan Beulich
@ 2018-09-19  9:24                     ` Paul Durrant
  2018-09-19  9:29                     ` Roger Pau Monné
  1 sibling, 0 replies; 34+ messages in thread
From: Paul Durrant @ 2018-09-19  9:24 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 19 September 2018 10:23
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> >>> On 19.09.18 at 10:54, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 19 September 2018 09:34
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> >> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>;
> Ian
> >> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> Stefano
> >> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> >> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> >> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> >> check_get_page_from_gfn() as a wrapper...
> >>
> >> >>> On 19.09.18 at 09:56, <Paul.Durrant@citrix.com> wrote:
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: 19 September 2018 07:03
> >> >>
> >> >> >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> >> >> > Moving to p2m-common won't work. The function declaration involves
> a
> >> >> > p2m_type_t argument and that enum is defined in the arm and x86
> >> specific
> >> >> > headers. I propose therefore to leave this patch as-is.
> >> >>
> >> >> Leaving the duplication in place is just the last resort imo. Does
> >> >> xen/mm.h
> >> >> not work either?
> >> >
> >> > No, it won't. It has to be something *after* the definition of the
> >> > p2m_type_t enum. I could, as Julien suggested, move the inclusion of
> >> > p2m-common after that point, but it would mean the header guards
> would
> >> cease
> >> > to DTRT of course. I'm not sure whether C will allow me to forward
> >> declare
> >> > the enum (not something I've tried) but I could give that a go. Any
> >> other
> >> > suggestions?
> >>
> >> I'm afraid I don't understand your concern wrt include guards. Each
> header
> >> has its own. I'm actually surprised the current inclusion point is at
> the
> >> top
> >> of the file, rather than after at least the basic type definitions each
> >> arch
> >> has to supply.
> >>
> >
> > Ok, if we include p2m-common in the middle of p2m, and then add stuff
> > dependent on the declaration of p2m_type_t, what happens when a source
> module
> > explicitly includes p2m-common prior to p2m?
> 
> This is not supposed to happen, as this header is supposed to only be
> included from the per-arch p2m.h; we should put an #error there. I
> see there's exactly one violation of this (drivers/vpci/header.c). But
> your series can go in as is; I'll try to remember to clean this up.
> 

Ok, thanks.

  Paul

> Jan
> 



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  9:22                   ` Jan Beulich
  2018-09-19  9:24                     ` Paul Durrant
@ 2018-09-19  9:29                     ` Roger Pau Monné
       [not found]                       ` <304E51960200007575B6CDDB@prv1-mh.provo.novell.com>
  1 sibling, 1 reply; 34+ messages in thread
From: Roger Pau Monné @ 2018-09-19  9:29 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	Paul Durrant, Ian Jackson, xen-devel

On Wed, Sep 19, 2018 at 03:22:54AM -0600, Jan Beulich wrote:
> >>> On 19.09.18 at 10:54, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 19 September 2018 09:34
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper
> >> <Andrew.Cooper3@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> >> Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano
> >> Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> >> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> >> Subject: RE: [Xen-devel] [PATCH v9 5/7] memory: add
> >> check_get_page_from_gfn() as a wrapper...
> >> 
> >> >>> On 19.09.18 at 09:56, <Paul.Durrant@citrix.com> wrote:
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: 19 September 2018 07:03
> >> >>
> >> >> >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> >> >> > Moving to p2m-common won't work. The function declaration involves a
> >> >> > p2m_type_t argument and that enum is defined in the arm and x86
> >> specific
> >> >> > headers. I propose therefore to leave this patch as-is.
> >> >>
> >> >> Leaving the duplication in place is just the last resort imo. Does
> >> >> xen/mm.h
> >> >> not work either?
> >> >
> >> > No, it won't. It has to be something *after* the definition of the
> >> > p2m_type_t enum. I could, as Julien suggested, move the inclusion of
> >> > p2m-common after that point, but it would mean the header guards would
> >> cease
> >> > to DTRT of course. I'm not sure whether C will allow me to forward
> >> declare
> >> > the enum (not something I've tried) but I could give that a go. Any
> >> other
> >> > suggestions?
> >> 
> >> I'm afraid I don't understand your concern wrt include guards. Each header
> >> has its own. I'm actually surprised the current inclusion point is at the
> >> top
> >> of the file, rather than after at least the basic type definitions each
> >> arch
> >> has to supply.
> >> 
> > 
> > Ok, if we include p2m-common in the middle of p2m, and then add stuff 
> > dependent on the declaration of p2m_type_t, what happens when a source module 
> > explicitly includes p2m-common prior to p2m?
> 
> This is not supposed to happen, as this header is supposed to only be
> included from the per-arch p2m.h; we should put an #error there. I
> see there's exactly one violation of this (drivers/vpci/header.c). But
> your series can go in as is; I'll try to remember to clean this up.

Oh, didn't know this, will send a patch to switch header.c to include
p2m.h instead of p2m-common.h.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
       [not found]                       ` <304E51960200007575B6CDDB@prv1-mh.provo.novell.com>
@ 2018-09-19  9:34                         ` Jan Beulich
  0 siblings, 0 replies; 34+ messages in thread
From: Jan Beulich @ 2018-09-19  9:34 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	Paul Durrant, xen-devel, Ian Jackson

>>> On 19.09.18 at 11:29, <roger.pau@citrix.com> wrote:
> On Wed, Sep 19, 2018 at 03:22:54AM -0600, Jan Beulich wrote:
>> >>> On 19.09.18 at 10:54, <Paul.Durrant@citrix.com> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: 19 September 2018 09:34
>> >> 
>> >> >>> On 19.09.18 at 09:56, <Paul.Durrant@citrix.com> wrote:
>> > Ok, if we include p2m-common in the middle of p2m, and then add stuff 
>> > dependent on the declaration of p2m_type_t, what happens when a source module 
>> > explicitly includes p2m-common prior to p2m?
>> 
>> This is not supposed to happen, as this header is supposed to only be
>> included from the per-arch p2m.h; we should put an #error there. I
>> see there's exactly one violation of this (drivers/vpci/header.c). But
>> your series can go in as is; I'll try to remember to clean this up.
> 
> Oh, didn't know this, will send a patch to switch header.c to include
> p2m.h instead of p2m-common.h.

No need to, I've already made this part of the patch moving the
inclusion point.

Jan



* Re: [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-19  8:59                   ` Roger Pau Monné
@ 2018-09-19 16:13                     ` Wei Liu
  0 siblings, 0 replies; 34+ messages in thread
From: Wei Liu @ 2018-09-19 16:13 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	Paul Durrant, Jan Beulich, Ian Jackson, xen-devel

On Wed, Sep 19, 2018 at 10:59:34AM +0200, Roger Pau Monné wrote:
> On Wed, Sep 19, 2018 at 02:31:47AM -0600, Jan Beulich wrote:
> > >>> On 19.09.18 at 10:01, <Paul.Durrant@citrix.com> wrote:
> > >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> > >> Of Paul Durrant
> > >> Sent: 19 September 2018 08:56
> > >> 
> > >> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > >> > Sent: 19 September 2018 07:03
> > >> >
> > >> > >>> On 18.09.18 at 18:37, <Paul.Durrant@citrix.com> wrote:
> > >> > > Moving to p2m-common won't work. The function declaration involves a
> > >> > > p2m_type_t argument and that enum is defined in the arm and x86
> > >> specific
> > >> > > headers. I propose therefore to leave this patch as-is.
> > >> >
> > >> > Leaving the duplication in place is just the last resort imo. Does
> > >> > xen/mm.h
> > >> > not work either?
> > >> 
> > >> No, it won't. It has to be something *after* the definition of the
> > >> p2m_type_t enum. I could, as Julien suggested, move the inclusion of p2m-
> > >> common after that point, but it would mean the header guards would cease
> > >> to DTRT of course. I'm not sure whether C will allow me to forward declare
> > >> the enum (not something I've tried) but I could give that a go. Any other
> > >> suggestions?
> > >> 
> > > 
> > > Forward declaration of the enum does indeed appear to work, so I'll go with 
> > > that.
> > 
> > That's an extension I'm not even sure all gcc versions support (I've checked
> > 4.3 just now, where it works). Roger, any chance you know whether clang
> > supports this?
> 
> I've just tested the following with clang 6 (which is the version used
> by osstest):
> 
> enum foo;
> enum foo { a, b, c };
> 
> And it works fine. I think travis or gitlab tested older versions of
> clang (the ones on the Linux distros).

They have clang 3.4 IIRC.

P.S. you can easily get the same environment locally by using the
in-tree containerization script.

Wei.

> 
> Thanks, Roger.



end of thread, other threads:[~2018-09-19 16:13 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-13 15:21 [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
2018-09-13 15:21 ` [PATCH v9 1/7] iommu: introduce the concept of DFN Paul Durrant
2018-09-14  2:27   ` Tian, Kevin
2018-09-13 15:21 ` [PATCH v9 2/7] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
2018-09-13 15:21 ` [PATCH v9 3/7] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
2018-09-13 15:21 ` [PATCH v9 4/7] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
2018-09-13 15:21 ` [PATCH v9 5/7] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
2018-09-18 13:16   ` Jan Beulich
2018-09-18 14:03     ` Paul Durrant
2018-09-18 14:51       ` Jan Beulich
2018-09-18 16:37         ` Paul Durrant
2018-09-19  6:03           ` Jan Beulich
2018-09-19  7:56             ` Paul Durrant
2018-09-19  8:01               ` Paul Durrant
2018-09-19  8:31                 ` Jan Beulich
2018-09-19  8:59                   ` Roger Pau Monné
2018-09-19 16:13                     ` Wei Liu
2018-09-19  8:34               ` Jan Beulich
2018-09-19  8:54                 ` Paul Durrant
2018-09-19  9:22                   ` Jan Beulich
2018-09-19  9:24                     ` Paul Durrant
2018-09-19  9:29                     ` Roger Pau Monné
     [not found]                       ` <304E51960200007575B6CDDB@prv1-mh.provo.novell.com>
2018-09-19  9:34                         ` Jan Beulich
2018-09-13 15:21 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
2018-09-13 15:22   ` Paul Durrant
2018-09-13 15:21 ` [PATCH v9 6/7] vtd: add missing check for shared EPT Paul Durrant
2018-09-14  2:29   ` Tian, Kevin
2018-09-13 15:21 ` [PATCH v9 7/7] vtd: add lookup_page method to iommu_ops Paul Durrant
2018-09-14  2:30   ` Tian, Kevin
2018-09-18 13:20   ` Jan Beulich
2018-09-18 14:00     ` Paul Durrant
2018-09-18 14:50       ` Jan Beulich
2018-09-18 16:03         ` Paul Durrant
2018-09-13 15:24 ` [PATCH v9 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
