* [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up
@ 2018-09-12 11:30 Paul Durrant
  2018-09-12 11:30 ` [PATCH v7 1/6] iommu: introduce the concept of DFN Paul Durrant
                   ` (5 more replies)
  0 siblings, 6 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	Konrad Rzeszutek Wilk, Andrew Cooper, Ian Jackson, George Dunlap,
	Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich,
	Suravee Suthikulpanit

This series contains pre-requisites and clean-up needed for paravirtual
IOMMU support.

I have separated these patches out so that their application is not
further delayed whilst I re-work the paravirtual IOMMU implementation in
response to review of v6 of the series. Several of them already have all
the necessary acks.

Paul Durrant (6):
  iommu: introduce the concept of DFN...
  iommu: make use of type-safe DFN and MFN in exported functions
  iommu: push use of type-safe DFN and MFN into iommu_ops
  iommu: don't domain_crash() inside iommu_map/unmap_page()
  memory: add check_get_page_from_gfn() as a wrapper...
  vtd: add lookup_page method to iommu_ops

 xen/arch/arm/p2m.c                            |  7 ++-
 xen/arch/x86/hvm/emulate.c                    | 34 +++--------
 xen/arch/x86/hvm/hvm.c                        | 16 +----
 xen/arch/x86/mm.c                             | 13 ++--
 xen/arch/x86/mm/p2m-ept.c                     | 13 +++-
 xen/arch/x86/mm/p2m-pt.c                      | 48 +++++++++------
 xen/arch/x86/mm/p2m.c                         | 32 +++++++---
 xen/arch/x86/x86_64/mm.c                      |  5 +-
 xen/common/grant_table.c                      | 55 +++++++----------
 xen/common/memory.c                           | 63 +++++++++++++++----
 xen/drivers/passthrough/amd/iommu_cmd.c       | 18 +++---
 xen/drivers/passthrough/amd/iommu_map.c       | 88 +++++++++++++++------------
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  4 +-
 xen/drivers/passthrough/arm/smmu.c            | 20 +++---
 xen/drivers/passthrough/iommu.c               | 53 ++++++++--------
 xen/drivers/passthrough/vtd/iommu.c           | 86 ++++++++++++++++++++------
 xen/drivers/passthrough/vtd/iommu.h           |  3 +
 xen/drivers/passthrough/vtd/x86/vtd.c         |  3 +-
 xen/drivers/passthrough/x86/iommu.c           |  6 +-
 xen/include/asm-arm/p2m.h                     |  8 +++
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 +--
 xen/include/asm-x86/p2m.h                     |  2 +
 xen/include/xen/iommu.h                       | 57 ++++++++++++++---
 xen/include/xen/mm.h                          |  5 ++
 xen/include/xen/sched.h                       |  5 ++
 25 files changed, 409 insertions(+), 243 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
-- 
2.11.0



* [PATCH v7 1/6] iommu: introduce the concept of DFN...
  2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
@ 2018-09-12 11:30 ` Paul Durrant
  2018-09-12 11:49   ` Andrew Cooper
  2018-09-12 12:14   ` Roger Pau Monné
  2018-09-12 11:30 ` [PATCH v7 2/6] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Julien Grall, Paul Durrant, Jan Beulich

...meaning 'device DMA frame number', i.e. a frame number mapped in the
IOMMU (rather than the MMU) and hence used for DMA address translation.

This patch is a largely cosmetic change that substitutes the terms 'dfn'
and 'daddr' for 'gfn' and 'gaddr' in all the places where the frame number
or address relates to a device rather than the CPU.

The parts that are not purely cosmetic are:

 - the introduction of a type-safe declaration of dfn_t and definition of
   INVALID_DFN to make the substitution of gfn_x(INVALID_GFN) mechanical.
 - the introduction of __dfn_to_daddr and __daddr_to_dfn (and type-safe
   variants without the leading __) with some use of the former.

Subsequent patches will convert code to make use of type-safe DFNs.
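
A quick sketch of the intended usage (illustrative only, using a
hypothetical raw frame number 'pfn'; the helpers themselves are added to
xen/include/xen/iommu.h below):

    dfn_t dfn = _dfn(pfn);                  /* wrap a raw frame number */
    daddr_t daddr = dfn_to_daddr(dfn);      /* == pfn << IOMMU_PAGE_SHIFT */

    if ( dfn_x(dfn) == dfn_x(INVALID_DFN) ) /* unwrap for comparison */
        return -EADDRNOTAVAIL;

In debug builds dfn_t is a distinct structure type, so accidentally passing
a gfn or mfn where a dfn is expected fails to compile.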

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v7:
 - Re-name BFN -> DFN as requested by Jan.
 - Dropped Wei's R-b because of name change.

v6:
 - Dropped changes to 'mfn' section in xen/mm.h as suggested by Kevin.

v3:
 - Get rid of intermediate 'frame' variables again.

v2:
 - Addressed comments from Jan.
---
 xen/drivers/passthrough/amd/iommu_cmd.c     | 18 +++----
 xen/drivers/passthrough/amd/iommu_map.c     | 78 ++++++++++++++---------------
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c          | 16 +++---
 xen/drivers/passthrough/iommu.c             | 28 +++++------
 xen/drivers/passthrough/vtd/iommu.c         | 30 +++++------
 xen/include/xen/iommu.h                     | 38 +++++++++++---
 xen/include/xen/mm.h                        |  5 ++
 8 files changed, 123 insertions(+), 92 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index 08247fa354..d4d071e53e 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -284,7 +284,7 @@ void invalidate_iommu_all(struct amd_iommu *iommu)
 }
 
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
-                           uint64_t gaddr, unsigned int order)
+                           daddr_t daddr, unsigned int order)
 {
     unsigned long flags;
     struct amd_iommu *iommu;
@@ -315,12 +315,12 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
 
     /* send INVALIDATE_IOTLB_PAGES command */
     spin_lock_irqsave(&iommu->lock, flags);
-    invalidate_iotlb_pages(iommu, maxpend, 0, queueid, gaddr, req_id, order);
+    invalidate_iotlb_pages(iommu, maxpend, 0, queueid, daddr, req_id, order);
     flush_command_buffer(iommu);
     spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
-static void amd_iommu_flush_all_iotlbs(struct domain *d, uint64_t gaddr,
+static void amd_iommu_flush_all_iotlbs(struct domain *d, daddr_t daddr,
                                        unsigned int order)
 {
     struct pci_dev *pdev;
@@ -333,7 +333,7 @@ static void amd_iommu_flush_all_iotlbs(struct domain *d, uint64_t gaddr,
         u8 devfn = pdev->devfn;
 
         do {
-            amd_iommu_flush_iotlb(devfn, pdev, gaddr, order);
+            amd_iommu_flush_iotlb(devfn, pdev, daddr, order);
             devfn += pdev->phantom_stride;
         } while ( devfn != pdev->devfn &&
                   PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
@@ -342,7 +342,7 @@ static void amd_iommu_flush_all_iotlbs(struct domain *d, uint64_t gaddr,
 
 /* Flush iommu cache after p2m changes. */
 static void _amd_iommu_flush_pages(struct domain *d,
-                                   uint64_t gaddr, unsigned int order)
+                                   daddr_t daddr, unsigned int order)
 {
     unsigned long flags;
     struct amd_iommu *iommu;
@@ -352,13 +352,13 @@ static void _amd_iommu_flush_pages(struct domain *d,
     for_each_amd_iommu ( iommu )
     {
         spin_lock_irqsave(&iommu->lock, flags);
-        invalidate_iommu_pages(iommu, gaddr, dom_id, order);
+        invalidate_iommu_pages(iommu, daddr, dom_id, order);
         flush_command_buffer(iommu);
         spin_unlock_irqrestore(&iommu->lock, flags);
     }
 
     if ( ats_enabled )
-        amd_iommu_flush_all_iotlbs(d, gaddr, order);
+        amd_iommu_flush_all_iotlbs(d, daddr, order);
 }
 
 void amd_iommu_flush_all_pages(struct domain *d)
@@ -367,9 +367,9 @@ void amd_iommu_flush_all_pages(struct domain *d)
 }
 
 void amd_iommu_flush_pages(struct domain *d,
-                           unsigned long gfn, unsigned int order)
+                           unsigned long dfn, unsigned int order)
 {
-    _amd_iommu_flush_pages(d, (uint64_t) gfn << PAGE_SHIFT, order);
+    _amd_iommu_flush_pages(d, __dfn_to_daddr(dfn), order);
 }
 
 void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 70b4345b37..61ade71850 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -35,12 +35,12 @@ static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
     return idx;
 }
 
-void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long gfn)
+void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long dfn)
 {
     u64 *table, *pte;
 
     table = map_domain_page(_mfn(l1_mfn));
-    pte = table + pfn_to_pde_idx(gfn, IOMMU_PAGING_MODE_LEVEL_1);
+    pte = table + pfn_to_pde_idx(dfn, IOMMU_PAGING_MODE_LEVEL_1);
     *pte = 0;
     unmap_domain_page(table);
 }
@@ -104,7 +104,7 @@ static bool_t set_iommu_pde_present(u32 *pde, unsigned long next_mfn,
     return need_flush;
 }
 
-static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long gfn, 
+static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long dfn,
                                     unsigned long next_mfn, int pde_level, 
                                     bool_t iw, bool_t ir)
 {
@@ -114,7 +114,7 @@ static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long gfn,
 
     table = map_domain_page(_mfn(pt_mfn));
 
-    pde = (u32*)(table + pfn_to_pde_idx(gfn, pde_level));
+    pde = (u32*)(table + pfn_to_pde_idx(dfn, pde_level));
 
     need_flush = set_iommu_pde_present(pde, next_mfn, 
                                        IOMMU_PAGING_MODE_LEVEL_0, iw, ir);
@@ -331,7 +331,7 @@ static void set_pde_count(u64 *pde, unsigned int count)
  * otherwise increase pde count if mfn is contiguous with mfn - 1
  */
 static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
-                                  unsigned long gfn, unsigned long mfn,
+                                  unsigned long dfn, unsigned long mfn,
                                   unsigned int merge_level)
 {
     unsigned int pde_count, next_level;
@@ -347,7 +347,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
 
     /* get pde at merge level */
     table = map_domain_page(_mfn(pt_mfn));
-    pde = table + pfn_to_pde_idx(gfn, merge_level);
+    pde = table + pfn_to_pde_idx(dfn, merge_level);
 
     /* get page table of next level */
     ntable_maddr = amd_iommu_get_next_table_from_pte((u32*)pde);
@@ -362,7 +362,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     mask = (1ULL<< (PTE_PER_TABLE_SHIFT * next_level)) - 1;
 
     if ( ((first_mfn & mask) == 0) &&
-         (((gfn & mask) | first_mfn) == mfn) )
+         (((dfn & mask) | first_mfn) == mfn) )
     {
         pde_count = get_pde_count(*pde);
 
@@ -387,7 +387,7 @@ out:
 }
 
 static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
-                             unsigned long gfn, unsigned int flags,
+                             unsigned long dfn, unsigned int flags,
                              unsigned int merge_level)
 {
     u64 *table, *pde, *ntable;
@@ -398,7 +398,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(_mfn(pt_mfn));
-    pde = table + pfn_to_pde_idx(gfn, merge_level);
+    pde = table + pfn_to_pde_idx(dfn, merge_level);
 
     /* get first mfn */
     ntable_mfn = amd_iommu_get_next_table_from_pte((u32*)pde) >> PAGE_SHIFT;
@@ -436,7 +436,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
  * {Re, un}mapping super page frames causes re-allocation of io
  * page tables.
  */
-static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn, 
+static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
                               unsigned long pt_mfn[])
 {
     u64 *pde, *next_table_vaddr;
@@ -465,7 +465,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
         pt_mfn[level] = next_table_mfn;
 
         next_table_vaddr = map_domain_page(_mfn(next_table_mfn));
-        pde = next_table_vaddr + pfn_to_pde_idx(pfn, level);
+        pde = next_table_vaddr + pfn_to_pde_idx(dfn, level);
 
         /* Here might be a super page frame */
         next_table_mfn = amd_iommu_get_next_table_from_pte((uint32_t*)pde) 
@@ -477,11 +477,11 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
              next_table_mfn != 0 )
         {
             int i;
-            unsigned long mfn, gfn;
+            unsigned long mfn, pfn;
             unsigned int page_sz;
 
             page_sz = 1 << (PTE_PER_TABLE_SHIFT * (next_level - 1));
-            gfn =  pfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
+            pfn =  dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
@@ -499,10 +499,10 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
 
             for ( i = 0; i < PTE_PER_TABLE_SIZE; i++ )
             {
-                set_iommu_pte_present(next_table_mfn, gfn, mfn, next_level,
+                set_iommu_pte_present(next_table_mfn, pfn, mfn, next_level,
                                       !!IOMMUF_writable, !!IOMMUF_readable);
                 mfn += page_sz;
-                gfn += page_sz;
+                pfn += page_sz;
              }
 
             amd_iommu_flush_all_pages(d);
@@ -540,7 +540,7 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     return 0;
 }
 
-static int update_paging_mode(struct domain *d, unsigned long gfn)
+static int update_paging_mode(struct domain *d, unsigned long dfn)
 {
     u16 bdf;
     void *device_entry;
@@ -554,13 +554,13 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
-    if ( gfn == gfn_x(INVALID_GFN) )
+    if ( dfn == dfn_x(INVALID_DFN) )
         return -EADDRNOTAVAIL;
-    ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
+    ASSERT(!(dfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
 
     level = hd->arch.paging_mode;
     old_root = hd->arch.root_table;
-    offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
+    offset = dfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
 
     ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
 
@@ -631,7 +631,7 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
+int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
                        unsigned int flags)
 {
     bool_t need_flush = 0;
@@ -651,34 +651,34 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Root table alloc failed, gfn = %lx\n", gfn);
+        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %lx\n", dfn);
         domain_crash(d);
         return rc;
     }
 
     /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
+     * we might need a deeper page table for wider dfn now */
     if ( is_hvm_domain(d) )
     {
-        if ( update_paging_mode(d, gfn) )
+        if ( update_paging_mode(d, dfn) )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
             domain_crash(d);
             return -EFAULT;
         }
     }
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
         domain_crash(d);
         return -EFAULT;
     }
 
     /* Install 4k mapping first */
-    need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn, 
+    need_flush = set_iommu_pte_present(pt_mfn[1], dfn, mfn,
                                        IOMMU_PAGING_MODE_LEVEL_1,
                                        !!(flags & IOMMUF_writable),
                                        !!(flags & IOMMUF_readable));
@@ -690,7 +690,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     /* 4K mapping for PV guests never changes, 
      * no need to flush if we trust non-present bits */
     if ( is_hvm_domain(d) )
-        amd_iommu_flush_pages(d, gfn, 0);
+        amd_iommu_flush_pages(d, dfn, 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
           merge_level <= hd->arch.paging_mode; merge_level++ )
@@ -698,15 +698,15 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         if ( pt_mfn[merge_level] == 0 )
             break;
         if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
-                                     gfn, mfn, merge_level) )
+                                     dfn, mfn, merge_level) )
             break;
 
-        if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
+        if ( iommu_merge_pages(d, pt_mfn[merge_level], dfn,
                                flags, merge_level) )
         {
             spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
-                            "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
+                            "dfn = %lx mfn = %lx\n", merge_level, dfn, mfn);
             domain_crash(d);
             return -EFAULT;
         }
@@ -720,7 +720,7 @@ out:
     return 0;
 }
 
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
+int amd_iommu_unmap_page(struct domain *d, unsigned long dfn)
 {
     unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
@@ -739,34 +739,34 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     }
 
     /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
+     * we might need a deeper page table for wider dfn now */
     if ( is_hvm_domain(d) )
     {
-        int rc = update_paging_mode(d, gfn);
+        int rc = update_paging_mode(d, dfn);
 
         if ( rc )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
             if ( rc != -EADDRNOTAVAIL )
                 domain_crash(d);
             return rc;
         }
     }
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
         domain_crash(d);
         return -EFAULT;
     }
 
     /* mark PTE as 'page not present' */
-    clear_iommu_pte_present(pt_mfn[1], gfn);
+    clear_iommu_pte_present(pt_mfn[1], dfn);
     spin_unlock(&hd->arch.mapping_lock);
 
-    amd_iommu_flush_pages(d, gfn, 0);
+    amd_iommu_flush_pages(d, dfn, 0);
 
     return 0;
 }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 12d2695b89..5699adc0f6 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -578,7 +578,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
                 maddr_to_page(next_table_maddr), next_level,
                 address, indent + 1);
         else
-            printk("%*sgfn: %08lx  mfn: %08lx\n",
+            printk("%*sdfn: %08lx  mfn: %08lx\n",
                    indent, "",
                    (unsigned long)PFN_DOWN(address),
                    (unsigned long)PFN_DOWN(next_table_maddr));
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 74c09b0991..0662398ccc 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2551,7 +2551,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check arm_smmu_iotlb_flush(struct domain *d,
-                                             unsigned long gfn,
+                                             unsigned long dfn,
                                              unsigned int page_count)
 {
 	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
@@ -2737,7 +2737,7 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
+static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
 			unsigned long mfn, unsigned int flags)
 {
 	p2m_type_t t;
@@ -2748,10 +2748,10 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
 	 * p2m to allow DMA request to work.
 	 * This is only valid when the domain is directed mapped. Hence this
-	 * function should only be used by gnttab code with gfn == mfn.
+	 * function should only be used by gnttab code with gfn == mfn == dfn.
 	 */
 	BUG_ON(!is_domain_direct_mapped(d));
-	BUG_ON(mfn != gfn);
+	BUG_ON(mfn != dfn);
 
 	/* We only support readable and writable flags */
 	if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
@@ -2763,19 +2763,19 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long gfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, _gfn(gfn), _mfn(mfn), 0, t);
+	return guest_physmap_add_entry(d, _gfn(dfn), _mfn(dfn), 0, t);
 }
 
-static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long dfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
-	 * is direct mapped
+	 * is direct mapped (i.e. gfn == mfn == dfn).
 	 */
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
+	return guest_physmap_remove_page(d, _gfn(dfn), _mfn(dfn), 0);
 }
 
 static const struct iommu_ops arm_smmu_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e917901976..10d1252554 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -184,7 +184,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = mfn_x(page_to_mfn(page));
-            unsigned long gfn = mfn_to_gmfn(d, mfn);
+            unsigned long dfn = mfn_to_gmfn(d, mfn);
             unsigned int mapping = IOMMUF_readable;
             int ret;
 
@@ -193,7 +193,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
 
-            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            ret = hd->platform_ops->map_page(d, dfn, mfn, mapping);
             if ( !rc )
                 rc = ret;
 
@@ -254,7 +254,7 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
+int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -263,13 +263,13 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, gfn, mfn, rc);
+                   "d%d: IOMMU mapping dfn %#lx to mfn %#lx failed: %d\n",
+                   d->domain_id, dfn, mfn, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -278,7 +278,7 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long gfn)
+int iommu_unmap_page(struct domain *d, unsigned long dfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -286,13 +286,13 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_page(d, dfn);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping gfn %#lx failed: %d\n",
-                   d->domain_id, gfn, rc);
+                   "d%d: IOMMU unmapping dfn %#lx failed: %d\n",
+                   d->domain_id, dfn, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -318,7 +318,7 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+int iommu_iotlb_flush(struct domain *d, unsigned long dfn,
                       unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -327,13 +327,13 @@ int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, gfn, page_count);
+    rc = hd->platform_ops->iotlb_flush(d, dfn, page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, gfn %#lx, page count %u\n",
-                   d->domain_id, rc, gfn, page_count);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %#lx, page count %u\n",
+                   d->domain_id, rc, dfn, page_count);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 8d768a4693..ac0a78d917 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -585,7 +585,7 @@ static int __must_check iommu_flush_all(void)
 }
 
 static int __must_check iommu_flush_iotlb(struct domain *d,
-                                          unsigned long gfn,
+                                          unsigned long dfn,
                                           bool_t dma_old_pte_present,
                                           unsigned int page_count)
 {
@@ -612,12 +612,12 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( iommu_domid == -1 )
             continue;
 
-        if ( page_count != 1 || gfn == gfn_x(INVALID_GFN) )
+        if ( page_count != 1 || dfn == dfn_x(INVALID_DFN) )
             rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
                                        0, flush_dev_iotlb);
         else
             rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       __dfn_to_daddr(dfn),
                                        PAGE_ORDER_4K,
                                        !dma_old_pte_present,
                                        flush_dev_iotlb);
@@ -633,15 +633,15 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
-                                                unsigned long gfn,
+                                                unsigned long dfn,
                                                 unsigned int page_count)
 {
-    return iommu_flush_iotlb(d, gfn, 1, page_count);
+    return iommu_flush_iotlb(d, dfn, 1, page_count);
 }
 
 static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    return iommu_flush_iotlb(d, gfn_x(INVALID_GFN), 0, 0);
+    return iommu_flush_iotlb(d, dfn_x(INVALID_DFN), 0, 0);
 }
 
 /* clear one page's page table */
@@ -1767,7 +1767,7 @@ static void iommu_domain_teardown(struct domain *d)
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d,
-                                             unsigned long gfn,
+                                             unsigned long dfn,
                                              unsigned long mfn,
                                              unsigned int flags)
 {
@@ -1786,14 +1786,14 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
+    pg_maddr = addr_to_dma_page_maddr(d, __dfn_to_daddr(dfn), 1);
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (gfn & LEVEL_MASK);
+    pte = page + (dfn & LEVEL_MASK);
     old = *pte;
     dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
     dma_set_pte_prot(new,
@@ -1817,22 +1817,22 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
+        rc = iommu_flush_iotlb(d, dfn, dma_pte_present(old), 1);
 
     return rc;
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
-                                               unsigned long gfn)
+                                               unsigned long dfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
+    return dma_pte_clear_one(d, __dfn_to_daddr(dfn));
 }
 
-int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
                     int order, int present)
 {
     struct acpi_drhd_unit *drhd;
@@ -1856,7 +1856,7 @@ int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
             continue;
 
         rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                   (paddr_t)gfn << PAGE_SHIFT_4K,
+                                   __dfn_to_daddr(dfn),
                                    order, !present, flush_dev_iotlb);
         if ( rc > 0 )
         {
@@ -2626,7 +2626,7 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
             vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
                                      address, indent + 1);
         else
-            printk("%*sgfn: %08lx mfn: %08lx\n",
+            printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
                    (unsigned long)(address >> PAGE_SHIFT_4K),
                    (unsigned long)(dma_pte_addr(*pte) >> PAGE_SHIFT_4K));
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index e35d941f3c..19d5d55d79 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -23,11 +23,37 @@
 #include <xen/page-defs.h>
 #include <xen/spinlock.h>
 #include <xen/pci.h>
+#include <xen/typesafe.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
 #include <asm/device.h>
 #include <asm/iommu.h>
 
+TYPE_SAFE(uint64_t, dfn);
+#define PRI_dfn     PRIx64
+#define INVALID_DFN _dfn(~0UL)
+
+#ifndef dfn_t
+#define dfn_t /* Grep fodder: dfn_t, _dfn() and dfn_x() are defined above */
+#define _dfn
+#define dfn_x
+#undef dfn_t
+#undef _dfn
+#undef dfn_x
+#endif
+
+#define IOMMU_PAGE_SHIFT 12
+#define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)
+#define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
+
+typedef uint64_t daddr_t;
+
+#define __dfn_to_daddr(dfn) ((daddr_t)(dfn) << IOMMU_PAGE_SHIFT)
+#define __daddr_to_dfn(daddr) ((uint64_t)(daddr >> IOMMU_PAGE_SHIFT))
+
+#define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
+#define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
+
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_dom0_strict, iommu_verbose;
 extern bool_t iommu_workaround_bios_bug, iommu_igfx, iommu_passthrough;
@@ -60,9 +86,9 @@ void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
+int __must_check iommu_map_page(struct domain *d, unsigned long dfn,
                                 unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check iommu_unmap_page(struct domain *d, unsigned long dfn);
 
 enum iommu_feature
 {
@@ -150,9 +176,9 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
+    int __must_check (*map_page)(struct domain *d, unsigned long dfn,
                                  unsigned long mfn, unsigned int flags);
-    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*unmap_page)(struct domain *d, unsigned long dfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
@@ -163,7 +189,7 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
+    int __must_check (*iotlb_flush)(struct domain *d, unsigned long dfn,
                                     unsigned int page_count);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
@@ -185,7 +211,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long dfn,
                                    unsigned int page_count);
 int __must_check iommu_iotlb_flush_all(struct domain *d);
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 24654e8e22..288dc77b85 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -26,6 +26,11 @@
  *   A linear idea of a guest physical address space. For an auto-translated
  *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
  *
+ * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
+ *   The linear frame numbers of IOMMU address space. All initiators for (i.e.
+ *   all devices assigned to) a guest share a single IOMMU address space and,
+ *   by default, Xen will ensure dfn == pfn.
+ *
  * WARNING: Some of these terms have changed over time while others have been
  * used inconsistently, meaning that a lot of existing code does not match the
  * definitions above.  New code should use these terms as described here, and
-- 
2.11.0
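
For reference, TYPE_SAFE(uint64_t, dfn) in the iommu.h hunk above generates
the dfn_t type together with its _dfn()/dfn_x() converters. Sketched from
memory of xen/include/xen/typesafe.h (so treat the details as approximate),
the debug-build expansion is along these lines:

    /* dfn_t is a distinct struct, so mixing it up with gfn_t or mfn_t
     * is a compile-time error; release builds reduce it to a plain
     * uint64_t with trivial converters. */
    typedef struct { uint64_t dfn; } dfn_t;

    static inline dfn_t _dfn(uint64_t n) { return (dfn_t) { n }; }
    static inline uint64_t dfn_x(dfn_t n) { return n.dfn; }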



* [PATCH v7 2/6] iommu: make use of type-safe DFN and MFN in exported functions
  2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
  2018-09-12 11:30 ` [PATCH v7 1/6] iommu: introduce the concept of DFN Paul Durrant
@ 2018-09-12 11:30 ` Paul Durrant
  2018-09-13  8:09   ` Roger Pau Monné
  2018-09-12 11:30 ` [PATCH v7 3/6] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Konrad Rzeszutek Wilk, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
	Paul Durrant, Jun Nakajima

This patch modifies the declarations of the entry points to the IOMMU
sub-system to use dfn_t and mfn_t in place of unsigned long. A subsequent
patch will similarly modify the methods in the iommu_ops structure.
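
The resulting call-site pattern, visible throughout the hunks below, is to
wrap raw values explicitly at the boundary; condensed from the
xen/arch/x86/mm.c hunk:

    mfn_t mfn = page_to_mfn(page);

    rc = iommu_map_page(d, _dfn(mfn_x(mfn)), mfn,
                        IOMMUF_readable | IOMMUF_writable);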

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>

v7:
 - Re-base and re-name BFN -> DFN.
 - Added Jan's A-b since re-naming was purely mechanical.

v6:
 - Re-base.

v3:
 - Removed most of the uses of an intermediate 'frame' variable.

v2:
 - Addressed comments from Jan.
 - Use intermediate 'frame' variable to avoid directly encapsulating
   mfn or gfn values as dfns.
---
 xen/arch/arm/p2m.c                    |  3 ++-
 xen/arch/x86/mm.c                     | 10 ++++----
 xen/arch/x86/mm/p2m-ept.c             | 10 +++++---
 xen/arch/x86/mm/p2m-pt.c              | 45 ++++++++++++++++++++---------------
 xen/arch/x86/mm/p2m.c                 | 16 ++++++++-----
 xen/arch/x86/x86_64/mm.c              |  5 ++--
 xen/common/grant_table.c              | 12 +++++-----
 xen/common/memory.c                   |  4 ++--
 xen/drivers/passthrough/iommu.c       | 25 ++++++++++---------
 xen/drivers/passthrough/vtd/x86/vtd.c |  3 ++-
 xen/include/xen/iommu.h               | 14 +++++++----
 11 files changed, 85 insertions(+), 62 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1364e5960a..0db12b01f1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -957,7 +957,8 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 
     if ( need_iommu(p2m->domain) &&
          (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
-        rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
+        rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
+                               1UL << page_order);
     else
         rc = 0;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index baea2f5e63..e503dedebe 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2757,14 +2757,14 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         struct domain *d = page_get_owner(page);
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
-            gfn_t gfn = _gfn(mfn_to_gmfn(d, mfn_x(page_to_mfn(page))));
+            mfn_t mfn = page_to_mfn(page);
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_ret = iommu_unmap_page(d, gfn_x(gfn));
+                iommu_ret = iommu_unmap_page(d, _dfn(mfn_x(mfn)));
             else if ( type == PGT_writable_page )
-                iommu_ret = iommu_map_page(d, gfn_x(gfn),
-                                           mfn_x(page_to_mfn(page)),
-                                           IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_page(d, _dfn(mfn_x(mfn)), mfn,
+                                           IOMMUF_readable |
+                                           IOMMUF_writable);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ff4f14ae4..9a3a90e9e6 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -870,15 +870,19 @@ out:
             rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
         else
         {
+            dfn_t dfn = _dfn(gfn);
+
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
                 {
-                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                    rc = iommu_map_page(d, dfn_add(dfn, i),
+                                        mfn_add(mfn, i), iommu_flags);
                     if ( unlikely(rc) )
                     {
                         while ( i-- )
                             /* If statement to satisfy __must_check. */
-                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
+                            if ( iommu_unmap_page(p2m->domain,
+                                                  dfn_add(dfn, i)) )
                                 continue;
 
                         break;
@@ -887,7 +891,7 @@ out:
             else
                 for ( i = 0; i < (1 << order); i++ )
                 {
-                    ret = iommu_unmap_page(d, gfn + i);
+                    ret = iommu_unmap_page(d, dfn_add(dfn, i));
                     if ( !rc )
                         rc = ret;
                 }
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index b8c5d2ed26..881e9e87b8 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -687,29 +687,36 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             if ( iommu_old_flags )
                 amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
-        else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                                    iommu_pte_flags);
-                if ( unlikely(rc) )
+        else
+        {
+            dfn_t dfn = _dfn(gfn);
+
+            if ( iommu_pte_flags )
+                for ( i = 0; i < (1UL << page_order); i++ )
                 {
-                    while ( i-- )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                            continue;
+                    rc = iommu_map_page(p2m->domain, dfn_add(dfn, i),
+                                        mfn_add(mfn, i), iommu_pte_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            /* If statement to satisfy __must_check. */
+                            if ( iommu_unmap_page(p2m->domain,
+                                                  dfn_add(dfn, i)) )
+                                continue;
 
-                    break;
+                        break;
+                    }
                 }
-            }
-        else
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, gfn + i);
+            else
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    int ret = iommu_unmap_page(p2m->domain,
+                                               dfn_add(dfn, i));
 
-                if ( !rc )
-                    rc = ret;
-            }
+                    if ( !rc )
+                        rc = ret;
+                }
+        }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6020553c17..801b629b95 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -715,9 +715,11 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
 
         if ( need_iommu(p2m->domain) )
         {
+            dfn_t dfn = _dfn(mfn);
+
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                int ret = iommu_unmap_page(p2m->domain, mfn + i);
+                int ret = iommu_unmap_page(p2m->domain, dfn_add(dfn, i));
 
                 if ( !rc )
                     rc = ret;
@@ -774,16 +776,17 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     {
         if ( need_iommu(d) && t == p2m_ram_rw )
         {
+            dfn_t dfn = _dfn(mfn_x(mfn));
+
             for ( i = 0; i < (1 << page_order); i++ )
             {
-                rc = iommu_map_page(d, mfn_x(mfn_add(mfn, i)),
-                                    mfn_x(mfn_add(mfn, i)),
+                rc = iommu_map_page(d, dfn_add(dfn, i), mfn_add(mfn, i),
                                     IOMMUF_readable|IOMMUF_writable);
                 if ( rc != 0 )
                 {
                     while ( i-- > 0 )
                         /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(d, mfn_x(mfn_add(mfn, i))) )
+                        if ( iommu_unmap_page(d, dfn_add(dfn, i)) )
                             continue;
 
                     return rc;
@@ -1158,7 +1161,8 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, gfn_l, gfn_l, IOMMUF_readable|IOMMUF_writable);
+        return iommu_map_page(d, _dfn(gfn_l), _mfn(gfn_l),
+                              IOMMUF_readable | IOMMUF_writable);
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1248,7 +1252,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, gfn_l);
+        return iommu_unmap_page(d, _dfn(gfn_l));
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 606508c871..048bda85bb 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1429,13 +1429,14 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     if ( iommu_enabled && !iommu_passthrough && !need_iommu(hardware_domain) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_map_page(hardware_domain, i, i, IOMMUF_readable|IOMMUF_writable) )
+            if ( iommu_map_page(hardware_domain, _dfn(i), _mfn(i),
+                                IOMMUF_readable | IOMMUF_writable) )
                 break;
         if ( i != epfn )
         {
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_unmap_page(hardware_domain, i) )
+                if ( iommu_unmap_page(hardware_domain, _dfn(i)) )
                     continue;
 
             goto destroy_m2p;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 73d3ed3701..2d01cad176 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1143,13 +1143,13 @@ map_grant_ref(
              !(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) )
         {
             if ( !(kind & MAPKIND_WRITE) )
-                err = iommu_map_page(ld, mfn_x(mfn), mfn_x(mfn),
-                                     IOMMUF_readable|IOMMUF_writable);
+                err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
+                                     IOMMUF_readable | IOMMUF_writable);
         }
         else if ( act_pin && !old_pin )
         {
             if ( !kind )
-                err = iommu_map_page(ld, mfn_x(mfn), mfn_x(mfn),
+                err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
                                      IOMMUF_readable);
         }
         if ( err )
@@ -1398,10 +1398,10 @@ unmap_common(
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_unmap_page(ld, mfn_x(op->mfn));
+            err = iommu_unmap_page(ld, _dfn(mfn_x(op->mfn)));
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map_page(ld, mfn_x(op->mfn),
-                                 mfn_x(op->mfn), IOMMUF_readable);
+            err = iommu_map_page(ld, _dfn(mfn_x(op->mfn)), op->mfn,
+                                 IOMMUF_readable);
 
         double_gt_unlock(lgt, rgt);
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 996f94b103..89b211d6aa 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -834,11 +834,11 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
-        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
-        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 10d1252554..0c6e5904da 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -254,7 +254,7 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
+int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
                    unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -263,13 +263,13 @@ int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
+    rc = hd->platform_ops->map_page(d, dfn_x(dfn), mfn_x(mfn), flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping dfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, dfn, mfn, rc);
+                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -278,7 +278,7 @@ int iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long dfn)
+int iommu_unmap_page(struct domain *d, dfn_t dfn)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -286,13 +286,13 @@ int iommu_unmap_page(struct domain *d, unsigned long dfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, dfn);
+    rc = hd->platform_ops->unmap_page(d, dfn_x(dfn));
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping dfn %#lx failed: %d\n",
-                   d->domain_id, dfn, rc);
+                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -318,8 +318,7 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-int iommu_iotlb_flush(struct domain *d, unsigned long dfn,
-                      unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -327,13 +326,13 @@ int iommu_iotlb_flush(struct domain *d, unsigned long dfn,
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, dfn, page_count);
+    rc = hd->platform_ops->iotlb_flush(d, dfn_x(dfn), page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %#lx, page count %u\n",
-                   d->domain_id, rc, dfn, page_count);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u\n",
+                   d->domain_id, rc, dfn_x(dfn), page_count);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index ac653eea0e..9e4e80deed 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -107,7 +107,8 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
              page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
             continue;
 
-        rc = iommu_map_page(d, pfn, pfn, IOMMUF_readable|IOMMUF_writable);
+        rc = iommu_map_page(d, _dfn(pfn), _mfn(pfn),
+			    IOMMUF_readable | IOMMUF_writable);
         if ( rc )
             printk(XENLOG_WARNING VTDPREFIX " d%d: IOMMU mapping failed: %d\n",
                    d->domain_id, rc);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 19d5d55d79..299b69ee7f 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -24,6 +24,7 @@
 #include <xen/spinlock.h>
 #include <xen/pci.h>
 #include <xen/typesafe.h>
+#include <xen/mm.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
 #include <asm/device.h>
@@ -42,6 +43,11 @@ TYPE_SAFE(uint64_t, dfn);
 #undef dfn_x
 #endif
 
+static inline dfn_t dfn_add(dfn_t dfn, unsigned long i)
+{
+    return _dfn(dfn_x(dfn) + i);
+}
+
 #define IOMMU_PAGE_SHIFT 12
 #define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)
 #define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
@@ -86,9 +92,9 @@ void iommu_teardown(struct domain *d);
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long dfn,
-                                unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long dfn);
+int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
+                                mfn_t mfn, unsigned int flags);
+int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
 
 enum iommu_feature
 {
@@ -211,7 +217,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-int __must_check iommu_iotlb_flush(struct domain *d, unsigned long dfn,
+int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned int page_count);
 int __must_check iommu_iotlb_flush_all(struct domain *d);
 
-- 
2.11.0



* [PATCH v7 3/6] iommu: push use of type-safe DFN and MFN into iommu_ops
  2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
  2018-09-12 11:30 ` [PATCH v7 1/6] iommu: introduce the concept of DFN Paul Durrant
  2018-09-12 11:30 ` [PATCH v7 2/6] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
@ 2018-09-12 11:30 ` Paul Durrant
  2018-09-13  8:12   ` Roger Pau Monné
  2018-09-12 11:30 ` [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, George Dunlap, Suravee Suthikulpanit

This patch modifies the methods in struct iommu_ops to use type-safe DFN
and MFN. This follows on from the prior patch that modified the functions
exported in xen/iommu.h.
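
For reference, TYPE_SAFE(uint64_t, dfn) provides roughly the following
(a simplified sketch; the real definition lives in xen/typesafe.h):

    typedef struct { uint64_t dfn; } dfn_t;

    static inline dfn_t _dfn(uint64_t n) { return (dfn_t){ .dfn = n }; }
    static inline uint64_t dfn_x(dfn_t d) { return d.dfn; }

so passing a raw frame number where a dfn_t is expected becomes a
compile-time error rather than a silent type confusion.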

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v7:
 - Re-base and re-name BFN -> DFN.
 - Added Jan's A-b since re-naming was purely mechanical.

v6:
 - Re-base.

v3:
 - Remove some use of intermediate 'frame' variables.

v2:
 - Addressed comments from Jan.
 - Extend use of intermediate 'frame' variable to avoid directly
   encapsulating gfn values as bfns (now dfns) or vice versa.
---
 xen/drivers/passthrough/amd/iommu_map.c       | 46 ++++++++++++++++-----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  2 +-
 xen/drivers/passthrough/arm/smmu.c            | 16 +++++-----
 xen/drivers/passthrough/iommu.c               |  9 +++---
 xen/drivers/passthrough/vtd/iommu.c           | 26 +++++++--------
 xen/drivers/passthrough/x86/iommu.c           |  2 +-
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 ++---
 xen/include/xen/iommu.h                       | 13 +++++---
 8 files changed, 67 insertions(+), 55 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 61ade71850..c89c54fdb6 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -631,7 +631,7 @@ static int update_paging_mode(struct domain *d, unsigned long dfn)
     return 0;
 }
 
-int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
+int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
                        unsigned int flags)
 {
     bool_t need_flush = 0;
@@ -651,7 +651,8 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %lx\n", dfn);
+        AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %"PRI_dfn"\n",
+                        dfn_x(dfn));
         domain_crash(d);
         return rc;
     }
@@ -660,25 +661,27 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
      * we might need a deeper page table for wider dfn now */
     if ( is_hvm_domain(d) )
     {
-        if ( update_paging_mode(d, dfn) )
+        if ( update_paging_mode(d, dfn_x(dfn)) )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
+                            dfn_x(dfn));
             domain_crash(d);
             return -EFAULT;
         }
     }
 
-    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
+                        dfn_x(dfn));
         domain_crash(d);
         return -EFAULT;
     }
 
     /* Install 4k mapping first */
-    need_flush = set_iommu_pte_present(pt_mfn[1], dfn, mfn,
+    need_flush = set_iommu_pte_present(pt_mfn[1], dfn_x(dfn), mfn_x(mfn),
                                        IOMMU_PAGING_MODE_LEVEL_1,
                                        !!(flags & IOMMUF_writable),
                                        !!(flags & IOMMUF_readable));
@@ -690,7 +693,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
     /* 4K mapping for PV guests never changes, 
      * no need to flush if we trust non-present bits */
     if ( is_hvm_domain(d) )
-        amd_iommu_flush_pages(d, dfn, 0);
+        amd_iommu_flush_pages(d, dfn_x(dfn), 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
           merge_level <= hd->arch.paging_mode; merge_level++ )
@@ -698,15 +701,16 @@ int amd_iommu_map_page(struct domain *d, unsigned long dfn, unsigned long mfn,
         if ( pt_mfn[merge_level] == 0 )
             break;
         if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
-                                     dfn, mfn, merge_level) )
+                                     dfn_x(dfn), mfn_x(mfn), merge_level) )
             break;
 
-        if ( iommu_merge_pages(d, pt_mfn[merge_level], dfn,
+        if ( iommu_merge_pages(d, pt_mfn[merge_level], dfn_x(dfn),
                                flags, merge_level) )
         {
             spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
-                            "dfn = %lx mfn = %lx\n", merge_level, dfn, mfn);
+                            "dfn = %"PRI_dfn" mfn = %"PRI_mfn"\n",
+                            merge_level, dfn_x(dfn), mfn_x(mfn));
             domain_crash(d);
             return -EFAULT;
         }
@@ -720,7 +724,7 @@ out:
     return 0;
 }
 
-int amd_iommu_unmap_page(struct domain *d, unsigned long dfn)
+int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
 {
     unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
@@ -742,31 +746,33 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long dfn)
      * we might need a deeper page table for larger dfn now */
     if ( is_hvm_domain(d) )
     {
-        int rc = update_paging_mode(d, dfn);
+        int rc = update_paging_mode(d, dfn_x(dfn));
 
         if ( rc )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %lx\n", dfn);
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
+                            dfn_x(dfn));
             if ( rc != -EADDRNOTAVAIL )
                 domain_crash(d);
             return rc;
         }
     }
 
-    if ( iommu_pde_from_dfn(d, dfn, pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %lx\n", dfn);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
+                        dfn_x(dfn));
         domain_crash(d);
         return -EFAULT;
     }
 
     /* mark PTE as 'page not present' */
-    clear_iommu_pte_present(pt_mfn[1], dfn);
+    clear_iommu_pte_present(pt_mfn[1], dfn_x(dfn));
     spin_unlock(&hd->arch.mapping_lock);
 
-    amd_iommu_flush_pages(d, dfn, 0);
+    amd_iommu_flush_pages(d, dfn_x(dfn), 0);
 
     return 0;
 }
@@ -787,7 +793,9 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
     gfn = phys_addr >> PAGE_SHIFT;
     for ( i = 0; i < npages; i++ )
     {
-        rt = amd_iommu_map_page(domain, gfn +i, gfn +i, flags);
+        unsigned long frame = gfn + i;
+
+        rt = amd_iommu_map_page(domain, _dfn(frame), _mfn(frame), flags);
         if ( rt != 0 )
             return rt;
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 5699adc0f6..e7f9c41603 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -271,7 +271,7 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
              */
             if ( mfn_valid(_mfn(pfn)) )
             {
-                int ret = amd_iommu_map_page(d, pfn, pfn,
+                int ret = amd_iommu_map_page(d, _dfn(pfn), _mfn(pfn),
                                              IOMMUF_readable|IOMMUF_writable);
 
                 if ( !rc )
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0662398ccc..f31b075bc5 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2550,8 +2550,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 	return 0;
 }
 
-static int __must_check arm_smmu_iotlb_flush(struct domain *d,
-                                             unsigned long dfn,
+static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
                                              unsigned int page_count)
 {
 	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
@@ -2737,8 +2736,8 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 	xfree(xen_domain);
 }
 
-static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
-			unsigned long mfn, unsigned int flags)
+static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn,
+					  mfn_t mfn, unsigned int flags)
 {
 	p2m_type_t t;
 
@@ -2751,7 +2750,7 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
 	 * function should only be used by gnttab code with gfn == mfn == dfn.
 	 */
 	BUG_ON(!is_domain_direct_mapped(d));
-	BUG_ON(mfn != dfn);
+	BUG_ON(mfn_x(mfn) != dfn_x(dfn));
 
 	/* We only support readable and writable flags */
 	if (!(flags & (IOMMUF_readable | IOMMUF_writable)))
@@ -2763,10 +2762,11 @@ static int __must_check arm_smmu_map_page(struct domain *d, unsigned long dfn,
 	 * The function guest_physmap_add_entry replaces the current mapping
 	 * if there is already one...
 	 */
-	return guest_physmap_add_entry(d, _gfn(dfn), _mfn(dfn), 0, t);
+	return guest_physmap_add_entry(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)),
+				       0, t);
 }
 
-static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long dfn)
+static int __must_check arm_smmu_unmap_page(struct domain *d, dfn_t dfn)
 {
 	/*
 	 * This function should only be used by gnttab code when the domain
@@ -2775,7 +2775,7 @@ static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long dfn)
 	if ( !is_domain_direct_mapped(d) )
 		return -EINVAL;
 
-	return guest_physmap_remove_page(d, _gfn(dfn), _mfn(dfn), 0);
+	return guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), _mfn(dfn_x(dfn)), 0);
 }
 
 static const struct iommu_ops arm_smmu_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 0c6e5904da..15ae66a06c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -193,7 +193,8 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
 
-            ret = hd->platform_ops->map_page(d, dfn, mfn, mapping);
+            ret = hd->platform_ops->map_page(d, _dfn(dfn), _mfn(mfn),
+                                             mapping);
             if ( !rc )
                 rc = ret;
 
@@ -263,7 +264,7 @@ int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, dfn_x(dfn), mfn_x(mfn), flags);
+    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
@@ -286,7 +287,7 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, dfn_x(dfn));
+    rc = hd->platform_ops->unmap_page(d, dfn);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
@@ -326,7 +327,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count)
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return 0;
 
-    rc = hd->platform_ops->iotlb_flush(d, dfn_x(dfn), page_count);
+    rc = hd->platform_ops->iotlb_flush(d, dfn, page_count);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index ac0a78d917..0163bb949b 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -584,8 +584,7 @@ static int __must_check iommu_flush_all(void)
     return rc;
 }
 
-static int __must_check iommu_flush_iotlb(struct domain *d,
-                                          unsigned long dfn,
+static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
                                           bool_t dma_old_pte_present,
                                           unsigned int page_count)
 {
@@ -612,12 +611,12 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
         if ( iommu_domid == -1 )
             continue;
 
-        if ( page_count != 1 || dfn == dfn_x(INVALID_DFN) )
+        if ( page_count != 1 || dfn_eq(dfn, INVALID_DFN) )
             rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
                                        0, flush_dev_iotlb);
         else
             rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                       __dfn_to_daddr(dfn),
+                                       dfn_to_daddr(dfn),
                                        PAGE_ORDER_4K,
                                        !dma_old_pte_present,
                                        flush_dev_iotlb);
@@ -633,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d,
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
-                                                unsigned long dfn,
+                                                dfn_t dfn,
                                                 unsigned int page_count)
 {
     return iommu_flush_iotlb(d, dfn, 1, page_count);
@@ -641,7 +640,7 @@ static int __must_check iommu_flush_iotlb_pages(struct domain *d,
 
 static int __must_check iommu_flush_iotlb_all(struct domain *d)
 {
-    return iommu_flush_iotlb(d, dfn_x(INVALID_DFN), 0, 0);
+    return iommu_flush_iotlb(d, INVALID_DFN, 0, 0);
 }
 
 /* clear one page's page table */
@@ -676,7 +675,7 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = iommu_flush_iotlb_pages(domain, addr >> PAGE_SHIFT_4K, 1);
+        rc = iommu_flush_iotlb_pages(domain, daddr_to_dfn(addr), 1);
 
     unmap_vtd_domain_page(page);
 
@@ -1767,8 +1766,7 @@ static void iommu_domain_teardown(struct domain *d)
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d,
-                                             unsigned long dfn,
-                                             unsigned long mfn,
+                                             dfn_t dfn, mfn_t mfn,
                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1786,16 +1784,16 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    pg_maddr = addr_to_dma_page_maddr(d, __dfn_to_daddr(dfn), 1);
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( pg_maddr == 0 )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (dfn & LEVEL_MASK);
+    pte = page + (dfn_x(dfn) & LEVEL_MASK);
     old = *pte;
-    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
+    dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
                      ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
@@ -1823,13 +1821,13 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 }
 
 static int __must_check intel_iommu_unmap_page(struct domain *d,
-                                               unsigned long dfn)
+                                               dfn_t dfn)
 {
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    return dma_pte_clear_one(d, __dfn_to_daddr(dfn));
+    return dma_pte_clear_one(d, dfn_to_daddr(dfn));
 }
 
 int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 68182afd91..21e6678f16 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -65,7 +65,7 @@ int arch_iommu_populate_page_table(struct domain *d)
             {
                 ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
                 BUG_ON(SHARED_M2P(gfn));
-                rc = hd->platform_ops->map_page(d, gfn, mfn,
+                rc = hd->platform_ops->map_page(d, _dfn(gfn), _mfn(mfn),
                                                 IOMMUF_readable |
                                                 IOMMUF_writable);
             }
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 99bc21c7b3..3083d625bd 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -52,9 +52,9 @@ int amd_iommu_init(void);
 int amd_iommu_update_ivrs_mapping_acpi(void);
 
 /* mapping functions */
-int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
-                                    unsigned long mfn, unsigned int flags);
-int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_map_page(struct domain *d, dfn_t dfn,
+                                    mfn_t mfn, unsigned int flags);
+int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
@@ -77,7 +77,7 @@ void iommu_dte_set_guest_cr3(u32 *dte, u16 dom_id, u64 gcr3,
 
 /* send cmd to iommu */
 void amd_iommu_flush_all_pages(struct domain *d);
-void amd_iommu_flush_pages(struct domain *d, unsigned long gfn,
+void amd_iommu_flush_pages(struct domain *d, unsigned long dfn,
                            unsigned int order);
 void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
                            uint64_t gaddr, unsigned int order);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 299b69ee7f..9e0b4e8638 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -48,6 +48,11 @@ static inline dfn_t dfn_add(dfn_t dfn, unsigned long i)
     return _dfn(dfn_x(dfn) + i);
 }
 
+static inline bool_t dfn_eq(dfn_t x, dfn_t y)
+{
+    return dfn_x(x) == dfn_x(y);
+}
+
 #define IOMMU_PAGE_SHIFT 12
 #define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)
 #define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
@@ -182,9 +187,9 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int __must_check (*map_page)(struct domain *d, unsigned long dfn,
-                                 unsigned long mfn, unsigned int flags);
-    int __must_check (*unmap_page)(struct domain *d, unsigned long dfn);
+    int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
+                                 unsigned int flags);
+    int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
@@ -195,7 +200,7 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
-    int __must_check (*iotlb_flush)(struct domain *d, unsigned long dfn,
+    int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned int page_count);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page()
  2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (2 preceding siblings ...)
  2018-09-12 11:30 ` [PATCH v7 3/6] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
@ 2018-09-12 11:30 ` Paul Durrant
  2018-09-13  6:48   ` Tian, Kevin
  2018-09-13  8:22   ` Roger Pau Monné
  2018-09-12 11:30 ` [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
  2018-09-12 11:30 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
  5 siblings, 2 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jun Nakajima

This patch removes the implicit domain_crash() from iommu_map_page(),
iommu_unmap_page() and iommu_iotlb_flush() and turns them into
straightforward wrappers that check the existence of the relevant iommu_op
and call through to it. This makes them usable by PV IOMMU code to be
delivered in future patches.

This patch also adds a helper macro, domu_crash(), that will only invoke
domain_crash() if the domain is not the hardware domain, and modifies
callers of iommu_map_page(), iommu_unmap_page() and iommu_iotlb_flush() to
use this should an operation fail.
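
At the call sites the new helper reduces error handling to the following
idiom (a sketch of the pattern used in the hunks below):

    rc = iommu_map_page(d, dfn, mfn, IOMMUF_readable | IOMMUF_writable);
    if ( unlikely(rc) )
        domu_crash(d); /* crashes a DomU; the hardware domain survives */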

NOTE: This patch includes one bit of clean-up in set_identity_p2m_entry()
      replacing use of p2m->domain with the domain pointer passed into the
      function.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>

v7:
 - Re-base and re-name BFN -> DFN.
 - Move domu_crash() outside the double-locked region in grant_table.c.
 - Added Jan's R-b.

v6:
 - Introduce domu_crash() (idea suggested by Kevin, name suggested by Jan)
   to crash non-hardware domains.
 - Dropped Wei's and George's R-b because of change.

v2:
 - New in v2.
---
 xen/arch/arm/p2m.c                  |  4 ++++
 xen/arch/x86/mm.c                   |  3 +++
 xen/arch/x86/mm/p2m-ept.c           |  3 +++
 xen/arch/x86/mm/p2m-pt.c            |  3 +++
 xen/arch/x86/mm/p2m.c               | 22 ++++++++++++++++++----
 xen/common/grant_table.c            |  4 ++++
 xen/common/memory.c                 |  3 +++
 xen/drivers/passthrough/iommu.c     | 12 ------------
 xen/drivers/passthrough/x86/iommu.c |  4 ++++
 xen/include/xen/sched.h             |  5 +++++
 10 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0db12b01f1..1c79ff7ade 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -957,8 +957,12 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 
     if ( need_iommu(p2m->domain) &&
          (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
+    {
         rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
                                1UL << page_order);
+        if ( unlikely(rc) )
+            domu_crash(p2m->domain);
+    }
     else
         rc = 0;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e503dedebe..57764e84fa 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2765,6 +2765,9 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                 iommu_ret = iommu_map_page(d, _dfn(mfn_x(mfn)), mfn,
                                            IOMMUF_readable |
                                            IOMMUF_writable);
+
+            if ( unlikely(iommu_ret) )
+                domu_crash(d);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 9a3a90e9e6..af7674f7e1 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -895,6 +895,9 @@ out:
                     if ( !rc )
                         rc = ret;
                 }
+
+            if ( unlikely(rc) )
+                domu_crash(d);
         }
     }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 881e9e87b8..607046f31b 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -717,6 +717,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
                         rc = ret;
                 }
         }
+
+        if ( unlikely(rc) )
+            domu_crash(p2m->domain);
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 801b629b95..537add65bb 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -724,6 +724,9 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
                 if ( !rc )
                     rc = ret;
             }
+
+            if ( unlikely(rc) )
+                domu_crash(p2m->domain);
         }
 
         return rc;
@@ -789,6 +792,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                         if ( iommu_unmap_page(d, dfn_add(dfn, i)) )
                             continue;
 
+                    domu_crash(d);
                     return rc;
                 }
             }
@@ -1157,12 +1161,17 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
 
-    if ( !paging_mode_translate(p2m->domain) )
+    if ( !paging_mode_translate(d) )
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, _dfn(gfn_l), _mfn(gfn_l),
-                              IOMMUF_readable | IOMMUF_writable);
+
+        ret = iommu_map_page(d, _dfn(gfn_l), _mfn(gfn_l),
+                             IOMMUF_readable | IOMMUF_writable);
+        if ( unlikely(ret) )
+            domu_crash(d);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1252,7 +1261,12 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, _dfn(gfn_l));
+
+        ret = iommu_unmap_page(d, _dfn(gfn_l));
+        if ( unlikely(ret) )
+            domu_crash(d);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 2d01cad176..0f0b7b1a49 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1155,6 +1155,7 @@ map_grant_ref(
         if ( err )
         {
             double_gt_unlock(lgt, rgt);
+            domu_crash(ld);
             rc = GNTST_general_error;
             goto undo_out;
         }
@@ -1406,7 +1407,10 @@ unmap_common(
         double_gt_unlock(lgt, rgt);
 
         if ( err )
+        {
+            domu_crash(ld);
             rc = GNTST_general_error;
+        }
     }
 
     /* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 89b211d6aa..85611ddae4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -841,6 +841,9 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
+
+        if ( unlikely(rc < 0) )
+            domu_crash(d);
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 15ae66a06c..a16f1a0c66 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -271,9 +271,6 @@ int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
             printk(XENLOG_ERR
                    "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
                    d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
@@ -294,9 +291,6 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
             printk(XENLOG_ERR
                    "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
                    d->domain_id, dfn_x(dfn), rc);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
@@ -334,9 +328,6 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count)
             printk(XENLOG_ERR
                    "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u\n",
                    d->domain_id, rc, dfn_x(dfn), page_count);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
@@ -357,9 +348,6 @@ int iommu_iotlb_flush_all(struct domain *d)
             printk(XENLOG_ERR
                    "d%d: IOMMU IOTLB flush all failed: %d\n",
                    d->domain_id, rc);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
     }
 
     return rc;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 21e6678f16..620bdc14c3 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -104,7 +104,11 @@ int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
+    {
         rc = iommu_iotlb_flush_all(d);
+        if ( unlikely(rc) )
+            domain_crash(d);
+    }
 
     if ( rc && rc != -ERESTART )
         iommu_teardown(d);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 0ba80cb1a8..f2c594d197 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -616,6 +616,11 @@ void __domain_crash(struct domain *d);
     __domain_crash(d);                                                    \
 } while (0)
 
+#define domu_crash(d) do {                \
+    if ( !is_hardware_domain(d) )         \
+        domain_crash(d);                  \
+} while (false)
+
 /*
  * Called from assembly code, with an optional address to help indicate why
  * the crash occured.  If addr is 0, look up address from last extable
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (3 preceding siblings ...)
  2018-09-12 11:30 ` [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
@ 2018-09-12 11:30 ` Paul Durrant
  2018-09-13  8:30   ` Roger Pau Monné
  2018-09-12 11:30 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
  5 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich

...for some uses of get_page_from_gfn().

There are many occurrences of the following pattern in the code:

    q = <readonly look-up> ? P2M_ALLOC : P2M_UNSHARE;
    page = get_page_from_gfn(d, gfn, &p2mt, q);

    if ( p2m_is_paging(p2mt) )
    {
        if ( page )
            put_page(page);

        p2m_mem_paging_populate(d, gfn);
        return <-EAGAIN or equivalent>;
    }

    if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
    {
        if ( page )
            put_page(page);

        return <-EAGAIN or equivalent>;
    }

    if ( !page )
        return <-EINVAL or equivalent>;

    if ( !p2m_is_ram(p2mt) ||
         (!<readonly look-up> && p2m_is_readonly(p2mt)) )
    {
        put_page(page);
        return <-EINVAL or equivalent>;
    }

There are some small differences in exactly how the occurrences are
coded, but the desired semantics are the same.

This patch introduces a new common implementation of this code in
check_get_page_from_gfn() and then converts the various open-coded patterns
into calls to this new function.
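
After conversion, a typical call site reduces to something like this
(a sketch modelled on the grant table hunk below):

    switch ( check_get_page_from_gfn(d, _gfn(gfn), readonly, NULL, &page) )
    {
    case 0:
        break;
    case -EAGAIN:
        return <-EAGAIN or equivalent>;
    default:
        ASSERT_UNREACHABLE();
        /* Fallthrough */
    case -EINVAL:
        return <-EINVAL or equivalent>;
    }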

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

v7:
 - Fix ARM build by introducing p2m_is_readonly() predicate.
 - Re-name get_paged_frame() -> check_get_page_from_gfn().
 - Adjust default cases of callers switch-ing on return value.

v3:
 - Addressed comments from George.

v2:
 - New in v2.
---
 xen/arch/x86/hvm/emulate.c | 34 ++++++++--------------------
 xen/arch/x86/hvm/hvm.c     | 16 ++-----------
 xen/common/grant_table.c   | 39 ++++++++++----------------------
 xen/common/memory.c        | 56 +++++++++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/p2m.h  |  8 +++++++
 xen/include/asm-x86/p2m.h  |  2 ++
 6 files changed, 79 insertions(+), 76 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index a577685dc6..0135272bf4 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -353,33 +353,17 @@ static int hvmemul_do_io_buffer(
 
 static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
 {
-    struct domain *curr_d = current->domain;
-    p2m_type_t p2mt;
-
-    *page = get_page_from_gfn(curr_d, gmfn, &p2mt, P2M_UNSHARE);
-
-    if ( *page == NULL )
-        return X86EMUL_UNHANDLEABLE;
-
-    if ( p2m_is_paging(p2mt) )
-    {
-        put_page(*page);
-        p2m_mem_paging_populate(curr_d, gmfn);
-        return X86EMUL_RETRY;
-    }
-
-    if ( p2m_is_shared(p2mt) )
+    switch ( check_get_page_from_gfn(current->domain, _gfn(gmfn), false,
+                                     NULL, page) )
     {
-        put_page(*page);
+    case 0:
+        break;
+    case -EAGAIN:
         return X86EMUL_RETRY;
-    }
-
-    /* This code should not be reached if the gmfn is not RAM */
-    if ( p2m_is_mmio(p2mt) )
-    {
-        domain_crash(curr_d);
-
-        put_page(*page);
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
         return X86EMUL_UNHANDLEABLE;
     }
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 062872cb71..695d23516a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2555,24 +2555,12 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
                                   bool_t *writable)
 {
     void *map;
-    p2m_type_t p2mt;
     struct page_info *page;
     struct domain *d = current->domain;
+    p2m_type_t p2mt;
 
-    page = get_page_from_gfn(d, gfn, &p2mt,
-                             writable ? P2M_UNSHARE : P2M_ALLOC);
-    if ( (p2m_is_shared(p2mt) && writable) || !page )
-    {
-        if ( page )
-            put_page(page);
-        return NULL;
-    }
-    if ( p2m_is_paging(p2mt) )
-    {
-        put_page(page);
-        p2m_mem_paging_populate(d, gfn);
+    if ( check_get_page_from_gfn(d, _gfn(gfn), !writable, &p2mt, &page) )
         return NULL;
-    }
 
     if ( writable )
     {
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 0f0b7b1a49..8f888056ff 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -374,39 +374,24 @@ static int get_paged_frame(unsigned long gfn, mfn_t *mfn,
                            struct page_info **page, bool readonly,
                            struct domain *rd)
 {
-    int rc = GNTST_okay;
-    p2m_type_t p2mt;
-
-    *mfn = INVALID_MFN;
-    *page = get_page_from_gfn(rd, gfn, &p2mt,
-                              readonly ? P2M_ALLOC : P2M_UNSHARE);
-    if ( !*page )
-    {
-#ifdef P2M_SHARED_TYPES
-        if ( p2m_is_shared(p2mt) )
-            return GNTST_eagain;
-#endif
-#ifdef P2M_PAGES_TYPES
-        if ( p2m_is_paging(p2mt) )
-        {
-            p2m_mem_paging_populate(rd, gfn);
-            return GNTST_eagain;
-        }
-#endif
-        return GNTST_bad_page;
-    }
+    int rc;
 
-    if ( p2m_is_foreign(p2mt) )
+    rc = check_get_page_from_gfn(rd, _gfn(gfn), readonly, NULL, page);
+    switch ( rc )
     {
-        put_page(*page);
-        *page = NULL;
-
+    case 0:
+        break;
+    case -EAGAIN:
+        return GNTST_eagain;
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
         return GNTST_bad_page;
     }
 
     *mfn = page_to_mfn(*page);
-
-    return rc;
+    return GNTST_okay;
 }
 
 static inline void
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 85611ddae4..e289c414f8 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1625,37 +1625,73 @@ void destroy_ring_for_helper(
     }
 }
 
-int prepare_ring_for_helper(
-    struct domain *d, unsigned long gmfn, struct page_info **_page,
-    void **_va)
+/*
+ * Acquire a pointer to struct page_info for a specified domain and GFN,
+ * checking whether the page has been paged out, or needs unsharing.
+ * If the function succeeds then zero is returned and page_p is written
+ * with a pointer to the struct page_info with a reference taken. The
+ * caller is responsible for dropping the reference. If p2mt_p is non-NULL
+ * then it is also written with the P2M type of the page.
+ * If the function fails then an appropriate errno is returned and the
+ * values referenced by page_p and p2mt_p are undefined.
+ */
+int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
+                            p2m_type_t *p2mt_p, struct page_info **page_p)
 {
-    struct page_info *page;
+    p2m_query_t q = readonly ? P2M_ALLOC : P2M_UNSHARE;
     p2m_type_t p2mt;
-    void *va;
+    struct page_info *page;
 
-    page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q);
 
 #ifdef CONFIG_HAS_MEM_PAGING
     if ( p2m_is_paging(p2mt) )
     {
         if ( page )
             put_page(page);
-        p2m_mem_paging_populate(d, gmfn);
-        return -ENOENT;
+
+        p2m_mem_paging_populate(d, gfn_x(gfn));
+        return -EAGAIN;
     }
 #endif
 #ifdef CONFIG_HAS_MEM_SHARING
-    if ( p2m_is_shared(p2mt) )
+    if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
     {
         if ( page )
             put_page(page);
-        return -ENOENT;
+
+        return -EAGAIN;
     }
 #endif
 
     if ( !page )
         return -EINVAL;
 
+    if ( !p2m_is_ram(p2mt) || (!readonly && p2m_is_readonly(p2mt)) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    if ( p2mt_p )
+        *p2mt_p = p2mt;
+
+    *page_p = page;
+    return 0;
+}
+
+int prepare_ring_for_helper(
+    struct domain *d, unsigned long gmfn, struct page_info **_page,
+    void **_va)
+{
+    struct page_info *page;
+    void *va;
+    int rc;
+
+    rc = check_get_page_from_gfn(d, _gfn(gmfn), false, NULL, &page);
+    if ( rc )
+        return (rc == -EAGAIN) ? -ENOENT : rc;
+
     if ( !get_page_type(page, PGT_writable_page) )
     {
         put_page(page);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8823707c17..04f9a7b86b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -137,12 +137,17 @@ typedef enum {
 #define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
                          p2m_to_mask(p2m_grant_map_ro))
 
+/* Read-only types */
+#define P2M_RO_TYPES (p2m_to_mask(p2m_ram_ro) |         \
+                      p2m_to_mask(p2m_grant_map_ro))
+
 /* Useful predicates */
 #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
 #define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
 #define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                   \
                             (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
                              p2m_to_mask(p2m_map_foreign)))
+#define p2m_is_readonly(_t) (p2m_to_mask(_t) & P2M_RO_TYPES)
 
 static inline
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
@@ -303,6 +308,9 @@ static inline struct page_info *get_page_from_gfn(
     return page;
 }
 
+int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
+                            p2m_type_t *p2mt_p, struct page_info **page_p);
+
 int get_page_type(struct page_info *page, unsigned long type);
 bool is_iomem_page(mfn_t mfn);
 static inline int get_page_and_type(struct page_info *page,
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index d4b3cfcb6e..03897b49c3 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -492,6 +492,8 @@ static inline struct page_info *get_page_from_gfn(
     return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
 }
 
+int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
+                            p2m_type_t *p2mt_p, struct page_info **page_p);
 
 /* General conversion function from mfn to gfn */
 static inline unsigned long mfn_to_gfn(struct domain *d, mfn_t mfn)
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
                   ` (4 preceding siblings ...)
  2018-09-12 11:30 ` [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
@ 2018-09-12 11:30 ` Paul Durrant
  2018-09-13  6:53   ` Tian, Kevin
  2018-09-13  8:50   ` Roger Pau Monné
  5 siblings, 2 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 11:30 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, Paul Durrant, Wei Liu, George Dunlap, Jan Beulich

This patch adds a new method to the VT-d IOMMU implementation to find the
MFN currently mapped by the specified DFN, along with a wrapper function
in generic IOMMU code that calls the implementation if it exists.
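
A caller would use the new wrapper along these lines (illustrative only;
the actual users arrive with the paravirtual IOMMU series proper):

    mfn_t mfn;
    unsigned int flags;
    int rc = iommu_lookup_page(d, dfn, &mfn, &flags);

    if ( !rc )
        printk("dfn %"PRI_dfn" maps to mfn %"PRI_mfn" (flags %#x)\n",
               dfn_x(dfn), mfn_x(mfn), flags);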

This patch also cleans up the initializers in intel_iommu_map_page() and
uses array-style dereference there, for consistency. A missing check for
shared EPT is also added to intel_iommu_unmap_page().

NOTE: This patch only adds a Xen-internal interface. This will be used by
      a subsequent patch.
      Another subsequent patch will add similar functionality for AMD
      IOMMUs.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v7:
 - Re-base and re-name BFN -> DFN.
 - Add missing checks for shared EPT and iommu_passthrough.
 - Remove unnecessary initializers and use array-style dereference.
 - Drop Wei's R-b because of code churn.

v3:
 - Addressed comments from George.

v2:
 - Addressed some comments from Jan.
---
 xen/drivers/passthrough/iommu.c     | 11 ++++++++
 xen/drivers/passthrough/vtd/iommu.c | 52 +++++++++++++++++++++++++++++++++++--
 xen/drivers/passthrough/vtd/iommu.h |  3 +++
 xen/include/xen/iommu.h             |  4 +++
 4 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index a16f1a0c66..52e3f500c7 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
     return rc;
 }
 
+int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                      unsigned int *flags)
+{
+    const struct domain_iommu *hd = dom_iommu(d);
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EOPNOTSUPP;
+
+    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
+}
+
 static void iommu_free_pagetables(unsigned long unused)
 {
     do {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 0163bb949b..6622c2dd4c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1770,7 +1770,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
+    struct dma_pte *page, *pte, old, new = {};
     u64 pg_maddr;
     int rc = 0;
 
@@ -1790,9 +1790,11 @@ static int __must_check intel_iommu_map_page(struct domain *d,
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
+
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (dfn_x(dfn) & LEVEL_MASK);
+    pte = &page[dfn_x(dfn) & LEVEL_MASK];
     old = *pte;
+
     dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
@@ -1808,6 +1810,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
         unmap_vtd_domain_page(page);
         return 0;
     }
+
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
@@ -1823,6 +1826,10 @@ static int __must_check intel_iommu_map_page(struct domain *d,
 static int __must_check intel_iommu_unmap_page(struct domain *d,
                                                dfn_t dfn)
 {
+    /* Do nothing if VT-d shares EPT page table */
+    if ( iommu_use_hap_pt(d) )
+        return 0;
+
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
@@ -1830,6 +1837,46 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, dfn_to_daddr(dfn));
 }
 
+static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                   unsigned int *flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    struct dma_pte *page, val;
+    u64 pg_maddr;
+
+    /* Fail if VT-d shares EPT page table */
+    if ( iommu_use_hap_pt(d) )
+        return -ENOENT;
+
+    /* Fail if hardware domain and iommu supports pass thru. */
+    if ( iommu_passthrough && is_hardware_domain(d) )
+        return -ENOENT;
+
+    spin_lock(&hd->arch.mapping_lock);
+
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
+    if ( pg_maddr == 0 )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return -ENOMEM;
+    }
+
+    page = map_vtd_domain_page(pg_maddr);
+    val = page[dfn_x(dfn) & LEVEL_MASK];
+
+    unmap_vtd_domain_page(page);
+    spin_unlock(&hd->arch.mapping_lock);
+
+    if ( !dma_pte_present(val) )
+        return -ENOENT;
+
+    *mfn = maddr_to_mfn(dma_pte_addr(val));
+    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
+    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
+
+    return 0;
+}
+
 int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
                     int order, int present)
 {
@@ -2655,6 +2702,7 @@ const struct iommu_ops intel_iommu_ops = {
     .teardown = iommu_domain_teardown,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
+    .lookup_page = intel_iommu_lookup_page,
     .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 72c1a2e3cd..47bdfcb5ea 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -272,6 +272,9 @@ struct dma_pte {
 #define dma_set_pte_prot(p, prot) do { \
         (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
     } while (0)
+#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
+#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
+#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
 #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 9e0b4e8638..bebddc2db4 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -100,6 +100,8 @@ void iommu_teardown(struct domain *d);
 int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
                                 mfn_t mfn, unsigned int flags);
 int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
+int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                   unsigned int *flags);
 
 enum iommu_feature
 {
@@ -190,6 +192,8 @@ struct iommu_ops {
     int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
                                  unsigned int flags);
     int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
+    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                    unsigned int *flags);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/6] iommu: introduce the concept of DFN...
  2018-09-12 11:30 ` [PATCH v7 1/6] iommu: introduce the concept of DFN Paul Durrant
@ 2018-09-12 11:49   ` Andrew Cooper
  2018-09-12 12:01     ` Paul Durrant
  2018-09-12 12:14   ` Roger Pau Monné
  1 sibling, 1 reply; 22+ messages in thread
From: Andrew Cooper @ 2018-09-12 11:49 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Julien Grall, Jan Beulich

On 12/09/18 12:30, Paul Durrant wrote:
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index e35d941f3c..19d5d55d79 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -23,11 +23,37 @@
>  #include <xen/page-defs.h>
>  #include <xen/spinlock.h>
>  #include <xen/pci.h>
> +#include <xen/typesafe.h>
>  #include <public/hvm/ioreq.h>
>  #include <public/domctl.h>
>  #include <asm/device.h>
>  #include <asm/iommu.h>
>  
> +TYPE_SAFE(uint64_t, dfn);
> +#define PRI_dfn     PRIx64
> +#define INVALID_DFN _dfn(~0UL)

Presumably the use of uint64_t as opposed to unsigned long is for ARM32's
benefit?

If so, INVALID_DFN needs to be ~0ULL to work as intended.
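
A minimal illustration of the difference, assuming a 32-bit unsigned long:

    uint64_t x = ~0UL;  /* zero-extends to 0x00000000ffffffff on arm32 */
    uint64_t y = ~0ULL; /* 0xffffffffffffffff everywhere */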

> +
> +#ifndef dfn_t
> +#define dfn_t /* Grep fodder: dfn_t, _dfn() and dfn_x() are defined above */
> +#define _dfn
> +#define dfn_x
> +#undef dfn_t
> +#undef _dfn
> +#undef dfn_x
> +#endif
> +
> +#define IOMMU_PAGE_SHIFT 12
> +#define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)

"1, L", but this shouldn't need any _AC() at all.  1 << 12 is signed and
fits within an int so should be fine on its own.

> +#define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
> +
> +typedef uint64_t daddr_t;
> +
> +#define __dfn_to_daddr(dfn) ((daddr_t)(dfn) << IOMMU_PAGE_SHIFT)
> +#define __daddr_to_dfn(daddr) ((uint64_t)(daddr >> IOMMU_PAGE_SHIFT))
> +
> +#define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
> +#define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
> +
>  extern bool_t iommu_enable, iommu_enabled;
>  extern bool_t force_iommu, iommu_dom0_strict, iommu_verbose;
>  extern bool_t iommu_workaround_bios_bug, iommu_igfx, iommu_passthrough;
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 24654e8e22..288dc77b85 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -26,6 +26,11 @@
>   *   A linear idea of a guest physical address space. For an auto-translated
>   *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
>   *
> + * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
> + *   The linear frame numbers of IOMMU address space. All initiators for (i.e.
> + *   all devices assigned to) a guest share a single IOMMU address space and,
> + *   by default, Xen will ensure dfn == pfn.

ITYM gfn here?  PV-IOMMU is what a PV guest can use to program the IOMMU
with dfn = pfn, but that isn't the default in Xen.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/6] iommu: introduce the concept of DFN...
  2018-09-12 11:49   ` Andrew Cooper
@ 2018-09-12 12:01     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-12 12:01 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Suravee Suthikulpanit,
	Julien Grall, Jan Beulich

> -----Original Message-----
> From: Andrew Cooper
> Sent: 12 September 2018 12:49
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: Kevin Tian <kevin.tian@intel.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Suravee
> Suthikulpanit <suravee.suthikulpanit@amd.com>; Julien Grall
> <julien.grall@arm.com>; Jan Beulich <jbeulich@suse.com>
> Subject: Re: [Xen-devel] [PATCH v7 1/6] iommu: introduce the concept of
> DFN...
> 
> On 12/09/18 12:30, Paul Durrant wrote:
> > diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> > index e35d941f3c..19d5d55d79 100644
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -23,11 +23,37 @@
> >  #include <xen/page-defs.h>
> >  #include <xen/spinlock.h>
> >  #include <xen/pci.h>
> > +#include <xen/typesafe.h>
> >  #include <public/hvm/ioreq.h>
> >  #include <public/domctl.h>
> >  #include <asm/device.h>
> >  #include <asm/iommu.h>
> >
> > +TYPE_SAFE(uint64_t, dfn);
> > +#define PRI_dfn     PRIx64
> > +#define INVALID_DFN _dfn(~0UL)
> 
> Presumably the use of uint64_t as opposed to unsigned long is for ARM32's
> benefit?

I've lost track as I've chopped and changed this series so many times now. That's probably the reason.

> 
> If so, INVALID_DFN needs to be ~0ULL to work as intended.
> 

Ok.

> > +
> > +#ifndef dfn_t
> > +#define dfn_t /* Grep fodder: dfn_t, _dfn() and dfn_x() are defined
> above */
> > +#define _dfn
> > +#define dfn_x
> > +#undef dfn_t
> > +#undef _dfn
> > +#undef dfn_x
> > +#endif
> > +
> > +#define IOMMU_PAGE_SHIFT 12
> > +#define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)
> 
> "1, L", but this shouldn't need any _AC() at all.  1 << 12 is signed and
> fits within an int so should be fine on its own.
> 

Ok. I'll drop it. The lack of a space was just following precedent in page.h.

> > +#define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
> > +
> > +typedef uint64_t daddr_t;
> > +
> > +#define __dfn_to_daddr(dfn) ((daddr_t)(dfn) << IOMMU_PAGE_SHIFT)
> > +#define __daddr_to_dfn(daddr) ((uint64_t)(daddr >>
> IOMMU_PAGE_SHIFT))
> > +
> > +#define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
> > +#define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
> > +
> >  extern bool_t iommu_enable, iommu_enabled;
> >  extern bool_t force_iommu, iommu_dom0_strict, iommu_verbose;
> >  extern bool_t iommu_workaround_bios_bug, iommu_igfx,
> iommu_passthrough;
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> > index 24654e8e22..288dc77b85 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -26,6 +26,11 @@
> >   *   A linear idea of a guest physical address space. For an auto-translated
> >   *   guest, pfn == gfn while for a non-translated guest, pfn != gfn.
> >   *
> > + * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h)
> > + *   The linear frame numbers of IOMMU address space. All initiators for
> (i.e.
> > + *   all devices assigned to) a guest share a single IOMMU address space
> and,
> > + *   by default, Xen will ensure dfn == pfn.
> 
> ITYM gfn here?  PV-IOMMU is what a PV guest can use to program the
> IOMMU
> with dfn = pfn, but that isn't the default in Xen.
> 

It is the default. For a PV guest Xen will ensure there is a 1:1 mfn map, and for a PV guest pfn == mfn (unless I'm misinterpreting the comment just above this one), so dfn == pfn. Similarly, for an HVM guest Xen will ensure a 1:1 gfn map, and for an HVM guest pfn == gfn so, again, dfn == pfn.
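
To spell that out (my reading of the two comments, no new semantics):

  PV guest:  pfn == mfn, and Xen maintains dfn == mfn  =>  dfn == pfn
  HVM guest: pfn == gfn, and Xen maintains dfn == gfn  =>  dfn == pfn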

  Paul

> ~Andrew


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/6] iommu: introduce the concept of DFN...
  2018-09-12 11:30 ` [PATCH v7 1/6] iommu: introduce the concept of DFN Paul Durrant
  2018-09-12 11:49   ` Andrew Cooper
@ 2018-09-12 12:14   ` Roger Pau Monné
  2018-09-13  9:36     ` Paul Durrant
  1 sibling, 1 reply; 22+ messages in thread
From: Roger Pau Monné @ 2018-09-12 12:14 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jan Beulich,
	Julien Grall, Suravee Suthikulpanit, xen-devel

On Wed, Sep 12, 2018 at 12:30:23PM +0100, Paul Durrant wrote:
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index e35d941f3c..19d5d55d79 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -23,11 +23,37 @@
>  #include <xen/page-defs.h>
>  #include <xen/spinlock.h>
>  #include <xen/pci.h>
> +#include <xen/typesafe.h>
>  #include <public/hvm/ioreq.h>
>  #include <public/domctl.h>
>  #include <asm/device.h>
>  #include <asm/iommu.h>
>  
> +TYPE_SAFE(uint64_t, dfn);

Shouldn't this be unsigned long instead of uint64_t? (like gfn or mfn)

> +#define PRI_dfn     PRIx64
> +#define INVALID_DFN _dfn(~0UL)

Or if there's a reason for the above to be uint64_t, then INVALID_DFN
should use ULL.

> +
> +#ifndef dfn_t
> +#define dfn_t /* Grep fodder: dfn_t, _dfn() and dfn_x() are defined above */
> +#define _dfn
> +#define dfn_x
> +#undef dfn_t
> +#undef _dfn
> +#undef dfn_x
> +#endif
> +
> +#define IOMMU_PAGE_SHIFT 12
> +#define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)

There's no need to use _AC?

> +#define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
> +
> +typedef uint64_t daddr_t;

Use paddr_t instead of uint64_t directly?

> +
> +#define __dfn_to_daddr(dfn) ((daddr_t)(dfn) << IOMMU_PAGE_SHIFT)
> +#define __daddr_to_dfn(daddr) ((uint64_t)(daddr >> IOMMU_PAGE_SHIFT))
> +
> +#define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
> +#define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
> +
>  extern bool_t iommu_enable, iommu_enabled;
>  extern bool_t force_iommu, iommu_dom0_strict, iommu_verbose;
>  extern bool_t iommu_workaround_bios_bug, iommu_igfx, iommu_passthrough;
> @@ -60,9 +86,9 @@ void iommu_teardown(struct domain *d);
>  #define IOMMUF_readable  (1u<<_IOMMUF_readable)
>  #define _IOMMUF_writable 1
>  #define IOMMUF_writable  (1u<<_IOMMUF_writable)
> -int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
> +int __must_check iommu_map_page(struct domain *d, unsigned long dfn,
>                                  unsigned long mfn, unsigned int flags);
> -int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
> +int __must_check iommu_unmap_page(struct domain *d, unsigned long dfn);
>  
>  enum iommu_feature
>  {
> @@ -150,9 +176,9 @@ struct iommu_ops {
>  #endif /* HAS_PCI */
>  
>      void (*teardown)(struct domain *d);
> -    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
> +    int __must_check (*map_page)(struct domain *d, unsigned long dfn,
>                                   unsigned long mfn, unsigned int flags);
> -    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
> +    int __must_check (*unmap_page)(struct domain *d, unsigned long dfn);
>      void (*free_page_table)(struct page_info *);
>  #ifdef CONFIG_X86
>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
> @@ -163,7 +189,7 @@ struct iommu_ops {
>      void (*resume)(void);
>      void (*share_p2m)(struct domain *d);
>      void (*crash_shutdown)(void);
> -    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
> +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long dfn,
>                                      unsigned int page_count);
>      int __must_check (*iotlb_flush_all)(struct domain *d);
>      int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
> @@ -185,7 +211,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
>  int iommu_do_domctl(struct xen_domctl *, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
>  
> -int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> +int __must_check iommu_iotlb_flush(struct domain *d, unsigned long dfn,

While doing this wholesale gfn to dfn replacement, it might be a good
idea to also switch these parameters from unsigned long to dfn_t at the
same time?

Thanks, Roger.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page()
  2018-09-12 11:30 ` [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
@ 2018-09-13  6:48   ` Tian, Kevin
  2018-09-13  8:22   ` Roger Pau Monné
  1 sibling, 0 replies; 22+ messages in thread
From: Tian, Kevin @ 2018-09-13  6:48 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Nakajima, Jun

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Wednesday, September 12, 2018 7:30 PM
> 
> This patch removes the implicit domain_crash() from iommu_map(),
> unmap_page() and iommu_iotlb_flush() and turns them into
> straightforward
> wrappers that check the existence of the relevant iommu_op and call
> through to it. This makes them usable by PV IOMMU code to be delivered
> in
> future patches.
> This patch adds a helper macro, domu_crash(), that will only invoke
> domain_crash() if the domain is not the hardware domain and modifies
> callers of iommu_map(), unmap_page() and iommu_iotlb_flush() to use
> this
> should an operation fail.
> 
> NOTE: This patch includes one bit of clean-up in set_identity_p2m_entry()
>       replacing use of p2m->domain with the domain pointer passed into the
>       function.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-12 11:30 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
@ 2018-09-13  6:53   ` Tian, Kevin
  2018-09-13  8:30     ` Paul Durrant
  2018-09-13  8:50   ` Roger Pau Monné
  1 sibling, 1 reply; 22+ messages in thread
From: Tian, Kevin @ 2018-09-13  6:53 UTC (permalink / raw)
  To: Paul Durrant, xen-devel; +Cc: Wei Liu, George Dunlap, Jan Beulich

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Wednesday, September 12, 2018 7:30 PM
> 
> This patch adds a new method to the VT-d IOMMU implementation to find
> the
> MFN currently mapped by the specified DFN along with a wrapper function
> in generic IOMMU code to call the implementation if it exists.
> 
> This patch also cleans up the initializers in intel_iommu_map_page() and
> uses array-style dereference there, for consistency. A missing check for
> shared EPT is also added to intel_iommu_unmap_page().

then please split into two patches.

> 
> NOTE: This patch only adds a Xen-internal interface. This will be used by
>       a subsequent patch.
>       Another subsequent patch will add similar functionality for AMD
>       IOMMUs.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> 
> v7:
>  - Re-base and re-name BFN -> DFN.
>  - Add missing checks for shared EPT and iommu_passthrough.
>  - Remove unnecessary initializers and use array-style dereference.
>  - Drop Wei's R-b because of code churn.
> 
> v3:
>  - Addressed comments from George.
> 
> v2:
>  - Addressed some comments from Jan.
> ---
>  xen/drivers/passthrough/iommu.c     | 11 ++++++++
>  xen/drivers/passthrough/vtd/iommu.c | 52
> +++++++++++++++++++++++++++++++++++--
>  xen/drivers/passthrough/vtd/iommu.h |  3 +++
>  xen/include/xen/iommu.h             |  4 +++
>  4 files changed, 68 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c
> b/xen/drivers/passthrough/iommu.c
> index a16f1a0c66..52e3f500c7 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t
> dfn)
>      return rc;
>  }
> 
> +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> +                      unsigned int *flags)
> +{
> +    const struct domain_iommu *hd = dom_iommu(d);
> +
> +    if ( !iommu_enabled || !hd->platform_ops )
> +        return -EOPNOTSUPP;
> +
> +    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
> +}
> +
>  static void iommu_free_pagetables(unsigned long unused)
>  {
>      do {
> diff --git a/xen/drivers/passthrough/vtd/iommu.c
> b/xen/drivers/passthrough/vtd/iommu.c
> index 0163bb949b..6622c2dd4c 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1770,7 +1770,7 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
>                                               unsigned int flags)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> +    struct dma_pte *page, *pte, old, new = {};
>      u64 pg_maddr;
>      int rc = 0;
> 
> @@ -1790,9 +1790,11 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
>          spin_unlock(&hd->arch.mapping_lock);
>          return -ENOMEM;
>      }
> +
>      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> -    pte = page + (dfn_x(dfn) & LEVEL_MASK);
> +    pte = &page[dfn_x(dfn) & LEVEL_MASK];
>      old = *pte;
> +
>      dma_set_pte_addr(new, mfn_to_maddr(mfn));
>      dma_set_pte_prot(new,
>                       ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> @@ -1808,6 +1810,7 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
>          unmap_vtd_domain_page(page);
>          return 0;
>      }
> +
>      *pte = new;
> 
>      iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> @@ -1823,6 +1826,10 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
>  static int __must_check intel_iommu_unmap_page(struct domain *d,
>                                                 dfn_t dfn)
>  {
> +    /* Do nothing if VT-d shares EPT page table */
> +    if ( iommu_use_hap_pt(d) )
> +        return 0;
> +
>      /* Do nothing if hardware domain and iommu supports pass thru. */
>      if ( iommu_passthrough && is_hardware_domain(d) )
>          return 0;
> @@ -1830,6 +1837,46 @@ static int __must_check
> intel_iommu_unmap_page(struct domain *d,
>      return dma_pte_clear_one(d, dfn_to_daddr(dfn));
>  }
> 
> +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t
> *mfn,
> +                                   unsigned int *flags)
> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +    struct dma_pte *page, val;
> +    u64 pg_maddr;
> +
> +    /* Fail if VT-d shares EPT page table */
> +    if ( iommu_use_hap_pt(d) )
> +        return -ENOENT;
> +
> +    /* Fail if hardware domain and iommu supports pass thru. */
> +    if ( iommu_passthrough && is_hardware_domain(d) )
> +        return -ENOENT;

why fail instead of returning dfn as mfn? passthrough is just one
special translation mode in IOMMU, which doesn't mean lookup
is not possible.

> +
> +    spin_lock(&hd->arch.mapping_lock);
> +
> +    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> +    if ( pg_maddr == 0 )
> +    {
> +        spin_unlock(&hd->arch.mapping_lock);
> +        return -ENOMEM;
> +    }
> +
> +    page = map_vtd_domain_page(pg_maddr);
> +    val = page[dfn_x(dfn) & LEVEL_MASK];
> +
> +    unmap_vtd_domain_page(page);
> +    spin_unlock(&hd->arch.mapping_lock);
> +
> +    if ( !dma_pte_present(val) )
> +        return -ENOENT;
> +
> +    *mfn = maddr_to_mfn(dma_pte_addr(val));
> +    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
> +    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
> +
> +    return 0;
> +}
> +
>  int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
>                      int order, int present)
>  {
> @@ -2655,6 +2702,7 @@ const struct iommu_ops intel_iommu_ops = {
>      .teardown = iommu_domain_teardown,
>      .map_page = intel_iommu_map_page,
>      .unmap_page = intel_iommu_unmap_page,
> +    .lookup_page = intel_iommu_lookup_page,
>      .free_page_table = iommu_free_page_table,
>      .reassign_device = reassign_device_ownership,
>      .get_device_group_id = intel_iommu_group_id,
> diff --git a/xen/drivers/passthrough/vtd/iommu.h
> b/xen/drivers/passthrough/vtd/iommu.h
> index 72c1a2e3cd..47bdfcb5ea 100644
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -272,6 +272,9 @@ struct dma_pte {
>  #define dma_set_pte_prot(p, prot) do { \
>          (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
>      } while (0)
> +#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
> +#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
> +#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
>  #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
>  #define dma_set_pte_addr(p, addr) do {\
>              (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 9e0b4e8638..bebddc2db4 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -100,6 +100,8 @@ void iommu_teardown(struct domain *d);
>  int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
>                                  mfn_t mfn, unsigned int flags);
>  int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
> +int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn,
> mfn_t *mfn,
> +                                   unsigned int *flags);
> 
>  enum iommu_feature
>  {
> @@ -190,6 +192,8 @@ struct iommu_ops {
>      int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
>                                   unsigned int flags);
>      int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
> +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t
> *mfn,
> +                                    unsigned int *flags);
>      void (*free_page_table)(struct page_info *);
>  #ifdef CONFIG_X86
>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg,
> unsigned int value);
> --
> 2.11.0



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 2/6] iommu: make use of type-safe DFN and MFN in exported functions
  2018-09-12 11:30 ` [PATCH v7 2/6] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
@ 2018-09-13  8:09   ` Roger Pau Monné
  0 siblings, 0 replies; 22+ messages in thread
From: Roger Pau Monné @ 2018-09-13  8:09 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Konrad Rzeszutek Wilk, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
	Jun Nakajima, xen-devel

On Wed, Sep 12, 2018 at 12:30:24PM +0100, Paul Durrant wrote:
> This patch modifies the declaration of the entry points to the IOMMU
> sub-system to use dfn_t and mfn_t in place of unsigned long. A subsequent
> patch will similarly modify the methods in the iommu_ops structure.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 3/6] iommu: push use of type-safe DFN and MFN into iommu_ops
  2018-09-12 11:30 ` [PATCH v7 3/6] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
@ 2018-09-13  8:12   ` Roger Pau Monné
  0 siblings, 0 replies; 22+ messages in thread
From: Roger Pau Monné @ 2018-09-13  8:12 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel, George Dunlap, Suravee Suthikulpanit, Andrew Cooper

On Wed, Sep 12, 2018 at 12:30:25PM +0100, Paul Durrant wrote:
> This patch modifies the methods in struct iommu_ops to use type-safe DFN
> and MFN. This follows on from the prior patch that modified the functions
> exported in xen/iommu.h.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page()
  2018-09-12 11:30 ` [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
  2018-09-13  6:48   ` Tian, Kevin
@ 2018-09-13  8:22   ` Roger Pau Monné
  2018-09-13  8:27     ` Paul Durrant
  1 sibling, 1 reply; 22+ messages in thread
From: Roger Pau Monné @ 2018-09-13  8:22 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Jun Nakajima, xen-devel

On Wed, Sep 12, 2018 at 12:30:26PM +0100, Paul Durrant wrote:
> This patch removes the implicit domain_crash() from iommu_map(),
> unmap_page() and iommu_iotlb_flush() and turns them into straightforward
> wrappers that check the existence of the relevant iommu_op and call
> through to it. This makes them usable by PV IOMMU code to be delivered in
> future patches.
> This patch adds a helper macro, domu_crash(), that will only invoke
> domain_crash() if the domain is not the hardware domain and modifies
> callers of iommu_map(), unmap_page() and iommu_iotlb_flush() to use this
> should an operation fail.

I assume this is because future usages of iommu_map, unmap and flush
will tolerate failure, and will be handled differently than crashing
the domain.

> NOTE: This patch includes one bit of clean-up in set_identity_p2m_entry()
>       replacing use of p2m->domain with the domain pointer passed into the
>       function.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 0ba80cb1a8..f2c594d197 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -616,6 +616,11 @@ void __domain_crash(struct domain *d);
>      __domain_crash(d);                                                    \
>  } while (0)
>  
> +#define domu_crash(d) do {                \
> +    if ( !is_hardware_domain(d) )         \
> +        domain_crash(d);                  \
> +} while (false)
            ^ missing spaces
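i.e. (a sketch of the form Xen coding style wants for the while parens):

    #define domu_crash(d) do {                \
        if ( !is_hardware_domain(d) )         \
            domain_crash(d);                  \
    } while ( false )
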
> +
>  /*
>   * Called from assembly code, with an optional address to help indicate why
>   * the crash occurred.  If addr is 0, look up address from last extable
> -- 
> 2.11.0

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page()
  2018-09-13  8:22   ` Roger Pau Monné
@ 2018-09-13  8:27     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-13  8:27 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jun Nakajima, Ian Jackson,
	xen-devel

> -----Original Message-----
> From: Roger Pau Monne
> Sent: 13 September 2018 09:22
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Kevin Tian <kevin.tian@intel.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George Dunlap
> <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim
> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; Jun Nakajima
> <jun.nakajima@intel.com>
> Subject: Re: [Xen-devel] [PATCH v7 4/6] iommu: don't domain_crash() inside
> iommu_map/unmap_page()
> 
> On Wed, Sep 12, 2018 at 12:30:26PM +0100, Paul Durrant wrote:
> > This patch removes the implicit domain_crash() from iommu_map(),
> > unmap_page() and iommu_iotlb_flush() and turns them into
> straightforward
> > wrappers that check the existence of the relevant iommu_op and call
> > through to it. This makes them usable by PV IOMMU code to be delivered
> in
> > future patches.
> > This patch adds a helper macro, domu_crash(), that will only invoke
> > domain_crash() if the domain is not the hardware domain and modifies
> > callers of iommu_map(), unmap_page() and iommu_iotlb_flush() to use
> this
> > should an operation fail.
> 
> I assume this is because future usages of iommu_map, unmap and flush
> will tolerate failure, and will be handled differently than crashing
> the domain.
> 

Correct. If they form part of the implementation of PV-IOMMU then there's no way we can allow guest-supplied arguments to directly cause a domain crash.
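
The call sites then all end up following this pattern (a sketch; the exact
sites are in the patch itself):

    rc = iommu_map_page(d, dfn, mfn, flags);
    if ( rc )
        domu_crash(d);    /* no-op for the hardware domain */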

> > NOTE: This patch includes one bit of clean-up in set_identity_p2m_entry()
> >       replacing use of p2m->domain with the domain pointer passed into the
> >       function.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 

Thanks,

  Paul

> > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> > index 0ba80cb1a8..f2c594d197 100644
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -616,6 +616,11 @@ void __domain_crash(struct domain *d);
> >      __domain_crash(d);                                                    \
> >  } while (0)
> >
> > +#define domu_crash(d) do {                \
> > +    if ( !is_hardware_domain(d) )         \
> > +        domain_crash(d);                  \
> > +} while (false)
>             ^ missing spaces
> > +
> >  /*
> >   * Called from assembly code, with an optional address to help indicate
> why
> >   * the crash occurred.  If addr is 0, look up address from last extable
> > --
> > 2.11.0
> >
> >

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-12 11:30 ` [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
@ 2018-09-13  8:30   ` Roger Pau Monné
  2018-09-13  8:35     ` Paul Durrant
  0 siblings, 1 reply; 22+ messages in thread
From: Roger Pau Monné @ 2018-09-13  8:30 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Jan Beulich, xen-devel

On Wed, Sep 12, 2018 at 12:30:27PM +0100, Paul Durrant wrote:
> ...for some uses of get_page_from_gfn().
> 
> There are many occurrences of the following pattern in the code:
> 
>     q = <readonly look-up> ? P2M_ALLOC : P2M_UNSHARE;
>     page = get_page_from_gfn(d, gfn, &p2mt, q);
> 
>     if ( p2m_is_paging(p2mt) )
>     {
>         if ( page )
>             put_page(page);
> 
>         p2m_mem_paging_populate(d, gfn);
>         return <-EAGAIN or equivalent>;
>     }
> 
>     if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
>     {
>         if ( page )
>             put_page(page);
> 
>         return <-EAGAIN or equivalent>;
>     }
> 
>     if ( !page )
>         return <-EINVAL or equivalent>;
> 
>     if ( !p2m_is_ram(p2mt) ||
>          (!<readonly look-up> && p2m_is_readonly(p2mt)) )
>     {
>         put_page(page);
>         return <-EINVAL or equivalent>;
>     }
> 
> There are some small differences between the exact way the occurrences
> are coded but the desired semantic is the same.
> 
> This patch introduces a new common implementation of this code in
> check_get_page_from_gfn() and then converts the various open-coded patterns
> into calls to this new function.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just two nits below.

> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v7:
>  - Fix ARM build by introducing p2m_is_readonly() predicate.
>  - Re-name get_paged_frame() -> check_get_page_from_gfn().
>  - Adjust default cases of callers switch-ing on return value.
> 
> v3:
>  - Addressed comments from George.
> 
> v2:
>  - New in v2.
> ---
>  xen/arch/x86/hvm/emulate.c | 34 ++++++++--------------------
>  xen/arch/x86/hvm/hvm.c     | 16 ++-----------
>  xen/common/grant_table.c   | 39 ++++++++++----------------------
>  xen/common/memory.c        | 56 +++++++++++++++++++++++++++++++++++++---------
>  xen/include/asm-arm/p2m.h  |  8 +++++++
>  xen/include/asm-x86/p2m.h  |  2 ++
>  6 files changed, 79 insertions(+), 76 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index a577685dc6..0135272bf4 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -353,33 +353,17 @@ static int hvmemul_do_io_buffer(
>  
>  static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
>  {
> -    struct domain *curr_d = current->domain;
> -    p2m_type_t p2mt;
> -
> -    *page = get_page_from_gfn(curr_d, gmfn, &p2mt, P2M_UNSHARE);
> -
> -    if ( *page == NULL )
> -        return X86EMUL_UNHANDLEABLE;
> -
> -    if ( p2m_is_paging(p2mt) )
> -    {
> -        put_page(*page);
> -        p2m_mem_paging_populate(curr_d, gmfn);
> -        return X86EMUL_RETRY;
> -    }
> -
> -    if ( p2m_is_shared(p2mt) )
> +    switch ( check_get_page_from_gfn(current->domain, _gfn(gmfn), false,
> +                                     NULL, page) )
>      {
> -        put_page(*page);
> +    case 0:
> +        break;
> +    case -EAGAIN:
>          return X86EMUL_RETRY;

I would add a newline between cases.

> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index d4b3cfcb6e..03897b49c3 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -492,6 +492,8 @@ static inline struct page_info *get_page_from_gfn(
>      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
>  }
>  
> +int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
> +                            p2m_type_t *p2mt_p, struct page_info **page_p);

Given the usage above, I would add the must_check attribute.
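
Something like the below is what I have in mind (just a sketch stitching
the pattern quoted in the commit message together with __must_check; the
actual patch may of course pick different error values):

    int __must_check check_get_page_from_gfn(struct domain *d, gfn_t gfn,
                                             bool readonly,
                                             p2m_type_t *p2mt_p,
                                             struct page_info **page_p)
    {
        p2m_query_t q = readonly ? P2M_ALLOC : P2M_UNSHARE;
        p2m_type_t p2mt;
        struct page_info *page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q);

        if ( p2m_is_paging(p2mt) )
        {
            if ( page )
                put_page(page);

            p2m_mem_paging_populate(d, gfn_x(gfn));
            return -EAGAIN;
        }

        if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
        {
            if ( page )
                put_page(page);

            return -EAGAIN;
        }

        if ( !page )
            return -EINVAL;

        if ( !p2m_is_ram(p2mt) || (!readonly && p2m_is_readonly(p2mt)) )
        {
            put_page(page);
            return -EINVAL;
        }

        if ( p2mt_p )
            *p2mt_p = p2mt;

        *page_p = page;

        return 0;
    }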

Thanks, Roger.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-13  6:53   ` Tian, Kevin
@ 2018-09-13  8:30     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-13  8:30 UTC (permalink / raw)
  To: Kevin Tian, xen-devel; +Cc: Wei Liu, George Dunlap, Jan Beulich

> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: 13 September 2018 07:53
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: Wei Liu <wei.liu2@citrix.com>; Jan Beulich <jbeulich@suse.com>; George
> Dunlap <George.Dunlap@citrix.com>
> Subject: RE: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
> 
> > From: Paul Durrant [mailto:paul.durrant@citrix.com]
> > Sent: Wednesday, September 12, 2018 7:30 PM
> >
> > This patch adds a new method to the VT-d IOMMU implementation to find
> > the
> > MFN currently mapped by the specified DFN along with a wrapper function
> > in generic IOMMU code to call the implementation if it exists.
> >
> > This patch also cleans up the initializers in intel_iommu_map_page() and
> > uses array-style dereference there, for consistency. A missing check for
> > shared EPT is also added to intel_iommu_unmap_page().
> 
> then please split into two patches.
> 

Ok.

> >
> > NOTE: This patch only adds a Xen-internal interface. This will be used by
> >       a subsequent patch.
> >       Another subsequent patch will add similar functionality for AMD
> >       IOMMUs.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> >
> > v7:
> >  - Re-base and re-name BFN -> DFN.
> >  - Add missing checks for shared EPT and iommu_passthrough.
> >  - Remove unnecessary initializers and use array-style dereference.
> >  - Drop Wei's R-b because of code churn.
> >
> > v3:
> >  - Addressed comments from George.
> >
> > v2:
> >  - Addressed some comments from Jan.
> > ---
> >  xen/drivers/passthrough/iommu.c     | 11 ++++++++
> >  xen/drivers/passthrough/vtd/iommu.c | 52
> > +++++++++++++++++++++++++++++++++++--
> >  xen/drivers/passthrough/vtd/iommu.h |  3 +++
> >  xen/include/xen/iommu.h             |  4 +++
> >  4 files changed, 68 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c
> > b/xen/drivers/passthrough/iommu.c
> > index a16f1a0c66..52e3f500c7 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t
> > dfn)
> >      return rc;
> >  }
> >
> > +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                      unsigned int *flags)
> > +{
> > +    const struct domain_iommu *hd = dom_iommu(d);
> > +
> > +    if ( !iommu_enabled || !hd->platform_ops )
> > +        return -EOPNOTSUPP;
> > +
> > +    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
> > +}
> > +
> >  static void iommu_free_pagetables(unsigned long unused)
> >  {
> >      do {
> > diff --git a/xen/drivers/passthrough/vtd/iommu.c
> > b/xen/drivers/passthrough/vtd/iommu.c
> > index 0163bb949b..6622c2dd4c 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1770,7 +1770,7 @@ static int __must_check
> > intel_iommu_map_page(struct domain *d,
> >                                               unsigned int flags)
> >  {
> >      struct domain_iommu *hd = dom_iommu(d);
> > -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> > +    struct dma_pte *page, *pte, old, new = {};
> >      u64 pg_maddr;
> >      int rc = 0;
> >
> > @@ -1790,9 +1790,11 @@ static int __must_check
> > intel_iommu_map_page(struct domain *d,
> >          spin_unlock(&hd->arch.mapping_lock);
> >          return -ENOMEM;
> >      }
> > +
> >      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> > -    pte = page + (dfn_x(dfn) & LEVEL_MASK);
> > +    pte = &page[dfn_x(dfn) & LEVEL_MASK];
> >      old = *pte;
> > +
> >      dma_set_pte_addr(new, mfn_to_maddr(mfn));
> >      dma_set_pte_prot(new,
> >                       ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> > @@ -1808,6 +1810,7 @@ static int __must_check
> > intel_iommu_map_page(struct domain *d,
> >          unmap_vtd_domain_page(page);
> >          return 0;
> >      }
> > +
> >      *pte = new;
> >
> >      iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> > @@ -1823,6 +1826,10 @@ static int __must_check
> > intel_iommu_map_page(struct domain *d,
> >  static int __must_check intel_iommu_unmap_page(struct domain *d,
> >                                                 dfn_t dfn)
> >  {
> > +    /* Do nothing if VT-d shares EPT page table */
> > +    if ( iommu_use_hap_pt(d) )
> > +        return 0;
> > +
> >      /* Do nothing if hardware domain and iommu supports pass thru. */
> >      if ( iommu_passthrough && is_hardware_domain(d) )
> >          return 0;
> > @@ -1830,6 +1837,46 @@ static int __must_check
> > intel_iommu_unmap_page(struct domain *d,
> >      return dma_pte_clear_one(d, dfn_to_daddr(dfn));
> >  }
> >
> > +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t
> > *mfn,
> > +                                   unsigned int *flags)
> > +{
> > +    struct domain_iommu *hd = dom_iommu(d);
> > +    struct dma_pte *page, val;
> > +    u64 pg_maddr;
> > +
> > +    /* Fail if VT-d shares EPT page table */
> > +    if ( iommu_use_hap_pt(d) )
> > +        return -ENOENT;
> > +
> > +    /* Fail if hardware domain and iommu supports pass thru. */
> > +    if ( iommu_passthrough && is_hardware_domain(d) )
> > +        return -ENOENT;
> 
> why fail instead of returning dfn as mfn? passthrough is just one
> special translation mode in IOMMU, which doesn't mean lookup
> is not possible.
> 

Hmm. Given that map and unmap don't return errors for the passthrough case, maybe that is best. Will do.
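
i.e. something like (a sketch of what I have in mind for v8):

    /* Passthrough is an identity translation, not a failure. */
    if ( iommu_passthrough && is_hardware_domain(d) )
    {
        *mfn = _mfn(dfn_x(dfn));
        *flags = IOMMUF_readable | IOMMUF_writable;
        return 0;
    }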

  Paul

> > +
> > +    spin_lock(&hd->arch.mapping_lock);
> > +
> > +    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> > +    if ( pg_maddr == 0 )
> > +    {
> > +        spin_unlock(&hd->arch.mapping_lock);
> > +        return -ENOMEM;
> > +    }
> > +
> > +    page = map_vtd_domain_page(pg_maddr);
> > +    val = page[dfn_x(dfn) & LEVEL_MASK];
> > +
> > +    unmap_vtd_domain_page(page);
> > +    spin_unlock(&hd->arch.mapping_lock);
> > +
> > +    if ( !dma_pte_present(val) )
> > +        return -ENOENT;
> > +
> > +    *mfn = maddr_to_mfn(dma_pte_addr(val));
> > +    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
> > +    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
> > +
> > +    return 0;
> > +}
> > +
> >  int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
> >                      int order, int present)
> >  {
> > @@ -2655,6 +2702,7 @@ const struct iommu_ops intel_iommu_ops = {
> >      .teardown = iommu_domain_teardown,
> >      .map_page = intel_iommu_map_page,
> >      .unmap_page = intel_iommu_unmap_page,
> > +    .lookup_page = intel_iommu_lookup_page,
> >      .free_page_table = iommu_free_page_table,
> >      .reassign_device = reassign_device_ownership,
> >      .get_device_group_id = intel_iommu_group_id,
> > diff --git a/xen/drivers/passthrough/vtd/iommu.h
> > b/xen/drivers/passthrough/vtd/iommu.h
> > index 72c1a2e3cd..47bdfcb5ea 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.h
> > +++ b/xen/drivers/passthrough/vtd/iommu.h
> > @@ -272,6 +272,9 @@ struct dma_pte {
> >  #define dma_set_pte_prot(p, prot) do { \
> >          (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
> >      } while (0)
> > +#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
> > +#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
> > +#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
> >  #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
> >  #define dma_set_pte_addr(p, addr) do {\
> >              (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
> > diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> > index 9e0b4e8638..bebddc2db4 100644
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -100,6 +100,8 @@ void iommu_teardown(struct domain *d);
> >  int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
> >                                  mfn_t mfn, unsigned int flags);
> >  int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
> > +int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn,
> > mfn_t *mfn,
> > +                                   unsigned int *flags);
> >
> >  enum iommu_feature
> >  {
> > @@ -190,6 +192,8 @@ struct iommu_ops {
> >      int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t
> mfn,
> >                                   unsigned int flags);
> >      int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
> > +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t
> > *mfn,
> > +                                    unsigned int *flags);
> >      void (*free_page_table)(struct page_info *);
> >  #ifdef CONFIG_X86
> >      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg,
> > unsigned int value);
> > --
> > 2.11.0



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper...
  2018-09-13  8:30   ` Roger Pau Monné
@ 2018-09-13  8:35     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-13  8:35 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jan Beulich, Ian Jackson, xen-devel

> -----Original Message-----
> From: Roger Pau Monne
> Sent: 13 September 2018 09:30
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Konrad Rzeszutek
> Wilk <konrad.wilk@oracle.com>; George Dunlap
> <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim
> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>; Jan Beulich
> <jbeulich@suse.com>
> Subject: Re: [Xen-devel] [PATCH v7 5/6] memory: add
> check_get_page_from_gfn() as a wrapper...
> 
> On Wed, Sep 12, 2018 at 12:30:27PM +0100, Paul Durrant wrote:
> > ...for some uses of get_page_from_gfn().
> >
> > There are many occurrences of the following pattern in the code:
> >
> >     q = <readonly look-up> ? P2M_ALLOC : P2M_UNSHARE;
> >     page = get_page_from_gfn(d, gfn, &p2mt, q);
> >
> >     if ( p2m_is_paging(p2mt) )
> >     {
> >         if ( page )
> >             put_page(page);
> >
> >         p2m_mem_paging_populate(d, gfn);
> >         return <-EAGAIN or equivalent>;
> >     }
> >
> >     if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
> >     {
> >         if ( page )
> >             put_page(page);
> >
> >         return <-EAGAIN or equivalent>;
> >     }
> >
> >     if ( !page )
> >         return <-EINVAL or equivalent>;
> >
> >     if ( !p2m_is_ram(p2mt) ||
> >          (!<readonly look-up> && p2m_is_readonly(p2mt)) )
> >     {
> >         put_page(page);
> >         return <-EINVAL or equivalent>;
> >     }
> >
> > There are some small differences between the exact way the occurrences
> > are coded but the desired semantic is the same.
> >
> > This patch introduces a new common implementation of this code in
> > check_get_page_from_gfn() and then converts the various open-coded
> patterns
> > into calls to this new function.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 

Thanks.

> Just two nits below.
> 
> > ---
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Julien Grall <julien.grall@arm.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Tim Deegan <tim@xen.org>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> >
> > v7:
> >  - Fix ARM build by introducing p2m_is_readonly() predicate.
> >  - Re-name get_paged_frame() -> check_get_page_from_gfn().
> >  - Adjust default cases of callers switch-ing on return value.
> >
> > v3:
> >  - Addressed comments from George.
> >
> > v2:
> >  - New in v2.
> > ---
> >  xen/arch/x86/hvm/emulate.c | 34 ++++++++--------------------
> >  xen/arch/x86/hvm/hvm.c     | 16 ++-----------
> >  xen/common/grant_table.c   | 39 ++++++++++----------------------
> >  xen/common/memory.c        | 56
> +++++++++++++++++++++++++++++++++++++---------
> >  xen/include/asm-arm/p2m.h  |  8 +++++++
> >  xen/include/asm-x86/p2m.h  |  2 ++
> >  6 files changed, 79 insertions(+), 76 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> > index a577685dc6..0135272bf4 100644
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -353,33 +353,17 @@ static int hvmemul_do_io_buffer(
> >
> >  static int hvmemul_acquire_page(unsigned long gmfn, struct page_info
> **page)
> >  {
> > -    struct domain *curr_d = current->domain;
> > -    p2m_type_t p2mt;
> > -
> > -    *page = get_page_from_gfn(curr_d, gmfn, &p2mt, P2M_UNSHARE);
> > -
> > -    if ( *page == NULL )
> > -        return X86EMUL_UNHANDLEABLE;
> > -
> > -    if ( p2m_is_paging(p2mt) )
> > -    {
> > -        put_page(*page);
> > -        p2m_mem_paging_populate(curr_d, gmfn);
> > -        return X86EMUL_RETRY;
> > -    }
> > -
> > -    if ( p2m_is_shared(p2mt) )
> > +    switch ( check_get_page_from_gfn(current->domain, _gfn(gmfn),
> false,
> > +                                     NULL, page) )
> >      {
> > -        put_page(*page);
> > +    case 0:
> > +        break;
> > +    case -EAGAIN:
> >          return X86EMUL_RETRY;
> 
> I would add a newline between cases.

Ok.
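
I.e. (a sketch; the remaining cases are cut off in the quote above, and I'm
assuming the default continues to map to X86EMUL_UNHANDLEABLE as before):

    switch ( check_get_page_from_gfn(current->domain, _gfn(gmfn), false,
                                     NULL, page) )
    {
    case 0:
        break;

    case -EAGAIN:
        return X86EMUL_RETRY;

    default:
        return X86EMUL_UNHANDLEABLE;
    }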

> 
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index d4b3cfcb6e..03897b49c3 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -492,6 +492,8 @@ static inline struct page_info *get_page_from_gfn(
> >      return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
> >  }
> >
> > +int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool
> readonly,
> > +                            p2m_type_t *p2mt_p, struct page_info **page_p);
> 
> Given the usage above, I would add the must_check attribute.
> 

Yes, that sounds like a good thing to do.

  Paul

> Thanks, Roger.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-12 11:30 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
  2018-09-13  6:53   ` Tian, Kevin
@ 2018-09-13  8:50   ` Roger Pau Monné
  2018-09-13  8:59     ` Paul Durrant
  1 sibling, 1 reply; 22+ messages in thread
From: Roger Pau Monné @ 2018-09-13  8:50 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Kevin Tian, Wei Liu, George Dunlap, Jan Beulich

On Wed, Sep 12, 2018 at 12:30:28PM +0100, Paul Durrant wrote:
> This patch adds a new method to the VT-d IOMMU implementation to find the
> MFN currently mapped by the specified DFN along with a wrapper function
> in generic IOMMU code to call the implementation if it exists.
> 
> This patch also cleans up the initializers in intel_iommu_map_page() and
> uses array-style dereference there, for consistency. A missing check for
> shared EPT is also added to intel_iommu_unmap_page().
> 
> NOTE: This patch only adds a Xen-internal interface. This will be used by
>       a subsequent patch.
>       Another subsequent patch will add similar functionality for AMD
>       IOMMUs.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> 
> v7:
>  - Re-base and re-name BFN -> DFN.
>  - Add missing checks for shared EPT and iommu_passthrough.
>  - Remove unnecessary initializers and use array-style dereference.
>  - Drop Wei's R-b because of code churn.
> 
> v3:
>  - Addressed comments from George.
> 
> v2:
>  - Addressed some comments from Jan.
> ---
>  xen/drivers/passthrough/iommu.c     | 11 ++++++++
>  xen/drivers/passthrough/vtd/iommu.c | 52 +++++++++++++++++++++++++++++++++++--
>  xen/drivers/passthrough/vtd/iommu.h |  3 +++
>  xen/include/xen/iommu.h             |  4 +++
>  4 files changed, 68 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index a16f1a0c66..52e3f500c7 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
>      return rc;
>  }
>  
> +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> +                      unsigned int *flags)
> +{
> +    const struct domain_iommu *hd = dom_iommu(d);
> +
> +    if ( !iommu_enabled || !hd->platform_ops )
> +        return -EOPNOTSUPP;
> +
> +    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
> +}
> +
>  static void iommu_free_pagetables(unsigned long unused)
>  {
>      do {
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 0163bb949b..6622c2dd4c 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1770,7 +1770,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>                                               unsigned int flags)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> +    struct dma_pte *page, *pte, old, new = {};
>      u64 pg_maddr;
>      int rc = 0;
>  
> @@ -1790,9 +1790,11 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>          spin_unlock(&hd->arch.mapping_lock);
>          return -ENOMEM;
>      }
> +
>      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> -    pte = page + (dfn_x(dfn) & LEVEL_MASK);
> +    pte = &page[dfn_x(dfn) & LEVEL_MASK];
>      old = *pte;
> +
>      dma_set_pte_addr(new, mfn_to_maddr(mfn));
>      dma_set_pte_prot(new,
>                       ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> @@ -1808,6 +1810,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>          unmap_vtd_domain_page(page);
>          return 0;
>      }
> +
>      *pte = new;
>  
>      iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> @@ -1823,6 +1826,10 @@ static int __must_check intel_iommu_map_page(struct domain *d,
>  static int __must_check intel_iommu_unmap_page(struct domain *d,
>                                                 dfn_t dfn)
>  {
> +    /* Do nothing if VT-d shares EPT page table */
> +    if ( iommu_use_hap_pt(d) )
> +        return 0;
> +
>      /* Do nothing if hardware domain and iommu supports pass thru. */
>      if ( iommu_passthrough && is_hardware_domain(d) )
>          return 0;
> @@ -1830,6 +1837,46 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
>      return dma_pte_clear_one(d, dfn_to_daddr(dfn));
>  }
>  
> +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> +                                   unsigned int *flags)

Would be nice to constify domain here, but I see that's not possible
because of the lock.

> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +    struct dma_pte *page, val;
> +    u64 pg_maddr;

Use uint64_t.

> +
> +    /* Fail if VT-d shares EPT page table */
> +    if ( iommu_use_hap_pt(d) )
> +        return -ENOENT;
> +
> +    /* Fail if hardware domain and iommu supports pass thru. */
> +    if ( iommu_passthrough && is_hardware_domain(d) )
> +        return -ENOENT;

I would join the two if conditions above, and consider returning
something different from ENOENT, since that's also used to signal that
iommu page tables are present but the entry is not present. Maybe
EOPNOTSUPP or ENOSYS?

I also dislike that other iommu functions simply return 0 without
doing anything when the page tables are shared.
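
E.g. (a sketch):

    /* No separate IOMMU page tables to look up when sharing with EPT
     * or when the hardware domain runs in passthrough mode. */
    if ( iommu_use_hap_pt(d) ||
         (iommu_passthrough && is_hardware_domain(d)) )
        return -EOPNOTSUPP;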

> +
> +    spin_lock(&hd->arch.mapping_lock);
> +
> +    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> +    if ( pg_maddr == 0 )

It's more common to use !pg_maddr.

Thanks, Roger.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
  2018-09-13  8:50   ` Roger Pau Monné
@ 2018-09-13  8:59     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-13  8:59 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: xen-devel, Kevin Tian, Wei Liu, George Dunlap, Jan Beulich

> -----Original Message-----
> From: Roger Pau Monne
> Sent: 13 September 2018 09:50
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Kevin Tian <kevin.tian@intel.com>; Wei
> Liu <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Jan
> Beulich <jbeulich@suse.com>
> Subject: Re: [Xen-devel] [PATCH v7 6/6] vtd: add lookup_page method to
> iommu_ops
> 
> On Wed, Sep 12, 2018 at 12:30:28PM +0100, Paul Durrant wrote:
> > This patch adds a new method to the VT-d IOMMU implementation to find
> the
> > MFN currently mapped by the specified DFN along with a wrapper function
> > in generic IOMMU code to call the implementation if it exists.
> >
> > This patch also cleans up the initializers in intel_iommu_map_page() and
> > uses array-style dereference there, for consistency. A missing check for
> > shared EPT is also added to intel_iommu_unmap_page().
> >
> > NOTE: This patch only adds a Xen-internal interface. This will be used by
> >       a subsequent patch.
> >       Another subsequent patch will add similar functionality for AMD
> >       IOMMUs.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> >
> > v7:
> >  - Re-base and re-name BFN -> DFN.
> >  - Add missing checks for shared EPT and iommu_passthrough.
> >  - Remove unnecessary initializers and use array-style dereference.
> >  - Drop Wei's R-b because of code churn.
> >
> > v3:
> >  - Addressed comments from George.
> >
> > v2:
> >  - Addressed some comments from Jan.
> > ---
> >  xen/drivers/passthrough/iommu.c     | 11 ++++++++
> >  xen/drivers/passthrough/vtd/iommu.c | 52
> +++++++++++++++++++++++++++++++++++--
> >  xen/drivers/passthrough/vtd/iommu.h |  3 +++
> >  xen/include/xen/iommu.h             |  4 +++
> >  4 files changed, 68 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c
> b/xen/drivers/passthrough/iommu.c
> > index a16f1a0c66..52e3f500c7 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t
> dfn)
> >      return rc;
> >  }
> >
> > +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                      unsigned int *flags)
> > +{
> > +    const struct domain_iommu *hd = dom_iommu(d);
> > +
> > +    if ( !iommu_enabled || !hd->platform_ops )
> > +        return -EOPNOTSUPP;
> > +
> > +    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
> > +}
> > +
> >  static void iommu_free_pagetables(unsigned long unused)
> >  {
> >      do {
> > diff --git a/xen/drivers/passthrough/vtd/iommu.c
> b/xen/drivers/passthrough/vtd/iommu.c
> > index 0163bb949b..6622c2dd4c 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1770,7 +1770,7 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
> >                                               unsigned int flags)
> >  {
> >      struct domain_iommu *hd = dom_iommu(d);
> > -    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> > +    struct dma_pte *page, *pte, old, new = {};
> >      u64 pg_maddr;
> >      int rc = 0;
> >
> > @@ -1790,9 +1790,11 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
> >          spin_unlock(&hd->arch.mapping_lock);
> >          return -ENOMEM;
> >      }
> > +
> >      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> > -    pte = page + (dfn_x(dfn) & LEVEL_MASK);
> > +    pte = &page[dfn_x(dfn) & LEVEL_MASK];
> >      old = *pte;
> > +
> >      dma_set_pte_addr(new, mfn_to_maddr(mfn));
> >      dma_set_pte_prot(new,
> >                       ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> > @@ -1808,6 +1810,7 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
> >          unmap_vtd_domain_page(page);
> >          return 0;
> >      }
> > +
> >      *pte = new;
> >
> >      iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> > @@ -1823,6 +1826,10 @@ static int __must_check
> intel_iommu_map_page(struct domain *d,
> >  static int __must_check intel_iommu_unmap_page(struct domain *d,
> >                                                 dfn_t dfn)
> >  {
> > +    /* Do nothing if VT-d shares EPT page table */
> > +    if ( iommu_use_hap_pt(d) )
> > +        return 0;
> > +
> >      /* Do nothing if hardware domain and iommu supports pass thru. */
> >      if ( iommu_passthrough && is_hardware_domain(d) )
> >          return 0;
> > @@ -1830,6 +1837,46 @@ static int __must_check
> intel_iommu_unmap_page(struct domain *d,
> >      return dma_pte_clear_one(d, dfn_to_daddr(dfn));
> >  }
> >
> > +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t
> *mfn,
> > +                                   unsigned int *flags)
> 
> Would be nice to constify domain here, but I see that's not possible
> because of the lock.
> 
> > +{
> > +    struct domain_iommu *hd = dom_iommu(d);
> > +    struct dma_pte *page, val;
> > +    u64 pg_maddr;
> 
> Use uint64_t.
> 
> > +
> > +    /* Fail if VT-d shares EPT page table */
> > +    if ( iommu_use_hap_pt(d) )
> > +        return -ENOENT;
> > +
> > +    /* Fail if hardware domain and iommu supports pass thru. */
> > +    if ( iommu_passthrough && is_hardware_domain(d) )
> > +        return -ENOENT;
> 
> I would join the two if conditions above, and consider returning
> something different from ENOENT, since ENOENT is also used to signal
> that the IOMMU page tables exist but the entry in question is not
> present. Maybe EOPNOTSUPP or ENOSYS?
> 
> I also dislike that other iommu functions simply return 0 without
> doing anything when the page tables are shared.
> 

That's Kevin's preference, though, and he is the subsystem maintainer.
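
For reference, the joined check would presumably end up looking something
like this (a sketch only, keeping -ENOENT per Kevin's preference):

    /* Neither case has separate IOMMU page tables to look up */
    if ( iommu_use_hap_pt(d) ||
         (iommu_passthrough && is_hardware_domain(d)) )
        return -ENOENT;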

> > +
> > +    spin_lock(&hd->arch.mapping_lock);
> > +
> > +    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> > +    if ( pg_maddr == 0 )
> 
> It's more common to use !pg_maddr.
> 

Ok. I'll make some style adjustments, but I generally try to follow the prevailing style of the source module.
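
For reference, that test will presumably just become:

    if ( !pg_maddr )

with everything else unchanged.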

  Paul

> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v7 1/6] iommu: introduce the concept of DFN...
  2018-09-12 12:14   ` Roger Pau Monné
@ 2018-09-13  9:36     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2018-09-13  9:36 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jan Beulich,
	Julien Grall, Suravee Suthikulpanit, xen-devel

> -----Original Message-----
> From: Roger Pau Monne
> Sent: 12 September 2018 13:15
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Kevin Tian <kevin.tian@intel.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>;
> Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>; Julien Grall
> <julien.grall@arm.com>; Jan Beulich <jbeulich@suse.com>
> Subject: Re: [Xen-devel] [PATCH v7 1/6] iommu: introduce the concept of DFN...
> 
> On Wed, Sep 12, 2018 at 12:30:23PM +0100, Paul Durrant wrote:
> > diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> > index e35d941f3c..19d5d55d79 100644
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -23,11 +23,37 @@
> >  #include <xen/page-defs.h>
> >  #include <xen/spinlock.h>
> >  #include <xen/pci.h>
> > +#include <xen/typesafe.h>
> >  #include <public/hvm/ioreq.h>
> >  #include <public/domctl.h>
> >  #include <asm/device.h>
> >  #include <asm/iommu.h>
> >
> > +TYPE_SAFE(uint64_t, dfn);
> 
> Shouldn't this be unsigned long instead of uint64_t? (like gfn or mfn)
> 

Also, there is no reason why a 32-bit domain can't use h/w capable of 64-bit DMA, so the type needs the full 64 bits rather than unsigned long.
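
For anyone not familiar with the typesafe machinery: in debug builds
TYPE_SAFE(uint64_t, dfn) expands to roughly the following (release builds
collapse it to a plain typedef, so there is no runtime cost):

    typedef struct { uint64_t dfn; } dfn_t;
    static inline dfn_t _dfn(uint64_t n) { return (dfn_t) { n }; }
    static inline uint64_t dfn_x(dfn_t n) { return n.dfn; }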

> > +#define PRI_dfn     PRIx64
> > +#define INVALID_DFN _dfn(~0UL)
> 
> Or if there's a reason for the above to be uint64_t, then INVALID_DFN
> should use ULL.
> 

Yes, I missed that.
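
i.e., assuming the type stays uint64_t, this will presumably become:

    #define INVALID_DFN _dfn(~0ULL)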

> > +
> > +#ifndef dfn_t
> > +#define dfn_t /* Grep fodder: dfn_t, _dfn() and dfn_x() are defined above */
> > +#define _dfn
> > +#define dfn_x
> > +#undef dfn_t
> > +#undef _dfn
> > +#undef dfn_x
> > +#endif
> > +
> > +#define IOMMU_PAGE_SHIFT 12
> > +#define IOMMU_PAGE_SIZE  (_AC(1,L) << IOMMU_PAGE_SHIFT)
> 
> There's no need to use _AC?
> 

Indeed.
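
_AC() is only needed in headers that are also consumed by assembly code,
which isn't the case here, so presumably this can simply be:

    #define IOMMU_PAGE_SIZE  (1L << IOMMU_PAGE_SHIFT)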

> > +#define IOMMU_PAGE_MASK  (~(IOMMU_PAGE_SIZE - 1))
> > +
> > +typedef uint64_t daddr_t;
> 
> Use paddr_t instead of uint64_t directly?
> 

No, I don't think paddr_t is the correct thing to use: a daddr is an address in the device's DMA address space rather than a host physical address, even if the two types happen to be the same width.

> > +
> > +#define __dfn_to_daddr(dfn) ((daddr_t)(dfn) << IOMMU_PAGE_SHIFT)
> > +#define __daddr_to_dfn(daddr) ((uint64_t)(daddr >> IOMMU_PAGE_SHIFT))
> > +
> > +#define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
> > +#define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
> > +
> >  extern bool_t iommu_enable, iommu_enabled;
> >  extern bool_t force_iommu, iommu_dom0_strict, iommu_verbose;
> >  extern bool_t iommu_workaround_bios_bug, iommu_igfx, iommu_passthrough;
> > @@ -60,9 +86,9 @@ void iommu_teardown(struct domain *d);
> >  #define IOMMUF_readable  (1u<<_IOMMUF_readable)
> >  #define _IOMMUF_writable 1
> >  #define IOMMUF_writable  (1u<<_IOMMUF_writable)
> > -int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
> > +int __must_check iommu_map_page(struct domain *d, unsigned long dfn,
> >                                  unsigned long mfn, unsigned int flags);
> > -int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
> > +int __must_check iommu_unmap_page(struct domain *d, unsigned long dfn);
> >
> >  enum iommu_feature
> >  {
> > @@ -150,9 +176,9 @@ struct iommu_ops {
> >  #endif /* HAS_PCI */
> >
> >      void (*teardown)(struct domain *d);
> > -    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
> > +    int __must_check (*map_page)(struct domain *d, unsigned long dfn,
> >                                   unsigned long mfn, unsigned int flags);
> > -    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
> > +    int __must_check (*unmap_page)(struct domain *d, unsigned long dfn);
> >      void (*free_page_table)(struct page_info *);
> >  #ifdef CONFIG_X86
> >      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
> > @@ -163,7 +189,7 @@ struct iommu_ops {
> >      void (*resume)(void);
> >      void (*share_p2m)(struct domain *d);
> >      void (*crash_shutdown)(void);
> > -    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long dfn,
> >                                      unsigned int page_count);
> >      int __must_check (*iotlb_flush_all)(struct domain *d);
> >      int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
> > @@ -185,7 +211,7 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
> >  int iommu_do_domctl(struct xen_domctl *, struct domain *d,
> >                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
> >
> > -int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> > +int __must_check iommu_iotlb_flush(struct domain *d, unsigned long dfn,
> 
> While doing this wholesale gfn to dfn replacement, it might be a good
> idea to also change these from unsigned long to dfn_t at the same time?
> 

Not a lot of point. I'm going to replace these occurrences with type-safe dfn_t in a later patch.
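
For reference, once that later patch is in, these will presumably read
along the lines of:

    int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                       unsigned int page_count);
    int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
                                    mfn_t mfn, unsigned int flags);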

  Paul

> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2018-09-13  9:36 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-12 11:30 [PATCH v7 0/6] paravirtual IOMMU pre-requisites and clean-up Paul Durrant
2018-09-12 11:30 ` [PATCH v7 1/6] iommu: introduce the concept of DFN Paul Durrant
2018-09-12 11:49   ` Andrew Cooper
2018-09-12 12:01     ` Paul Durrant
2018-09-12 12:14   ` Roger Pau Monné
2018-09-13  9:36     ` Paul Durrant
2018-09-12 11:30 ` [PATCH v7 2/6] iommu: make use of type-safe DFN and MFN in exported functions Paul Durrant
2018-09-13  8:09   ` Roger Pau Monné
2018-09-12 11:30 ` [PATCH v7 3/6] iommu: push use of type-safe DFN and MFN into iommu_ops Paul Durrant
2018-09-13  8:12   ` Roger Pau Monné
2018-09-12 11:30 ` [PATCH v7 4/6] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
2018-09-13  6:48   ` Tian, Kevin
2018-09-13  8:22   ` Roger Pau Monné
2018-09-13  8:27     ` Paul Durrant
2018-09-12 11:30 ` [PATCH v7 5/6] memory: add check_get_page_from_gfn() as a wrapper Paul Durrant
2018-09-13  8:30   ` Roger Pau Monné
2018-09-13  8:35     ` Paul Durrant
2018-09-12 11:30 ` [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops Paul Durrant
2018-09-13  6:53   ` Tian, Kevin
2018-09-13  8:30     ` Paul Durrant
2018-09-13  8:50   ` Roger Pau Monné
2018-09-13  8:59     ` Paul Durrant
