* [PATCH 0/7] amd-iommu: cleanup and add lookup_page method
@ 2018-09-20 14:11 Paul Durrant
  2018-09-20 14:11 ` [PATCH 1/7] amd-iommu: don't domain_crash() inside map/unmap_page() Paul Durrant
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
	Suravee Suthikulpanit, Brian Woods

The aim of this series is to add a lookup_page method for the AMD IOMMU,
analogous to the one recently added for VT-d. That is done by the last
patch. The first five patches are cleanup of the AMD IOMMU code, and
patch #6 modifies the VT-d code to fix a problem with the lookup_page
failure semantics.

Paul Durrant (7):
  amd-iommu: don't domain_crash() inside map/unmap_page()
  amd-iommu: re-name u8/16/32/64 to uint8/16/32/64_t
  amd-iommu: convert all bool_t to bool
  amd-iommu: reduce code duplication...
  amd-iommu: introduce new get/set_iommu_pde_info() functions...
  vtd: change lookup_page failure semantics
  amd-iommu: add lookup_page method to iommu_ops

 xen/drivers/passthrough/amd/iommu_map.c       | 551 +++++++++++++++-----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |   1 +
 xen/drivers/passthrough/vtd/iommu.c           |   5 +-
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   2 +
 4 files changed, 317 insertions(+), 242 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Brian Woods <brian.woods@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Wei Liu <wei.liu2@citrix.com>
-- 
2.11.0


* [PATCH 1/7] amd-iommu: don't domain_crash() inside map/unmap_page()
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-20 14:11 ` [PATCH 2/7] amd-iommu: re-name u8/16/32/64 to uint8/16/32/64_t Paul Durrant
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant, Brian Woods, Suravee Suthikulpanit

Commit c210cafb "iommu: don't domain_crash() inside iommu_map/unmap_page()"
removed the implicit domain_crash() from the iommu_ops wrapper functions.
This patch does the same thing in the AMD IOMMU implementation, leaving it
to callers to decide how to handle failures. This is a necessary
prerequisite for the implementation of PV-IOMMU.
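
As a hedged illustration of the resulting contract (this fragment is not
part of the patch), a caller of the iommu_map_page() wrapper that
previously relied on the implicit crash would now do something like:

    rc = iommu_map_page(d, dfn, mfn, IOMMUF_readable | IOMMUF_writable);
    if ( rc )
    {
        /*
         * Neither the wrapper nor the AMD implementation crashes the
         * domain any more, so a caller that cannot recover must do so
         * explicitly.
         */
        if ( !is_hardware_domain(d) )
            domain_crash(d);
    }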

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Brian Woods <brian.woods@amd.com>
---
 xen/drivers/passthrough/amd/iommu_map.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index c89c54fdb6..8a10412a07 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -653,7 +653,6 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Root table alloc failed, dfn = %"PRI_dfn"\n",
                         dfn_x(dfn));
-        domain_crash(d);
         return rc;
     }
 
@@ -666,7 +665,6 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
             spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
                             dfn_x(dfn));
-            domain_crash(d);
             return -EFAULT;
         }
     }
@@ -676,7 +674,6 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
                         dfn_x(dfn));
-        domain_crash(d);
         return -EFAULT;
     }
 
@@ -711,7 +708,6 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
                             "dfn = %"PRI_dfn" mfn = %"PRI_mfn"\n",
                             merge_level, dfn_x(dfn), mfn_x(mfn));
-            domain_crash(d);
             return -EFAULT;
         }
 
@@ -753,8 +749,6 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
             spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
                             dfn_x(dfn));
-            if ( rc != -EADDRNOTAVAIL )
-                domain_crash(d);
             return rc;
         }
     }
@@ -764,7 +758,6 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
                         dfn_x(dfn));
-        domain_crash(d);
         return -EFAULT;
     }
 
-- 
2.11.0


* [PATCH 2/7] amd-iommu: re-name u8/16/32/64 to uint8/16/32/64_t
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
  2018-09-20 14:11 ` [PATCH 1/7] amd-iommu: don't domain_crash() inside map/unmap_page() Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-20 14:11 ` [PATCH 3/7] amd-iommu: convert all bool_t to bool Paul Durrant
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant, Brian Woods, Suravee Suthikulpanit

This patch is largely cosmetic. The only non-cosmetic changes are to
redefine the local pde variable as a uint32_t pointer (rather than a
uint64_t pointer) in iommu_merge_pages() and iommu_pde_from_dfn(), which
allows the removal of a rather excessive amount of casting.
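
For illustration, the casting being removed follows the pattern below (a
minimal sketch using names from the patch; the surrounding variables are
assumed to be in scope):

    uint64_t *table = map_domain_page(_mfn(pt_mfn));

    /* Before: pde declared as uint64_t *, cast at every call site. */
    uint64_t *pde_old = table + pfn_to_pde_idx(dfn, level);
    next_table_maddr = amd_iommu_get_next_table_from_pte((u32 *)pde_old);

    /* After: a single cast where pde is set up, none thereafter. */
    uint32_t *pde = (uint32_t *)(table + pfn_to_pde_idx(dfn, level));
    next_table_maddr = amd_iommu_get_next_table_from_pte(pde);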

NOTE: This patch also adds missing emacs boilerplate.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Brian Woods <brian.woods@amd.com>
---
 xen/drivers/passthrough/amd/iommu_map.c | 134 +++++++++++++++++---------------
 1 file changed, 71 insertions(+), 63 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 8a10412a07..e4f22c9fc6 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -37,7 +37,7 @@ static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
 
 void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long dfn)
 {
-    u64 *table, *pte;
+    uint64_t *table, *pte;
 
     table = map_domain_page(_mfn(l1_mfn));
     pte = table + pfn_to_pde_idx(dfn, IOMMU_PAGING_MODE_LEVEL_1);
@@ -45,15 +45,15 @@ void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long dfn)
     unmap_domain_page(table);
 }
 
-static bool_t set_iommu_pde_present(u32 *pde, unsigned long next_mfn, 
+static bool_t set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
                                     unsigned int next_level,
                                     bool_t iw, bool_t ir)
 {
-    u64 addr_lo, addr_hi, maddr_old, maddr_next;
-    u32 entry;
+    uint64_t addr_lo, addr_hi, maddr_old, maddr_next;
+    uint32_t entry;
     bool_t need_flush = 0;
 
-    maddr_next = (u64)next_mfn << PAGE_SHIFT;
+    maddr_next = (uint64_t)next_mfn << PAGE_SHIFT;
 
     addr_hi = get_field_from_reg_u32(pde[1],
                                      IOMMU_PTE_ADDR_HIGH_MASK,
@@ -71,7 +71,7 @@ static bool_t set_iommu_pde_present(u32 *pde, unsigned long next_mfn,
     addr_hi = maddr_next >> 32;
 
     /* enable read/write permissions,which will be enforced at the PTE */
-    set_field_in_reg_u32((u32)addr_hi, 0,
+    set_field_in_reg_u32((uint32_t)addr_hi, 0,
                          IOMMU_PDE_ADDR_HIGH_MASK,
                          IOMMU_PDE_ADDR_HIGH_SHIFT, &entry);
     set_field_in_reg_u32(iw, entry,
@@ -90,7 +90,7 @@ static bool_t set_iommu_pde_present(u32 *pde, unsigned long next_mfn,
     pde[1] = entry;
 
     /* mark next level as 'present' */
-    set_field_in_reg_u32((u32)addr_lo >> PAGE_SHIFT, 0,
+    set_field_in_reg_u32((uint32_t)addr_lo >> PAGE_SHIFT, 0,
                          IOMMU_PDE_ADDR_LOW_MASK,
                          IOMMU_PDE_ADDR_LOW_SHIFT, &entry);
     set_field_in_reg_u32(next_level, entry,
@@ -108,13 +108,13 @@ static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long dfn,
                                     unsigned long next_mfn, int pde_level, 
                                     bool_t iw, bool_t ir)
 {
-    u64 *table;
-    u32 *pde;
+    uint64_t *table;
+    uint32_t *pde;
     bool_t need_flush = 0;
 
     table = map_domain_page(_mfn(pt_mfn));
 
-    pde = (u32*)(table + pfn_to_pde_idx(dfn, pde_level));
+    pde = (uint32_t *)(table + pfn_to_pde_idx(dfn, pde_level));
 
     need_flush = set_iommu_pde_present(pde, next_mfn, 
                                        IOMMU_PAGING_MODE_LEVEL_0, iw, ir);
@@ -123,10 +123,10 @@ static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long dfn,
 }
 
 void amd_iommu_set_root_page_table(
-    u32 *dte, u64 root_ptr, u16 domain_id, u8 paging_mode, u8 valid)
+    uint32_t *dte, uint64_t root_ptr, uint16_t domain_id, uint8_t paging_mode, uint8_t valid)
 {
-    u64 addr_hi, addr_lo;
-    u32 entry;
+    uint64_t addr_hi, addr_lo;
+    uint32_t entry;
     set_field_in_reg_u32(domain_id, 0,
                          IOMMU_DEV_TABLE_DOMAIN_ID_MASK,
                          IOMMU_DEV_TABLE_DOMAIN_ID_SHIFT, &entry);
@@ -135,7 +135,7 @@ void amd_iommu_set_root_page_table(
     addr_lo = root_ptr & DMA_32BIT_MASK;
     addr_hi = root_ptr >> 32;
 
-    set_field_in_reg_u32((u32)addr_hi, 0,
+    set_field_in_reg_u32((uint32_t)addr_hi, 0,
                          IOMMU_DEV_TABLE_PAGE_TABLE_PTR_HIGH_MASK,
                          IOMMU_DEV_TABLE_PAGE_TABLE_PTR_HIGH_SHIFT, &entry);
     set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, entry,
@@ -146,7 +146,7 @@ void amd_iommu_set_root_page_table(
                          IOMMU_DEV_TABLE_IO_READ_PERMISSION_SHIFT, &entry);
     dte[1] = entry;
 
-    set_field_in_reg_u32((u32)addr_lo >> PAGE_SHIFT, 0,
+    set_field_in_reg_u32((uint32_t)addr_lo >> PAGE_SHIFT, 0,
                          IOMMU_DEV_TABLE_PAGE_TABLE_PTR_LOW_MASK,
                          IOMMU_DEV_TABLE_PAGE_TABLE_PTR_LOW_SHIFT, &entry);
     set_field_in_reg_u32(paging_mode, entry,
@@ -162,9 +162,9 @@ void amd_iommu_set_root_page_table(
     dte[0] = entry;
 }
 
-void iommu_dte_set_iotlb(u32 *dte, u8 i)
+void iommu_dte_set_iotlb(uint32_t *dte, uint8_t i)
 {
-    u32 entry;
+    uint32_t entry;
 
     entry = dte[3];
     set_field_in_reg_u32(!!i, entry,
@@ -174,16 +174,16 @@ void iommu_dte_set_iotlb(u32 *dte, u8 i)
 }
 
 void __init amd_iommu_set_intremap_table(
-    u32 *dte, u64 intremap_ptr, u8 int_valid)
+    uint32_t *dte, uint64_t intremap_ptr, uint8_t int_valid)
 {
-    u64 addr_hi, addr_lo;
-    u32 entry;
+    uint64_t addr_hi, addr_lo;
+    uint32_t entry;
 
     addr_lo = intremap_ptr & DMA_32BIT_MASK;
     addr_hi = intremap_ptr >> 32;
 
     entry = dte[5];
-    set_field_in_reg_u32((u32)addr_hi, entry,
+    set_field_in_reg_u32((uint32_t)addr_hi, entry,
                         IOMMU_DEV_TABLE_INT_TABLE_PTR_HIGH_MASK,
                         IOMMU_DEV_TABLE_INT_TABLE_PTR_HIGH_SHIFT, &entry);
     /* Fixed and arbitrated interrupts remapepd */
@@ -192,7 +192,7 @@ void __init amd_iommu_set_intremap_table(
                         IOMMU_DEV_TABLE_INT_CONTROL_SHIFT, &entry);
     dte[5] = entry;
 
-    set_field_in_reg_u32((u32)addr_lo >> 6, 0,
+    set_field_in_reg_u32((uint32_t)addr_lo >> 6, 0,
                         IOMMU_DEV_TABLE_INT_TABLE_PTR_LOW_MASK,
                         IOMMU_DEV_TABLE_INT_TABLE_PTR_LOW_SHIFT, &entry);
     /* 2048 entries */
@@ -211,11 +211,11 @@ void __init amd_iommu_set_intremap_table(
     dte[4] = entry;
 }
 
-void __init iommu_dte_add_device_entry(u32 *dte, struct ivrs_mappings *ivrs_dev)
+void __init iommu_dte_add_device_entry(uint32_t *dte, struct ivrs_mappings *ivrs_dev)
 {
-    u32 entry;
-    u8 sys_mgt, dev_ex, flags;
-    u8 mask = ~(0x7 << 3);
+    uint32_t entry;
+    uint8_t sys_mgt, dev_ex, flags;
+    uint8_t mask = ~(0x7 << 3);
 
     dte[7] = dte[6] = dte[4] = dte[2] = dte[1] = dte[0] = 0;
 
@@ -238,10 +238,10 @@ void __init iommu_dte_add_device_entry(u32 *dte, struct ivrs_mappings *ivrs_dev)
     dte[3] = entry;
 }
 
-void iommu_dte_set_guest_cr3(u32 *dte, u16 dom_id, u64 gcr3,
+void iommu_dte_set_guest_cr3(uint32_t *dte, uint16_t dom_id, uint64_t gcr3,
                              int gv, unsigned int glx)
 {
-    u32 entry, gcr3_1, gcr3_2, gcr3_3;
+    uint32_t entry, gcr3_1, gcr3_2, gcr3_3;
 
     gcr3_3 = gcr3 >> 31;
     gcr3_2 = (gcr3 >> 15) & 0xFFFF;
@@ -285,9 +285,9 @@ void iommu_dte_set_guest_cr3(u32 *dte, u16 dom_id, u64 gcr3,
     dte[1] = entry;
 }
 
-u64 amd_iommu_get_next_table_from_pte(u32 *entry)
+uint64_t amd_iommu_get_next_table_from_pte(uint32_t *entry)
 {
-    u64 addr_lo, addr_hi, ptr;
+    uint64_t addr_lo, addr_hi, ptr;
 
     addr_lo = get_field_from_reg_u32(
         entry[0],
@@ -306,22 +306,22 @@ u64 amd_iommu_get_next_table_from_pte(u32 *entry)
 /* For each pde, We use ignored bits (bit 1 - bit 8 and bit 63)
  * to save pde count, pde count = 511 is a candidate of page coalescing.
  */
-static unsigned int get_pde_count(u64 pde)
+static unsigned int get_pde_count(uint64_t pde)
 {
     unsigned int count;
-    u64 upper_mask = 1ULL << 63 ;
-    u64 lower_mask = 0xFF << 1;
+    uint64_t upper_mask = 1ULL << 63 ;
+    uint64_t lower_mask = 0xFF << 1;
 
     count = ((pde & upper_mask) >> 55) | ((pde & lower_mask) >> 1);
     return count;
 }
 
 /* Convert pde count into iommu pte ignored bits */
-static void set_pde_count(u64 *pde, unsigned int count)
+static void set_pde_count(uint64_t *pde, unsigned int count)
 {
-    u64 upper_mask = 1ULL << 8 ;
-    u64 lower_mask = 0xFF;
-    u64 pte_mask = (~(1ULL << 63)) & (~(0xFF << 1));
+    uint64_t upper_mask = 1ULL << 8 ;
+    uint64_t lower_mask = 0xFF;
+    uint64_t pte_mask = (~(1ULL << 63)) & (~(0xFF << 1));
 
     *pde &= pte_mask;
     *pde |= ((count & upper_mask ) << 55) | ((count & lower_mask ) << 1);
@@ -336,8 +336,8 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
 {
     unsigned int pde_count, next_level;
     unsigned long first_mfn;
-    u64 *table, *pde, *ntable;
-    u64 ntable_maddr, mask;
+    uint64_t *table, *pde, *ntable;
+    uint64_t ntable_maddr, mask;
     struct domain_iommu *hd = dom_iommu(d);
     bool_t ok = 0;
 
@@ -350,11 +350,11 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     pde = table + pfn_to_pde_idx(dfn, merge_level);
 
     /* get page table of next level */
-    ntable_maddr = amd_iommu_get_next_table_from_pte((u32*)pde);
+    ntable_maddr = amd_iommu_get_next_table_from_pte((uint32_t *)pde);
     ntable = map_domain_page(_mfn(paddr_to_pfn(ntable_maddr)));
 
     /* get the first mfn of next level */
-    first_mfn = amd_iommu_get_next_table_from_pte((u32*)ntable) >> PAGE_SHIFT;
+    first_mfn = amd_iommu_get_next_table_from_pte((uint32_t *)ntable) >> PAGE_SHIFT;
 
     if ( first_mfn == 0 )
         goto out;
@@ -390,18 +390,19 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
                              unsigned long dfn, unsigned int flags,
                              unsigned int merge_level)
 {
-    u64 *table, *pde, *ntable;
-    u64 ntable_mfn;
+    uint64_t *table, *ntable;
+    uint32_t *pde;
+    uint64_t ntable_mfn;
     unsigned long first_mfn;
     struct domain_iommu *hd = dom_iommu(d);
 
     ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(_mfn(pt_mfn));
-    pde = table + pfn_to_pde_idx(dfn, merge_level);
+    pde = (uint32_t *)(table + pfn_to_pde_idx(dfn, merge_level));
 
     /* get first mfn */
-    ntable_mfn = amd_iommu_get_next_table_from_pte((u32*)pde) >> PAGE_SHIFT;
+    ntable_mfn = amd_iommu_get_next_table_from_pte(pde) >> PAGE_SHIFT;
 
     if ( ntable_mfn == 0 )
     {
@@ -410,7 +411,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     }
 
     ntable = map_domain_page(_mfn(ntable_mfn));
-    first_mfn = amd_iommu_get_next_table_from_pte((u32*)ntable) >> PAGE_SHIFT;
+    first_mfn = amd_iommu_get_next_table_from_pte((uint32_t *)ntable) >> PAGE_SHIFT;
 
     if ( first_mfn == 0 )
     {
@@ -420,10 +421,8 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     }
 
     /* setup super page mapping, next level = 0 */
-    set_iommu_pde_present((u32*)pde, first_mfn,
-                          IOMMU_PAGING_MODE_LEVEL_0,
-                          !!(flags & IOMMUF_writable),
-                          !!(flags & IOMMUF_readable));
+    set_iommu_pde_present(pde, first_mfn, IOMMU_PAGING_MODE_LEVEL_0,
+                          !!(flags & IOMMUF_writable), !!(flags & IOMMUF_readable));
 
     amd_iommu_flush_all_pages(d);
 
@@ -439,7 +438,8 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
 static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
                               unsigned long pt_mfn[])
 {
-    u64 *pde, *next_table_vaddr;
+    uint64_t *next_table_vaddr;
+    uint32_t *pde;
     unsigned long  next_table_mfn;
     unsigned int level;
     struct page_info *table;
@@ -465,16 +465,15 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
         pt_mfn[level] = next_table_mfn;
 
         next_table_vaddr = map_domain_page(_mfn(next_table_mfn));
-        pde = next_table_vaddr + pfn_to_pde_idx(dfn, level);
+        pde = (uint32_t *)(next_table_vaddr + pfn_to_pde_idx(dfn, level));
 
         /* Here might be a super page frame */
-        next_table_mfn = amd_iommu_get_next_table_from_pte((uint32_t*)pde) 
-                         >> PAGE_SHIFT;
+        next_table_mfn =
+            amd_iommu_get_next_table_from_pte(pde) >> PAGE_SHIFT;
 
         /* Split super page frame into smaller pieces.*/
-        if ( iommu_is_pte_present((u32*)pde) &&
-             (iommu_next_level((u32*)pde) == 0) &&
-             next_table_mfn != 0 )
+        if ( iommu_is_pte_present(pde) && !iommu_next_level(pde) &&
+             next_table_mfn )
         {
             int i;
             unsigned long mfn, pfn;
@@ -494,7 +493,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
             }
 
             next_table_mfn = mfn_x(page_to_mfn(table));
-            set_iommu_pde_present((u32*)pde, next_table_mfn, next_level, 
+            set_iommu_pde_present(pde, next_table_mfn, next_level,
                                   !!IOMMUF_writable, !!IOMMUF_readable);
 
             for ( i = 0; i < PTE_PER_TABLE_SIZE; i++ )
@@ -509,7 +508,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
         }
 
         /* Install lower level page table for non-present entries */
-        else if ( !iommu_is_pte_present((u32*)pde) )
+        else if ( !iommu_is_pte_present(pde) )
         {
             if ( next_table_mfn == 0 )
             {
@@ -521,7 +520,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
                     return 1;
                 }
                 next_table_mfn = mfn_x(page_to_mfn(table));
-                set_iommu_pde_present((u32*)pde, next_table_mfn, next_level,
+                set_iommu_pde_present(pde, next_table_mfn, next_level,
                                       !!IOMMUF_writable, !!IOMMUF_readable);
             }
             else /* should never reach here */
@@ -542,7 +541,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
 
 static int update_paging_mode(struct domain *d, unsigned long dfn)
 {
-    u16 bdf;
+    uint16_t bdf;
     void *device_entry;
     unsigned int req_id, level, offset;
     unsigned long flags;
@@ -613,7 +612,7 @@ static int update_paging_mode(struct domain *d, unsigned long dfn)
                                (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
 
                 /* valid = 0 only works for dom0 passthrough mode */
-                amd_iommu_set_root_page_table((u32 *)device_entry,
+                amd_iommu_set_root_page_table(device_entry,
                                               page_to_maddr(hd->arch.root_table),
                                               d->domain_id,
                                               hd->arch.paging_mode, 1);
@@ -771,7 +770,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
 }
 
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
-                                       u64 phys_addr,
+                                       uint64_t phys_addr,
                                        unsigned long size, int iw, int ir)
 {
     unsigned long npages, i;
@@ -816,3 +815,12 @@ void amd_iommu_share_p2m(struct domain *d)
                         mfn_x(pgd_mfn));
     }
 }
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.11.0


* [PATCH 3/7] amd-iommu: convert all bool_t to bool
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
  2018-09-20 14:11 ` [PATCH 1/7] amd-iommu: don't domain_crash() inside map/unmap_page() Paul Durrant
  2018-09-20 14:11 ` [PATCH 2/7] amd-iommu: re-name u8/16/32/64 to uint8/16/32/64_t Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-20 14:11 ` [PATCH 4/7] amd-iommu: reduce code duplication Paul Durrant
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant, Brian Woods, Suravee Suthikulpanit

Fix assignments to use 'false' rather than '0'.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Brian Woods <brian.woods@amd.com>
---
 xen/drivers/passthrough/amd/iommu_map.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index e4f22c9fc6..d7df8b9161 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -45,13 +45,13 @@ void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long dfn)
     unmap_domain_page(table);
 }
 
-static bool_t set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
-                                    unsigned int next_level,
-                                    bool_t iw, bool_t ir)
+static bool set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
+                                  unsigned int next_level,
+                                  bool iw, bool ir)
 {
     uint64_t addr_lo, addr_hi, maddr_old, maddr_next;
     uint32_t entry;
-    bool_t need_flush = 0;
+    bool need_flush = false;
 
     maddr_next = (uint64_t)next_mfn << PAGE_SHIFT;
 
@@ -104,13 +104,13 @@ static bool_t set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
     return need_flush;
 }
 
-static bool_t set_iommu_pte_present(unsigned long pt_mfn, unsigned long dfn,
-                                    unsigned long next_mfn, int pde_level, 
-                                    bool_t iw, bool_t ir)
+static bool set_iommu_pte_present(unsigned long pt_mfn, unsigned long dfn,
+                                  unsigned long next_mfn, int pde_level,
+                                  bool iw, bool ir)
 {
     uint64_t *table;
     uint32_t *pde;
-    bool_t need_flush = 0;
+    bool need_flush = false;
 
     table = map_domain_page(_mfn(pt_mfn));
 
@@ -339,7 +339,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     uint64_t *table, *pde, *ntable;
     uint64_t ntable_maddr, mask;
     struct domain_iommu *hd = dom_iommu(d);
-    bool_t ok = 0;
+    bool ok = false;
 
     ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
@@ -633,7 +633,7 @@ static int update_paging_mode(struct domain *d, unsigned long dfn)
 int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
                        unsigned int flags)
 {
-    bool_t need_flush = 0;
+    bool need_flush = false;
     struct domain_iommu *hd = dom_iommu(d);
     int rc;
     unsigned long pt_mfn[7];
-- 
2.11.0


* [PATCH 4/7] amd-iommu: reduce code duplication...
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
                   ` (2 preceding siblings ...)
  2018-09-20 14:11 ` [PATCH 3/7] amd-iommu: convert all bool_t to bool Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-20 14:11 ` [PATCH 5/7] amd-iommu: introduce new get/set_iommu_pde_info() functions Paul Durrant
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant, Brian Woods, Suravee Suthikulpanit

...by calling update_paging_mode() inside iommu_pde_from_dfn().

Also have iommu_pde_from_dfn() return -EFAULT if pt_mfn[1] is zero, to avoid
the callers having to explicitly test it.

NOTE: The return value of iommu_pde_from_dfn() is ignored by
      amd_iommu_map_page(). Instead, to preserve the existing return
      semantics, -EFAULT is returned regardless of the actual error.
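
The net effect on the call sites, as a minimal sketch using names from
the patch:

    /* Before: two separate steps at every call site. */
    if ( is_hvm_domain(d) && update_paging_mode(d, dfn_x(dfn)) )
        return -EFAULT;
    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || pt_mfn[1] == 0 )
        return -EFAULT;

    /* After: a single call that folds in both checks. */
    rc = iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn);
    if ( rc )
        return rc;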

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Brian Woods <brian.woods@amd.com>
---
 xen/drivers/passthrough/amd/iommu_map.c | 276 ++++++++++++++++----------------
 1 file changed, 138 insertions(+), 138 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d7df8b9161..a186c8d28b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -431,6 +431,102 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     return 0;
 }
 
+static int update_paging_mode(struct domain *d, unsigned long dfn)
+{
+    uint16_t bdf;
+    void *device_entry;
+    unsigned int req_id, level, offset;
+    unsigned long flags;
+    struct pci_dev *pdev;
+    struct amd_iommu *iommu = NULL;
+    struct page_info *new_root = NULL;
+    struct page_info *old_root = NULL;
+    void *new_root_vaddr;
+    unsigned long old_root_mfn;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    ASSERT(dfn != dfn_x(INVALID_DFN));
+    ASSERT(!(dfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
+
+    level = hd->arch.paging_mode;
+    old_root = hd->arch.root_table;
+    offset = dfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
+
+    while ( offset >= PTE_PER_TABLE_SIZE )
+    {
+        /*
+         * Allocate and install a new root table.
+         * Only upper I/O page table grows, no need to fix next level bits.
+         */
+        new_root = alloc_amd_iommu_pgtable();
+        if ( !new_root )
+        {
+            AMD_IOMMU_DEBUG("%s Cannot allocate I/O page table\n",
+                            __func__);
+            return -ENOMEM;
+        }
+
+        new_root_vaddr = __map_domain_page(new_root);
+        old_root_mfn = mfn_x(page_to_mfn(old_root));
+        set_iommu_pde_present(new_root_vaddr, old_root_mfn, level,
+                              !!IOMMUF_writable, !!IOMMUF_readable);
+
+        level++;
+        old_root = new_root;
+        offset >>= PTE_PER_TABLE_SHIFT;
+        unmap_domain_page(new_root_vaddr);
+    }
+
+    if ( new_root )
+    {
+        hd->arch.paging_mode = level;
+        hd->arch.root_table = new_root;
+
+        if ( !pcidevs_locked() )
+            AMD_IOMMU_DEBUG("%s Try to access pdev_list "
+                            "without aquiring pcidevs_lock.\n", __func__);
+
+        /*
+         * Update device table entries using new root table and paging
+         * mode.
+         */
+        for_each_pdev ( d, pdev )
+        {
+            bdf = PCI_BDF2(pdev->bus, pdev->devfn);
+            iommu = find_iommu_for_device(pdev->seg, bdf);
+            if ( !iommu )
+            {
+                AMD_IOMMU_DEBUG("%s Fail to find iommu.\n", __func__);
+                return -ENODEV;
+            }
+
+            spin_lock_irqsave(&iommu->lock, flags);
+            do {
+                req_id = get_dma_requestor_id(pdev->seg, bdf);
+                device_entry = iommu->dev_table.buffer +
+                               (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
+
+                /* valid = 0 only works for dom0 passthrough mode */
+                amd_iommu_set_root_page_table(
+                    device_entry, page_to_maddr(hd->arch.root_table),
+                    d->domain_id, hd->arch.paging_mode, 1);
+
+                amd_iommu_flush_device(iommu, req_id);
+                bdf += pdev->phantom_stride;
+            } while ( PCI_DEVFN2(bdf) != pdev->devfn &&
+                      PCI_SLOT(bdf) == PCI_SLOT(pdev->devfn) );
+            spin_unlock_irqrestore(&iommu->lock, flags);
+        }
+
+        /* For safety, invalidate all entries */
+        amd_iommu_flush_all_pages(d);
+    }
+
+    return 0;
+}
+
 /* Walk io page tables and build level page tables if necessary
  * {Re, un}mapping super page frames causes re-allocation of io
  * page tables.
@@ -445,23 +541,37 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
     struct page_info *table;
     const struct domain_iommu *hd = dom_iommu(d);
 
+    /*
+     * Since HVM domain is initialized with 2 level IO page table,
+     * we might need a deeper page table for wider dfn now.
+     */
+    if ( is_hvm_domain(d) )
+    {
+        int rc = update_paging_mode(d, dfn);
+
+        if ( rc )
+        {
+            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
+                            dfn);
+            return rc;
+        }
+    }
+
     table = hd->arch.root_table;
     level = hd->arch.paging_mode;
 
-    BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 || 
+    BUG_ON( !table || level < IOMMU_PAGING_MODE_LEVEL_1 ||
             level > IOMMU_PAGING_MODE_LEVEL_6 );
 
     next_table_mfn = mfn_x(page_to_mfn(table));
 
     if ( level == IOMMU_PAGING_MODE_LEVEL_1 )
-    {
-        pt_mfn[level] = next_table_mfn;
-        return 0;
-    }
+        goto out;
 
     while ( level > IOMMU_PAGING_MODE_LEVEL_1 )
     {
         unsigned int next_level = level - 1;
+
         pt_mfn[level] = next_table_mfn;
 
         next_table_vaddr = map_domain_page(_mfn(next_table_mfn));
@@ -485,11 +595,11 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
 
             /* allocate lower level page table */
             table = alloc_amd_iommu_pgtable();
-            if ( table == NULL )
+            if ( !table )
             {
                 AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
                 unmap_domain_page(next_table_vaddr);
-                return 1;
+                return -EFAULT;
             }
 
             next_table_mfn = mfn_x(page_to_mfn(table));
@@ -510,14 +620,14 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
         /* Install lower level page table for non-present entries */
         else if ( !iommu_is_pte_present(pde) )
         {
-            if ( next_table_mfn == 0 )
+            if ( !next_table_mfn )
             {
                 table = alloc_amd_iommu_pgtable();
                 if ( table == NULL )
                 {
                     AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
                     unmap_domain_page(next_table_vaddr);
-                    return 1;
+                    return -EFAULT;
                 }
                 next_table_mfn = mfn_x(page_to_mfn(table));
                 set_iommu_pde_present(pde, next_table_mfn, next_level,
@@ -526,7 +636,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
             else /* should never reach here */
             {
                 unmap_domain_page(next_table_vaddr);
-                return 1;
+                return -EFAULT;
             }
         }
 
@@ -534,99 +644,17 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
         level--;
     }
 
-    /* mfn of level 1 page table */
-    pt_mfn[level] = next_table_mfn;
-    return 0;
-}
-
-static int update_paging_mode(struct domain *d, unsigned long dfn)
-{
-    uint16_t bdf;
-    void *device_entry;
-    unsigned int req_id, level, offset;
-    unsigned long flags;
-    struct pci_dev *pdev;
-    struct amd_iommu *iommu = NULL;
-    struct page_info *new_root = NULL;
-    struct page_info *old_root = NULL;
-    void *new_root_vaddr;
-    unsigned long old_root_mfn;
-    struct domain_iommu *hd = dom_iommu(d);
-
-    if ( dfn == dfn_x(INVALID_DFN) )
-        return -EADDRNOTAVAIL;
-    ASSERT(!(dfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
-
-    level = hd->arch.paging_mode;
-    old_root = hd->arch.root_table;
-    offset = dfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
-
-    ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
-
-    while ( offset >= PTE_PER_TABLE_SIZE )
+ out:
+    if ( !next_table_mfn )
     {
-        /* Allocate and install a new root table.
-         * Only upper I/O page table grows, no need to fix next level bits */
-        new_root = alloc_amd_iommu_pgtable();
-        if ( new_root == NULL )
-        {
-            AMD_IOMMU_DEBUG("%s Cannot allocate I/O page table\n",
-                            __func__);
-            return -ENOMEM;
-        }
-
-        new_root_vaddr = __map_domain_page(new_root);
-        old_root_mfn = mfn_x(page_to_mfn(old_root));
-        set_iommu_pde_present(new_root_vaddr, old_root_mfn, level,
-                              !!IOMMUF_writable, !!IOMMUF_readable);
-        level++;
-        old_root = new_root;
-        offset >>= PTE_PER_TABLE_SHIFT;
-        unmap_domain_page(new_root_vaddr);
+        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
+                        dfn);
+        return -EFAULT;
     }
 
-    if ( new_root != NULL )
-    {
-        hd->arch.paging_mode = level;
-        hd->arch.root_table = new_root;
-
-        if ( !pcidevs_locked() )
-            AMD_IOMMU_DEBUG("%s Try to access pdev_list "
-                            "without aquiring pcidevs_lock.\n", __func__);
-
-        /* Update device table entries using new root table and paging mode */
-        for_each_pdev( d, pdev )
-        {
-            bdf = PCI_BDF2(pdev->bus, pdev->devfn);
-            iommu = find_iommu_for_device(pdev->seg, bdf);
-            if ( !iommu )
-            {
-                AMD_IOMMU_DEBUG("%s Fail to find iommu.\n", __func__);
-                return -ENODEV;
-            }
-
-            spin_lock_irqsave(&iommu->lock, flags);
-            do {
-                req_id = get_dma_requestor_id(pdev->seg, bdf);
-                device_entry = iommu->dev_table.buffer +
-                               (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
-
-                /* valid = 0 only works for dom0 passthrough mode */
-                amd_iommu_set_root_page_table(device_entry,
-                                              page_to_maddr(hd->arch.root_table),
-                                              d->domain_id,
-                                              hd->arch.paging_mode, 1);
-
-                amd_iommu_flush_device(iommu, req_id);
-                bdf += pdev->phantom_stride;
-            } while ( PCI_DEVFN2(bdf) != pdev->devfn &&
-                      PCI_SLOT(bdf) == PCI_SLOT(pdev->devfn) );
-            spin_unlock_irqrestore(&iommu->lock, flags);
-        }
+    ASSERT(level == IOMMU_PAGING_MODE_LEVEL_1);
+    pt_mfn[level] = next_table_mfn;
 
-        /* For safety, invalidate all entries */
-        amd_iommu_flush_all_pages(d);
-    }
     return 0;
 }
 
@@ -636,13 +664,14 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
     bool need_flush = false;
     struct domain_iommu *hd = dom_iommu(d);
     int rc;
-    unsigned long pt_mfn[7];
+    unsigned long pt_mfn[7] = {};
     unsigned int merge_level;
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
-    memset(pt_mfn, 0, sizeof(pt_mfn));
+    if ( dfn_eq(dfn, INVALID_DFN) )
+        return -EFAULT;
 
     spin_lock(&hd->arch.mapping_lock);
 
@@ -655,24 +684,9 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
         return rc;
     }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for wider dfn now */
-    if ( is_hvm_domain(d) )
-    {
-        if ( update_paging_mode(d, dfn_x(dfn)) )
-        {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
-                            dfn_x(dfn));
-            return -EFAULT;
-        }
-    }
-
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
-                        dfn_x(dfn));
         return -EFAULT;
     }
 
@@ -721,13 +735,15 @@ out:
 
 int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
 {
-    unsigned long pt_mfn[7];
+    unsigned long pt_mfn[7] = {};
     struct domain_iommu *hd = dom_iommu(d);
+    int rc;
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
-    memset(pt_mfn, 0, sizeof(pt_mfn));
+    if ( dfn_eq(dfn, INVALID_DFN) )
+        return -EADDRNOTAVAIL;
 
     spin_lock(&hd->arch.mapping_lock);
 
@@ -737,27 +753,11 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
         return 0;
     }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager dfn now */
-    if ( is_hvm_domain(d) )
-    {
-        int rc = update_paging_mode(d, dfn_x(dfn));
-
-        if ( rc )
-        {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
-                            dfn_x(dfn));
-            return rc;
-        }
-    }
-
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
+    rc = iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn);
+    if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
-                        dfn_x(dfn));
-        return -EFAULT;
+        return rc;
     }
 
     /* mark PTE as 'page not present' */
-- 
2.11.0


* [PATCH 5/7] amd-iommu: introduce new get/set_iommu_pde_info() functions...
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
                   ` (3 preceding siblings ...)
  2018-09-20 14:11 ` [PATCH 4/7] amd-iommu: reduce code duplication Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-20 14:11 ` [PATCH 6/7] vtd: change lookup_page failure semantics Paul Durrant
  2018-09-20 14:11 ` [PATCH 7/7] amd-iommu: add lookup_page method to iommu_ops Paul Durrant
  6 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant, Brian Woods, Suravee Suthikulpanit

...and use set_iommu_pde_info() in set_iommu_pde_present().

set_iommu_pde_info() only sets the address and read/write flags in the PDE,
leaving the (PTE-only) FC bit, level value and presence bit to be
subsequently set by set_iommu_pde_present(). A memory barrier is added to
ensure that the presence bit is last to be set.
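
The ordering requirement is the usual one for hardware-walked page
tables. As a generic sketch (the helper names here are hypothetical;
only smp_mb() is the real primitive used by the patch):

    /*
     * Fill in address and permission bits while the entry is still
     * non-present, so the IOMMU never walks a half-written entry.
     */
    write_pde_addr_and_perms(pde, maddr, iw, ir);

    smp_mb();              /* all prior stores visible before... */

    set_pde_present(pde);  /* ...the entry becomes walkable. */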

A subsequent patch will make further use of get_iommu_pde_info().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Brian Woods <brian.woods@amd.com>
---
 xen/drivers/passthrough/amd/iommu_map.c | 88 +++++++++++++++++++++------------
 1 file changed, 56 insertions(+), 32 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index a186c8d28b..fecde9d645 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -45,15 +45,10 @@ void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long dfn)
     unmap_domain_page(table);
 }
 
-static bool set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
-                                  unsigned int next_level,
-                                  bool iw, bool ir)
+static void get_iommu_pde_info(uint32_t *pde, uint64_t *maddr, bool *iw,
+                               bool *ir)
 {
-    uint64_t addr_lo, addr_hi, maddr_old, maddr_next;
-    uint32_t entry;
-    bool need_flush = false;
-
-    maddr_next = (uint64_t)next_mfn << PAGE_SHIFT;
+    uint64_t addr_lo, addr_hi;
 
     addr_hi = get_field_from_reg_u32(pde[1],
                                      IOMMU_PTE_ADDR_HIGH_MASK,
@@ -61,45 +56,74 @@ static bool set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
     addr_lo = get_field_from_reg_u32(pde[0],
                                      IOMMU_PTE_ADDR_LOW_MASK,
                                      IOMMU_PTE_ADDR_LOW_SHIFT);
+    *maddr = (addr_hi << 32) | (addr_lo << PAGE_SHIFT);
 
-    maddr_old = (addr_hi << 32) | (addr_lo << PAGE_SHIFT);
+    if ( iw )
+        *iw = !!get_field_from_reg_u32(pde[1],
+                                       IOMMU_PDE_IO_WRITE_PERMISSION_MASK,
+                                       IOMMU_PDE_IO_WRITE_PERMISSION_SHIFT);
+
+    if ( ir )
+        *ir = !!get_field_from_reg_u32(pde[1],
+                                       IOMMU_PDE_IO_READ_PERMISSION_MASK,
+                                       IOMMU_PDE_IO_READ_PERMISSION_SHIFT);
+}
+
+static bool set_iommu_pde_info(uint32_t *pde, uint64_t maddr, bool iw,
+                               bool ir)
+{
+    uint64_t addr_lo, addr_hi, maddr_old;
 
-    if ( maddr_old != maddr_next )
-        need_flush = 1;
+    get_iommu_pde_info(pde, &maddr_old, NULL, NULL);
 
-    addr_lo = maddr_next & DMA_32BIT_MASK;
-    addr_hi = maddr_next >> 32;
+    addr_lo = (maddr & DMA_32BIT_MASK) >> PAGE_SHIFT;
+    addr_hi = maddr >> 32;
 
-    /* enable read/write permissions,which will be enforced at the PTE */
     set_field_in_reg_u32((uint32_t)addr_hi, 0,
                          IOMMU_PDE_ADDR_HIGH_MASK,
-                         IOMMU_PDE_ADDR_HIGH_SHIFT, &entry);
-    set_field_in_reg_u32(iw, entry,
+                         IOMMU_PDE_ADDR_HIGH_SHIFT, &pde[1]);
+    set_field_in_reg_u32(iw, pde[1],
                          IOMMU_PDE_IO_WRITE_PERMISSION_MASK,
-                         IOMMU_PDE_IO_WRITE_PERMISSION_SHIFT, &entry);
-    set_field_in_reg_u32(ir, entry,
+                         IOMMU_PDE_IO_WRITE_PERMISSION_SHIFT, &pde[1]);
+    set_field_in_reg_u32(ir, pde[1],
                          IOMMU_PDE_IO_READ_PERMISSION_MASK,
-                         IOMMU_PDE_IO_READ_PERMISSION_SHIFT, &entry);
+                         IOMMU_PDE_IO_READ_PERMISSION_SHIFT, &pde[1]);
+    set_field_in_reg_u32((uint32_t)addr_lo, 0,
+                         IOMMU_PDE_ADDR_LOW_MASK,
+                         IOMMU_PDE_ADDR_LOW_SHIFT, &pde[0]);
+
+    return maddr != maddr_old;
+}
+
+static bool set_iommu_pde_present(uint32_t *pde, unsigned long next_mfn,
+                                  unsigned int next_level,
+                                  bool_t iw, bool_t ir)
+{
+    bool need_flush = set_iommu_pde_info(pde, next_mfn << PAGE_SHIFT, iw,
+                                         ir);
 
-    /* FC bit should be enabled in PTE, this helps to solve potential
-     * issues with ATS devices
+    /*
+     * FC bit should be enabled in PTE, this helps to solve potential
+     * issues with ATS devices.
      */
     if ( next_level == IOMMU_PAGING_MODE_LEVEL_0 )
-        set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, entry,
-                             IOMMU_PTE_FC_MASK, IOMMU_PTE_FC_SHIFT, &entry);
-    pde[1] = entry;
+        set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, pde[1],
+                             IOMMU_PTE_FC_MASK, IOMMU_PTE_FC_SHIFT,
+                             &pde[1]);
 
     /* mark next level as 'present' */
-    set_field_in_reg_u32((uint32_t)addr_lo >> PAGE_SHIFT, 0,
-                         IOMMU_PDE_ADDR_LOW_MASK,
-                         IOMMU_PDE_ADDR_LOW_SHIFT, &entry);
-    set_field_in_reg_u32(next_level, entry,
+    set_field_in_reg_u32(next_level, pde[0],
                          IOMMU_PDE_NEXT_LEVEL_MASK,
-                         IOMMU_PDE_NEXT_LEVEL_SHIFT, &entry);
-    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, entry,
+                         IOMMU_PDE_NEXT_LEVEL_SHIFT, &pde[0]);
+
+    /*
+     * Make sure all other bits are written before the entry is made
+     * present.
+     */
+    smp_mb();
+    set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, pde[0],
                          IOMMU_PDE_PRESENT_MASK,
-                         IOMMU_PDE_PRESENT_SHIFT, &entry);
-    pde[0] = entry;
+                         IOMMU_PDE_PRESENT_SHIFT, &pde[0]);
 
     return need_flush;
 }
-- 
2.11.0


* [PATCH 6/7] vtd: change lookup_page failure semantics
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
                   ` (4 preceding siblings ...)
  2018-09-20 14:11 ` [PATCH 5/7] amd-iommu: introduce new get/set_iommu_pde_info() functions Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-21  0:58   ` Tian, Kevin
  2018-09-21 10:16   ` Jan Beulich
  2018-09-20 14:11 ` [PATCH 7/7] amd-iommu: add lookup_page method to iommu_ops Paul Durrant
  6 siblings, 2 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Kevin Tian, Paul Durrant

Commit 43d1622b "vtd: add lookup_page method to iommu_ops" added a
lookup method to the VT-d IOMMU implementation. In some cases (such as
when shared EPT is in operation) that function simply passes back an
identity MFN (i.e. an MFN with the same value as the DFN that was passed
in), but this doesn't actually make a lot of sense. If, for instance,
shared EPT is used then really the function should be doing a P2M lookup
since DFN space will be identical to GFN space.

In practice there are no current callers of the lookup_page method and,
when PV-IOMMU support is added, the method will not be called if either
shared EPT is in operation or iommu_passthrough is set, so this patch
simply fails the method with -EOPNOTSUPP in those cases.
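
Under the new semantics a prospective caller would treat -EOPNOTSUPP as
"no separate IOMMU mapping exists". A hedged sketch of such a (currently
hypothetical) caller, assuming the usual iommu_lookup_page() wrapper:

    mfn_t mfn;
    unsigned int flags;
    int rc = iommu_lookup_page(d, dfn, &mfn, &flags);

    if ( rc == -EOPNOTSUPP )
    {
        /*
         * Shared EPT or passthrough: there is no separate IOMMU
         * mapping to report, since DFN space mirrors GFN space.
         */
        return rc;
    }
    if ( rc )
        return rc; /* genuine lookup failure */

    /* mfn and flags are valid here */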

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/drivers/passthrough/vtd/iommu.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 79ecd15e49..1e861b696d 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1853,10 +1853,7 @@ static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
      */
     if ( iommu_use_hap_pt(d) ||
          (iommu_hwdom_passthrough && is_hardware_domain(d)) )
-    {
-        *mfn = _mfn(dfn_x(dfn));
-        return 0;
-    }
+        return -EOPNOTSUPP;
 
     spin_lock(&hd->arch.mapping_lock);
 
-- 
2.11.0


* [PATCH 7/7] amd-iommu: add lookup_page method to iommu_ops
  2018-09-20 14:11 [PATCH 0/7] amd-iommu: cleanup and add lookup_page method Paul Durrant
                   ` (5 preceding siblings ...)
  2018-09-20 14:11 ` [PATCH 6/7] vtd: change lookup_page failure semantics Paul Durrant
@ 2018-09-20 14:11 ` Paul Durrant
  2018-09-20 15:03   ` Jan Beulich
  6 siblings, 1 reply; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 14:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Suravee Suthikulpanit, Andrew Cooper, Paul Durrant,
	Jan Beulich, Brian Woods

This patch adds a new method to the AMD IOMMU implementation to find the
MFN currently mapped by the specified DFN. This is analogous to the
method added for VT-d IOMMU by commit 43d1622b.
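
As a hedged usage sketch (not part of the patch), a mapping installed
via the map method can now be read back and verified:

    int rc = amd_iommu_map_page(d, dfn, mfn,
                                IOMMUF_readable | IOMMUF_writable);

    if ( !rc )
    {
        mfn_t found;
        unsigned int flags;

        rc = amd_iommu_lookup_page(d, dfn, &found, &flags);

        /* Expect the same MFN and both permission flags back. */
        ASSERT(!rc && mfn_eq(found, mfn));
        ASSERT(flags & IOMMUF_readable);
        ASSERT(flags & IOMMUF_writable);
    }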

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Brian Woods <brian.woods@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 xen/drivers/passthrough/amd/iommu_map.c       | 50 +++++++++++++++++++++++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  1 +
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  2 ++
 3 files changed, 53 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index fecde9d645..309720743f 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -69,6 +69,20 @@ static void get_iommu_pde_info(uint32_t *pde, uint64_t *maddr, bool *iw,
                                        IOMMU_PDE_IO_READ_PERMISSION_SHIFT);
 }
 
+static void get_iommu_pte_info(unsigned long l1_mfn, unsigned long dfn,
+                               uint64_t *maddr, bool *iw, bool *ir)
+{
+    uint64_t *table;
+    uint32_t *pde;
+
+    table = map_domain_page(_mfn(l1_mfn));
+    pde = (uint32_t *)(table + pfn_to_pde_idx(dfn,
+                                              IOMMU_PAGING_MODE_LEVEL_1));
+
+    get_iommu_pde_info(pde, maddr, iw, ir);
+    unmap_domain_page(table);
+}
+
 static bool set_iommu_pde_info(uint32_t *pde, uint64_t maddr, bool iw,
                                bool ir)
 {
@@ -793,6 +807,42 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn)
     return 0;
 }
 
+int amd_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                          unsigned int *flags)
+{
+    unsigned long pt_mfn[7] = {};
+    struct domain_iommu *hd = dom_iommu(d);
+    uint64_t maddr;
+    bool iw, ir;
+    int rc;
+
+    if ( iommu_use_hap_pt(d) )
+        return -EOPNOTSUPP;
+
+    spin_lock(&hd->arch.mapping_lock);
+
+    if ( !hd->arch.root_table )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return -EOPNOTSUPP;
+    }
+
+    rc = iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn);
+    if ( rc )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return rc;
+    }
+
+    get_iommu_pte_info(pt_mfn[1], dfn_x(dfn), &maddr, &iw, &ir);
+    spin_unlock(&hd->arch.mapping_lock);
+
+    *mfn = _mfn(maddr >> PAGE_SHIFT);
+    *flags = (iw ? IOMMUF_writable : 0) | (ir ? IOMMUF_readable : 0);
+
+    return 0;
+}
+
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        uint64_t phys_addr,
                                        unsigned long size, int iw, int ir)
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 5e99b6988e..86b97d7eaf 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -616,6 +616,7 @@ const struct iommu_ops amd_iommu_ops = {
     .teardown = amd_iommu_domain_destroy,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
+    .lookup_page = amd_iommu_lookup_page,
     .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 3083d625bd..d451acc28c 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -55,6 +55,8 @@ int amd_iommu_update_ivrs_mapping_acpi(void);
 int __must_check amd_iommu_map_page(struct domain *d, dfn_t dfn,
                                     mfn_t mfn, unsigned int flags);
 int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn);
+int __must_check amd_iommu_lookup_page(struct domain *d, dfn_t dfn,
+                                       mfn_t *mfn, unsigned int *flags);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
-- 
2.11.0


* Re: [PATCH 7/7] amd-iommu: add lookup_page method to iommu_ops
  2018-09-20 14:11 ` [PATCH 7/7] amd-iommu: add lookup_page method to iommu_ops Paul Durrant
@ 2018-09-20 15:03   ` Jan Beulich
  2018-09-20 15:41     ` Paul Durrant
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2018-09-20 15:03 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Brian Woods, Wei Liu, Suravee Suthikulpanit, xen-devel

>>> On 20.09.18 at 16:11, <paul.durrant@citrix.com> wrote:
> This patch adds a new method to the AMD IOMMU implementation to find the
> MFN currently mapped by the specified DFN. This is analogous to the
> method added for VT-d IOMMU by commit 43d1622b.

Please don't provide commit IDs from (presumably) your private repo,
all the more so without a title quoted next to it. At least patches 1 and 6
have the same issue.

Jan



* Re: [PATCH 7/7] amd-iommu: add lookup_page method to iommu_ops
  2018-09-20 15:03   ` Jan Beulich
@ 2018-09-20 15:41     ` Paul Durrant
  0 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-20 15:41 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Andrew Cooper, Brian Woods, Wei Liu, Suravee Suthikulpanit, xen-devel

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Jan Beulich
> Sent: 20 September 2018 16:04
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Brian Woods
> <brian.woods@amd.com>; Wei Liu <wei.liu2@citrix.com>; Suravee
> Suthikulpanit <suravee.suthikulpanit@amd.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: Re: [Xen-devel] [PATCH 7/7] amd-iommu: add lookup_page method to
> iommu_ops
> 
> >>> On 20.09.18 at 16:11, <paul.durrant@citrix.com> wrote:
> > This patch adds a new method to the AMD IOMMU implementation to find the
> > MFN currently mapped by the specified DFN. This is analogous to the
> > method added for VT-d IOMMU by commit 43d1622b.
> 
> Please don't provide commit IDs from (presumably) your private repo,
> all the more so without a title quoted next to it. At least patches 1 and 6
> have the same issue.

OK... sorry, getting ahead of myself. This needs to wait for the pre-req series to go in.

  Paul

> 
> Jan
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH 6/7] vtd: change lookup_page failure semantics
  2018-09-20 14:11 ` [PATCH 6/7] vtd: change lookup_page failure semantics Paul Durrant
@ 2018-09-21  0:58   ` Tian, Kevin
  2018-09-21 10:16   ` Jan Beulich
  1 sibling, 0 replies; 15+ messages in thread
From: Tian, Kevin @ 2018-09-21  0:58 UTC (permalink / raw)
  To: Paul Durrant, xen-devel

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Thursday, September 20, 2018 10:12 PM
> 
> Commit 43d1622b "vtd: add lookup_page method to iommu_ops" added a
> lookup method into the VT-d IOMMU implementation. In some cases (such
> as
> when shared EPT is in operation) that function simply passes back an
> identity MFN (i.e. an MFN with the same value as the DFN that was passed
> in), but this doesn't actually make a lot of sense. If, for instance,
> shared EPT is used then really the function should be doing a P2M lookup
> since DFN space will be identical to GFN space.
> 
> In practice there are no current callers of the lookup_page method and,
> when PV-IOMMU support is added, the method will not be called if either
> shared EPT is in operation or iommu_passthrough is set, so this patch
> simply fails the method with -EOPNOTSUPP in those cases.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>
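
The failure semantics described above amount to an early exit along the
following lines. This is a rough sketch rather than the actual patch:
iommu_use_hap_pt() and iommu_passthrough are assumed names for the
shared-EPT and passthrough checks, and the page-table walk is elided
behind a hypothetical helper.

    /* Sketch: fail the lookup instead of returning an identity MFN
     * when VT-d has no page tables of its own to consult. */
    static int __must_check intel_iommu_lookup_page(struct domain *d, dfn_t dfn,
                                                    mfn_t *mfn, unsigned int *flags)
    {
        /* Shared EPT: DFN space is GFN space; passthrough: DFN space is
         * MFN space. In neither case is there a VT-d mapping to report. */
        if ( iommu_use_hap_pt(d) || iommu_passthrough )
            return -EOPNOTSUPP;

        /* Walk the VT-d page tables for the real answer (hypothetical). */
        return intel_iommu_walk_page_table(d, dfn, mfn, flags);
    }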



* Re: [PATCH 6/7] vtd: change lookup_page failure semantics
  2018-09-20 14:11 ` [PATCH 6/7] vtd: change lookup_page failure semantics Paul Durrant
  2018-09-21  0:58   ` Tian, Kevin
@ 2018-09-21 10:16   ` Jan Beulich
  2018-09-21 10:18     ` Paul Durrant
  1 sibling, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2018-09-21 10:16 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Kevin Tian

>>> On 20.09.18 at 16:11, <paul.durrant@citrix.com> wrote:
> Commit 43d1622b "vtd: add lookup_page method to iommu_ops" added a
> lookup method into the VT-d IOMMU implementation.

With this not having gone in yet, is there any reason not to make the
adjustment right there?

Jan





* Re: [PATCH 6/7] vtd: change lookup_page failure semantics
  2018-09-21 10:16   ` Jan Beulich
@ 2018-09-21 10:18     ` Paul Durrant
  2018-09-21 10:27       ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Paul Durrant @ 2018-09-21 10:18 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: xen-devel, Kevin Tian

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 21 September 2018 11:17
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Kevin Tian <kevin.tian@intel.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: Re: [Xen-devel] [PATCH 6/7] vtd: change lookup_page failure
> semantics
> 
> >>> On 20.09.18 at 16:11, <paul.durrant@citrix.com> wrote:
> > Commit 43d1622b "vtd: add lookup_page method to iommu_ops" added a
> > lookup method into the VT-d IOMMU implementation.
> 
> With this not having gone in yet, is there any reason not to make the
> adjustment right there?
> 

No, other than that commit having all the necessary acks and thus
potentially being committed whilst I was working on this series. Would
you rather I submitted v11 of the other series?

  Paul

> Jan
> 




* Re: [PATCH 6/7] vtd: change lookup_page failure semantics
  2018-09-21 10:18     ` Paul Durrant
@ 2018-09-21 10:27       ` Jan Beulich
  2018-09-21 10:31         ` Paul Durrant
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2018-09-21 10:27 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Kevin Tian

>>> On 21.09.18 at 12:18, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 21 September 2018 11:17
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Kevin Tian <kevin.tian@intel.com>; xen-devel <xen-
>> devel@lists.xenproject.org>
>> Subject: Re: [Xen-devel] [PATCH 6/7] vtd: change lookup_page failure
>> semantics
>> 
>> >>> On 20.09.18 at 16:11, <paul.durrant@citrix.com> wrote:
>> > Commit 43d1622b "vtd: add lookup_page method to iommu_ops" added a
>> > lookup method into the VT-d IOMMU implementation.
>> 
>> With this not having gone in yet, is there any reason not to make the
>> adjustment right there?
>> 
> 
> No, other than that commit having all the necessary acks and thus 
> potentially being committed whilst I was working on this series. Would you 
> rather I submitted v11 of the other series?

I'm not going to insist, but it would look cleaner that way. AMD acks for
the first of your series are still way out anyway, I'm afraid.

Jan





* Re: [PATCH 6/7] vtd: change lookup_page failure semantics
  2018-09-21 10:27       ` Jan Beulich
@ 2018-09-21 10:31         ` Paul Durrant
  0 siblings, 0 replies; 15+ messages in thread
From: Paul Durrant @ 2018-09-21 10:31 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: xen-devel, Kevin Tian

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 21 September 2018 11:28
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Kevin Tian <kevin.tian@intel.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: RE: [Xen-devel] [PATCH 6/7] vtd: change lookup_page failure
> semantics
> 
> >>> On 21.09.18 at 12:18, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 21 September 2018 11:17
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Kevin Tian <kevin.tian@intel.com>; xen-devel <xen-
> >> devel@lists.xenproject.org>
> >> Subject: Re: [Xen-devel] [PATCH 6/7] vtd: change lookup_page failure
> >> semantics
> >>
> >> >>> On 20.09.18 at 16:11, <paul.durrant@citrix.com> wrote:
> >> > Commit 43d1622b "vtd: add lookup_page method to iommu_ops" added a
> >> > lookup method into the VT-d IOMMU implementation.
> >>
> >> With this not having gone in yet, is there any reason not to make the
> >> adjustment right there?
> >>
> >
> > No, other than that commit having all the necessary acks and thus
> > potentially being committed whilst I was working on this series. Would
> you
> > rather I submitted v11 of the other series?
> 
> I'm not going to insist, but it would look cleaner that way. AMD acks for
> the first of your series are still way out anyway, I'm afraid.
> 

OK. V11 coming up then.

  Paul

> Jan
> 




