* IOMMU, vtd and iotlb flush rework (v3)
@ 2011-11-07 18:25 Jean Guyader
  2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
  0 siblings, 1 reply; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim

In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4GB of RAM.

Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was far too disruptive in terms of code changes.

This patch series implements Plan A.

http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html

Changes between v2 and v3:
        - Check for the presence of the iotlb_flush_all callback before calling it.

Changes between v1 and v2:
       - Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
       - Store iommu_dont_flush as a per-CPU flag.
       - Change the code in hvmloader to relocate in batches of up to 64K pages; .size is now 16 bits.

Jean Guyader (6):
      vtd: Refactor iotlb flush code
      iommu: Introduce iommu_flush and iommu_flush_all.
      add_to_physmap: Move the code for XENMEM_add_to_physmap.
      mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
      hvmloader: Change memory relocation loop when overlap with PCI hole.
      Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush

 tools/firmware/hvmloader/pci.c      |   20 +++-
 xen/arch/x86/mm.c                   |  203 +++++++++++++++++++++--------------
 xen/drivers/passthrough/iommu.c     |   25 +++++
 xen/drivers/passthrough/vtd/iommu.c |  100 ++++++++++--------
 xen/include/public/memory.h         |    4 +
 xen/include/xen/iommu.h             |    7 ++
 6 files changed, 230 insertions(+), 129 deletions(-)


* [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-07 18:25 IOMMU, vtd and iotlb flush rework (v3) Jean Guyader
@ 2011-11-07 18:25 ` Jean Guyader
  2011-11-07 18:25   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
  2011-11-08  3:10   ` [PATCH 1/6] vtd: Refactor iotlb flush code Kay, Allen M
  0 siblings, 2 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader



Factorize the iotlb flush code from map_page and unmap_page into
its own function.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
 1 files changed, 43 insertions(+), 43 deletions(-)


diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
     }
 }
 
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+        int dma_old_pte_present, unsigned int page_count)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct acpi_drhd_unit *drhd;
+    struct iommu *iommu;
+    int flush_dev_iotlb;
+    int iommu_domid;
+
+    /*
+     * No need pcideves_lock here because we have flush
+     * when assign/deassign device
+     */
+    for_each_drhd_unit ( drhd )
+    {
+        iommu = drhd->iommu;
+
+        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+            continue;
+
+        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+        iommu_domid= domain_iommu_domid(d, iommu);
+        if ( iommu_domid == -1 )
+            continue;
+
+        if ( page_count > 1 || gfn == -1 )
+        {
+            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                        0, flush_dev_iotlb) )
+                iommu_flush_write_buffer(iommu);
+        }
+        else
+        {
+            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+                        !dma_old_pte_present, flush_dev_iotlb) )
+                iommu_flush_write_buffer(iommu);
+        }
+    }
+}
+
 /* clear one page's page table */
 static void dma_pte_clear_one(struct domain *domain, u64 addr)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
-    struct acpi_drhd_unit *drhd;
-    struct iommu *iommu;
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
-    int flush_dev_iotlb;
-    int iommu_domid;
     struct mapped_rmrr *mrmrr;
 
     spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     spin_unlock(&hd->mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
-    /* No need pcidevs_lock here since do that on assign/deassign device*/
-    for_each_drhd_unit ( drhd )
-    {
-        iommu = drhd->iommu;
-        if ( test_bit(iommu->index, &hd->iommu_bitmap) )
-        {
-            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-            iommu_domid= domain_iommu_domid(domain, iommu);
-            if ( iommu_domid == -1 )
-                continue;
-            if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
-                                       0, 0, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
-        }
-    }
+    __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
 
     unmap_vtd_domain_page(page);
 
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
     unsigned int flags)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct acpi_drhd_unit *drhd;
-    struct iommu *iommu;
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
     u64 pg_maddr;
-    int flush_dev_iotlb;
-    int iommu_domid;
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
     spin_unlock(&hd->mapping_lock);
     unmap_vtd_domain_page(page);
 
-    /*
-     * No need pcideves_lock here because we have flush
-     * when assign/deassign device
-     */
-    for_each_drhd_unit ( drhd )
-    {
-        iommu = drhd->iommu;
-
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
-            continue;
-
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_domid= domain_iommu_domid(d, iommu);
-        if ( iommu_domid == -1 )
-            continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                   (paddr_t)gfn << PAGE_SHIFT_4K, 0,
-                                   !dma_pte_present(old), flush_dev_iotlb) )
-            iommu_flush_write_buffer(iommu);
-    }
+    __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
 
     return 0;
 }


* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
  2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-07 18:25   ` Jean Guyader
  2011-11-07 18:25     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
  2011-11-08  3:10   ` [PATCH 1/6] vtd: Refactor iotlb flush code Kay, Allen M
  1 sibling, 1 reply; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader



Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/drivers/passthrough/iommu.c     |   20 ++++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.c |   12 ++++++++++++
 xen/include/xen/iommu.h             |    5 +++++
 3 files changed, 37 insertions(+), 0 deletions(-)


diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
     return hd->platform_ops->unmap_page(d, gfn);
 }
 
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+        return;
+
+    hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+        return;
+
+    hd->platform_ops->iotlb_flush_all(d);
+}
+
 /* caller should hold the pcidevs_lock */
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
 {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     }
 }
 
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+    __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
 /* clear one page's page table */
 static void dma_pte_clear_one(struct domain *domain, u64 addr)
 {
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
     .resume = vtd_resume,
     .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
+    .iotlb_flush = intel_iommu_iotlb_flush,
+    .iotlb_flush_all = intel_iommu_iotlb_flush_all,
 };
 
 /*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
     void (*resume)(void);
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
+    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+    void (*iotlb_flush_all)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
 
 int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
 
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
 #endif /* _IOMMU_H_ */


* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
  2011-11-07 18:25   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-07 18:25     ` Jean Guyader
  2011-11-07 18:25       ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
  2011-11-08 13:24       ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Tim Deegan
  0 siblings, 2 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader



Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c |  188 ++++++++++++++++++++++++++++-------------------------
 1 files changed, 99 insertions(+), 89 deletions(-)


diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
 {
     struct page_info *page = NULL;
+    unsigned long prev_mfn, mfn = 0, gpfn;
     int rc;
 
-    switch ( op )
-    {
-    case XENMEM_add_to_physmap:
+    switch ( xatp.space )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
-        struct domain *d;
-
-        if ( copy_from_guest(&xatp, arg, 1) )
-            return -EFAULT;
+    case XENMAPSPACE_shared_info:
+        if ( xatp.idx == 0 )
+            mfn = virt_to_mfn(d->shared_info);
+        break;
+    case XENMAPSPACE_grant_table:
+        spin_lock(&d->grant_table->lock);
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        if ( d->grant_table->gt_version == 0 )
+            d->grant_table->gt_version = 1;
 
-        if ( xsm_add_to_physmap(current->domain, d) )
+        if ( d->grant_table->gt_version == 2 &&
+             (xatp.idx & XENMAPIDX_grant_table_status) )
         {
-            rcu_unlock_domain(d);
-            return -EPERM;
+            xatp.idx &= ~XENMAPIDX_grant_table_status;
+            if ( xatp.idx < nr_status_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+        }
+        else
+        {
+            if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+                 (xatp.idx < max_nr_grant_frames) )
+                gnttab_grow_table(d, xatp.idx + 1);
+
+            if ( xatp.idx < nr_grant_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
         }
 
-        switch ( xatp.space )
+        spin_unlock(&d->grant_table->lock);
+        break;
+    case XENMAPSPACE_gmfn:
+    {
+        p2m_type_t p2mt;
+
+        xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+        /* If the page is still shared, exit early */
+        if ( p2m_is_shared(p2mt) )
         {
-        case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+        if ( !get_page_from_pagenr(xatp.idx, d) )
             break;
-        case XENMAPSPACE_grant_table:
-            spin_lock(&d->grant_table->lock);
+        mfn = xatp.idx;
+        page = mfn_to_page(mfn);
+        break;
+    }
+    default:
+        break;
+    }
 
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
+        if ( page )
+            put_page(page);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
 
-            if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
-            {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
-            }
-            else
-            {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
+    domain_lock(d);
 
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
-            }
+    if ( page )
+        put_page(page);
 
-            spin_unlock(&d->grant_table->lock);
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp.gpfn);
+    }
 
-            xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                rcu_unlock_domain(d);
-                return -ENOMEM;
-            }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
-                break;
-            mfn = xatp.idx;
-            page = mfn_to_page(mfn);
-            break;
-        }
-        default:
-            break;
-        }
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, 0);
 
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
 
-        domain_lock(d);
+    domain_unlock(d);
 
-        if ( page )
-            put_page(page);
+    return rc;
+}
 
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
 
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+    int rc;
+
+    switch ( op )
+    {
+    case XENMEM_add_to_physmap:
+    {
+        struct xen_add_to_physmap xatp;
+        struct domain *d;
 
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+        if ( copy_from_guest(&xatp, arg, 1) )
+            return -EFAULT;
+
+        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+        if ( rc != 0 )
+            return rc;
+
+        if ( xsm_add_to_physmap(current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
 
-        domain_unlock(d);
+        xenmem_add_to_physmap(d, xatp);
 
         rcu_unlock_domain(d);
 


* [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
  2011-11-07 18:25     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
@ 2011-11-07 18:25       ` Jean Guyader
  2011-11-07 18:25         ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
  2011-11-08  9:20         ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jan Beulich
  2011-11-08 13:24       ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Tim Deegan
  1 sibling, 2 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader



XENMEM_add_to_physmap_gmfn_range is like XENMEM_add_to_physmap on
the gmfn space, but takes the number of pages over which Xen will iterate.

Use the 16 bits of padding between .domid and .space in struct xen_add_to_physmap
to keep compatibility with older versions.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c           |   22 +++++++++++++++++++++-
 xen/include/public/memory.h |    4 ++++
 2 files changed, 25 insertions(+), 1 deletions(-)


diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f75011e..cca64b8 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4714,9 +4714,29 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
             return -EPERM;
         }
 
-        xenmem_add_to_physmap(d, xatp);
+        if ( xatp.space != XENMAPSPACE_gmfn_range )
+            xatp.size = 1;
+
+        for ( ; xatp.size > 0; xatp.size-- )
+        {
+            if ( hypercall_preempt_check() )
+            {
+                rc = -EAGAIN;
+                break;
+            }
+            xenmem_add_to_physmap(d, xatp);
+            xatp.idx++;
+            xatp.gpfn++;
+        }
 
         rcu_unlock_domain(d);
+        if ( rc == -EAGAIN )
+        {
+            if ( copy_to_guest(arg, &xatp, 1) )
+                return -EFAULT;
+            rc = hypercall_create_continuation(
+                    __HYPERVISOR_memory_op, "ih", op, arg);
+        }
 
         return rc;
     }
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 08355e3..8bc8272 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,10 +208,14 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
+    /* Number of pages to go through for gmfn_range */
+    uint16_t    size;
+
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
 #define XENMAPSPACE_gmfn        2 /* GMFN */
+#define XENMAPSPACE_gmfn_range  3 /* GMFN_range */
     unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000


* [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole.
  2011-11-07 18:25       ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
@ 2011-11-07 18:25         ` Jean Guyader
  2011-11-07 18:25           ` [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush Jean Guyader
  2011-11-08  9:20         ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jan Beulich
  1 sibling, 1 reply; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader



Change the way we relocate memory pages if they overlap with the PCI hole.
Use the new map space (XENMAPSPACE_gmfn_range) to move the loop into Xen.

This code usually gets triggered when a device is passed through to a guest
and the PCI hole has to be extended to have enough room to map the device BARs.
The PCI hole will then start lower, and it might overlap with some RAM that has
been allocated for the guest. That usually happens if the guest has more than
4GB of RAM. We have to relocate those pages to high memory, otherwise they
won't be accessible.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 tools/firmware/hvmloader/pci.c |   20 ++++++++++++++------
 1 files changed, 14 insertions(+), 6 deletions(-)


diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 29ec011..3bd6ac5 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -50,6 +50,7 @@ void pci_setup(void)
         uint32_t devfn, bar_reg, bar_sz;
     } *bars = (struct bars *)scratch_start;
     unsigned int i, nr_bars = 0;
+    unsigned long pci_mem_start_pg;
 
     /* Program PCI-ISA bridge with appropriate link routes. */
     isa_irq = 0;
@@ -185,17 +186,24 @@ void pci_setup(void)
             ((pci_mem_start << 1) != 0) )
         pci_mem_start <<= 1;
 
-    while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+    /* Relocate RAM that overlaps (in 64K chunks) */
+    pci_mem_start_pg = (pci_mem_start >> PAGE_SHIFT);
+    while (pci_mem_start_pg < hvm_info->low_mem_pgend)
     {
         struct xen_add_to_physmap xatp;
-        if ( hvm_info->high_mem_pgend == 0 )
-            hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+        unsigned int size = hvm_info->low_mem_pgend - pci_mem_start_pg;
         xatp.domid = DOMID_SELF;
-        xatp.space = XENMAPSPACE_gmfn;
-        xatp.idx   = --hvm_info->low_mem_pgend;
-        xatp.gpfn  = hvm_info->high_mem_pgend++;
+        xatp.space = XENMAPSPACE_gmfn_range;
+        xatp.idx = pci_mem_start_pg;
+        xatp.gpfn = hvm_info->high_mem_pgend;
+        size = size > ((1 << 16) - 1) ? ((1 << 16) - 1) : size;
+        xatp.size = size;
+
         if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
             BUG();
+        pci_mem_start_pg += size;
+        hvm_info->high_mem_pgend += size;
+        hvm_info->low_mem_pgend = pci_mem_start_pg;
     }
 
     mem_resource.base = pci_mem_start;


* [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
  2011-11-07 18:25         ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
@ 2011-11-07 18:25           ` Jean Guyader
  2011-11-08 13:45             ` Tim Deegan
  0 siblings, 1 reply; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader



Add a per-CPU flag that will be checked by the low-level IOMMU code
to skip IOTLB flushes; iommu_iotlb_flush shall then be called explicitly.
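
For illustration, the intended calling pattern is roughly the sketch below,
mirroring the arch/x86/mm.c hunk in this patch; batch_map_one() is a
hypothetical stand-in for the per-page work:

    /* Sketch: suppress per-page IOTLB flushes, then flush the batch once. */
    unsigned long i;

    if ( need_iommu(d) )
        this_cpu(iommu_dont_flush_iotlb) = 1;

    for ( i = 0; i < count; i++ )
        batch_map_one(d, idx + i, gpfn + i);   /* hypothetical per-page work */

    if ( need_iommu(d) )
    {
        this_cpu(iommu_dont_flush_iotlb) = 0;
        iommu_iotlb_flush(d, gpfn, count);     /* one flush covers the batch */
    }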

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c                   |   15 ++++++++++++++-
 xen/drivers/passthrough/iommu.c     |    5 +++++
 xen/drivers/passthrough/vtd/iommu.c |    6 ++++--
 xen/include/xen/iommu.h             |    2 ++
 4 files changed, 25 insertions(+), 3 deletions(-)


diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index cca64b8..7aece56 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4698,7 +4698,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     {
     case XENMEM_add_to_physmap:
     {
-        struct xen_add_to_physmap xatp;
+        struct xen_add_to_physmap xatp, start_xatp;
         struct domain *d;
 
         if ( copy_from_guest(&xatp, arg, 1) )
@@ -4716,7 +4716,13 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
 
         if ( xatp.space != XENMAPSPACE_gmfn_range )
             xatp.size = 1;
+        else
+        {
+            if ( need_iommu(d) )
+                this_cpu(iommu_dont_flush_iotlb) = 1;
+        }
 
+        start_xatp = xatp;
         for ( ; xatp.size > 0; xatp.size-- )
         {
             if ( hypercall_preempt_check() )
@@ -4729,6 +4735,13 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
             xatp.gpfn++;
         }
 
+        if ( xatp.space == XENMAPSPACE_gmfn_range && need_iommu(d) )
+        {
+            this_cpu(iommu_dont_flush_iotlb) = 0;
+            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp.size);
+            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp.size);
+        }
+
         rcu_unlock_domain(d);
         if ( rc == -EAGAIN )
         {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ca7b37b..bacca11 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -52,6 +52,8 @@ bool_t __read_mostly iommu_hap_pt_share = 1;
 bool_t __read_mostly iommu_debug;
 bool_t __read_mostly amd_iommu_perdev_intremap;
 
+DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -227,6 +229,7 @@ static int iommu_populate_page_table(struct domain *d)
 
     spin_lock(&d->page_alloc_lock);
 
+    this_cpu(iommu_dont_flush_iotlb) = 1;
     page_list_for_each ( page, &d->page_list )
     {
         if ( is_hvm_domain(d) ||
@@ -244,6 +247,8 @@ static int iommu_populate_page_table(struct domain *d)
             }
         }
     }
+    this_cpu(iommu_dont_flush_iotlb) = 0;
+    iommu_iotlb_flush_all(d);
     spin_unlock(&d->page_alloc_lock);
     return 0;
 }
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7ec9541..a3dd018 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -660,7 +660,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     spin_unlock(&hd->mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
-    __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
+    if ( !this_cpu(iommu_dont_flush_iotlb) )
+        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
 
     unmap_vtd_domain_page(page);
 
@@ -1753,7 +1754,8 @@ static int intel_iommu_map_page(
     spin_unlock(&hd->mapping_lock);
     unmap_vtd_domain_page(page);
 
-    __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+    if ( !this_cpu(iommu_dont_flush_iotlb) )
+        __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
 
     return 0;
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index a1034df..c757a78 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -160,4 +160,6 @@ int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
 
+DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+
 #endif /* _IOMMU_H_ */


* RE: [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
  2011-11-07 18:25   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-08  3:10   ` Kay, Allen M
  2011-11-08  7:42     ` Jean Guyader
  1 sibling, 1 reply; 31+ messages in thread
From: Kay, Allen M @ 2011-11-08  3:10 UTC (permalink / raw)
  To: Jean Guyader, xen-devel; +Cc: tim


Jean,

The original code does not call iommu_flush_iotlb_dsi().  What is the reason the refactored code needs to use domain selective invalidation?

Allen
-----

+        if ( page_count > 1 || gfn == -1 )
+        {
+            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                        0, flush_dev_iotlb) )
+                iommu_flush_write_buffer(iommu);
+        }
+        else
+        {
+            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+                        !dma_old_pte_present, flush_dev_iotlb) )
+                iommu_flush_write_buffer(iommu);

-----Original Message-----
From: Jean Guyader [mailto:jean.guyader@eu.citrix.com] 
Sent: Monday, November 07, 2011 10:25 AM
To: xen-devel@lists.xensource.com
Cc: tim@xen.org; Kay, Allen M; Jean Guyader
Subject: [PATCH 1/6] vtd: Refactor iotlb flush code


Factorize the iotlb flush code from map_page and unmap_page into its own function.

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
 1 files changed, 43 insertions(+), 43 deletions(-)



* Re: [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-08  3:10   ` [PATCH 1/6] vtd: Refactor iotlb flush code Kay, Allen M
@ 2011-11-08  7:42     ` Jean Guyader
  2011-11-09  2:41       ` Kay, Allen M
  2011-11-09  2:50       ` Kay, Allen M
  0 siblings, 2 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-08  7:42 UTC (permalink / raw)
  To: Kay, Allen M; +Cc: xen-devel, Tim (Xen.org), Jean Guyader


Allen,

You are probably talking about __intel_iommu_iotlb_flush.
This function takes a range of addresses to flush. I haven't
found a function in the vtd code to invalidate a range of
addresses without doing a loop of flush_iotlb_psi, so I thought
that the most efficient and quickest way to flush a range would
be to use a domain selective invalidation.
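
For comparison, the page-by-page alternative would look roughly like the
sketch below, reusing the variables and helpers from the quoted patch; on a
large range it issues one PSI invalidation per page, which is what the DSI
path avoids:

    /* Hypothetical per-page variant: one PSI invalidation per page. */
    for ( i = 0; i < page_count; i++ )
        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
                                   (paddr_t)(gfn + i) << PAGE_SHIFT_4K, 0,
                                   !dma_old_pte_present, flush_dev_iotlb) )
            iommu_flush_write_buffer(iommu);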

Jean

On 08/11 03:10, Kay, Allen M wrote:
> Jean,
> 
> The original code does not call iommu_flush_iotlb_dsi().  What is the reason the refactored code needs to use domain selective invalidation?
> 
> Allen
> -----
> 
> +        if ( page_count > 1 || gfn == -1 )
> +        {
> +            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
> +                        0, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> +        }
> +        else
> +        {
> +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> +                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
> +                        !dma_old_pte_present, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> 
> -----Original Message-----
> From: Jean Guyader [mailto:jean.guyader@eu.citrix.com] 
> Sent: Monday, November 07, 2011 10:25 AM
> To: xen-devel@lists.xensource.com
> Cc: tim@xen.org; Kay, Allen M; Jean Guyader
> Subject: [PATCH 1/6] vtd: Refactor iotlb flush code
> 
> 
> Factorize the iotlb flush code from map_page and unmap_page into its own function.
> 
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
>  xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
>  1 files changed, 43 insertions(+), 43 deletions(-)
> 


* Re: [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
  2011-11-07 18:25       ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
  2011-11-07 18:25         ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
@ 2011-11-08  9:20         ` Jan Beulich
  2011-11-08 13:32           ` Tim Deegan
  1 sibling, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2011-11-08  9:20 UTC (permalink / raw)
  To: Jean Guyader; +Cc: tim, xen-devel, allen.m.kay

>>> On 07.11.11 at 19:25, Jean Guyader <jean.guyader@eu.citrix.com> wrote:

Sorry, noticed this only now, but neither title nor description of this
are in sync with the actual patch.

Jan

> XENMEM_add_to_physmap_gmfn_range is like XENMEM_add_to_physmap on
> the gmfn space, but takes the number of pages over which Xen will iterate.
> 
> Use the 16 bits of padding between .domid and .space in struct 
> xen_add_to_physmap
> to keep compatibility with older versions.
> 
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
>  xen/arch/x86/mm.c           |   22 +++++++++++++++++++++-
>  xen/include/public/memory.h |    4 ++++
>  2 files changed, 25 insertions(+), 1 deletions(-)


* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
  2011-11-07 18:25     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
  2011-11-07 18:25       ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
@ 2011-11-08 13:24       ` Tim Deegan
  1 sibling, 0 replies; 31+ messages in thread
From: Tim Deegan @ 2011-11-08 13:24 UTC (permalink / raw)
  To: Jean Guyader; +Cc: xen-devel, allen.m.kay

At 18:25 +0000 on 07 Nov (1320690324), Jean Guyader wrote:
> 
> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).
> 
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>

Acked-by: Tim Deegan <tim@xen.org>


* Re: [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
  2011-11-08  9:20         ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jan Beulich
@ 2011-11-08 13:32           ` Tim Deegan
  2011-11-08 13:39             ` Tim Deegan
  0 siblings, 1 reply; 31+ messages in thread
From: Tim Deegan @ 2011-11-08 13:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, allen.m.kay, Jean Guyader

At 02:20 -0700 on 08 Nov (1320718835), Jan Beulich wrote:
> >>> On 07.11.11 at 19:25, Jean Guyader <jean.guyader@eu.citrix.com> wrote:
> 
> Sorry, noticed this only now, but neither title nor description of this
> are in sync with the actual patch.


Indeed; they should be updated.  But otherwise:
Acked-by: Tim Deegan <tim@xen.org>

> > XENMEM_add_to_physmap_gmfn_range is like XENMEM_add_to_physmap on
> > the gmfn space, but takes the number of pages over which Xen will iterate.
> > 
> > Use the 16 bits of padding between .domid and .space in struct 
> > xen_add_to_physmap
> > to keep compatibility with older versions.
> > 
> > Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> > ---
> >  xen/arch/x86/mm.c           |   22 +++++++++++++++++++++-
> >  xen/include/public/memory.h |    4 ++++
> >  2 files changed, 25 insertions(+), 1 deletions(-)

* Re: [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
  2011-11-08 13:32           ` Tim Deegan
@ 2011-11-08 13:39             ` Tim Deegan
  0 siblings, 0 replies; 31+ messages in thread
From: Tim Deegan @ 2011-11-08 13:39 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, allen.m.kay, Jean Guyader

At 13:32 +0000 on 08 Nov (1320759148), Tim Deegan wrote:
> At 02:20 -0700 on 08 Nov (1320718835), Jan Beulich wrote:
> > >>> On 07.11.11 at 19:25, Jean Guyader <jean.guyader@eu.citrix.com> wrote:
> > 
> > Sorry, noticed this only now, but neither title nor description of this
> > are in sync with the actual patch.
> 
> 
> Indeed; they should be updated.  But otherwise:
> Acked-by: Tim Deegan <tim@xen.org>

No, wait, that was the other patch, which I already acked!
ENOCOFFEE. :(

I have a few other comments on this patch...

> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index f75011e..cca64b8 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4714,9 +4714,29 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
>              return -EPERM;
>          }
>  
> -        xenmem_add_to_physmap(d, xatp);
> +        if ( xatp.space != XENMAPSPACE_gmfn_range )
> +            xatp.size = 1;
> +
> +        for ( ; xatp.size > 0; xatp.size-- )
> +        {
> +            if ( hypercall_preempt_check() )
> +            {
> +                rc = -EAGAIN;
> +                break;
> +            }
> +            xenmem_add_to_physmap(d, xatp);
> +            xatp.idx++;
> +            xatp.gpfn++;
> +        }

Having moved XATP into its own function, this loop probably belongs in
that function. 

While I'm looking at it, updating two loop vars explicitly and one in
the header is a bit ugly - why not just use a while() and update all
three together?
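
A minimal sketch of that form, with the body unchanged from the patch:

    while ( xatp.size > 0 )
    {
        if ( hypercall_preempt_check() )
        {
            rc = -EAGAIN;
            break;
        }
        xenmem_add_to_physmap(d, xatp);
        xatp.idx++;
        xatp.gpfn++;
        xatp.size--;
    }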


>          rcu_unlock_domain(d);
> +        if ( rc == -EAGAIN )
> +        {
> +            if ( copy_to_guest(arg, &xatp, 1) )
> +                return -EFAULT;
> +            rc = hypercall_create_continuation(
> +                    __HYPERVISOR_memory_op, "ih", op, arg);
> +        }

I think this might need some compat glue in
arch/x86/x86_64/compat/mm.c for it to work with 32-bit callers.


Tim.


* Re: [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
  2011-11-07 18:25           ` [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush Jean Guyader
@ 2011-11-08 13:45             ` Tim Deegan
  0 siblings, 0 replies; 31+ messages in thread
From: Tim Deegan @ 2011-11-08 13:45 UTC (permalink / raw)
  To: Jean Guyader; +Cc: xen-devel, allen.m.kay

At 18:25 +0000 on 07 Nov (1320690327), Jean Guyader wrote:
> 
> Add a per-CPU flag that will be checked by the low-level IOMMU code
> to skip IOTLB flushes; iommu_iotlb_flush shall then be called explicitly.

The general approach is OK.  I think the explicit flush on the xatp path
belongs in the inner xatp function, along with the rest of the loop.

Also, please add a comment beside the declaration in iommu.h to explain
what the flag does and how it should be used.

Cheers,

Tim.


* RE: [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-08  7:42     ` Jean Guyader
@ 2011-11-09  2:41       ` Kay, Allen M
  2011-11-09 22:15         ` Jean Guyader
  2011-11-09  2:50       ` Kay, Allen M
  1 sibling, 1 reply; 31+ messages in thread
From: Kay, Allen M @ 2011-11-09  2:41 UTC (permalink / raw)
  To: Jean Guyader; +Cc: xen-devel, Tim (Xen.org), Jean Guyader

Jean,

Page 122 of VT-d spec indicates you can invalidate multiple pages by setting the Address Mask (AM) field.  Will this achieve what you wanted to do?

http://download.intel.com/technology/computing/vptech/Intel(r)_VT_for_Direct_IO.pdf

Allen

-----Original Message-----
From: Jean Guyader [mailto:jean.guyader@eu.citrix.com] 
Sent: Monday, November 07, 2011 11:42 PM
To: Kay, Allen M
Cc: Jean Guyader; xen-devel@lists.xensource.com; Tim (Xen.org)
Subject: Re: [PATCH 1/6] vtd: Refactor iotlb flush code


Allen,

You are probably talking about __intel_iommu_iotlb_flush.
This function takes a range of addresses to flush. I haven't found a function in the vtd code to invalidate a range of addresses without doing a loop of flush_iotlb_psi, so I thought that the most efficient and quickest way to flush a range would be to use a domain selective invalidation.

Jean

On 08/11 03:10, Kay, Allen M wrote:
> Jean,
> 
> The original code does not call iommu_flush_iotlb_dsi().  What is the reason the refactored code needs to use domain selective invalidation?
> 
> Allen
> -----
> 
> +        if ( page_count > 1 || gfn == -1 )
> +        {
> +            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
> +                        0, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> +        }
> +        else
> +        {
> +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> +                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
> +                        !dma_old_pte_present, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> 
> -----Original Message-----
> From: Jean Guyader [mailto:jean.guyader@eu.citrix.com]
> Sent: Monday, November 07, 2011 10:25 AM
> To: xen-devel@lists.xensource.com
> Cc: tim@xen.org; Kay, Allen M; Jean Guyader
> Subject: [PATCH 1/6] vtd: Refactor iotlb flush code
> 
> 
> Factorize the iotlb flush code from map_page and unmap_page into its own function.
> 
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
>  xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
>  1 files changed, 43 insertions(+), 43 deletions(-)
> 


* RE: [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-08  7:42     ` Jean Guyader
  2011-11-09  2:41       ` Kay, Allen M
@ 2011-11-09  2:50       ` Kay, Allen M
  1 sibling, 0 replies; 31+ messages in thread
From: Kay, Allen M @ 2011-11-09  2:50 UTC (permalink / raw)
  To: Jean Guyader; +Cc: xen-devel, Tim (Xen.org), Jean Guyader

The fourth parameter, currently 0, is supposed to be the AM field.  It is now named order.

+            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+                        !dma_old_pte_present, flush_dev_iotlb) )

-----Original Message-----
From: Kay, Allen M 
Sent: Tuesday, November 08, 2011 6:42 PM
To: 'Jean Guyader'
Cc: Jean Guyader; xen-devel@lists.xensource.com; Tim (Xen.org)
Subject: RE: [PATCH 1/6] vtd: Refactor iotlb flush code

Jean,

Page 122 of VT-d spec indicates you can invalidate multiple pages by setting the Address Mask (AM) field.  Will this achieve what you wanted to do?

http://download.intel.com/technology/computing/vptech/Intel(r)_VT_for_Direct_IO.pdf

Allen

-----Original Message-----
From: Jean Guyader [mailto:jean.guyader@eu.citrix.com] 
Sent: Monday, November 07, 2011 11:42 PM
To: Kay, Allen M
Cc: Jean Guyader; xen-devel@lists.xensource.com; Tim (Xen.org)
Subject: Re: [PATCH 1/6] vtd: Refactor iotlb flush code


Allen,

You are probably talking about __intel_iommu_iotlb_flush.
This function takes a range of addresses to flush. I haven't found a function in the vtd code to invalidate a range of addresses without doing a loop of flush_iotlb_psi, so I thought that the most efficient and quickest way to flush a range would be to use a domain selective invalidation.

Jean

On 08/11 03:10, Kay, Allen M wrote:
> Jean,
> 
> The original code does not call iommu_flush_iotlb_dsi().  What is the reason the refactored code needs to use domain selective invalidation?
> 
> Allen
> -----
> 
> +        if ( page_count > 1 || gfn == -1 )
> +        {
> +            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
> +                        0, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> +        }
> +        else
> +        {
> +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> +                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
> +                        !dma_old_pte_present, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> 
> -----Original Message-----
> From: Jean Guyader [mailto:jean.guyader@eu.citrix.com]
> Sent: Monday, November 07, 2011 10:25 AM
> To: xen-devel@lists.xensource.com
> Cc: tim@xen.org; Kay, Allen M; Jean Guyader
> Subject: [PATCH 1/6] vtd: Refactor iotlb flush code
> 
> 
> > Factorize the iotlb flush code from map_page and unmap_page into its own function.
> 
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
>  xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
>  1 files changed, 43 insertions(+), 43 deletions(-)
> 


* Re: [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-09  2:41       ` Kay, Allen M
@ 2011-11-09 22:15         ` Jean Guyader
  2011-11-09 23:25           ` Kay, Allen M
  0 siblings, 1 reply; 31+ messages in thread
From: Jean Guyader @ 2011-11-09 22:15 UTC (permalink / raw)
  To: Kay, Allen M; +Cc: xen-devel, Tim (Xen.org), Jean Guyader

Allen,

The input addresses are not necessarily aligned; to be able to use this feature
I would have to break the range into aligned chunks of different sizes and flush
them separately.

I personally don't think this optimization is worth the effort.
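
For reference, using the AM field would mean splitting the range into the
naturally aligned power-of-two blocks the encoding requires, roughly as in
the hypothetical sketch below (flush_psi_order() stands in for a PSI
invalidation issued with AM set to the given order):

    /* Hypothetical: split [gfn, gfn + count) into naturally aligned
     * power-of-two blocks. */
    static void flush_range_with_am(unsigned long gfn, unsigned long count)
    {
        while ( count )
        {
            unsigned int order = 0;

            /* Grow the block while it stays aligned and within the range. */
            while ( !(gfn & ((1UL << (order + 1)) - 1)) &&
                    (1UL << (order + 1)) <= count )
                order++;

            flush_psi_order(gfn, order);  /* stand-in: PSI flush, AM = order */
            gfn += 1UL << order;
            count -= 1UL << order;
        }
    }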

Jean

On 09/11 02:41, Kay, Allen M wrote:
> Jean,
> 
> Page 122 of VT-d spec indicates you can invalidate multiple pages by setting the Address Mask (AM) field.  Will this achieve what you wanted to do?
> 
> http://download.intel.com/technology/computing/vptech/Intel(r)_VT_for_Direct_IO.pdf
> 
> Allen
> 
> -----Original Message-----
> From: Jean Guyader [mailto:jean.guyader@eu.citrix.com] 
> Sent: Monday, November 07, 2011 11:42 PM
> To: Kay, Allen M
> Cc: Jean Guyader; xen-devel@lists.xensource.com; Tim (Xen.org)
> Subject: Re: [PATCH 1/6] vtd: Refactor iotlb flush code
> 
> 
> Allen,
> 
> You are probably talking about __intel_iommu_iotlb_flush.
> This function takes a range of addresses to flush. I haven't found a function in the vtd code to invalidate a range of addresses without doing a loop of flush_iotlb_psi, so I thought that the most efficient and quickest way to flush a range would be to use a domain selective invalidation.
> 
> Jean
> 
> On 08/11 03:10, Kay, Allen M wrote:
> > Jean,
> > 
> > The original code does not call iommu_flush_iotlb_dsi().  What is the reason the refactored code needs to use domain selective invalidation?
> > 
> > Allen
> > -----
> > 
> > +        if ( page_count > 1 || gfn == -1 )
> > +        {
> > +            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
> > +                        0, flush_dev_iotlb) )
> > +                iommu_flush_write_buffer(iommu);
> > +        }
> > +        else
> > +        {
> > +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> > +                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
> > +                        !dma_old_pte_present, flush_dev_iotlb) )
> > +                iommu_flush_write_buffer(iommu);
> > 
> > -----Original Message-----
> > From: Jean Guyader [mailto:jean.guyader@eu.citrix.com]
> > Sent: Monday, November 07, 2011 10:25 AM
> > To: xen-devel@lists.xensource.com
> > Cc: tim@xen.org; Kay, Allen M; Jean Guyader
> > Subject: [PATCH 1/6] vtd: Refactor iotlb flush code
> > 
> > 
> > Factorize the iotlb flush code from map_page and unmap_page into its own function.
> > 
> > Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> > ---
> >  xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
> >  1 files changed, 43 insertions(+), 43 deletions(-)
> > 


* RE: [PATCH 1/6] vtd: Refactor iotlb flush code
  2011-11-09 22:15         ` Jean Guyader
@ 2011-11-09 23:25           ` Kay, Allen M
  0 siblings, 0 replies; 31+ messages in thread
From: Kay, Allen M @ 2011-11-09 23:25 UTC (permalink / raw)
  To: Jean Guyader; +Cc: xen-devel, Tim (Xen.org), Jean Guyader

Jean,

I think this is fine.  Ack for VT-d related changes.

Allen

-----Original Message-----
From: Jean Guyader [mailto:jean.guyader@eu.citrix.com] 
Sent: Wednesday, November 09, 2011 2:16 PM
To: Kay, Allen M
Cc: Jean Guyader; xen-devel@lists.xensource.com; Tim (Xen.org)
Subject: Re: [PATCH 1/6] vtd: Refactor iotlb flush code

Allen,

The input addresses are not necessarily aligned; to be able to use this feature I would have to break the range into aligned chunks of different sizes and flush them separately.

I personally don't think this optimization is worth the effort.

Jean

On 09/11 02:41, Kay, Allen M wrote:
> Jean,
> 
> Page 122 of VT-d spec indicates you can invalidate multiple pages by setting the Address Mask (AM) field.  Will this achieve what you wanted to do?
> 
> http://download.intel.com/technology/computing/vptech/Intel(r)_VT_for_
> Direct_IO.pdf
> 
> Allen
> 
> -----Original Message-----
> From: Jean Guyader [mailto:jean.guyader@eu.citrix.com]
> Sent: Monday, November 07, 2011 11:42 PM
> To: Kay, Allen M
> Cc: Jean Guyader; xen-devel@lists.xensource.com; Tim (Xen.org)
> Subject: Re: [PATCH 1/6] vtd: Refactor iotlb flush code
> 
> 
> Allen,
> 
> You are probably talking about __intel_iommu_iotlb_flush.
> This function takes a range of addresses to flush. I haven't found a function in the vtd code to invalidate a range of addresses without doing a loop of flush_iotlb_psi, so I thought that the most efficient and quickest way to flush a range would be to use a domain selective invalidation.
> 
> Jean
> 
> On 08/11 03:10, Kay, Allen M wrote:
> > Jean,
> > 
> > The original code does not call iommu_flush_iotlb_dsi().  What is the reason the refactored code needs to use domain selective invalidation?
> > 
> > Allen
> > -----
> > 
> > +        if ( page_count > 1 || gfn == -1 )
> > +        {
> > +            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
> > +                        0, flush_dev_iotlb) )
> > +                iommu_flush_write_buffer(iommu);
> > +        }
> > +        else
> > +        {
> > +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> > +                        (paddr_t)gfn << PAGE_SHIFT_4K, 0,
> > +                        !dma_old_pte_present, flush_dev_iotlb) )
> > +                iommu_flush_write_buffer(iommu);
> > 
> > -----Original Message-----
> > From: Jean Guyader [mailto:jean.guyader@eu.citrix.com]
> > Sent: Monday, November 07, 2011 10:25 AM
> > To: xen-devel@lists.xensource.com
> > Cc: tim@xen.org; Kay, Allen M; Jean Guyader
> > Subject: [PATCH 1/6] vtd: Refactor iotlb flush code
> > 
> > 
> > Factorize the iotlb flush code from map_page and unmap_page into its own function.
> > 
> > Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> > ---
> >  xen/drivers/passthrough/vtd/iommu.c |   86 +++++++++++++++++-----------------
> >  1 files changed, 43 insertions(+), 43 deletions(-)
> > 


* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-20 19:59             ` Keir Fraser
@ 2011-11-21  8:39               ` Olaf Hering
  0 siblings, 0 replies; 31+ messages in thread
From: Olaf Hering @ 2011-11-21  8:39 UTC (permalink / raw)
  To: Keir Fraser; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich

On Sun, Nov 20, Keir Fraser wrote:

> > Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only
> > memory page. gfn=0xc0, mfn=0x201979
> 
> Is that new behaviour? It may be unrelated to whatever HVM test failure
> we're seeing, or else be a mere symptom of a guest gone haywire for other
> reasons (we write-protect that memory range because it is supposed to be
> ROM).

The message does not trigger with changeset 24162, but it does with
24167.

Olaf


* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-20 13:25           ` Olaf Hering
@ 2011-11-20 19:59             ` Keir Fraser
  2011-11-21  8:39               ` Olaf Hering
  0 siblings, 1 reply; 31+ messages in thread
From: Keir Fraser @ 2011-11-20 19:59 UTC (permalink / raw)
  To: Olaf Hering; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich

On 20/11/2011 13:25, "Olaf Hering" <olaf@aepfle.de> wrote:

> On Sat, Nov 19, Keir Fraser wrote:
> 
>> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>> 
>>> On Wed, Nov 16, Jean Guyader wrote:
>>> 
>>>> Move the code for the XENMEM_add_to_physmap case into its own
>>>> function (xenmem_add_to_physmap).
>>> 
>>> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
>>> failures.
>>> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>>> 
>>> preempt_count is like fffffc52 or fffffc00 in my testing.
>> 
>> Thanks, hopefully fixed by c/s 24167.
> 
> Yes, the ASSERT does not trigger anymore.
> 
> The remaining issue is this:
> 
> Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only
> memory page. gfn=0xc0, mfn=0x201979

Is that new behaviour? It may be unrelated to whatever HVM test failure
we're seeing, or else be a mere symptom of a guest gone haywire for other
reasons (we write-protect that memory range because it is supposed to be
ROM).

 -- Keir

> See
> http://www.chiark.greenend.org.uk/~xensrcts/logs/9893/test-amd64-i386-rhel6hvm
> -amd/serial-potato-beetle.log
> 
> Olaf

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-19 22:14         ` Keir Fraser
  2011-11-19 22:37           ` Jean Guyader
@ 2011-11-20 13:25           ` Olaf Hering
  2011-11-20 19:59             ` Keir Fraser
  1 sibling, 1 reply; 31+ messages in thread
From: Olaf Hering @ 2011-11-20 13:25 UTC (permalink / raw)
  To: Keir Fraser; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich

On Sat, Nov 19, Keir Fraser wrote:

> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
> 
> > On Wed, Nov 16, Jean Guyader wrote:
> > 
> >> Move the code for the XENMEM_add_to_physmap case into its own
> >> function (xenmem_add_to_physmap).
> > 
> > This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
> > failures.
> > (XEN) Assertion '!in_atomic()' failed at softirq.c:61
> > 
> > preempt_count is something like fffffc52 or fffffc00 in my testing.
> 
> Thanks, hopefully fixed by c/s 24167.

Yes, the ASSERT does not trigger anymore.

The remaining issue is this:

Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only memory page. gfn=0xc0, mfn=0x201979

See
http://www.chiark.greenend.org.uk/~xensrcts/logs/9893/test-amd64-i386-rhel6hvm-amd/serial-potato-beetle.log

Olaf

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-19 22:14         ` Keir Fraser
@ 2011-11-19 22:37           ` Jean Guyader
  2011-11-20 13:25           ` Olaf Hering
  1 sibling, 0 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-19 22:37 UTC (permalink / raw)
  To: Keir Fraser
  Cc: Olaf Hering, xen-devel, allen.m.kay, tim, Jean Guyader, JBeulich

On 19 November 2011 14:14, Keir Fraser <keir.xen@gmail.com> wrote:
> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>
>> On Wed, Nov 16, Jean Guyader wrote:
>>
>>> Move the code for the XENMEM_add_to_physmap case into its own
>>> function (xenmem_add_to_physmap).
>>
>> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
>> failures.
>> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>>
>> preempt_count is something like fffffc52 or fffffc00 in my testing.
>
> Thanks, hopefully fixed by c/s 24167.
>

Thanks, sorry about that.

Jean

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-19 21:58       ` Olaf Hering
@ 2011-11-19 22:14         ` Keir Fraser
  2011-11-19 22:37           ` Jean Guyader
  2011-11-20 13:25           ` Olaf Hering
  0 siblings, 2 replies; 31+ messages in thread
From: Keir Fraser @ 2011-11-19 22:14 UTC (permalink / raw)
  To: Olaf Hering, Jean Guyader; +Cc: allen.m.kay, xen-devel, tim, JBeulich

On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:

> On Wed, Nov 16, Jean Guyader wrote:
> 
>> Move the code for the XENMEM_add_to_physmap case into its own
>> function (xenmem_add_to_physmap).
> 
> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
> failures.
> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
> 
> preempt_count is something like fffffc52 or fffffc00 in my testing.

Thanks, hopefully fixed by c/s 24167.

 -- keir

> Olaf

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-16 19:25     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
@ 2011-11-19 21:58       ` Olaf Hering
  2011-11-19 22:14         ` Keir Fraser
  0 siblings, 1 reply; 31+ messages in thread
From: Olaf Hering @ 2011-11-19 21:58 UTC (permalink / raw)
  To: Jean Guyader; +Cc: allen.m.kay, keir, xen-devel, tim, JBeulich

On Wed, Nov 16, Jean Guyader wrote:

> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).

This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite failures.
(XEN) Assertion '!in_atomic()' failed at softirq.c:61

preempt_count is something like fffffc52 or fffffc00 in my testing.

Olaf

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-16 19:25   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-16 19:25     ` Jean Guyader
  2011-11-19 21:58       ` Olaf Hering
  0 siblings, 1 reply; 31+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich

[-- Attachment #1: Type: text/plain, Size: 292 bytes --]


Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c |  161 ++++++++++++++++++++++++++++------------------------
 1 files changed, 87 insertions(+), 74 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7450 bytes --]

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a41a1d6..f093e93 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4677,37 +4677,18 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d,
+                                 const struct xen_add_to_physmap *xatp)
 {
     struct page_info *page = NULL;
     unsigned long gfn = 0; /* gcc ... */
+    unsigned long prev_mfn, mfn = 0, gpfn, idx;
     int rc;
 
-    switch ( op )
+    switch ( xatp->space )
     {
-    case XENMEM_add_to_physmap:
-    {
-        struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
-        struct domain *d;
-
-        if ( copy_from_guest(&xatp, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        if ( xsm_add_to_physmap(current->domain, d) )
-        {
-            rcu_unlock_domain(d);
-            return -EPERM;
-        }
-
-        switch ( xatp.space )
-        {
         case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
+            if ( xatp->idx == 0 )
                 mfn = virt_to_mfn(d->shared_info);
             break;
         case XENMAPSPACE_grant_table:
@@ -4716,21 +4697,22 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
             if ( d->grant_table->gt_version == 0 )
                 d->grant_table->gt_version = 1;
 
+            idx = xatp->idx;
             if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
+                 (xatp->idx & XENMAPIDX_grant_table_status) )
             {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+                idx &= ~XENMAPIDX_grant_table_status;
+                if ( xatp->idx < nr_status_frames(d->grant_table) )
+                    mfn = virt_to_mfn(d->grant_table->status[idx]);
             }
             else
             {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
+                if ( (idx >= nr_grant_frames(d->grant_table)) &&
+                     (idx < max_nr_grant_frames) )
+                    gnttab_grow_table(d, idx + 1);
 
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
+                if ( idx < nr_grant_frames(d->grant_table) )
+                    mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
             }
 
             spin_unlock(&d->grant_table->lock);
@@ -4738,9 +4720,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         case XENMAPSPACE_gmfn:
         {
             p2m_type_t p2mt;
-            gfn = xatp.idx;
+            gfn = xatp->idx;
 
-            xatp.idx = mfn_x(get_gfn_unshare(d, xatp.idx, &p2mt));
+            idx = mfn_x(get_gfn_unshare(d, xatp->idx, &p2mt));
             /* If the page is still shared, exit early */
             if ( p2m_is_shared(p2mt) )
             {
@@ -4748,58 +4730,89 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
                 rcu_unlock_domain(d);
                 return -ENOMEM;
             }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
+            if ( !get_page_from_pagenr(idx, d) )
                 break;
-            mfn = xatp.idx;
+            mfn = idx;
             page = mfn_to_page(mfn);
             break;
         }
         default:
             break;
-        }
-
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            if ( xatp.space == XENMAPSPACE_gmfn )
-                put_gfn(d, gfn);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
-
-        domain_lock(d);
+    }
 
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
         if ( page )
             put_page(page);
+        if ( xatp->space == XENMAPSPACE_gmfn )
+            put_gfn(d, gfn);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
 
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = get_gfn_untyped(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, PAGE_ORDER_4K);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
-        /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-        put_gfn(d, xatp.gpfn);
+    domain_lock(d);
+
+    if ( page )
+        put_page(page);
+
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = get_gfn_untyped(d, xatp->gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp->gpfn, prev_mfn, PAGE_ORDER_4K);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp->gpfn);
+    }
+    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
+    put_gfn(d, xatp->gpfn);
 
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, PAGE_ORDER_4K);
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, PAGE_ORDER_4K);
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp->gpfn, mfn, PAGE_ORDER_4K);
 
-        /* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
-        if ( xatp.space == XENMAPSPACE_gmfn )
-            put_gfn(d, gfn);
-        domain_unlock(d);
+    /* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
+    if ( xatp->space == XENMAPSPACE_gmfn )
+        put_gfn(d, gfn);
+    domain_unlock(d);
+
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+    int rc;
+
+    switch ( op )
+    {
+    case XENMEM_add_to_physmap:
+    {
+        struct xen_add_to_physmap xatp;
+        struct domain *d;
+
+        if ( copy_from_guest(&xatp, arg, 1) )
+            return -EFAULT;
+
+        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+        if ( rc != 0 )
+            return rc;
+
+        if ( xsm_add_to_physmap(current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
+
+        rc = xenmem_add_to_physmap(d, &xatp);
 
         rcu_unlock_domain(d);
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
       [not found] <20111113175006.1EF1C72C36F@homiemail-mx8.g.dreamhost.com>
@ 2011-11-14 14:17 ` Andres Lagar-Cavilla
  0 siblings, 0 replies; 31+ messages in thread
From: Andres Lagar-Cavilla @ 2011-11-14 14:17 UTC (permalink / raw)
  To: xen-devel, jean.guyader

> Date: Sun, 13 Nov 2011 17:40:49 +0000
> From: Jean Guyader <jean.guyader@eu.citrix.com>
> Subject: [Xen-devel] [PATCH 3/6] add_to_physmap: Move the code for
> 	XENMEM_add_to_physmap
> To: <xen-devel@lists.xensource.com>
> Cc: tim@xen.org, allen.m.kay@intel.com, keir@xen.org,	Jean Guyader
> 	<jean.guyader@eu.citrix.com>, JBeulich@suse.com
> Message-ID:
> 	<1321206052-18833-4-git-send-email-jean.guyader@eu.citrix.com>
> Content-Type: text/plain; charset="utf-8"
>
>
> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).
>
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
Jean, with the p2m API changes, your patch won't compile any more.

gfn_to_mfn* is now get_gfn*, and each call needs a matching put_gfn once
you are done working with the p2m translation. From what I've seen, your
patch need not worry about put_gfn, although it needs rebasing.
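
A sketch of the rebasing pattern Andres describes, assumed from his note and from the v8 posting elsewhere on this page rather than verified against a compiled tree: each get_gfn* lookup must now be paired with a put_gfn once the caller is done with the translation.

    /* Old API, as in the patch under review: lookup only. */
    xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
    /* ... use the translation ... */

    /* New API: same lookup, plus an explicit release of the p2m
     * entry (see the v8 patch for the in-tree form). */
    gfn = xatp.idx;
    idx = mfn_x(get_gfn_unshare(d, gfn, &p2mt));
    /* ... use the translation ... */
    put_gfn(d, gfn);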

Andres

>  xen/arch/x86/mm.c |  189
> ++++++++++++++++++++++++++++-------------------------
>  1 files changed, 100 insertions(+), 89 deletions(-)
>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch
> Type: text/x-patch
> Size: 6630 bytes
> Desc: not available
> Url :
> http://lists.xensource.com/archives/html/xen-devel/attachments/20111113/13c22ac8/0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.bin
>
> ------------------------------
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
>
> End of Xen-devel Digest, Vol 81, Issue 163
> ******************************************
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
  2011-11-13 17:40   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-13 17:40     ` Jean Guyader
  0 siblings, 0 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-13 17:40 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich

[-- Attachment #1: Type: text/plain, Size: 294 bytes --]


Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c |  189 ++++++++++++++++++++++++++++-------------------------
 1 files changed, 100 insertions(+), 89 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 6630 bytes --]

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..7cbbb07 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,9 +4592,107 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
+static int xenmem_add_to_physmap(struct domain *d,
+                                 struct xen_add_to_physmap xatp)
+{
+    struct page_info* page = NULL;
+    unsigned long mfn = 0;
+    unsigned long prev_mfn, gpfn;
+    int rc;
+
+    switch ( xatp.space )
+    {
+    case XENMAPSPACE_shared_info:
+        if ( xatp.idx == 0 )
+            mfn = virt_to_mfn(d->shared_info);
+        break;
+    case XENMAPSPACE_grant_table:
+        spin_lock(&d->grant_table->lock);
+
+        if ( d->grant_table->gt_version == 0 )
+            d->grant_table->gt_version = 1;
+
+        if ( d->grant_table->gt_version == 2 &&
+             (xatp.idx & XENMAPIDX_grant_table_status) )
+        {
+            xatp.idx &= ~XENMAPIDX_grant_table_status;
+            if ( xatp.idx < nr_status_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+        }
+        else
+        {
+            if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+                 (xatp.idx < max_nr_grant_frames) )
+                gnttab_grow_table(d, xatp.idx + 1);
+
+            if ( xatp.idx < nr_grant_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
+        }
+
+        spin_unlock(&d->grant_table->lock);
+        break;
+    case XENMAPSPACE_gmfn:
+    {
+        p2m_type_t p2mt;
+
+        xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+        /* If the page is still shared, exit early */
+        if ( p2m_is_shared(p2mt) )
+        {
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+        if ( !get_page_from_pagenr(xatp.idx, d) )
+            break;
+        mfn = xatp.idx;
+        page = mfn_to_page(mfn);
+        break;
+    }
+    default:
+        break;
+    }
+
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
+        if ( page )
+            put_page(page);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
+
+    domain_lock(d);
+
+    if ( page )
+        put_page(page);
+
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp.gpfn);
+    }
+
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, 0);
+
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+
+    domain_unlock(d);
+
+    return rc;
+}
+
 long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
 {
-    struct page_info *page = NULL;
     int rc;
 
     switch ( op )
@@ -4602,7 +4700,6 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     case XENMEM_add_to_physmap:
     {
         struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
         struct domain *d;
 
         if ( copy_from_guest(&xatp, arg, 1) )
@@ -4618,93 +4715,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
             return -EPERM;
         }
 
-        switch ( xatp.space )
-        {
-        case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
-            break;
-        case XENMAPSPACE_grant_table:
-            spin_lock(&d->grant_table->lock);
-
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
-
-            if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
-            {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
-            }
-            else
-            {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
-
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
-            }
-
-            spin_unlock(&d->grant_table->lock);
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
-
-            xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                rcu_unlock_domain(d);
-                return -ENOMEM;
-            }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
-                break;
-            mfn = xatp.idx;
-            page = mfn_to_page(mfn);
-            break;
-        }
-        default:
-            break;
-        }
-
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
-
-        domain_lock(d);
-
-        if ( page )
-            put_page(page);
-
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
-
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
-
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
-
-        domain_unlock(d);
+        rc = xenmem_add_to_physmap(d, xatp);
 
         rcu_unlock_domain(d);
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
  2011-11-10  8:44   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-10  8:44     ` Jean Guyader
  0 siblings, 0 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-10  8:44 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich

[-- Attachment #1: Type: text/plain, Size: 328 bytes --]


Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm.c |  188 ++++++++++++++++++++++++++++-------------------------
 1 files changed, 99 insertions(+), 89 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
 {
     struct page_info *page = NULL;
+    unsigned long prev_mfn, mfn = 0, gpfn;
     int rc;
 
-    switch ( op )
-    {
-    case XENMEM_add_to_physmap:
+    switch ( xatp.space )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
-        struct domain *d;
-
-        if ( copy_from_guest(&xatp, arg, 1) )
-            return -EFAULT;
+    case XENMAPSPACE_shared_info:
+        if ( xatp.idx == 0 )
+            mfn = virt_to_mfn(d->shared_info);
+        break;
+    case XENMAPSPACE_grant_table:
+        spin_lock(&d->grant_table->lock);
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        if ( d->grant_table->gt_version == 0 )
+            d->grant_table->gt_version = 1;
 
-        if ( xsm_add_to_physmap(current->domain, d) )
+        if ( d->grant_table->gt_version == 2 &&
+             (xatp.idx & XENMAPIDX_grant_table_status) )
         {
-            rcu_unlock_domain(d);
-            return -EPERM;
+            xatp.idx &= ~XENMAPIDX_grant_table_status;
+            if ( xatp.idx < nr_status_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+        }
+        else
+        {
+            if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+                 (xatp.idx < max_nr_grant_frames) )
+                gnttab_grow_table(d, xatp.idx + 1);
+
+            if ( xatp.idx < nr_grant_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
         }
 
-        switch ( xatp.space )
+        spin_unlock(&d->grant_table->lock);
+        break;
+    case XENMAPSPACE_gmfn:
+    {
+        p2m_type_t p2mt;
+
+        xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+        /* If the page is still shared, exit early */
+        if ( p2m_is_shared(p2mt) )
         {
-        case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+        if ( !get_page_from_pagenr(xatp.idx, d) )
             break;
-        case XENMAPSPACE_grant_table:
-            spin_lock(&d->grant_table->lock);
+        mfn = xatp.idx;
+        page = mfn_to_page(mfn);
+        break;
+    }
+    default:
+        break;
+    }
 
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
+        if ( page )
+            put_page(page);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
 
-            if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
-            {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
-            }
-            else
-            {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
+    domain_lock(d);
 
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
-            }
+    if ( page )
+        put_page(page);
 
-            spin_unlock(&d->grant_table->lock);
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp.gpfn);
+    }
 
-            xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                rcu_unlock_domain(d);
-                return -ENOMEM;
-            }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
-                break;
-            mfn = xatp.idx;
-            page = mfn_to_page(mfn);
-            break;
-        }
-        default:
-            break;
-        }
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, 0);
 
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
 
-        domain_lock(d);
+    domain_unlock(d);
 
-        if ( page )
-            put_page(page);
+    return rc;
+}
 
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
 
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+    int rc;
+
+    switch ( op )
+    {
+    case XENMEM_add_to_physmap:
+    {
+        struct xen_add_to_physmap xatp;
+        struct domain *d;
 
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+        if ( copy_from_guest(&xatp, arg, 1) )
+            return -EFAULT;
+
+        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+        if ( rc != 0 )
+            return rc;
+
+        if ( xsm_add_to_physmap(current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
 
-        domain_unlock(d);
+        xenmem_add_to_physmap(d, xatp);
 
         rcu_unlock_domain(d);
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
  2011-11-08 20:04   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-08 20:04     ` Jean Guyader
  0 siblings, 0 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-08 20:04 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader

[-- Attachment #1: Type: text/plain, Size: 293 bytes --]


Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c |  188 ++++++++++++++++++++++++++++-------------------------
 1 files changed, 99 insertions(+), 89 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
 {
     struct page_info *page = NULL;
+    unsigned long prev_mfn, mfn = 0, gpfn;
     int rc;
 
-    switch ( op )
-    {
-    case XENMEM_add_to_physmap:
+    switch ( xatp.space )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
-        struct domain *d;
-
-        if ( copy_from_guest(&xatp, arg, 1) )
-            return -EFAULT;
+    case XENMAPSPACE_shared_info:
+        if ( xatp.idx == 0 )
+            mfn = virt_to_mfn(d->shared_info);
+        break;
+    case XENMAPSPACE_grant_table:
+        spin_lock(&d->grant_table->lock);
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        if ( d->grant_table->gt_version == 0 )
+            d->grant_table->gt_version = 1;
 
-        if ( xsm_add_to_physmap(current->domain, d) )
+        if ( d->grant_table->gt_version == 2 &&
+             (xatp.idx & XENMAPIDX_grant_table_status) )
         {
-            rcu_unlock_domain(d);
-            return -EPERM;
+            xatp.idx &= ~XENMAPIDX_grant_table_status;
+            if ( xatp.idx < nr_status_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+        }
+        else
+        {
+            if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+                 (xatp.idx < max_nr_grant_frames) )
+                gnttab_grow_table(d, xatp.idx + 1);
+
+            if ( xatp.idx < nr_grant_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
         }
 
-        switch ( xatp.space )
+        spin_unlock(&d->grant_table->lock);
+        break;
+    case XENMAPSPACE_gmfn:
+    {
+        p2m_type_t p2mt;
+
+        xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+        /* If the page is still shared, exit early */
+        if ( p2m_is_shared(p2mt) )
         {
-        case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+        if ( !get_page_from_pagenr(xatp.idx, d) )
             break;
-        case XENMAPSPACE_grant_table:
-            spin_lock(&d->grant_table->lock);
+        mfn = xatp.idx;
+        page = mfn_to_page(mfn);
+        break;
+    }
+    default:
+        break;
+    }
 
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
+        if ( page )
+            put_page(page);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
 
-            if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
-            {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
-            }
-            else
-            {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
+    domain_lock(d);
 
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
-            }
+    if ( page )
+        put_page(page);
 
-            spin_unlock(&d->grant_table->lock);
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp.gpfn);
+    }
 
-            xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                rcu_unlock_domain(d);
-                return -ENOMEM;
-            }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
-                break;
-            mfn = xatp.idx;
-            page = mfn_to_page(mfn);
-            break;
-        }
-        default:
-            break;
-        }
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, 0);
 
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
 
-        domain_lock(d);
+    domain_unlock(d);
 
-        if ( page )
-            put_page(page);
+    return rc;
+}
 
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
 
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+    int rc;
+
+    switch ( op )
+    {
+    case XENMEM_add_to_physmap:
+    {
+        struct xen_add_to_physmap xatp;
+        struct domain *d;
 
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+        if ( copy_from_guest(&xatp, arg, 1) )
+            return -EFAULT;
+
+        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+        if ( rc != 0 )
+            return rc;
+
+        if ( xsm_add_to_physmap(current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
 
-        domain_unlock(d);
+        xenmem_add_to_physmap(d, xatp);
 
         rcu_unlock_domain(d);
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
  2011-11-07 15:16   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-07 15:16     ` Jean Guyader
  0 siblings, 0 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-07 15:16 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, allen.m.kay, Jean Guyader

[-- Attachment #1: Type: text/plain, Size: 293 bytes --]


Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c |  188 ++++++++++++++++++++++++++++-------------------------
 1 files changed, 99 insertions(+), 89 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
 {
     struct page_info *page = NULL;
+    unsigned long prev_mfn, mfn = 0, gpfn;
     int rc;
 
-    switch ( op )
-    {
-    case XENMEM_add_to_physmap:
+    switch ( xatp.space )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
-        struct domain *d;
-
-        if ( copy_from_guest(&xatp, arg, 1) )
-            return -EFAULT;
+    case XENMAPSPACE_shared_info:
+        if ( xatp.idx == 0 )
+            mfn = virt_to_mfn(d->shared_info);
+        break;
+    case XENMAPSPACE_grant_table:
+        spin_lock(&d->grant_table->lock);
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        if ( d->grant_table->gt_version == 0 )
+            d->grant_table->gt_version = 1;
 
-        if ( xsm_add_to_physmap(current->domain, d) )
+        if ( d->grant_table->gt_version == 2 &&
+             (xatp.idx & XENMAPIDX_grant_table_status) )
         {
-            rcu_unlock_domain(d);
-            return -EPERM;
+            xatp.idx &= ~XENMAPIDX_grant_table_status;
+            if ( xatp.idx < nr_status_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+        }
+        else
+        {
+            if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+                 (xatp.idx < max_nr_grant_frames) )
+                gnttab_grow_table(d, xatp.idx + 1);
+
+            if ( xatp.idx < nr_grant_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
         }
 
-        switch ( xatp.space )
+        spin_unlock(&d->grant_table->lock);
+        break;
+    case XENMAPSPACE_gmfn:
+    {
+        p2m_type_t p2mt;
+
+        xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+        /* If the page is still shared, exit early */
+        if ( p2m_is_shared(p2mt) )
         {
-        case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+        if ( !get_page_from_pagenr(xatp.idx, d) )
             break;
-        case XENMAPSPACE_grant_table:
-            spin_lock(&d->grant_table->lock);
+        mfn = xatp.idx;
+        page = mfn_to_page(mfn);
+        break;
+    }
+    default:
+        break;
+    }
 
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
+        if ( page )
+            put_page(page);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
 
-            if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
-            {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
-            }
-            else
-            {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
+    domain_lock(d);
 
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
-            }
+    if ( page )
+        put_page(page);
 
-            spin_unlock(&d->grant_table->lock);
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp.gpfn);
+    }
 
-            xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                rcu_unlock_domain(d);
-                return -ENOMEM;
-            }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
-                break;
-            mfn = xatp.idx;
-            page = mfn_to_page(mfn);
-            break;
-        }
-        default:
-            break;
-        }
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, 0);
 
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
 
-        domain_lock(d);
+    domain_unlock(d);
 
-        if ( page )
-            put_page(page);
+    return rc;
+}
 
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
 
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+    int rc;
+
+    switch ( op )
+    {
+    case XENMEM_add_to_physmap:
+    {
+        struct xen_add_to_physmap xatp;
+        struct domain *d;
 
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+        if ( copy_from_guest(&xatp, arg, 1) )
+            return -EFAULT;
+
+        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+        if ( rc != 0 )
+            return rc;
+
+        if ( xsm_add_to_physmap(current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
 
-        domain_unlock(d);
+        xenmem_add_to_physmap(d, xatp);
 
         rcu_unlock_domain(d);
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
  2011-11-04 10:38   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-04 10:38     ` Jean Guyader
  0 siblings, 0 replies; 31+ messages in thread
From: Jean Guyader @ 2011-11-04 10:38 UTC (permalink / raw)
  To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader

[-- Attachment #1: Type: text/plain, Size: 293 bytes --]


Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).

Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
 xen/arch/x86/mm.c |  188 ++++++++++++++++++++++++++++-------------------------
 1 files changed, 99 insertions(+), 89 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return 0;
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
 {
     struct page_info *page = NULL;
+    unsigned long prev_mfn, mfn = 0, gpfn;
     int rc;
 
-    switch ( op )
-    {
-    case XENMEM_add_to_physmap:
+    switch ( xatp.space )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned long prev_mfn, mfn = 0, gpfn;
-        struct domain *d;
-
-        if ( copy_from_guest(&xatp, arg, 1) )
-            return -EFAULT;
+    case XENMAPSPACE_shared_info:
+        if ( xatp.idx == 0 )
+            mfn = virt_to_mfn(d->shared_info);
+        break;
+    case XENMAPSPACE_grant_table:
+        spin_lock(&d->grant_table->lock);
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        if ( d->grant_table->gt_version == 0 )
+            d->grant_table->gt_version = 1;
 
-        if ( xsm_add_to_physmap(current->domain, d) )
+        if ( d->grant_table->gt_version == 2 &&
+             (xatp.idx & XENMAPIDX_grant_table_status) )
         {
-            rcu_unlock_domain(d);
-            return -EPERM;
+            xatp.idx &= ~XENMAPIDX_grant_table_status;
+            if ( xatp.idx < nr_status_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+        }
+        else
+        {
+            if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+                 (xatp.idx < max_nr_grant_frames) )
+                gnttab_grow_table(d, xatp.idx + 1);
+
+            if ( xatp.idx < nr_grant_frames(d->grant_table) )
+                mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
         }
 
-        switch ( xatp.space )
+        spin_unlock(&d->grant_table->lock);
+        break;
+    case XENMAPSPACE_gmfn:
+    {
+        p2m_type_t p2mt;
+
+        xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+        /* If the page is still shared, exit early */
+        if ( p2m_is_shared(p2mt) )
         {
-        case XENMAPSPACE_shared_info:
-            if ( xatp.idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+        if ( !get_page_from_pagenr(xatp.idx, d) )
             break;
-        case XENMAPSPACE_grant_table:
-            spin_lock(&d->grant_table->lock);
+        mfn = xatp.idx;
+        page = mfn_to_page(mfn);
+        break;
+    }
+    default:
+        break;
+    }
 
-            if ( d->grant_table->gt_version == 0 )
-                d->grant_table->gt_version = 1;
+    if ( !paging_mode_translate(d) || (mfn == 0) )
+    {
+        if ( page )
+            put_page(page);
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
 
-            if ( d->grant_table->gt_version == 2 &&
-                 (xatp.idx & XENMAPIDX_grant_table_status) )
-            {
-                xatp.idx &= ~XENMAPIDX_grant_table_status;
-                if ( xatp.idx < nr_status_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
-            }
-            else
-            {
-                if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
-                     (xatp.idx < max_nr_grant_frames) )
-                    gnttab_grow_table(d, xatp.idx + 1);
+    domain_lock(d);
 
-                if ( xatp.idx < nr_grant_frames(d->grant_table) )
-                    mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
-            }
+    if ( page )
+        put_page(page);
 
-            spin_unlock(&d->grant_table->lock);
-            break;
-        case XENMAPSPACE_gmfn:
-        {
-            p2m_type_t p2mt;
+    /* Remove previously mapped page if it was present. */
+    prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+    if ( mfn_valid(prev_mfn) )
+    {
+        if ( is_xen_heap_mfn(prev_mfn) )
+            /* Xen heap frames are simply unhooked from this phys slot. */
+            guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+        else
+            /* Normal domain memory is freed, to avoid leaking memory. */
+            guest_remove_page(d, xatp.gpfn);
+    }
 
-            xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
-            /* If the page is still shared, exit early */
-            if ( p2m_is_shared(p2mt) )
-            {
-                rcu_unlock_domain(d);
-                return -ENOMEM;
-            }
-            if ( !get_page_from_pagenr(xatp.idx, d) )
-                break;
-            mfn = xatp.idx;
-            page = mfn_to_page(mfn);
-            break;
-        }
-        default:
-            break;
-        }
+    /* Unmap from old location, if any. */
+    gpfn = get_gpfn_from_mfn(mfn);
+    ASSERT( gpfn != SHARED_M2P_ENTRY );
+    if ( gpfn != INVALID_M2P_ENTRY )
+        guest_physmap_remove_page(d, gpfn, mfn, 0);
 
-        if ( !paging_mode_translate(d) || (mfn == 0) )
-        {
-            if ( page )
-                put_page(page);
-            rcu_unlock_domain(d);
-            return -EINVAL;
-        }
+    /* Map at new location. */
+    rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
 
-        domain_lock(d);
+    domain_unlock(d);
 
-        if ( page )
-            put_page(page);
+    return rc;
+}
 
-        /* Remove previously mapped page if it was present. */
-        prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
-        if ( mfn_valid(prev_mfn) )
-        {
-            if ( is_xen_heap_mfn(prev_mfn) )
-                /* Xen heap frames are simply unhooked from this phys slot. */
-                guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
-            else
-                /* Normal domain memory is freed, to avoid leaking memory. */
-                guest_remove_page(d, xatp.gpfn);
-        }
 
-        /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+    int rc;
+
+    switch ( op )
+    {
+    case XENMEM_add_to_physmap:
+    {
+        struct xen_add_to_physmap xatp;
+        struct domain *d;
 
-        /* Map at new location. */
-        rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+        if ( copy_from_guest(&xatp, arg, 1) )
+            return -EFAULT;
+
+        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+        if ( rc != 0 )
+            return rc;
+
+        if ( xsm_add_to_physmap(current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
 
-        domain_unlock(d);
+        xenmem_add_to_physmap(d, xatp);
 
         rcu_unlock_domain(d);
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2011-11-21  8:39 UTC | newest]

Thread overview: 31+ messages
2011-11-07 18:25 IOMMU, vtd and iotlb flush rework (v3) Jean Guyader
2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-07 18:25   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-07 18:25     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-07 18:25       ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jean Guyader
2011-11-07 18:25         ` [PATCH 5/6] hvmloader: Change memory relocation loop when overlap with PCI hole Jean Guyader
2011-11-07 18:25           ` [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush Jean Guyader
2011-11-08 13:45             ` Tim Deegan
2011-11-08  9:20         ` [PATCH 4/6] mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range Jan Beulich
2011-11-08 13:32           ` Tim Deegan
2011-11-08 13:39             ` Tim Deegan
2011-11-08 13:24       ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Tim Deegan
2011-11-08  3:10   ` [PATCH 1/6] vtd: Refactor iotlb flush code Kay, Allen M
2011-11-08  7:42     ` Jean Guyader
2011-11-09  2:41       ` Kay, Allen M
2011-11-09 22:15         ` Jean Guyader
2011-11-09 23:25           ` Kay, Allen M
2011-11-09  2:50       ` Kay, Allen M
  -- strict thread matches above, loose matches on Subject: below --
2011-11-16 19:25 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Jean Guyader
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-16 19:25   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-16 19:25     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-19 21:58       ` Olaf Hering
2011-11-19 22:14         ` Keir Fraser
2011-11-19 22:37           ` Jean Guyader
2011-11-20 13:25           ` Olaf Hering
2011-11-20 19:59             ` Keir Fraser
2011-11-21  8:39               ` Olaf Hering
     [not found] <20111113175006.1EF1C72C36F@homiemail-mx8.g.dreamhost.com>
2011-11-14 14:17 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Andres Lagar-Cavilla
2011-11-13 17:40 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v7) Jean Guyader
2011-11-13 17:40 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-13 17:40   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-13 17:40     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-10  8:43 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5) Jean Guyader
2011-11-10  8:43 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-10  8:44   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-10  8:44     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-08 20:04 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v4) Jean Guyader
2011-11-08 20:04 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-08 20:04   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-08 20:04     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-07 15:16 IOMMU, vtd and iotlb flush rework (v2) Jean Guyader
2011-11-07 15:16 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-07 15:16   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-07 15:16     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-04 10:38 [PATCH 0/6] IOMMU, vtd and iotlb flush rework Jean Guyader
2011-11-04 10:38 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-04 10:38   ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-04 10:38     ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
