Xen-Devel Archive on lore.kernel.org
* [PATCH v2 0/10] iomem memory policy
@ 2019-04-30 21:02 Stefano Stabellini
  2019-04-30 21:02 ` [Xen-devel] " Stefano Stabellini
                   ` (11 more replies)
  0 siblings, 12 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, wei.liu2, andrew.cooper3, julien.grall, JBeulich,
	ian.jackson

Hi all,

This series introduces a memory policy parameter for the iomem option,
so that we can map an iomem region into a guest as cacheable memory.

This series also fixes the way Xen handles reserved-memory regions on
ARM: they should be mapped as normal memory, but today they are
treated as device memory.
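
For illustration, a guest might use the extended iomem option roughly as
follows (a sketch only: the trailing policy keyword is a placeholder, not
necessarily the name the xl patch in this series settles on; see the
docs/man/xl.cfg.5.pod.in change in the diffstat for the actual syntax):

```
# Map 0x10 machine pages of MMIO starting at page 0x47000 into the guest
# 1:1, requesting a cacheable mapping via the new memory policy field.
# "memory" below is a hypothetical keyword used only for illustration.
iomem = [ "47000,10@47000,memory" ]
```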

Cheers,

Stefano



The following changes since commit be3d5b30331d87e177744dbe23138b9ebcdc86f1:

  x86/msr: Fix fallout from mostly c/s 832c180 (2019-04-15 17:51:30 +0100)

are available in the git repository at:

  http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git iomem_cache-v2

for you to fetch changes up to 4979f8e2f1120b2c394be815b071c017e287cf33:

  xen/arm: add reserved-memory regions to the dom0 memory node (2019-04-30 13:56:40 -0700)

----------------------------------------------------------------
Stefano Stabellini (10):
      xen: add a p2mt parameter to map_mmio_regions
      xen: rename un/map_mmio_regions to un/map_regions
      xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
      libxc: introduce xc_domain_mem_map_policy
      libxl/xl: add memory policy option to iomem
      xen/arm: extend device_tree_for_each_node
      xen/arm: make process_memory_node a device_tree_node_func
      xen/arm: keep track of reserved-memory regions
      xen/arm: map reserved-memory regions as normal memory in dom0
      xen/arm: add reserved-memory regions to the dom0 memory node

 SUPPORT.md                       |  2 +-
 docs/man/xl.cfg.5.pod.in         |  7 ++++-
 tools/libxc/include/xenctrl.h    |  8 ++++++
 tools/libxc/xc_domain.c          | 24 ++++++++++++++---
 tools/libxl/libxl.h              |  5 ++++
 tools/libxl/libxl_create.c       | 21 +++++++++++++--
 tools/libxl/libxl_types.idl      |  9 +++++++
 tools/xl/xl_parse.c              | 22 +++++++++++++++-
 xen/arch/arm/acpi/boot.c         |  2 +-
 xen/arch/arm/acpi/domain_build.c | 20 +++++++-------
 xen/arch/arm/bootfdt.c           | 56 ++++++++++++++++++++++++++--------------
 xen/arch/arm/domain_build.c      | 34 +++++++++++++++++++-----
 xen/arch/arm/gic-v2.c            |  7 ++---
 xen/arch/arm/p2m.c               | 34 +++++++-----------------
 xen/arch/arm/platforms/exynos5.c | 10 ++++---
 xen/arch/arm/platforms/omap5.c   | 20 ++++++++------
 xen/arch/arm/setup.c             | 36 +++++++++++++++++++++++---
 xen/arch/arm/traps.c             |  2 +-
 xen/arch/arm/vgic-v2.c           |  4 +--
 xen/arch/arm/vgic/vgic-v2.c      |  4 +--
 xen/arch/x86/hvm/dom0_build.c    |  7 +++--
 xen/arch/x86/mm/p2m.c            | 20 ++++++++------
 xen/common/domctl.c              | 32 ++++++++++++++++++++---
 xen/drivers/vpci/header.c        |  9 ++++---
 xen/include/asm-arm/p2m.h        | 15 -----------
 xen/include/asm-arm/setup.h      |  1 +
 xen/include/public/domctl.h      | 14 +++++++++-
 xen/include/xen/device_tree.h    |  5 ++--
 xen/include/xen/p2m-common.h     | 25 ++++++++++--------
 29 files changed, 315 insertions(+), 140 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel



* [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
  2019-04-30 21:02 ` [Xen-devel] " Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
                     ` (2 more replies)
  2019-04-30 21:02 ` [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions Stefano Stabellini
                   ` (9 subsequent siblings)
  11 siblings, 3 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, JBeulich, andrew.cooper3

Add a p2mt parameter to map_mmio_regions, passing p2m_mmio_direct_dev on
ARM and p2m_mmio_direct on x86 -- no change in behavior.

On ARM, map_mmio_regions becomes identical to map_regions_p2mt after the
change, so remove un/map_regions_p2mt.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
CC: JBeulich@suse.com
CC: andrew.cooper3@citrix.com
---
Changes in v2:
- new patch
---
 xen/arch/arm/acpi/domain_build.c |  4 ++--
 xen/arch/arm/domain_build.c      |  2 +-
 xen/arch/arm/gic-v2.c            |  3 ++-
 xen/arch/arm/p2m.c               | 18 +-----------------
 xen/arch/arm/platforms/exynos5.c |  6 ++++--
 xen/arch/arm/platforms/omap5.c   | 12 ++++++++----
 xen/arch/arm/traps.c             |  2 +-
 xen/arch/arm/vgic-v2.c           |  2 +-
 xen/arch/arm/vgic/vgic-v2.c      |  2 +-
 xen/arch/x86/hvm/dom0_build.c    |  7 +++++--
 xen/arch/x86/mm/p2m.c            |  6 +++++-
 xen/common/domctl.c              |  7 ++++++-
 xen/drivers/vpci/header.c        |  3 ++-
 xen/include/asm-arm/p2m.h        | 15 ---------------
 xen/include/xen/p2m-common.h     |  3 ++-
 15 files changed, 41 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c
index 5aae32a..f4ac91c 100644
--- a/xen/arch/arm/acpi/domain_build.c
+++ b/xen/arch/arm/acpi/domain_build.c
@@ -193,7 +193,7 @@ static void __init acpi_map_other_tables(struct domain *d)
     {
         addr = acpi_gbl_root_table_list.tables[i].address;
         size = acpi_gbl_root_table_list.tables[i].length;
-        res = map_regions_p2mt(d,
+        res = map_mmio_regions(d,
                                gaddr_to_gfn(addr),
                                PFN_UP(size),
                                maddr_to_mfn(addr),
@@ -547,7 +547,7 @@ int __init prepare_acpi(struct domain *d, struct kernel_info *kinfo)
     acpi_create_efi_mmap_table(d, &kinfo->mem, tbl_add);
 
     /* Map the EFI and ACPI tables to Dom0 */
-    rc = map_regions_p2mt(d,
+    rc = map_mmio_regions(d,
                           gaddr_to_gfn(d->arch.efi_acpi_gpa),
                           PFN_UP(d->arch.efi_acpi_len),
                           virt_to_mfn(d->arch.efi_acpi_table),
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d983677..1f808b2 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1171,7 +1171,7 @@ static int __init map_range_to_domain(const struct dt_device_node *dev,
 
     if ( need_mapping )
     {
-        res = map_regions_p2mt(d,
+        res = map_mmio_regions(d,
                                gaddr_to_gfn(addr),
                                PFN_UP(len),
                                maddr_to_mfn(addr),
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 256988c..d2ef361 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -701,7 +701,8 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
 
         ret = map_mmio_regions(d, gaddr_to_gfn(v2m_data->addr),
                                PFN_UP(v2m_data->size),
-                               maddr_to_mfn(v2m_data->addr));
+                               maddr_to_mfn(v2m_data->addr),
+                               p2m_mmio_direct_dev);
         if ( ret )
         {
             printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c38bd7e..e44c932 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1331,7 +1331,7 @@ static inline int p2m_remove_mapping(struct domain *d,
     return rc;
 }
 
-int map_regions_p2mt(struct domain *d,
+int map_mmio_regions(struct domain *d,
                      gfn_t gfn,
                      unsigned long nr,
                      mfn_t mfn,
@@ -1340,22 +1340,6 @@ int map_regions_p2mt(struct domain *d,
     return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
 }
 
-int unmap_regions_p2mt(struct domain *d,
-                       gfn_t gfn,
-                       unsigned long nr,
-                       mfn_t mfn)
-{
-    return p2m_remove_mapping(d, gfn, nr, mfn);
-}
-
-int map_mmio_regions(struct domain *d,
-                     gfn_t start_gfn,
-                     unsigned long nr,
-                     mfn_t mfn)
-{
-    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
-}
-
 int unmap_mmio_regions(struct domain *d,
                        gfn_t start_gfn,
                        unsigned long nr,
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index 6560507..97cd080 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -83,11 +83,13 @@ static int exynos5250_specific_mapping(struct domain *d)
 {
     /* Map the chip ID */
     map_mmio_regions(d, gaddr_to_gfn(EXYNOS5_PA_CHIPID), 1,
-                     maddr_to_mfn(EXYNOS5_PA_CHIPID));
+                     maddr_to_mfn(EXYNOS5_PA_CHIPID),
+                     p2m_mmio_direct_dev);
 
     /* Map the PWM region */
     map_mmio_regions(d, gaddr_to_gfn(EXYNOS5_PA_TIMER), 2,
-                     maddr_to_mfn(EXYNOS5_PA_TIMER));
+                     maddr_to_mfn(EXYNOS5_PA_TIMER),
+                     p2m_mmio_direct_dev);
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/omap5.c b/xen/arch/arm/platforms/omap5.c
index aee24e4..c5701df 100644
--- a/xen/arch/arm/platforms/omap5.c
+++ b/xen/arch/arm/platforms/omap5.c
@@ -99,19 +99,23 @@ static int omap5_specific_mapping(struct domain *d)
 {
     /* Map the PRM module */
     map_mmio_regions(d, gaddr_to_gfn(OMAP5_PRM_BASE), 2,
-                     maddr_to_mfn(OMAP5_PRM_BASE));
+                     maddr_to_mfn(OMAP5_PRM_BASE),
+                     p2m_mmio_direct_dev);
 
     /* Map the PRM_MPU */
     map_mmio_regions(d, gaddr_to_gfn(OMAP5_PRCM_MPU_BASE), 1,
-                     maddr_to_mfn(OMAP5_PRCM_MPU_BASE));
+                     maddr_to_mfn(OMAP5_PRCM_MPU_BASE),
+                     p2m_mmio_direct_dev);
 
     /* Map the Wakeup Gen */
     map_mmio_regions(d, gaddr_to_gfn(OMAP5_WKUPGEN_BASE), 1,
-                     maddr_to_mfn(OMAP5_WKUPGEN_BASE));
+                     maddr_to_mfn(OMAP5_WKUPGEN_BASE),
+                     p2m_mmio_direct_dev);
 
     /* Map the on-chip SRAM */
     map_mmio_regions(d, gaddr_to_gfn(OMAP5_SRAM_PA), 32,
-                     maddr_to_mfn(OMAP5_SRAM_PA));
+                     maddr_to_mfn(OMAP5_SRAM_PA),
+                     p2m_mmio_direct_dev);
 
     return 0;
 }
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d8b9a8a..afae5a1 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1887,7 +1887,7 @@ static bool try_map_mmio(gfn_t gfn)
     if ( !iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + 1) )
         return false;
 
-    return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
+    return !map_mmio_regions(d, gfn, 1, mfn, p2m_mmio_direct_c);
 }
 
 static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 64b141f..1543625 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -691,7 +691,7 @@ static int vgic_v2_domain_init(struct domain *d)
      * region of the guest.
      */
     ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
-                           maddr_to_mfn(vbase));
+                           maddr_to_mfn(vbase), p2m_mmio_direct_dev);
     if ( ret )
         return ret;
 
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index b5ba4ac..04f34dd 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -309,7 +309,7 @@ int vgic_v2_map_resources(struct domain *d)
      * region of the guest.
      */
     ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
-                           maddr_to_mfn(vbase));
+                           maddr_to_mfn(vbase), p2m_mmio_direct_dev);
     if ( ret )
     {
         gdprintk(XENLOG_ERR, "Unable to remap VGIC CPU to VCPU\n");
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index aa599f0..84776fc 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -79,8 +79,11 @@ static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
 
     for ( ; ; )
     {
-        rc = map ?   map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn))
-                 : unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
+        if ( map )
+            rc = map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn),
+                                  p2m_mmio_direct);
+        else
+            rc = unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
         if ( rc == 0 )
             break;
         if ( rc < 0 )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9e81a30..a72f012 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2264,12 +2264,16 @@ static unsigned int mmio_order(const struct domain *d,
 int map_mmio_regions(struct domain *d,
                      gfn_t start_gfn,
                      unsigned long nr,
-                     mfn_t mfn)
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
 {
     int ret = 0;
     unsigned long i;
     unsigned int iter, order;
 
+    if ( p2mt != p2m_mmio_direct )
+        return -EOPNOTSUPP;
+
     if ( !paging_mode_translate(d) )
         return 0;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index bade9a6..18a0f8f 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -927,6 +927,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         unsigned long nr_mfns = op->u.memory_mapping.nr_mfns;
         unsigned long mfn_end = mfn + nr_mfns - 1;
         int add = op->u.memory_mapping.add_mapping;
+        p2m_type_t p2mt;
 
         ret = -EINVAL;
         if ( mfn_end < mfn || /* wrap? */
@@ -939,6 +940,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         /* Must break hypercall up as this could take a while. */
         if ( nr_mfns > 64 )
             break;
+
+        p2mt = p2m_mmio_direct_dev;
+#else
+        p2mt = p2m_mmio_direct;
 #endif
 
         ret = -EPERM;
@@ -956,7 +961,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = map_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
+            ret = map_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn), p2mt);
             if ( ret < 0 )
                 printk(XENLOG_G_WARNING
                        "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx ret:%ld\n",
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index efb6ca9..6adfa55 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -52,7 +52,8 @@ static int map_range(unsigned long s, unsigned long e, void *data,
          * - {un}map_mmio_regions doesn't support preemption.
          */
 
-        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
+        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s),
+                                         p2m_mmio_direct)
                       : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
         if ( rc == 0 )
         {
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 041dea8..0218021 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -252,21 +252,6 @@ void p2m_toggle_cache(struct vcpu *v, bool was_enabled);
 
 void p2m_flush_vm(struct vcpu *v);
 
-/*
- * Map a region in the guest p2m with a specific p2m type.
- * The memory attributes will be derived from the p2m type.
- */
-int map_regions_p2mt(struct domain *d,
-                     gfn_t gfn,
-                     unsigned long nr,
-                     mfn_t mfn,
-                     p2m_type_t p2mt);
-
-int unmap_regions_p2mt(struct domain *d,
-                       gfn_t gfn,
-                       unsigned long nr,
-                       mfn_t mfn);
-
 int map_dev_mmio_region(struct domain *d,
                         gfn_t gfn,
                         unsigned long nr,
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..69c82cc 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -14,7 +14,8 @@ guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
 int map_mmio_regions(struct domain *d,
                      gfn_t start_gfn,
                      unsigned long nr,
-                     mfn_t mfn);
+                     mfn_t mfn,
+                     p2m_type_t p2mt);
 int unmap_mmio_regions(struct domain *d,
                        gfn_t start_gfn,
                        unsigned long nr,
-- 
1.9.1




+++ b/xen/include/xen/p2m-common.h
@@ -14,7 +14,8 @@ guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
 int map_mmio_regions(struct domain *d,
                      gfn_t start_gfn,
                      unsigned long nr,
-                     mfn_t mfn);
+                     mfn_t mfn,
+                     p2m_type_t p2mt);
 int unmap_mmio_regions(struct domain *d,
                        gfn_t start_gfn,
                        unsigned long nr,
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
  2019-04-30 21:02 ` [Xen-devel] " Stefano Stabellini
  2019-04-30 21:02 ` [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
                     ` (2 more replies)
  2019-04-30 21:02 ` [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy Stefano Stabellini
                   ` (8 subsequent siblings)
  11 siblings, 3 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, JBeulich, andrew.cooper3

Now that map_mmio_regions takes a p2mt parameter, there is no need to
keep "mmio" in the name. The p2mt parameter does a better job of
expressing what the mapping is about. Let's save the environment 5
characters at a time.

Also fix the comment on top of map_mmio_regions.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
CC: JBeulich@suse.com
CC: andrew.cooper3@citrix.com
---
Changes in v2:
- new patch
---
 xen/arch/arm/acpi/domain_build.c | 20 ++++++++++----------
 xen/arch/arm/domain_build.c      | 10 +++++-----
 xen/arch/arm/gic-v2.c            |  8 ++++----
 xen/arch/arm/p2m.c               | 18 +++++++++---------
 xen/arch/arm/platforms/exynos5.c | 12 ++++++------
 xen/arch/arm/platforms/omap5.c   | 24 ++++++++++++------------
 xen/arch/arm/traps.c             |  2 +-
 xen/arch/arm/vgic-v2.c           |  4 ++--
 xen/arch/arm/vgic/vgic-v2.c      |  4 ++--
 xen/arch/x86/hvm/dom0_build.c    |  6 +++---
 xen/arch/x86/mm/p2m.c            | 18 +++++++++---------
 xen/common/domctl.c              |  4 ++--
 xen/drivers/vpci/header.c        |  8 ++++----
 xen/include/xen/p2m-common.h     | 26 ++++++++++++++------------
 14 files changed, 83 insertions(+), 81 deletions(-)

diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c
index f4ac91c..f107f5a 100644
--- a/xen/arch/arm/acpi/domain_build.c
+++ b/xen/arch/arm/acpi/domain_build.c
@@ -193,11 +193,11 @@ static void __init acpi_map_other_tables(struct domain *d)
     {
         addr = acpi_gbl_root_table_list.tables[i].address;
         size = acpi_gbl_root_table_list.tables[i].length;
-        res = map_mmio_regions(d,
-                               gaddr_to_gfn(addr),
-                               PFN_UP(size),
-                               maddr_to_mfn(addr),
-                               p2m_mmio_direct_c);
+        res = map_regions(d,
+                          gaddr_to_gfn(addr),
+                          PFN_UP(size),
+                          maddr_to_mfn(addr),
+                          p2m_mmio_direct_c);
         if ( res )
         {
              panic(XENLOG_ERR "Unable to map ACPI region 0x%"PRIx64
@@ -547,11 +547,11 @@ int __init prepare_acpi(struct domain *d, struct kernel_info *kinfo)
     acpi_create_efi_mmap_table(d, &kinfo->mem, tbl_add);
 
     /* Map the EFI and ACPI tables to Dom0 */
-    rc = map_mmio_regions(d,
-                          gaddr_to_gfn(d->arch.efi_acpi_gpa),
-                          PFN_UP(d->arch.efi_acpi_len),
-                          virt_to_mfn(d->arch.efi_acpi_table),
-                          p2m_mmio_direct_c);
+    rc = map_regions(d,
+                     gaddr_to_gfn(d->arch.efi_acpi_gpa),
+                     PFN_UP(d->arch.efi_acpi_len),
+                     virt_to_mfn(d->arch.efi_acpi_table),
+                     p2m_mmio_direct_c);
     if ( rc != 0 )
     {
         printk(XENLOG_ERR "Unable to map EFI/ACPI table 0x%"PRIx64
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 1f808b2..5e7f94c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1171,11 +1171,11 @@ static int __init map_range_to_domain(const struct dt_device_node *dev,
 
     if ( need_mapping )
     {
-        res = map_mmio_regions(d,
-                               gaddr_to_gfn(addr),
-                               PFN_UP(len),
-                               maddr_to_mfn(addr),
-                               mr_data->p2mt);
+        res = map_regions(d,
+                          gaddr_to_gfn(addr),
+                          PFN_UP(len),
+                          maddr_to_mfn(addr),
+                          mr_data->p2mt);
 
         if ( res < 0 )
         {
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index d2ef361..ad2b368 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -699,10 +699,10 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
                d->domain_id, v2m_data->addr, v2m_data->size,
                v2m_data->spi_start, v2m_data->nr_spis);
 
-        ret = map_mmio_regions(d, gaddr_to_gfn(v2m_data->addr),
-                               PFN_UP(v2m_data->size),
-                               maddr_to_mfn(v2m_data->addr),
-                               p2m_mmio_direct_dev);
+        ret = map_regions(d, gaddr_to_gfn(v2m_data->addr),
+                          PFN_UP(v2m_data->size),
+                          maddr_to_mfn(v2m_data->addr),
+                          p2m_mmio_direct_dev);
         if ( ret )
         {
             printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e44c932..d6529ca 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1331,19 +1331,19 @@ static inline int p2m_remove_mapping(struct domain *d,
     return rc;
 }
 
-int map_mmio_regions(struct domain *d,
-                     gfn_t gfn,
-                     unsigned long nr,
-                     mfn_t mfn,
-                     p2m_type_t p2mt)
+int map_regions(struct domain *d,
+                gfn_t gfn,
+                unsigned long nr,
+                mfn_t mfn,
+                p2m_type_t p2mt)
 {
     return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
 }
 
-int unmap_mmio_regions(struct domain *d,
-                       gfn_t start_gfn,
-                       unsigned long nr,
-                       mfn_t mfn)
+int unmap_regions(struct domain *d,
+                  gfn_t start_gfn,
+                  unsigned long nr,
+                  mfn_t mfn)
 {
     return p2m_remove_mapping(d, start_gfn, nr, mfn);
 }
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index 97cd080..261526e 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -82,14 +82,14 @@ static int exynos5_init_time(void)
 static int exynos5250_specific_mapping(struct domain *d)
 {
     /* Map the chip ID */
-    map_mmio_regions(d, gaddr_to_gfn(EXYNOS5_PA_CHIPID), 1,
-                     maddr_to_mfn(EXYNOS5_PA_CHIPID),
-                     p2m_mmio_direct_dev);
+    map_regions(d, gaddr_to_gfn(EXYNOS5_PA_CHIPID), 1,
+                maddr_to_mfn(EXYNOS5_PA_CHIPID),
+                p2m_mmio_direct_dev);
 
     /* Map the PWM region */
-    map_mmio_regions(d, gaddr_to_gfn(EXYNOS5_PA_TIMER), 2,
-                     maddr_to_mfn(EXYNOS5_PA_TIMER),
-                     p2m_mmio_direct_dev);
+    map_regions(d, gaddr_to_gfn(EXYNOS5_PA_TIMER), 2,
+                maddr_to_mfn(EXYNOS5_PA_TIMER),
+                p2m_mmio_direct_dev);
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/omap5.c b/xen/arch/arm/platforms/omap5.c
index c5701df..3575f6c 100644
--- a/xen/arch/arm/platforms/omap5.c
+++ b/xen/arch/arm/platforms/omap5.c
@@ -98,24 +98,24 @@ static int omap5_init_time(void)
 static int omap5_specific_mapping(struct domain *d)
 {
     /* Map the PRM module */
-    map_mmio_regions(d, gaddr_to_gfn(OMAP5_PRM_BASE), 2,
-                     maddr_to_mfn(OMAP5_PRM_BASE),
-                     p2m_mmio_direct_dev);
+    map_regions(d, gaddr_to_gfn(OMAP5_PRM_BASE), 2,
+                maddr_to_mfn(OMAP5_PRM_BASE),
+                p2m_mmio_direct_dev);
 
     /* Map the PRM_MPU */
-    map_mmio_regions(d, gaddr_to_gfn(OMAP5_PRCM_MPU_BASE), 1,
-                     maddr_to_mfn(OMAP5_PRCM_MPU_BASE),
-                     p2m_mmio_direct_dev);
+    map_regions(d, gaddr_to_gfn(OMAP5_PRCM_MPU_BASE), 1,
+                maddr_to_mfn(OMAP5_PRCM_MPU_BASE),
+                p2m_mmio_direct_dev);
 
     /* Map the Wakeup Gen */
-    map_mmio_regions(d, gaddr_to_gfn(OMAP5_WKUPGEN_BASE), 1,
-                     maddr_to_mfn(OMAP5_WKUPGEN_BASE),
-                     p2m_mmio_direct_dev);
+    map_regions(d, gaddr_to_gfn(OMAP5_WKUPGEN_BASE), 1,
+                maddr_to_mfn(OMAP5_WKUPGEN_BASE),
+                p2m_mmio_direct_dev);
 
     /* Map the on-chip SRAM */
-    map_mmio_regions(d, gaddr_to_gfn(OMAP5_SRAM_PA), 32,
-                     maddr_to_mfn(OMAP5_SRAM_PA),
-                     p2m_mmio_direct_dev);
+    map_regions(d, gaddr_to_gfn(OMAP5_SRAM_PA), 32,
+                maddr_to_mfn(OMAP5_SRAM_PA),
+                p2m_mmio_direct_dev);
 
     return 0;
 }
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index afae5a1..fee5517 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1887,7 +1887,7 @@ static bool try_map_mmio(gfn_t gfn)
     if ( !iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + 1) )
         return false;
 
-    return !map_mmio_regions(d, gfn, 1, mfn, p2m_mmio_direct_c);
+    return !map_regions(d, gfn, 1, mfn, p2m_mmio_direct_c);
 }
 
 static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 1543625..f33c56a 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -690,8 +690,8 @@ static int vgic_v2_domain_init(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
-                           maddr_to_mfn(vbase), p2m_mmio_direct_dev);
+    ret = map_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
+                      maddr_to_mfn(vbase), p2m_mmio_direct_dev);
     if ( ret )
         return ret;
 
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index 04f34dd..b03af84 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -308,8 +308,8 @@ int vgic_v2_map_resources(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
-                           maddr_to_mfn(vbase), p2m_mmio_direct_dev);
+    ret = map_regions(d, gaddr_to_gfn(cbase), csize / PAGE_SIZE,
+                      maddr_to_mfn(vbase), p2m_mmio_direct_dev);
     if ( ret )
     {
         gdprintk(XENLOG_ERR, "Unable to remap VGIC CPU to VCPU\n");
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 84776fc..800faaa 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -80,10 +80,10 @@ static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
     for ( ; ; )
     {
         if ( map )
-            rc = map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn),
-                                  p2m_mmio_direct);
+            rc = map_regions(d, _gfn(pfn), nr_pages, _mfn(pfn),
+                             p2m_mmio_direct);
         else
-            rc = unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
+            rc = unmap_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
         if ( rc == 0 )
             break;
         if ( rc < 0 )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a72f012..d976ce9 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2261,11 +2261,11 @@ static unsigned int mmio_order(const struct domain *d,
 
 #define MAP_MMIO_MAX_ITER 64 /* pretty arbitrary */
 
-int map_mmio_regions(struct domain *d,
-                     gfn_t start_gfn,
-                     unsigned long nr,
-                     mfn_t mfn,
-                     p2m_type_t p2mt)
+int map_regions(struct domain *d,
+                gfn_t start_gfn,
+                unsigned long nr,
+                mfn_t mfn,
+                p2m_type_t p2mt)
 {
     int ret = 0;
     unsigned long i;
@@ -2298,10 +2298,10 @@ int map_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }
 
-int unmap_mmio_regions(struct domain *d,
-                       gfn_t start_gfn,
-                       unsigned long nr,
-                       mfn_t mfn)
+int unmap_regions(struct domain *d,
+                  gfn_t start_gfn,
+                  unsigned long nr,
+                  mfn_t mfn)
 {
     int ret = 0;
     unsigned long i;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 18a0f8f..140f979 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -961,7 +961,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = map_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn), p2mt);
+            ret = map_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn), p2mt);
             if ( ret < 0 )
                 printk(XENLOG_G_WARNING
                        "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx ret:%ld\n",
@@ -973,7 +973,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                    "memory_map:remove: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
-            ret = unmap_mmio_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
+            ret = unmap_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn));
             if ( ret < 0 && is_hardware_domain(current->domain) )
                 printk(XENLOG_ERR
                        "memory_map: error %ld removing dom%d access to [%lx,%lx]\n",
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 6adfa55..7b25acc 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -46,15 +46,15 @@ static int map_range(unsigned long s, unsigned long e, void *data,
         /*
          * ARM TODOs:
          * - On ARM whether the memory is prefetchable or not should be passed
-         *   to map_mmio_regions in order to decide which memory attributes
+         *   to map_regions in order to decide which memory attributes
          *   should be used.
          *
-         * - {un}map_mmio_regions doesn't support preemption.
+         * - {un}map_regions doesn't support preemption.
          */
 
-        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s),
+        rc = map->map ? map_regions(map->d, _gfn(s), size, _mfn(s),
                                          p2m_mmio_direct)
-                      : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
+                      : unmap_regions(map->d, _gfn(s), size, _mfn(s));
         if ( rc == 0 )
         {
             *c += size;
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 69c82cc..728c9a4 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -8,18 +8,20 @@ int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                           unsigned int page_order);
 
-/* Map MMIO regions in the p2m: start_gfn and nr describe the range in
- *  * the guest physical address space to map, starting from the machine
- *   * frame number mfn. */
-int map_mmio_regions(struct domain *d,
-                     gfn_t start_gfn,
-                     unsigned long nr,
-                     mfn_t mfn,
-                     p2m_type_t p2mt);
-int unmap_mmio_regions(struct domain *d,
-                       gfn_t start_gfn,
-                       unsigned long nr,
-                       mfn_t mfn);
+/*
+ * Map memory regions in the p2m: start_gfn and nr describe the range in
+ * the guest physical address space to map, starting from the machine
+ * frame number mfn.
+ */
+int map_regions(struct domain *d,
+                gfn_t start_gfn,
+                unsigned long nr,
+                mfn_t mfn,
+                p2m_type_t p2mt);
+int unmap_regions(struct domain *d,
+                  gfn_t start_gfn,
+                  unsigned long nr,
+                  mfn_t mfn);
 
 /*
  * Populate-on-Demand
-- 
1.9.1



* [Xen-devel] [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-04-30 21:02 ` [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions Stefano Stabellini
@ 2019-04-30 21:02   ` " Stefano Stabellini
  2019-05-01  9:22   ` Julien Grall
  2019-05-02 15:03   ` Jan Beulich
  2 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, JBeulich, andrew.cooper3

+                      : unmap_regions(map->d, _gfn(s), size, _mfn(s));
         if ( rc == 0 )
         {
             *c += size;
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 69c82cc..728c9a4 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -8,18 +8,20 @@ int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                           unsigned int page_order);
 
-/* Map MMIO regions in the p2m: start_gfn and nr describe the range in
- *  * the guest physical address space to map, starting from the machine
- *   * frame number mfn. */
-int map_mmio_regions(struct domain *d,
-                     gfn_t start_gfn,
-                     unsigned long nr,
-                     mfn_t mfn,
-                     p2m_type_t p2mt);
-int unmap_mmio_regions(struct domain *d,
-                       gfn_t start_gfn,
-                       unsigned long nr,
-                       mfn_t mfn);
+/*
+ * Map memory regions in the p2m: start_gfn and nr describe the range in
+ * the guest physical address space to map, starting from the machine
+ * frame number mfn.
+ */
+int map_regions(struct domain *d,
+                gfn_t start_gfn,
+                unsigned long nr,
+                mfn_t mfn,
+                p2m_type_t p2mt);
+int unmap_regions(struct domain *d,
+                  gfn_t start_gfn,
+                  unsigned long nr,
+                  mfn_t mfn);
 
 /*
  * Populate-on-Demand
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (2 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
                     ` (3 more replies)
  2019-04-30 21:02 ` [PATCH v2 04/10] libxc: introduce xc_domain_mem_map_policy Stefano Stabellini
                   ` (7 subsequent siblings)
  11 siblings, 4 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, JBeulich, andrew.cooper3

Reuse the existing padding field to pass memory policy information.  On
Arm, the caller can specify whether the memory should be mapped as
Device-nGRE, which is the default and the only possibility today, or as
write-back cacheable memory. On x86, the only option is uncacheable. The
current behavior becomes the default (numerically '0').

On ARM, map device nGRE as p2m_mmio_direct_dev (as it is already done
today) and WB cacheable memory as p2m_mmio_direct_c.

On x86, return an error if the requested memory policy is not
MEMORY_POLICY_X86_UC.
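
The arch-dependent selection added to do_domctl() can be sketched as a
small lookup; the helper below is illustrative only (the real code uses
CONFIG_ARM/CONFIG_X86 #ifdefs and returns p2m_type_t values rather than
names), with NULL standing in for the -EOPNOTSUPP case:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Values mirroring the defines this patch adds to xen/include/public/domctl.h. */
#define MEMORY_POLICY_DEFAULT      0
#define MEMORY_POLICY_X86_UC       0 /* x86 only */
#define MEMORY_POLICY_ARM_DEV_nGRE 0 /* Arm only */
#define MEMORY_POLICY_ARM_MEM_WB   1 /* Arm only */

/*
 * Illustrative stand-in for the switch in do_domctl(): resolve a policy
 * to the name of the p2m type it selects on the given architecture.
 * Returns NULL where the hypercall would return -EOPNOTSUPP.
 */
static const char *policy_to_p2mt(int is_arm, unsigned int policy)
{
    if ( is_arm )
    {
        if ( policy == MEMORY_POLICY_ARM_DEV_nGRE )
            return "p2m_mmio_direct_dev";
        if ( policy == MEMORY_POLICY_ARM_MEM_WB )
            return "p2m_mmio_direct_c";
    }
    else if ( policy == MEMORY_POLICY_X86_UC )
        return "p2m_mmio_direct";

    return NULL;
}
```

Note that policy value 0 deliberately resolves on both architectures, so
MEMORY_POLICY_DEFAULT preserves today's behavior everywhere.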

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
CC: JBeulich@suse.com
CC: andrew.cooper3@citrix.com

---
Changes in v2:
- rebase
- use p2m_mmio_direct_c
- use EOPNOTSUPP
- rename cache_policy to memory policy
- rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
- rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
- add MEMORY_POLICY_X86_UC
- add MEMORY_POLICY_DEFAULT and use it
---
 xen/common/domctl.c         | 23 +++++++++++++++++++++--
 xen/include/public/domctl.h | 14 +++++++++++++-
 2 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 140f979..9f62ead 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -928,6 +928,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         unsigned long mfn_end = mfn + nr_mfns - 1;
         int add = op->u.memory_mapping.add_mapping;
         p2m_type_t p2mt;
+        uint32_t memory_policy = op->u.memory_mapping.memory_policy;
 
         ret = -EINVAL;
         if ( mfn_end < mfn || /* wrap? */
@@ -958,9 +959,27 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( add )
         {
             printk(XENLOG_G_DEBUG
-                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
-                   d->domain_id, gfn, mfn, nr_mfns);
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx cache=%u\n",
+                   d->domain_id, gfn, mfn, nr_mfns, memory_policy);
 
+            switch ( memory_policy )
+            {
+#ifdef CONFIG_ARM
+                case MEMORY_POLICY_ARM_MEM_WB:
+                    p2mt = p2m_mmio_direct_c;
+                    break;
+                case MEMORY_POLICY_ARM_DEV_nGRE:
+                    p2mt = p2m_mmio_direct_dev;
+                    break;
+#endif
+#ifdef CONFIG_X86
+                case MEMORY_POLICY_X86_UC:
+                    p2mt = p2m_mmio_direct;
+                    break;
+#endif
+                default:
+                    return -EOPNOTSUPP;
+            }
             ret = map_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn), p2mt);
             if ( ret < 0 )
                 printk(XENLOG_G_WARNING
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 19486d5..9330387 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
 */
 #define DPCI_ADD_MAPPING         1
 #define DPCI_REMOVE_MAPPING      0
+/*
+ * Default memory policy. Corresponds to:
+ * Arm: MEMORY_POLICY_ARM_DEV_nGRE
+ * x86: MEMORY_POLICY_X86_UC
+ */
+#define MEMORY_POLICY_DEFAULT    0
+/* x86 only. Memory type UNCACHABLE */
+#define MEMORY_POLICY_X86_UC     0
+/* Arm only. Outer Shareable, Device-nGRE memory */
+#define MEMORY_POLICY_ARM_DEV_nGRE       0
+/* Arm only. Outer Shareable, Outer/Inner Write-Back Cacheable memory */
+#define MEMORY_POLICY_ARM_MEM_WB         1
 struct xen_domctl_memory_mapping {
     uint64_aligned_t first_gfn; /* first page (hvm guest phys page) in range */
     uint64_aligned_t first_mfn; /* first page (machine page) in range */
     uint64_aligned_t nr_mfns;   /* number of pages in range (>0) */
     uint32_t add_mapping;       /* add or remove mapping */
-    uint32_t padding;           /* padding for 64-bit aligned structure */
+    uint32_t memory_policy;      /* cacheability of the memory mapping */
 };
 
 
-- 
1.9.1



* [PATCH v2 04/10] libxc: introduce xc_domain_mem_map_policy
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (3 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-04-30 21:02 ` [PATCH v2 05/10] libxl/xl: add memory policy option to iomem Stefano Stabellini
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, ian.jackson, wei.liu2

Introduce a new libxc function that makes use of the new memory_policy
parameter added to the XEN_DOMCTL_memory_mapping hypercall.

The parameter values are the same as for the XEN_DOMCTL_memory_mapping
hypercall (0 is MEMORY_POLICY_DEFAULT). Pass MEMORY_POLICY_DEFAULT by
default -- no changes in behavior.

We could extend xc_domain_memory_mapping, but QEMU makes use of it, so
it is easier and less disruptive to introduce a new libxc function and
change the implementation of xc_domain_memory_mapping to call into it.
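The compatibility split can be sketched as follows; the stubs below only
model the call structure (the real functions go through xc_interface and
fill in the domctl, and their argument lists are longer):

```c
#include <assert.h>
#include <stdint.h>

#define MEMORY_POLICY_DEFAULT 0

/* Stub standing in for the domctl issue path, recording the policy used. */
static uint32_t last_policy;

static int mem_map_policy(uint32_t domid, uint32_t add_mapping,
                          uint32_t memory_policy)
{
    (void)domid;
    (void)add_mapping;
    last_policy = memory_policy; /* real code fills domctl.u.memory_mapping */
    return 0;
}

/* Existing API, unchanged for callers such as QEMU: forwards the default. */
static int memory_mapping(uint32_t domid, uint32_t add_mapping)
{
    return mem_map_policy(domid, add_mapping, MEMORY_POLICY_DEFAULT);
}
```

Existing callers thus keep compiling and keep their old semantics, while
new callers opt into a policy explicitly.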

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
CC: ian.jackson@eu.citrix.com
CC: wei.liu2@citrix.com
---
Changes in v2:
- rename cache_policy to memory policy
- rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
- rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
- introduce xc_domain_mem_map_policy
---
 tools/libxc/include/xenctrl.h |  8 ++++++++
 tools/libxc/xc_domain.c       | 24 ++++++++++++++++++++----
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 49a6b2a..16ff286 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1714,6 +1714,14 @@ int xc_deassign_dt_device(xc_interface *xch,
                           uint32_t domid,
                           char *path);
 
+int xc_domain_mem_map_policy(xc_interface *xch,
+                             uint32_t domid,
+                             unsigned long first_gfn,
+                             unsigned long first_mfn,
+                             unsigned long nr_mfns,
+                             uint32_t add_mapping,
+                             uint32_t memory_policy);
+
 int xc_domain_memory_mapping(xc_interface *xch,
                              uint32_t domid,
                              unsigned long first_gfn,
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 05d771f..02f5778 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2042,13 +2042,14 @@ failed:
     return -1;
 }
 
-int xc_domain_memory_mapping(
+int xc_domain_mem_map_policy(
     xc_interface *xch,
     uint32_t domid,
     unsigned long first_gfn,
     unsigned long first_mfn,
     unsigned long nr_mfns,
-    uint32_t add_mapping)
+    uint32_t add_mapping,
+    uint32_t memory_policy)
 {
     DECLARE_DOMCTL;
     xc_dominfo_t info;
@@ -2070,6 +2071,7 @@ int xc_domain_memory_mapping(
     domctl.cmd = XEN_DOMCTL_memory_mapping;
     domctl.domain = domid;
     domctl.u.memory_mapping.add_mapping = add_mapping;
+    domctl.u.memory_mapping.memory_policy = memory_policy;
     max_batch_sz = nr_mfns;
     do
     {
@@ -2105,8 +2107,9 @@ int xc_domain_memory_mapping(
      * Errors here are ignored.
      */
     if ( ret && add_mapping != DPCI_REMOVE_MAPPING )
-        xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn, nr_mfns,
-                                 DPCI_REMOVE_MAPPING);
+        xc_domain_mem_map_policy(xch, domid, first_gfn, first_mfn, nr_mfns,
+                                 DPCI_REMOVE_MAPPING,
+                                 MEMORY_POLICY_DEFAULT);
 
     /* We might get E2BIG so many times that we never advance. */
     if ( !done && !ret )
@@ -2115,6 +2118,19 @@ int xc_domain_memory_mapping(
     return ret;
 }
 
+int xc_domain_memory_mapping(
+    xc_interface *xch,
+    uint32_t domid,
+    unsigned long first_gfn,
+    unsigned long first_mfn,
+    unsigned long nr_mfns,
+    uint32_t add_mapping)
+{
+    return xc_domain_mem_map_policy(xch, domid, first_gfn, first_mfn,
+                                    nr_mfns, add_mapping,
+                                    MEMORY_POLICY_DEFAULT);
+}
+
 int xc_domain_ioport_mapping(
     xc_interface *xch,
     uint32_t domid,
-- 
1.9.1



* [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (4 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 04/10] libxc: introduce xc_domain_mem_map_policy Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
                     ` (2 more replies)
  2019-04-30 21:02 ` [PATCH v2 06/10] xen/arm: extend device_tree_for_each_node Stefano Stabellini
                   ` (5 subsequent siblings)
  11 siblings, 3 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, ian.jackson, wei.liu2

Add a new memory policy option for the iomem parameter.
Possible values are:
- arm_devmem: Device-nGRE, the default on ARM
- arm_memory: WB cacheable memory
- x86_uc: uncacheable memory, the default on x86

Store the parameter in a new field in libxl_iomem_range.

Pass the memory policy option to xc_domain_mem_map_policy.
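
For example, a hypothetical domain config fragment using the new syntax
could look like this (the address is made up; values are hexadecimal page
frame numbers, as for iomem today):

```
# Map 2 pages of MMIO starting at host pfn 0x47000 into the guest 1:1,
# as write-back cacheable memory (Arm):
iomem = [ "47000,2@47000,arm_memory" ]
```

Omitting the trailing memory policy keeps the current default behavior.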

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
CC: ian.jackson@eu.citrix.com
CC: wei.liu2@citrix.com
---
Changes in v2:
- add #define LIBXL_HAVE_MEMORY_POLICY
- ability to parse the memory policy parameter even if gfn is not passed
- rename cache_policy to memory policy
- rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
- rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
- rename memory to arm_memory and devmem to arm_devmem
- expand the non-security support status to non device passthrough iomem
  configurations
- rename iomem options
- add x86 specific iomem option
---
 SUPPORT.md                  |  2 +-
 docs/man/xl.cfg.5.pod.in    |  7 ++++++-
 tools/libxl/libxl.h         |  5 +++++
 tools/libxl/libxl_create.c  | 21 +++++++++++++++++++--
 tools/libxl/libxl_types.idl |  9 +++++++++
 tools/xl/xl_parse.c         | 22 +++++++++++++++++++++-
 6 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index e4fb15b..f29a299 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -649,7 +649,7 @@ to be used in addition to QEMU.
 
 	Status: Experimental
 
-### ARM/Non-PCI device passthrough
+### ARM/Non-PCI device passthrough and other iomem configurations
 
     Status: Supported, not security supported
 
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index c7d70e6..c85857e 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1222,7 +1222,7 @@ is given in hexadecimal format and may either be a range, e.g. C<2f8-2ff>
 It is recommended to only use this option for trusted VMs under
 administrator's control.
 
-=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN]", "IOMEM_START,NUM_PAGES[@GFN]", ...]>
+=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", ...]>
 
 Allow auto-translated domains to access specific hardware I/O memory pages.
 
@@ -1233,6 +1233,11 @@ B<GFN> is not specified, the mapping will be performed using B<IOMEM_START>
 as a start in the guest's address space, therefore performing a 1:1 mapping
 by default.
 All of these values must be given in hexadecimal format.
+B<MEMORY_POLICY> can be, for ARM platforms:
+  - "arm_devmem" for Device-nGRE, the default on ARM
+  - "arm_memory" for Outer Shareable Write-Back Cacheable Memory
+B<MEMORY_POLICY> can be, for x86 platforms:
+  - "x86_uc" for Uncacheable Memory, the default on x86
 
 Note that the IOMMU won't be updated with the mappings specified with this
 option. This option therefore should not be used to pass through any
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 482499a..2366331 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -379,6 +379,11 @@
 #define LIBXL_HAVE_BUILDINFO_BOOTLOADER_ARGS 1
 
 /*
+ * Support specifying memory policy information for memory mappings.
+ */
+#define LIBXL_HAVE_MEMORY_POLICY 1
+
+/*
  * LIBXL_HAVE_EXTENDED_VKB indicates that libxl_device_vkb has extended fields:
  *  - unique_id;
  *  - feature_disable_keyboard;
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 89fe80f..a6c5e30 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -415,6 +415,21 @@ static void init_console_info(libxl__gc *gc,
        Only 'channels' when mapped to consoles have a string name. */
 }
 
+static uint32_t libxl__memory_policy_to_xc(libxl_memory_policy c)
+{
+    switch (c) {
+    case LIBXL_MEMORY_POLICY_ARM_MEM_WB:
+        return MEMORY_POLICY_ARM_MEM_WB;
+    case LIBXL_MEMORY_POLICY_ARM_DEV_NGRE:
+        return MEMORY_POLICY_ARM_DEV_nGRE;
+    case LIBXL_MEMORY_POLICY_X86_UC:
+        return MEMORY_POLICY_X86_UC;
+    case LIBXL_MEMORY_POLICY_DEFAULT:
+    default:
+        return MEMORY_POLICY_DEFAULT;
+    }
+}
+
 int libxl__domain_build(libxl__gc *gc,
                         libxl_domain_config *d_config,
                         uint32_t domid,
@@ -1369,9 +1384,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
             ret = ERROR_FAIL;
             goto error_out;
         }
-        ret = xc_domain_memory_mapping(CTX->xch, domid,
+        ret = xc_domain_mem_map_policy(CTX->xch, domid,
                                        io->gfn, io->start,
-                                       io->number, 1);
+                                       io->number, 1,
+                                       libxl__memory_policy_to_xc(
+                                           io->memory_policy));
         if (ret < 0) {
             LOGED(ERROR, domid,
                   "failed to map to domain iomem range %"PRIx64"-%"PRIx64
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index cb4702f..4db8a62 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -272,6 +272,13 @@ libxl_ioport_range = Struct("ioport_range", [
     ("number", uint32),
     ])
 
+libxl_memory_policy = Enumeration("memory_policy", [
+    (0, "default"),
+    (1, "ARM_Dev_nGRE"),
+    (2, "ARM_Mem_WB"),
+    (3, "x86_UC"),
+    ], init_val = "LIBXL_MEMORY_POLICY_DEFAULT")
+
 libxl_iomem_range = Struct("iomem_range", [
     # start host frame number to be mapped to the guest
     ("start", uint64),
@@ -279,6 +286,8 @@ libxl_iomem_range = Struct("iomem_range", [
     ("number", uint64),
     # guest frame number used as a start for the mapping
     ("gfn", uint64, {'init_val': "LIBXL_INVALID_GFN"}),
+    # memory_policy of the memory region
+    ("memory_policy", libxl_memory_policy),
     ])
 
 libxl_vga_interface_info = Struct("vga_interface_info", [
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 352cd21..ed56931 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1883,6 +1883,7 @@ void parse_config_data(const char *config_source,
         }
         for (i = 0; i < num_iomem; i++) {
             int used;
+            const char *mempolicy;
 
             buf = xlu_cfg_get_listitem (iomem, i);
             if (!buf) {
@@ -1895,11 +1896,30 @@ void parse_config_data(const char *config_source,
                          &b_info->iomem[i].start,
                          &b_info->iomem[i].number, &used,
                          &b_info->iomem[i].gfn, &used);
-            if (ret < 2 || buf[used] != '\0') {
+            if (ret < 2) {
                 fprintf(stderr,
                         "xl: Invalid argument parsing iomem: %s\n", buf);
                 exit(1);
             }
+            mempolicy = &buf[used];
+            if (strlen(mempolicy) > 1) {
+                mempolicy++;
+                if (!strcmp(mempolicy, "arm_devmem"))
+                    b_info->iomem[i].memory_policy =
+                        LIBXL_MEMORY_POLICY_ARM_DEV_NGRE;
+                else if (!strcmp(mempolicy, "x86_uc"))
+                    b_info->iomem[i].memory_policy =
+                        LIBXL_MEMORY_POLICY_X86_UC;
+                else if (!strcmp(mempolicy, "arm_memory"))
+                    b_info->iomem[i].memory_policy =
+                        LIBXL_MEMORY_POLICY_ARM_MEM_WB;
+                else {
+                    fprintf(stderr,
+                            "xl: Invalid iomem memory policy parameter: %s\n",
+                            mempolicy);
+                    exit(1);
+                }
+            }
         }
     }
 
-- 
1.9.1



* [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-04-30 21:02 ` [PATCH v2 05/10] libxl/xl: add memory policy option to iomem Stefano Stabellini
@ 2019-04-30 21:02   ` " Stefano Stabellini
  2019-05-01  9:42   ` Julien Grall
  2019-06-18 11:15   ` Julien Grall
  2 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, julien.grall, sstabellini, ian.jackson, wei.liu2

Add a new memory policy option for the iomem parameter.
Possible values are:
- arm_devmem, device nGRE, the default on ARM
- arm_memory, WB cachable memory
- x86_uc: uncachable memory, the default on x86

Store the parameter in a new field in libxl_iomem_range.

Pass the memory policy option to xc_domain_mem_map_policy.
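Putting the pieces together, the resulting xl configuration syntax would look
like this (the frame numbers below are invented for illustration; all values
are hexadecimal page frame numbers):

```
# Map 0x10 pages starting at machine frame 0x47000 to guest frame
# 0x47000 (1:1), as WB cacheable memory:
iomem = [ "47000,10@47000,arm_memory" ]

# Policy omitted: the default applies (Device nGRE on ARM, UC on x86):
iomem = [ "47000,10" ]
```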

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
CC: ian.jackson@eu.citrix.com
CC: wei.liu2@citrix.com
---
Changes in v2:
- add #define LIBXL_HAVE_MEMORY_POLICY
- ability to parse the memory policy parameter even if gfn is not passed
- rename cache_policy to memory policy
- rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
- rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
- rename memory to arm_memory and devmem to arm_devmem
- expand the non-security support status to non device passthrough iomem
  configurations
- rename iomem options
- add x86 specific iomem option
---
 SUPPORT.md                  |  2 +-
 docs/man/xl.cfg.5.pod.in    |  7 ++++++-
 tools/libxl/libxl.h         |  5 +++++
 tools/libxl/libxl_create.c  | 21 +++++++++++++++++++--
 tools/libxl/libxl_types.idl |  9 +++++++++
 tools/xl/xl_parse.c         | 22 +++++++++++++++++++++-
 6 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index e4fb15b..f29a299 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -649,7 +649,7 @@ to be used in addition to QEMU.
 
 	Status: Experimental
 
-### ARM/Non-PCI device passthrough
+### ARM/Non-PCI device passthrough and other iomem configurations
 
     Status: Supported, not security supported
 
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index c7d70e6..c85857e 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1222,7 +1222,7 @@ is given in hexadecimal format and may either be a range, e.g. C<2f8-2ff>
 It is recommended to only use this option for trusted VMs under
 administrator's control.
 
-=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN]", "IOMEM_START,NUM_PAGES[@GFN]", ...]>
+=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", ...]>
 
 Allow auto-translated domains to access specific hardware I/O memory pages.
 
@@ -1233,6 +1233,11 @@ B<GFN> is not specified, the mapping will be performed using B<IOMEM_START>
 as a start in the guest's address space, therefore performing a 1:1 mapping
 by default.
 All of these values must be given in hexadecimal format.
+B<MEMORY_POLICY> for ARM platforms can be:
+  - "arm_devmem" for Device nGRE, the default on ARM
+  - "arm_memory" for Outer Shareable Write-Back Cacheable Memory
+B<MEMORY_POLICY> for x86 platforms can be:
+  - "x86_uc" for Uncacheable Memory, the default on x86
 
 Note that the IOMMU won't be updated with the mappings specified with this
 option. This option therefore should not be used to pass through any
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 482499a..2366331 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -379,6 +379,11 @@
 #define LIBXL_HAVE_BUILDINFO_BOOTLOADER_ARGS 1
 
 /*
+ * Support specifying memory policy information for memory mappings.
+ */
+#define LIBXL_HAVE_MEMORY_POLICY 1
+
+/*
  * LIBXL_HAVE_EXTENDED_VKB indicates that libxl_device_vkb has extended fields:
  *  - unique_id;
  *  - feature_disable_keyboard;
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 89fe80f..a6c5e30 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -415,6 +415,21 @@ static void init_console_info(libxl__gc *gc,
        Only 'channels' when mapped to consoles have a string name. */
 }
 
+static uint32_t libxl__memory_policy_to_xc(libxl_memory_policy c)
+{
+    switch (c) {
+    case LIBXL_MEMORY_POLICY_ARM_MEM_WB:
+        return MEMORY_POLICY_ARM_MEM_WB;
+    case LIBXL_MEMORY_POLICY_ARM_DEV_NGRE:
+        return MEMORY_POLICY_ARM_DEV_nGRE;
+    case LIBXL_MEMORY_POLICY_X86_UC:
+        return MEMORY_POLICY_X86_UC;
+    case LIBXL_MEMORY_POLICY_DEFAULT:
+    default:
+        return MEMORY_POLICY_DEFAULT;
+    }
+}
+
 int libxl__domain_build(libxl__gc *gc,
                         libxl_domain_config *d_config,
                         uint32_t domid,
@@ -1369,9 +1384,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
             ret = ERROR_FAIL;
             goto error_out;
         }
-        ret = xc_domain_memory_mapping(CTX->xch, domid,
+        ret = xc_domain_mem_map_policy(CTX->xch, domid,
                                        io->gfn, io->start,
-                                       io->number, 1);
+                                       io->number, 1,
+                                       libxl__memory_policy_to_xc(
+                                           io->memory_policy));
         if (ret < 0) {
             LOGED(ERROR, domid,
                   "failed to map to domain iomem range %"PRIx64"-%"PRIx64
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index cb4702f..4db8a62 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -272,6 +272,13 @@ libxl_ioport_range = Struct("ioport_range", [
     ("number", uint32),
     ])
 
+libxl_memory_policy = Enumeration("memory_policy", [
+    (0, "default"),
+    (1, "ARM_Dev_nGRE"),
+    (2, "ARM_Mem_WB"),
+    (3, "x86_UC"),
+    ], init_val = "LIBXL_MEMORY_POLICY_DEFAULT")
+
 libxl_iomem_range = Struct("iomem_range", [
     # start host frame number to be mapped to the guest
     ("start", uint64),
@@ -279,6 +286,8 @@ libxl_iomem_range = Struct("iomem_range", [
     ("number", uint64),
     # guest frame number used as a start for the mapping
     ("gfn", uint64, {'init_val': "LIBXL_INVALID_GFN"}),
+    # memory_policy of the memory region
+    ("memory_policy", libxl_memory_policy),
     ])
 
 libxl_vga_interface_info = Struct("vga_interface_info", [
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 352cd21..ed56931 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1883,6 +1883,7 @@ void parse_config_data(const char *config_source,
         }
         for (i = 0; i < num_iomem; i++) {
             int used;
+            const char *mempolicy;
 
             buf = xlu_cfg_get_listitem (iomem, i);
             if (!buf) {
@@ -1895,11 +1896,30 @@ void parse_config_data(const char *config_source,
                          &b_info->iomem[i].start,
                          &b_info->iomem[i].number, &used,
                          &b_info->iomem[i].gfn, &used);
-            if (ret < 2 || buf[used] != '\0') {
+            if (ret < 2) {
                 fprintf(stderr,
                         "xl: Invalid argument parsing iomem: %s\n", buf);
                 exit(1);
             }
+            mempolicy = &buf[used];
+            if (strlen(mempolicy) > 1) {
+                mempolicy++;
+                if (!strcmp(mempolicy, "arm_devmem"))
+                    b_info->iomem[i].memory_policy =
+                        LIBXL_MEMORY_POLICY_ARM_DEV_NGRE;
+                else if (!strcmp(mempolicy, "x86_uc"))
+                    b_info->iomem[i].memory_policy =
+                        LIBXL_MEMORY_POLICY_X86_UC;
+                else if (!strcmp(mempolicy, "arm_memory"))
+                    b_info->iomem[i].memory_policy =
+                        LIBXL_MEMORY_POLICY_ARM_MEM_WB;
+                else {
+                    fprintf(stderr,
+                            "xl: Invalid iomem memory policy parameter: %s\n",
+                            mempolicy);
+                    exit(1);
+                }
+            }
         }
     }
 
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* [PATCH v2 06/10] xen/arm: extend device_tree_for_each_node
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (5 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 05/10] libxl/xl: add memory policy option to iomem Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-07 17:12   ` Julien Grall
  2019-04-30 21:02 ` [PATCH v2 07/10] xen/arm: make process_memory_node a device_tree_node_func Stefano Stabellini
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, julien.grall, sstabellini

Add two new parameters to device_tree_for_each_node: node and depth.
Node is the node to start the search from and depth is the minimum
depth of the search.

Passing 0, 0 preserves the old behavior of iterating over every node
starting from the root.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
Changes in v2:
- new
---
 xen/arch/arm/acpi/boot.c      |  2 +-
 xen/arch/arm/bootfdt.c        | 12 ++++++------
 xen/include/xen/device_tree.h |  5 +++--
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 9b29769..cfc85c2 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -248,7 +248,7 @@ int __init acpi_boot_table_init(void)
      */
     if ( param_acpi_off || ( !param_acpi_force
                              && device_tree_for_each_node(device_tree_flattened,
-                                                   dt_scan_depth1_nodes, NULL)))
+                                 0, 0, dt_scan_depth1_nodes, NULL)))
         goto disable;
 
     /*
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 891b4b6..e7b08ed 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -77,6 +77,8 @@ static u32 __init device_tree_get_u32(const void *fdt, int node,
 /**
  * device_tree_for_each_node - iterate over all device tree nodes
  * @fdt: flat device tree.
+ * @node: node to start the search from
+ * @depth: min depth of the search
  * @func: function to call for each node.
  * @data: data to pass to @func.
  *
@@ -86,17 +88,15 @@ static u32 __init device_tree_get_u32(const void *fdt, int node,
  * returns a value different from 0, that value is returned immediately.
  */
 int __init device_tree_for_each_node(const void *fdt,
+                                     int node, int depth,
                                      device_tree_node_func func,
                                      void *data)
 {
-    int node;
-    int depth;
     u32 address_cells[DEVICE_TREE_MAX_DEPTH];
     u32 size_cells[DEVICE_TREE_MAX_DEPTH];
-    int ret;
+    int ret, min_depth = depth;
 
-    for ( node = 0, depth = 0;
-          node >=0 && depth >= 0;
+    for ( ; node >=0 && depth >= min_depth;
           node = fdt_next_node(fdt, node, &depth) )
     {
         const char *name = fdt_get_name(fdt, node, NULL);
@@ -357,7 +357,7 @@ size_t __init boot_fdt_info(const void *fdt, paddr_t paddr)
 
     add_boot_module(BOOTMOD_FDT, paddr, fdt_totalsize(fdt), false);
 
-    device_tree_for_each_node((void *)fdt, early_scan_node, NULL);
+    device_tree_for_each_node((void *)fdt, 0, 0, early_scan_node, NULL);
     early_print_info();
 
     return fdt_totalsize(fdt);
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 7408a6c..4ff78ba 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -159,8 +159,9 @@ typedef int (*device_tree_node_func)(const void *fdt,
 extern const void *device_tree_flattened;
 
 int device_tree_for_each_node(const void *fdt,
-                                     device_tree_node_func func,
-                                     void *data);
+                              int node, int depth,
+                              device_tree_node_func func,
+                              void *data);
 
 /**
  * dt_unflatten_host_device_tree - Unflatten the host device tree
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* [PATCH v2 07/10] xen/arm: make process_memory_node a device_tree_node_func
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (6 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 06/10] xen/arm: extend device_tree_for_each_node Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-01  9:47   ` Julien Grall
  2019-04-30 21:02 ` [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions Stefano Stabellini
                   ` (3 subsequent siblings)
  11 siblings, 2 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, julien.grall, sstabellini

Change the signature of process_memory_node to match
device_tree_node_func.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
Changes in v2:
- new
---
 xen/arch/arm/bootfdt.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index e7b08ed..b6600ab 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -124,9 +124,10 @@ int __init device_tree_for_each_node(const void *fdt,
     return 0;
 }
 
-static void __init process_memory_node(const void *fdt, int node,
-                                       const char *name,
-                                       u32 address_cells, u32 size_cells)
+static int __init process_memory_node(const void *fdt, int node,
+                                      const char *name, int depth,
+                                      u32 address_cells, u32 size_cells,
+                                      void *data)
 {
     const struct fdt_property *prop;
     int i;
@@ -139,14 +140,14 @@ static void __init process_memory_node(const void *fdt, int node,
     {
         printk("fdt: node `%s': invalid #address-cells or #size-cells",
                name);
-        return;
+        return 0;
     }
 
     prop = fdt_get_property(fdt, node, "reg", NULL);
     if ( !prop )
     {
         printk("fdt: node `%s': missing `reg' property\n", name);
-        return;
+        return 0;
     }
 
     cell = (const __be32 *)prop->data;
@@ -161,6 +162,8 @@ static void __init process_memory_node(const void *fdt, int node,
         bootinfo.mem.bank[bootinfo.mem.nr_banks].size = size;
         bootinfo.mem.nr_banks++;
     }
+
+    return 0;
 }
 
 static void __init process_multiboot_node(const void *fdt, int node,
@@ -293,7 +296,8 @@ static int __init early_scan_node(const void *fdt,
                                   void *data)
 {
     if ( device_tree_node_matches(fdt, node, "memory") )
-        process_memory_node(fdt, node, name, address_cells, size_cells);
+        process_memory_node(fdt, node, name, depth, address_cells, size_cells,
+                            NULL);
     else if ( depth <= 3 && (device_tree_node_compatible(fdt, node, "xen,multiboot-module" ) ||
               device_tree_node_compatible(fdt, node, "multiboot,module" )))
         process_multiboot_node(fdt, node, name, address_cells, size_cells);
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (7 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 07/10] xen/arm: make process_memory_node a device_tree_node_func Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
                     ` (2 more replies)
  2019-04-30 21:02 ` [PATCH v2 09/10] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
                   ` (2 subsequent siblings)
  11 siblings, 3 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, julien.grall, sstabellini

As we parse the device tree in Xen, keep track of the reserved-memory
regions, as they need special treatment (follow-up patches will make
use of the stored information).

Reuse process_memory_node to add reserved-memory regions to the
bootinfo.reserved_mem array. Remove the warning when there is no reg
property in process_memory_node, because that is a normal condition
for reserved-memory.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

---

Not done: create an e820-like structure on ARM.

Changes in v2:
- call process_memory_node from process_reserved_memory_node to avoid
  duplication
---
 xen/arch/arm/bootfdt.c      | 30 ++++++++++++++++++++++--------
 xen/include/asm-arm/setup.h |  1 +
 2 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index b6600ab..9355a6e 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -135,6 +135,8 @@ static int __init process_memory_node(const void *fdt, int node,
     const __be32 *cell;
     paddr_t start, size;
     u32 reg_cells = address_cells + size_cells;
+    struct meminfo *mem;
+    bool reserved = (bool)data;
 
     if ( address_cells < 1 || size_cells < 1 )
     {
@@ -143,29 +145,39 @@ static int __init process_memory_node(const void *fdt, int node,
         return 0;
     }
 
+    if ( reserved )
+        mem = &bootinfo.reserved_mem;
+    else
+        mem = &bootinfo.mem;
+
     prop = fdt_get_property(fdt, node, "reg", NULL);
     if ( !prop )
-    {
-        printk("fdt: node `%s': missing `reg' property\n", name);
         return 0;
-    }
 
     cell = (const __be32 *)prop->data;
     banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
 
-    for ( i = 0; i < banks && bootinfo.mem.nr_banks < NR_MEM_BANKS; i++ )
+    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
     {
         device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
         if ( !size )
             continue;
-        bootinfo.mem.bank[bootinfo.mem.nr_banks].start = start;
-        bootinfo.mem.bank[bootinfo.mem.nr_banks].size = size;
-        bootinfo.mem.nr_banks++;
+        mem->bank[mem->nr_banks].start = start;
+        mem->bank[mem->nr_banks].size = size;
+        mem->nr_banks++;
     }
 
     return 0;
 }
 
+static int __init process_reserved_memory_node(const void *fdt, int node,
+                                               const char *name, int depth,
+                                               u32 address_cells, u32 size_cells)
+{
+    device_tree_for_each_node(fdt, node, depth, process_memory_node, (void*)true);
+    return 0;
+}
+
 static void __init process_multiboot_node(const void *fdt, int node,
                                           const char *name,
                                           u32 address_cells, u32 size_cells)
@@ -297,7 +309,9 @@ static int __init early_scan_node(const void *fdt,
 {
     if ( device_tree_node_matches(fdt, node, "memory") )
         process_memory_node(fdt, node, name, depth, address_cells, size_cells,
-                            NULL);
+                            (void*)false);
+    else if ( device_tree_node_matches(fdt, node, "reserved-memory") )
+        process_reserved_memory_node(fdt, node, name, depth, address_cells, size_cells);
     else if ( depth <= 3 && (device_tree_node_compatible(fdt, node, "xen,multiboot-module" ) ||
               device_tree_node_compatible(fdt, node, "multiboot,module" )))
         process_multiboot_node(fdt, node, name, address_cells, size_cells);
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 48187e1..5c3fc2d 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -66,6 +66,7 @@ struct bootcmdlines {
 
 struct bootinfo {
     struct meminfo mem;
+    struct meminfo reserved_mem;
     struct bootmodules modules;
     struct bootcmdlines cmdlines;
 #ifdef CONFIG_ACPI
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* [PATCH v2 09/10] xen/arm: map reserved-memory regions as normal memory in dom0
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (8 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-07 19:52   ` Julien Grall
  2019-04-30 21:02 ` [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node Stefano Stabellini
  2019-05-16 16:52 ` [PATCH v2 0/10] iomem memory policy Oleksandr
  11 siblings, 2 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, julien.grall, sstabellini

reserved-memory regions should be mapped as normal memory. At the
moment, they get remapped as device memory in dom0 because Xen doesn't
know any better. Add an explicit check for it.

reserved-memory regions overlap with memory nodes. The overlapping
memory is reserved-memory and should be handled accordingly:
consider_modules and dt_unreserved_regions should skip these regions the
same way they are already skipping mem-reserve regions.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
Changes in v2:
- fix commit message: full overlap
- remove check_reserved_memory
- extend consider_modules and dt_unreserved_regions
---
 xen/arch/arm/domain_build.c |  7 +++++++
 xen/arch/arm/setup.c        | 36 +++++++++++++++++++++++++++++++++---
 2 files changed, 40 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 5e7f94c..e5d488d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1408,6 +1408,13 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
                "WARNING: Path %s is reserved, skip the node as we may re-use the path.\n",
                path);
 
+    /*
+     * reserved-memory ranges should be mapped as normal memory in the
+     * p2m.
+     */
+    if ( !strcmp(dt_node_name(node), "reserved-memory") )
+        p2mt = p2m_mmio_direct_c;
+
     res = handle_device(d, node, p2mt);
     if ( res)
         return res;
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index ccb0f18..908b52c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -204,6 +204,19 @@ void __init dt_unreserved_regions(paddr_t s, paddr_t e,
         }
     }
 
+    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
+    {
+        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
+        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
+
+        if ( s < r_e && r_s < e )
+        {
+            dt_unreserved_regions(r_e, e, cb, i+1);
+            dt_unreserved_regions(s, r_s, cb, i+1);
+            return;
+        }
+    }
+
     cb(s, e);
 }
 
@@ -390,7 +403,7 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
 {
     const struct bootmodules *mi = &bootinfo.modules;
     int i;
-    int nr_rsvd;
+    int nr;
 
     s = (s+align-1) & ~(align-1);
     e = e & ~(align-1);
@@ -416,9 +429,9 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
 
     /* Now check any fdt reserved areas. */
 
-    nr_rsvd = fdt_num_mem_rsv(device_tree_flattened);
+    nr = fdt_num_mem_rsv(device_tree_flattened);
 
-    for ( ; i < mi->nr_mods + nr_rsvd; i++ )
+    for ( ; i < mi->nr_mods + nr; i++ )
     {
         paddr_t mod_s, mod_e;
 
@@ -440,6 +453,23 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
             return consider_modules(s, mod_s, size, align, i+1);
         }
     }
+
+    /* Now check for reserved-memory regions */
+    nr += mi->nr_mods;
+    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
+    {
+        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
+        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
+
+        if ( s < r_e && r_s < e )
+        {
+            r_e = consider_modules(r_e, e, size, align, i+1);
+            if ( r_e )
+                return r_e;
+
+            return consider_modules(s, r_s, size, align, i+1);
+        }
+    }
     return e;
 }
 #endif
-- 
1.9.1
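The recursive carve-out performed by dt_unreserved_regions above can be shown with a self-contained sketch (illustrative names and data, not the Xen implementation): every reserved bank that overlaps the range splits it in two, recursion continues past that bank, and the callback only ever sees unreserved sub-ranges.

```c
typedef unsigned long paddr;
struct range { paddr start, size; };

/* Two illustrative reserved banks: [0x100, 0x200) and [0x400, 0x480). */
static struct range rsv[] = { { 0x100, 0x100 }, { 0x400, 0x80 } };
#define NR_RSV (sizeof(rsv) / sizeof(rsv[0]))

static paddr total;                     /* bytes handed to the callback */
static void cb(paddr s, paddr e) { total += e - s; }

/* Simplified model of dt_unreserved_regions: carve the reserved banks
 * out of [s, e) and invoke cb on what remains. `first` skips banks
 * already handled on this recursion path, as i+1 does in the patch. */
static void unreserved(paddr s, paddr e, unsigned int first)
{
    for ( unsigned int i = first; i < NR_RSV; i++ )
    {
        paddr r_s = rsv[i].start, r_e = r_s + rsv[i].size;

        if ( s < r_e && r_s < e )       /* overlap: split around bank i */
        {
            unreserved(r_e, e, i + 1);
            unreserved(s, r_s, i + 1);
            return;
        }
    }

    if ( s < e )                        /* skip empty leftover ranges */
        cb(s, e);
}
```

The guard against empty ranges is an addition of the sketch; the real code may invoke the callback with s == e when a bank abuts the range boundary.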


* [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (9 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 09/10] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
@ 2019-04-30 21:02 ` Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-07 20:15   ` Julien Grall
  2019-05-16 16:52 ` [PATCH v2 0/10] iomem memory policy Oleksandr
  11 siblings, 2 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-04-30 21:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, julien.grall, sstabellini

Reserved memory regions are automatically remapped to dom0. Their device
tree nodes are also added to the dom0 device tree. However, the dom0 memory
node is not currently extended to cover the reserved memory region ranges,
as required by the spec. This commit fixes it.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
 xen/arch/arm/domain_build.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e5d488d..fa1ca20 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -643,7 +643,8 @@ static int __init make_memory_node(const struct domain *d,
 {
     int res, i;
     int reg_size = addrcells + sizecells;
-    int nr_cells = reg_size*kinfo->mem.nr_banks;
+    int nr_cells = reg_size * (kinfo->mem.nr_banks + (is_hardware_domain(d) ?
+                               bootinfo.reserved_mem.nr_banks : 0));
     __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
     __be32 *cells;
 
@@ -673,6 +674,20 @@ static int __init make_memory_node(const struct domain *d,
         dt_child_set_range(&cells, addrcells, sizecells, start, size);
     }
 
+    if ( is_hardware_domain(d) )
+    {
+        for ( i = 0; i < bootinfo.reserved_mem.nr_banks; i++ )
+        {
+            u64 start = bootinfo.reserved_mem.bank[i].start;
+            u64 size = bootinfo.reserved_mem.bank[i].size;
+
+            dt_dprintk("  Bank %d: %#"PRIx64"->%#"PRIx64"\n",
+                    i, start, start + size);
+
+            dt_child_set_range(&cells, addrcells, sizecells, start, size);
+        }
+    }
+
     res = fdt_property(fdt, "reg", reg, nr_cells * sizeof(*reg));
     if ( res )
         return res;
-- 
1.9.1
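The reg property built above packs each bank as addrcells + sizecells big-endian 32-bit cells. A rough stand-in for dt_child_set_range (illustrative, and assuming a little-endian host, so cpu_to_be32 is written as an unconditional byte swap) looks like this:

```c
#include <stdint.h>

typedef uint32_t be32;

/* Unconditional byte swap; adequate for the little-endian hosts this
 * sketch assumes. On a big-endian host this would be the identity. */
static be32 cpu_to_be32(uint32_t v)
{
    return ((v & 0xffu) << 24) | ((v & 0xff00u) << 8) |
           ((v >> 8) & 0xff00u) | (v >> 24);
}

/* Write a 64-bit value as `cells` big-endian 32-bit cells, most
 * significant cell first, advancing the output pointer. */
static void set_cell(be32 **out, int cells, uint64_t val)
{
    for ( int i = cells - 1; i >= 0; i-- )
    {
        (*out)[i] = cpu_to_be32((uint32_t)val);
        val >>= 32;
    }
    *out += cells;
}

/* Illustrative equivalent of dt_child_set_range: one (address, size)
 * pair laid out according to #address-cells and #size-cells. */
static void set_range(be32 **out, int addrcells, int sizecells,
                      uint64_t start, uint64_t size)
{
    set_cell(out, addrcells, start);
    set_cell(out, sizecells, size);
}
```

With addrcells = sizecells = 2, a bank at 0x8_0000_0000 of size 0x4000_0000 occupies four cells, which is why nr_cells in the patch scales with reg_size per bank.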


* Re: [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-04-30 21:02 ` [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-01  9:22   ` Julien Grall
  2019-05-01  9:22     ` [Xen-devel] " Julien Grall
  2019-06-17 21:24     ` Stefano Stabellini
  2019-05-02 15:03   ` Jan Beulich
  2 siblings, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-01  9:22 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: Stefano Stabellini, JBeulich, andrew.cooper3

Hi,

On 30/04/2019 22:02, Stefano Stabellini wrote:
> Now that map_mmio_regions takes a p2mt parameter, there is no need to
> keep "mmio" in the name. The p2mt parameter does a better job at
> expressing what the mapping is about. Let's save the environment 5
> characters at a time.

At least on Arm, what's the difference between guest_physmap_add_entry and
this function now? On x86, how does the user know which function to use?

What actually tells users that they should not use this function for RAM?

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-04-30 21:02 ` [PATCH v2 05/10] libxl/xl: add memory policy option to iomem Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-01  9:42   ` Julien Grall
  2019-05-01  9:42     ` [Xen-devel] " Julien Grall
  2019-06-17 22:32     ` Stefano Stabellini
  2019-06-18 11:15   ` Julien Grall
  2 siblings, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-01  9:42 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: Stefano Stabellini, ian.jackson, wei.liu2, Jan Beulich

Hi,

On 30/04/2019 22:02, Stefano Stabellini wrote:
> Add a new memory policy option for the iomem parameter.
> Possible values are:
> - arm_devmem, device nGRE, the default on ARM
> - arm_memory, WB cacheable memory
> - x86_uc: uncacheable memory, the default on x86
> 
> Store the parameter in a new field in libxl_iomem_range.
> 
> Pass the memory policy option to xc_domain_mem_map_policy.
> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> CC: ian.jackson@eu.citrix.com
> CC: wei.liu2@citrix.com
> ---
> Changes in v2:
> - add #define LIBXL_HAVE_MEMORY_POLICY
> - ability to parse the memory policy parameter even if gfn is not passed
> - rename cache_policy to memory policy
> - rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
> - rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
> - rename memory to arm_memory and devmem to arm_devmem
> - expand the non-security support status to non device passthrough iomem
>    configurations
> - rename iomem options
> - add x86 specific iomem option
> ---
>   SUPPORT.md                  |  2 +-
>   docs/man/xl.cfg.5.pod.in    |  7 ++++++-
>   tools/libxl/libxl.h         |  5 +++++
>   tools/libxl/libxl_create.c  | 21 +++++++++++++++++++--
>   tools/libxl/libxl_types.idl |  9 +++++++++
>   tools/xl/xl_parse.c         | 22 +++++++++++++++++++++-
>   6 files changed, 61 insertions(+), 5 deletions(-)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index e4fb15b..f29a299 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -649,7 +649,7 @@ to be used in addition to QEMU.
>   
>   	Status: Experimental
>   
> -### ARM/Non-PCI device passthrough
> +### ARM/Non-PCI device passthrough and other iomem configurations

I am not sure why iomem is added here. Also, what was the security support
beforehand? Was it supported?

>   
>       Status: Supported, not security supported
>   
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index c7d70e6..c85857e 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1222,7 +1222,7 @@ is given in hexadecimal format and may either be a range, e.g. C<2f8-2ff>
>   It is recommended to only use this option for trusted VMs under
>   administrator's control.
>   
> -=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN]", "IOMEM_START,NUM_PAGES[@GFN]", ...]>
> +=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN],MEMORY_POLICY", "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", ...]>
>   
>   Allow auto-translated domains to access specific hardware I/O memory pages.
>   
> @@ -1233,6 +1233,11 @@ B<GFN> is not specified, the mapping will be performed using B<IOMEM_START>
>   as a start in the guest's address space, therefore performing a 1:1 mapping
>   by default.
>   All of these values must be given in hexadecimal format.
> +B<MEMORY_POLICY> for ARM platforms:
> +  - "arm_devmem" for Device nGRE, the default on ARM

This does not match the current default. At the moment, it is Device nGnRE.

> +  - "arm_memory" for Outer Shareable Write-Back Cacheable Memory

The two names are quite confusing and will make it quite difficult to
introduce any new one. It also makes little sense to use different naming in
xl and libxl. This only adds another level of confusion.

Overall, this is not enough for a user to understand the memory policy. As I
pointed out before, this is not straightforward on Arm, as the resulting
memory attribute will be a combination of stage-2 and stage-1.

We need to explain the implications of using this memory policy and the
consequences of misusing it. This is particularly important as it is not
security supported, so that we don't end up security-supporting something in
the future that doesn't work.

Cheers,

-- 
Julien Grall
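Julien's point about the two translation stages can be sketched very roughly. The model below is my own deliberately coarse simplification (it ignores Device sub-types, shareability, and S2FWB, and is not taken from the patch or the Arm ARM text): the effective attribute of an access is the more restrictive of the stage-1 and stage-2 attributes, so a stage-2 Write-Back mapping only yields cacheable accesses if the guest's stage-1 mapping is also cacheable.

```c
/* Coarse model of Armv8 stage-1/stage-2 attribute combination: the
 * effective attribute is the more restrictive of the two stages.
 * Ordered from most to least restrictive. */
enum attr { DEVICE, NORMAL_NC, NORMAL_WB };

static enum attr combine(enum attr stage1, enum attr stage2)
{
    /* Pick the more restrictive (numerically smaller) attribute. */
    return stage1 < stage2 ? stage1 : stage2;
}
```

This is why the xl documentation needs to spell out that the policy only sets the stage-2 side of the combination.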


* Re: [PATCH v2 07/10] xen/arm: make process_memory_node a device_tree_node_func
  2019-04-30 21:02 ` [PATCH v2 07/10] xen/arm: make process_memory_node a device_tree_node_func Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-01  9:47   ` Julien Grall
  2019-05-01  9:47     ` [Xen-devel] " Julien Grall
  1 sibling, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-05-01  9:47 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini

Hi,

On 30/04/2019 22:02, Stefano Stabellini wrote:
> Change the signature of process_memory_node to match
> device_tree_node_func.

NAck in the current form. If a function returns a value, it should be
checked appropriately, not ignored.

But then, the commit message leads one to think you are going to use
device_tree_node_func here, while in fact that happens in the next patch.
Please update the commit message accordingly.

> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> ---
> Changes in v2:
> - new
> ---
>   xen/arch/arm/bootfdt.c | 16 ++++++++++------
>   1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index e7b08ed..b6600ab 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -124,9 +124,10 @@ int __init device_tree_for_each_node(const void *fdt,
>       return 0;
>   }
>   
> -static void __init process_memory_node(const void *fdt, int node,
> -                                       const char *name,
> -                                       u32 address_cells, u32 size_cells)
> +static int __init process_memory_node(const void *fdt, int node,
> +                                      const char *name, int depth,
> +                                      u32 address_cells, u32 size_cells,
> +                                      void *data)
>   {
>       const struct fdt_property *prop;
>       int i;
> @@ -139,14 +140,14 @@ static void __init process_memory_node(const void *fdt, int node,
>       {
>           printk("fdt: node `%s': invalid #address-cells or #size-cells",
>                  name);
> -        return;
> +        return 0;
>       }
>   
>       prop = fdt_get_property(fdt, node, "reg", NULL);
>       if ( !prop )
>       {
>           printk("fdt: node `%s': missing `reg' property\n", name);
> -        return;
> +        return 0;
>       }
>   
>       cell = (const __be32 *)prop->data;
> @@ -161,6 +162,8 @@ static void __init process_memory_node(const void *fdt, int node,
>           bootinfo.mem.bank[bootinfo.mem.nr_banks].size = size;
>           bootinfo.mem.nr_banks++;
>       }
> +
> +    return 0;
>   }
>   
>   static void __init process_multiboot_node(const void *fdt, int node,
> @@ -293,7 +296,8 @@ static int __init early_scan_node(const void *fdt,
>                                     void *data)
>   {
>       if ( device_tree_node_matches(fdt, node, "memory") )
> -        process_memory_node(fdt, node, name, address_cells, size_cells);
> +        process_memory_node(fdt, node, name, depth, address_cells, size_cells,
> +                            NULL);
>       else if ( depth <= 3 && (device_tree_node_compatible(fdt, node, "xen,multiboot-module" ) ||
>                 device_tree_node_compatible(fdt, node, "multiboot,module" )))
>           process_multiboot_node(fdt, node, name, address_cells, size_cells);
> 

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions
  2019-04-30 21:02 ` [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-01 10:03   ` Julien Grall
  2019-05-01 10:03     ` [Xen-devel] " Julien Grall
  2019-06-21 23:47     ` Stefano Stabellini
  2019-05-07 17:21   ` Julien Grall
  2 siblings, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-01 10:03 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini

Hi Stefano,

On 30/04/2019 22:02, Stefano Stabellini wrote:
> As we parse the device tree in Xen, keep track of the reserved-memory
> regions as they need special treatment (follow-up patches will make use
> of the stored information.)
> 
> Reuse process_memory_node to add reserved-memory regions to the
> bootinfo.reserved_mem array. Remove the warning if there is no reg in
> process_memory_node because it is a normal condition for
> reserved-memory.

And it is not a normal condition for /memory... So this argument alone is not
sufficient to justify dropping the warning for /memory.

Rather than trying to re-purpose process_memory_node, I would prefer that you
factor out the parsing of "reg" and then provide two functions (one for /memory
and one for /reserved-memory).

The parsing function would return an error if "reg" is not present;
/reserved-memory can then ignore that error, while /memory turns it into a warning.
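
A minimal sketch of the split being suggested could look like this (the helper
name, the simplified meminfo type, and the stub body are illustrative stand-ins,
not the actual Xen code):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-in for Xen's struct meminfo. */
struct meminfo { int nr_banks; };

/*
 * Common "reg" parser.  Returns -ENOENT when the property is absent so
 * each caller can decide how serious that is.  The reg_prop pointer is
 * only a stand-in; the real code would call fdt_get_property().
 */
static int device_tree_get_meminfo(const void *reg_prop, struct meminfo *mem)
{
    if ( reg_prop == NULL )
        return -ENOENT;
    mem->nr_banks++;            /* real code would parse the cells here */
    return 0;
}

/* /memory: a missing "reg" deserves a warning, but is not fatal. */
static int process_memory_node(const void *reg_prop, struct meminfo *mem)
{
    int rc = device_tree_get_meminfo(reg_prop, mem);

    if ( rc == -ENOENT )
        /* printk("fdt: node ...: missing `reg' property\n"); */
        return 0;
    return rc;
}

/* /reserved-memory: a missing "reg" is a normal condition, stay silent. */
static int process_reserved_memory_node(const void *reg_prop,
                                        struct meminfo *mem)
{
    int rc = device_tree_get_meminfo(reg_prop, mem);

    return rc == -ENOENT ? 0 : rc;
}
```

With this shape, the warning lives only in the /memory wrapper and the shared
parsing stays in one place.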

> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> 
> ---
> 
> Not done: create an e820-like structure on ARM.
> 
> Changes in v2:
> - call process_memory_node from process_reserved_memory_node to avoid
>    duplication
> ---
>   xen/arch/arm/bootfdt.c      | 30 ++++++++++++++++++++++--------
>   xen/include/asm-arm/setup.h |  1 +
>   2 files changed, 23 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index b6600ab..9355a6e 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -135,6 +135,8 @@ static int __init process_memory_node(const void *fdt, int node,
>       const __be32 *cell;
>       paddr_t start, size;
>       u32 reg_cells = address_cells + size_cells;
> +    struct meminfo *mem;
> +    bool reserved = (bool)data;
>   
>       if ( address_cells < 1 || size_cells < 1 )
>       {
> @@ -143,29 +145,39 @@ static int __init process_memory_node(const void *fdt, int node,
>           return 0;
>       }
>   
> +    if ( reserved )
> +        mem = &bootinfo.reserved_mem;
> +    else
> +        mem = &bootinfo.mem;
> +
>       prop = fdt_get_property(fdt, node, "reg", NULL);
>       if ( !prop )
> -    {
> -        printk("fdt: node `%s': missing `reg' property\n", name);
>           return 0;
> -    }
>   
>       cell = (const __be32 *)prop->data;
>       banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
>   
> -    for ( i = 0; i < banks && bootinfo.mem.nr_banks < NR_MEM_BANKS; i++ )
> +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )

As I pointed out on v1, this is pretty fragile. While ignoring a /memory bank
is fine if we have no more space, for /reserved-memory this may mean handing
the regions to the Xen allocator, with the consequences we all know.

If you split the function properly, then you will be able to treat
reserved-memory regions and memory differently.
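
Under the same illustrative assumptions, the stricter overflow handling being
asked for might distinguish the two node types roughly like so (NR_MEM_BANKS's
value and the choice of error code are hypothetical here):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define NR_MEM_BANKS 4          /* illustrative value only */

struct meminfo { int nr_banks; };

/*
 * Dropping a /memory bank on overflow just wastes some RAM, but
 * dropping a /reserved-memory bank would let the allocator hand out
 * memory that must not be touched -- so only the latter is fatal.
 */
static int add_bank(struct meminfo *mem, bool reserved)
{
    if ( mem->nr_banks == NR_MEM_BANKS )
        return reserved ? -ENOSPC : 0;
    mem->nr_banks++;
    return 0;
}
```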

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions
  2019-04-30 21:02 ` [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-02 14:59   ` Jan Beulich
  2019-05-02 14:59     ` [Xen-devel] " Jan Beulich
  2019-05-02 18:49     ` Stefano Stabellini
  2019-05-15 13:39   ` Oleksandr
  2 siblings, 2 replies; 86+ messages in thread
From: Jan Beulich @ 2019-05-02 14:59 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, xen-devel

>>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -79,8 +79,11 @@ static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
>  
>      for ( ; ; )
>      {
> -        rc = map ?   map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn))
> -                 : unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
> +        if ( map )
> +            rc = map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn),
> +                                  p2m_mmio_direct);
> +        else
> +            rc = unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));

May I ask that you leave alone the use of the conditional
operator here, and _just_ add the new argument?

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2264,12 +2264,16 @@ static unsigned int mmio_order(const struct domain *d,
>  int map_mmio_regions(struct domain *d,
>                       gfn_t start_gfn,
>                       unsigned long nr,
> -                     mfn_t mfn)
> +                     mfn_t mfn,
> +                     p2m_type_t p2mt)
>  {
>      int ret = 0;
>      unsigned long i;
>      unsigned int iter, order;
>  
> +    if ( p2mt != p2m_mmio_direct )
> +        return -EOPNOTSUPP;

Considering this and ...

> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -927,6 +927,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          unsigned long nr_mfns = op->u.memory_mapping.nr_mfns;
>          unsigned long mfn_end = mfn + nr_mfns - 1;
>          int add = op->u.memory_mapping.add_mapping;
> +        p2m_type_t p2mt;
>  
>          ret = -EINVAL;
>          if ( mfn_end < mfn || /* wrap? */
> @@ -939,6 +940,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          /* Must break hypercall up as this could take a while. */
>          if ( nr_mfns > 64 )
>              break;
> +
> +        p2mt = p2m_mmio_direct_dev;
> +#else
> +        p2mt = p2m_mmio_direct;
>  #endif

... this, is there really value in adding the new parameter for
x86? A wrapper macro of the same name could be used to
strip the new last argument at all call sites (current and future
ones).
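
The wrapper-macro idea could look roughly like the sketch below (the
x86_map_mmio_regions name and stub body are made up for illustration; the real
x86 code would keep its existing four-argument implementation):

```c
#include <assert.h>

typedef unsigned int p2m_type_t;
enum { p2m_mmio_direct = 1 };

/* Stand-in for the existing four-argument x86 implementation. */
static int x86_map_mmio_regions(int d, unsigned long gfn,
                                unsigned long nr, unsigned long mfn)
{
    (void)d; (void)gfn; (void)nr; (void)mfn;
    return 0;   /* pretend the mapping succeeded */
}

/*
 * x86 only ever maps MMIO as p2m_mmio_direct, so a same-named macro can
 * strip the new p2mt argument at every call site, current and future.
 */
#define map_mmio_regions(d, gfn, nr, mfn, p2mt) \
    x86_map_mmio_regions(d, gfn, nr, mfn)
```

Callers then keep the common five-argument signature while the x86 side never
sees the extra parameter.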

Jan




* Re: [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-04-30 21:02 ` [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-01  9:22   ` Julien Grall
@ 2019-05-02 15:03   ` Jan Beulich
  2019-05-02 15:03     ` [Xen-devel] " Jan Beulich
  2019-05-02 18:55     ` Stefano Stabellini
  2 siblings, 2 replies; 86+ messages in thread
From: Jan Beulich @ 2019-05-02 15:03 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, xen-devel

>>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> Now that map_mmio_regions takes a p2mt parameter, there is no need to
> keep "mmio" in the name. The p2mt parameter does a better job at
> expressing what the mapping is about. Let's save the environment 5
> characters at a time.

But as per the cover letter the purpose is to allow mapping
iomem (which I take is just an alternative term for MMIO).
Even if that's misleading, {,un}map_regions() is a little too
unspecific for my taste. At which point at least the
environment saving argument goes away ;-)

As to the series as a whole, I guess you first want to come
to an agreement with Julien. Only then it'll make sense to
actually review the changes, I think.

Jan




* Re: [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-04-30 21:02 ` [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-02 15:12   ` Jan Beulich
  2019-05-02 15:12     ` [Xen-devel] " Jan Beulich
  2019-06-17 21:28     ` Stefano Stabellini
  2019-05-07 16:41   ` Julien Grall
  2019-05-15 14:40   ` Oleksandr
  3 siblings, 2 replies; 86+ messages in thread
From: Jan Beulich @ 2019-05-02 15:12 UTC (permalink / raw)
  To: Andrew Cooper, Stefano Stabellini
  Cc: xen-devel, Julien Grall, Stefano Stabellini

>>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
>  */
>  #define DPCI_ADD_MAPPING         1
>  #define DPCI_REMOVE_MAPPING      0
> +/*
> + * Default memory policy. Corresponds to:
> + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
> + * x86: MEMORY_POLICY_X86_UC
> + */
> +#define MEMORY_POLICY_DEFAULT    0
> +/* x86 only. Memory type UNCACHABLE */
> +#define MEMORY_POLICY_X86_UC     0

I'm afraid this may end up misleading, as on NPT and in
shadow mode we use UC- instead of UC afaics. Andrew,
do you have an opinion either way what exactly should
be stated here?

Jan




* Re: [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions
  2019-05-02 14:59   ` Jan Beulich
  2019-05-02 14:59     ` [Xen-devel] " Jan Beulich
@ 2019-05-02 18:49     ` Stefano Stabellini
  2019-05-02 18:49       ` [Xen-devel] " Stefano Stabellini
  1 sibling, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-05-02 18:49 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini,
	Stefano Stabellini, xen-devel

On Thu, 2 May 2019, Jan Beulich wrote:
> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -79,8 +79,11 @@ static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
> >  
> >      for ( ; ; )
> >      {
> > -        rc = map ?   map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn))
> > -                 : unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
> > +        if ( map )
> > +            rc = map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn),
> > +                                  p2m_mmio_direct);
> > +        else
> > +            rc = unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
> 
> May I ask that you leave alone the use of the conditional
> operator here, and _just_ add the new argument?

Yes, I can do that. This change is a leftover from the way it was done in the
previous version of the series; it doesn't make sense anymore.


> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -2264,12 +2264,16 @@ static unsigned int mmio_order(const struct domain *d,
> >  int map_mmio_regions(struct domain *d,
> >                       gfn_t start_gfn,
> >                       unsigned long nr,
> > -                     mfn_t mfn)
> > +                     mfn_t mfn,
> > +                     p2m_type_t p2mt)
> >  {
> >      int ret = 0;
> >      unsigned long i;
> >      unsigned int iter, order;
> >  
> > +    if ( p2mt != p2m_mmio_direct )
> > +        return -EOPNOTSUPP;
> 
> Considering this and ...
> 
> > --- a/xen/common/domctl.c
> > +++ b/xen/common/domctl.c
> > @@ -927,6 +927,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >          unsigned long nr_mfns = op->u.memory_mapping.nr_mfns;
> >          unsigned long mfn_end = mfn + nr_mfns - 1;
> >          int add = op->u.memory_mapping.add_mapping;
> > +        p2m_type_t p2mt;
> >  
> >          ret = -EINVAL;
> >          if ( mfn_end < mfn || /* wrap? */
> > @@ -939,6 +940,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >          /* Must break hypercall up as this could take a while. */
> >          if ( nr_mfns > 64 )
> >              break;
> > +
> > +        p2mt = p2m_mmio_direct_dev;
> > +#else
> > +        p2mt = p2m_mmio_direct;
> >  #endif
> 
> ... this, is there really value in adding the new parameter for
> x86? A wrapper macro of the same name could be used to
> strip the new last argument at all call sites (current and future
> ones).
 
Sure, no problem.


* Re: [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-05-02 15:03   ` Jan Beulich
  2019-05-02 15:03     ` [Xen-devel] " Jan Beulich
@ 2019-05-02 18:55     ` Stefano Stabellini
  2019-05-02 18:55       ` [Xen-devel] " Stefano Stabellini
  1 sibling, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-05-02 18:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini,
	Stefano Stabellini, xen-devel

On Thu, 2 May 2019, Jan Beulich wrote:
> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> > Now that map_mmio_regions takes a p2mt parameter, there is no need to
> > keep "mmio" in the name. The p2mt parameter does a better job at
> > expressing what the mapping is about. Let's save the environment 5
> > characters at a time.
> 
> But as per the cover letter the purpose is to allow mapping
> iomem (which I take is just an alternative term for MMIO).
> Even if that's misleading, {,un}map_regions() is a little too
> unspecific for my taste. At which point at least the
> environment saving argument goes away ;-)

Honestly, I am not one to care much about function names. As long as the other
maintainers agree with each other, I am happy to make the required
changes.


> As to the series as a whole, I guess you first want to come
> to an agreement with Julien. Only then it'll make sense to
> actually review the changes, I think.

Fair enough, but I don't think Julien and I have such a big disagreement
on the shape of the series. (He still needs to complete his review of v2
though.)


* Re: [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-04-30 21:02 ` [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-02 15:12   ` Jan Beulich
@ 2019-05-07 16:41   ` Julien Grall
  2019-05-07 16:41     ` [Xen-devel] " Julien Grall
  2019-06-17 22:43     ` Stefano Stabellini
  2019-05-15 14:40   ` Oleksandr
  3 siblings, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-07 16:41 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: Stefano Stabellini, JBeulich, andrew.cooper3

Hi Stefano,

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> Reuse the existing padding field to pass memory policy information.  On

NIT: I know that some developers like using two spaces after a full
stop. I don't mind if you use that style, but please at least be
consistent within the commit message.

> Arm, the caller can specify whether the memory should be mapped as
> device nGRE, which is the default and the only possibility today, or

I am afraid this is not correct. The default one is Device-nGnRE (it is
called Device Memory on Armv7).

> cacheable memory write-back. On x86, the only option is uncachable. The
> current behavior becomes the default (numerically '0').
> 
> On ARM, map device nGRE as p2m_mmio_direct_dev (as it is already done
> today) and WB cacheable memory as p2m_mmio_direct_c.

As I pointed out in v1, the wording is confusing. The resulting memory
attribute will be a combination of the stage-1 and stage-2 memory
attributes. It will actually be whichever is the strongest of the two
stages' attributes. You can see the stage-2 attributes as a way to give
the guest more or less freedom to configure the attributes itself.

The commit message and all documentation should reflect that, to
avoid misuse of the new option.
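
As a rough model of that combining rule (a deliberate simplification of the Arm
architecture's stage-1/stage-2 attribute combination; the real rules are more
fine-grained than three ordered levels), the effective attribute is the more
restrictive of the two stages:

```c
#include <assert.h>

/* Ordered from most to least restrictive -- a simplification. */
enum mem_attr { ATTR_DEVICE = 0, ATTR_NORMAL_NC = 1, ATTR_NORMAL_WB = 2 };

/*
 * The effective attribute can never be weaker than what either stage
 * allows: the stage-2 attribute caps the freedom stage 1 (the guest)
 * has to configure its own mappings.
 */
static enum mem_attr combine_stages(enum mem_attr s1, enum mem_attr s2)
{
    return s1 < s2 ? s1 : s2;
}
```

So, in this model, mapping a region cacheable at stage 2 does not force it
cacheable; it merely permits the guest to make it so at stage 1.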

> 
> On x86, return error if the memory policy requested is not
> MEMORY_POLICY_X86_UC.
> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> CC: JBeulich@suse.com
> CC: andrew.cooper3@citrix.com
> 
> ---
> Changes in v2:
> - rebase
> - use p2m_mmio_direct_c
> - use EOPNOTSUPP
> - rename cache_policy to memory policy
> - rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
> - rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
> - add MEMORY_POLICY_X86_UC
> - add MEMORY_POLICY_DEFAULT and use it
> ---
>   xen/common/domctl.c         | 23 +++++++++++++++++++++--
>   xen/include/public/domctl.h | 14 +++++++++++++-
>   2 files changed, 34 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 140f979..9f62ead 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -928,6 +928,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           unsigned long mfn_end = mfn + nr_mfns - 1;
>           int add = op->u.memory_mapping.add_mapping;
>           p2m_type_t p2mt;
> +        uint32_t memory_policy = op->u.memory_mapping.memory_policy;
>   
>           ret = -EINVAL;
>           if ( mfn_end < mfn || /* wrap? */
> @@ -958,9 +959,27 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           if ( add )
>           {
>               printk(XENLOG_G_DEBUG
> -                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> -                   d->domain_id, gfn, mfn, nr_mfns);
> +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx cache=%u\n",
> +                   d->domain_id, gfn, mfn, nr_mfns, memory_policy);
>   
> +            switch ( memory_policy )
> +            {
> +#ifdef CONFIG_ARM
> +                case MEMORY_POLICY_ARM_MEM_WB:
> +                    p2mt = p2m_mmio_direct_c;
> +                    break;
> +                case MEMORY_POLICY_ARM_DEV_nGRE:
> +                    p2mt = p2m_mmio_direct_dev;
> +                    break;
> +#endif
> +#ifdef CONFIG_X86
> +                case MEMORY_POLICY_X86_UC:
> +                    p2mt = p2m_mmio_direct;
> +                    break;
> +#endif
> +                default:
> +                    return -EOPNOTSUPP;
> +            }
>               ret = map_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn), p2mt);
>               if ( ret < 0 )
>                   printk(XENLOG_G_WARNING
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 19486d5..9330387 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
>   */
>   #define DPCI_ADD_MAPPING         1
>   #define DPCI_REMOVE_MAPPING      0
> +/*
> + * Default memory policy. Corresponds to:
> + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
> + * x86: MEMORY_POLICY_X86_UC
> + */
> +#define MEMORY_POLICY_DEFAULT    0
> +/* x86 only. Memory type UNCACHABLE */
> +#define MEMORY_POLICY_X86_UC     0
> +/* Arm only. Outer Shareable, Device-nGRE memory */

Device-nGRE is an Armv8 term. You might want to also specify the Armv7 
name in parenthesis to help the user.

> +#define MEMORY_POLICY_ARM_DEV_nGRE       0
> +/* Arm only. Outer Shareable, Outer/Inner Write-Back Cacheable memory */
> +#define MEMORY_POLICY_ARM_MEM_WB         1

I am wondering whether we should put Arm (resp. x86) defines under an 
ifdef arm (resp. x86). Do you see any use in the common toolstack code 
of those #ifdef?

>   struct xen_domctl_memory_mapping {
>       uint64_aligned_t first_gfn; /* first page (hvm guest phys page) in range */
>       uint64_aligned_t first_mfn; /* first page (machine page) in range */
>       uint64_aligned_t nr_mfns;   /* number of pages in range (>0) */
>       uint32_t add_mapping;       /* add or remove mapping */
> -    uint32_t padding;           /* padding for 64-bit aligned structure */
> +    uint32_t memory_policy;      /* cacheability of the memory mapping */

 From a quick look at libxc, it seems the padding field will not be 
initialized to 0 (aka MEMORY_DEFAULT_POLICY). As the libxc support is 
added in a follow-up patch, I think you want to ensure memory_policy is 
equal to MEMORY_DEFAULT_POLICY in libxc. So there are no unexpected 
behavior during bisection or this patch gets applied before the rest.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-05-07 16:41   ` Julien Grall
@ 2019-05-07 16:41     ` " Julien Grall
  2019-06-17 22:43     ` Stefano Stabellini
  1 sibling, 0 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-07 16:41 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: Stefano Stabellini, JBeulich, andrew.cooper3

Hi Stefano,

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> Reuse the existing padding field to pass memory policy information.  On

NIT: I know that some developers like using two spaces after a 
sentence-ending period. I don't mind if you use that style, but please 
at least be consistent within the commit message.

> Arm, the caller can specify whether the memory should be mapped as
> device nGRE, which is the default and the only possibility today, or

I am afraid this is not correct. The default one is Device-nGnRE (it is 
called Device Memory on Armv7).

> cacheable memory write-back. On x86, the only option is uncachable. The
> current behavior becomes the default (numerically '0').
> 
> On ARM, map device nGRE as p2m_mmio_direct_dev (as it is already done
> today) and WB cacheable memory as p2m_mmio_direct_c.

As I pointed out in v1, the wording is confusing. The resulting memory 
attribute will be a combination of the stage-1 and stage-2 memory 
attributes: effectively, whichever of the two stages' attributes is the 
strongest. You can see the stage-2 attributes as a way to give the 
guest more or less freedom to configure the attributes itself.

The commit message and all documentation should reflect that, to 
avoid misuse of the new option.

> 
> On x86, return error if the memory policy requested is not
> MEMORY_POLICY_X86_UC.
> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> CC: JBeulich@suse.com
> CC: andrew.cooper3@citrix.com
> 
> ---
> Changes in v2:
> - rebase
> - use p2m_mmio_direct_c
> - use EOPNOTSUPP
> - rename cache_policy to memory policy
> - rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
> - rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
> - add MEMORY_POLICY_X86_UC
> - add MEMORY_POLICY_DEFAULT and use it
> ---
>   xen/common/domctl.c         | 23 +++++++++++++++++++++--
>   xen/include/public/domctl.h | 14 +++++++++++++-
>   2 files changed, 34 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 140f979..9f62ead 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -928,6 +928,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           unsigned long mfn_end = mfn + nr_mfns - 1;
>           int add = op->u.memory_mapping.add_mapping;
>           p2m_type_t p2mt;
> +        uint32_t memory_policy = op->u.memory_mapping.memory_policy;
>   
>           ret = -EINVAL;
>           if ( mfn_end < mfn || /* wrap? */
> @@ -958,9 +959,27 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           if ( add )
>           {
>               printk(XENLOG_G_DEBUG
> -                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> -                   d->domain_id, gfn, mfn, nr_mfns);
> +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx cache=%u\n",
> +                   d->domain_id, gfn, mfn, nr_mfns, memory_policy);
>   
> +            switch ( memory_policy )
> +            {
> +#ifdef CONFIG_ARM
> +                case MEMORY_POLICY_ARM_MEM_WB:
> +                    p2mt = p2m_mmio_direct_c;
> +                    break;
> +                case MEMORY_POLICY_ARM_DEV_nGRE:
> +                    p2mt = p2m_mmio_direct_dev;
> +                    break;
> +#endif
> +#ifdef CONFIG_X86
> +                case MEMORY_POLICY_X86_UC:
> +                    p2mt = p2m_mmio_direct;
> +                    break;
> +#endif
> +                default:
> +                    return -EOPNOTSUPP;
> +            }
>               ret = map_regions(d, _gfn(gfn), nr_mfns, _mfn(mfn), p2mt);
>               if ( ret < 0 )
>                   printk(XENLOG_G_WARNING
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 19486d5..9330387 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
>   */
>   #define DPCI_ADD_MAPPING         1
>   #define DPCI_REMOVE_MAPPING      0
> +/*
> + * Default memory policy. Corresponds to:
> + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
> + * x86: MEMORY_POLICY_X86_UC
> + */
> +#define MEMORY_POLICY_DEFAULT    0
> +/* x86 only. Memory type UNCACHABLE */
> +#define MEMORY_POLICY_X86_UC     0
> +/* Arm only. Outer Shareable, Device-nGRE memory */

Device-nGRE is an Armv8 term. You might want to also specify the Armv7 
name in parentheses to help the user.

> +#define MEMORY_POLICY_ARM_DEV_nGRE       0
> +/* Arm only. Outer Shareable, Outer/Inner Write-Back Cacheable memory */
> +#define MEMORY_POLICY_ARM_MEM_WB         1

I am wondering whether we should put the Arm (resp. x86) defines under 
an ifdef for arm (resp. x86). Do you see any use of those defines in 
the common toolstack code?

>   struct xen_domctl_memory_mapping {
>       uint64_aligned_t first_gfn; /* first page (hvm guest phys page) in range */
>       uint64_aligned_t first_mfn; /* first page (machine page) in range */
>       uint64_aligned_t nr_mfns;   /* number of pages in range (>0) */
>       uint32_t add_mapping;       /* add or remove mapping */
> -    uint32_t padding;           /* padding for 64-bit aligned structure */
> +    uint32_t memory_policy;      /* cacheability of the memory mapping */

From a quick look at libxc, it seems the padding field will not be 
initialized to 0 (aka MEMORY_POLICY_DEFAULT). As the libxc support is 
added in a follow-up patch, I think you want to ensure memory_policy is 
set to MEMORY_POLICY_DEFAULT in libxc, so there is no unexpected 
behavior during bisection or if this patch gets applied before the rest.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v2 06/10] xen/arm: extend device_tree_for_each_node
  2019-04-30 21:02 ` [PATCH v2 06/10] xen/arm: extend device_tree_for_each_node Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-07 17:12   ` Julien Grall
  2019-05-07 17:12     ` [Xen-devel] " Julien Grall
  1 sibling, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-05-07 17:12 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini

Hi Stefano,

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> Add two new paramters to device_tree_for_each_node: node and depth.

NIT: s/paramters/parameters/

> Node is the node to start the search from and depth is the min depth of
> the search.
> 
> Passing 0, 0 triggers the old behavior.

It would be good to explain in the commit message why we need this.

> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> ---
> Changes in v2:
> - new
> ---
>   xen/arch/arm/acpi/boot.c      |  2 +-
>   xen/arch/arm/bootfdt.c        | 12 ++++++------
>   xen/include/xen/device_tree.h |  5 +++--
>   3 files changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 9b29769..cfc85c2 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -248,7 +248,7 @@ int __init acpi_boot_table_init(void)
>        */
>       if ( param_acpi_off || ( !param_acpi_force
>                                && device_tree_for_each_node(device_tree_flattened,
> -                                                   dt_scan_depth1_nodes, NULL)))
> +                                 0, 0, dt_scan_depth1_nodes, NULL)))
>           goto disable;
>   
>       /*
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 891b4b6..e7b08ed 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -77,6 +77,8 @@ static u32 __init device_tree_get_u32(const void *fdt, int node,
>   /**
>    * device_tree_for_each_node - iterate over all device tree nodes
>    * @fdt: flat device tree.
> + * @node: node to start the search from
> + * @depth: min depth of the search

The interface is not clear: which node is it, the parent node or the 
first child?

Similarly, which depth is meant? For that matter, is the depth really 
necessary? You basically want to browse all the children of the parent 
node.

>    * @func: function to call for each node.
>    * @data: data to pass to @func.
>    *
> @@ -86,17 +88,15 @@ static u32 __init device_tree_get_u32(const void *fdt, int node,
>    * returns a value different from 0, that value is returned immediately.
>    */
>   int __init device_tree_for_each_node(const void *fdt,
> +                                     int node, int depth,
>                                        device_tree_node_func func,
>                                        void *data)
>   {
> -    int node;
> -    int depth;
>       u32 address_cells[DEVICE_TREE_MAX_DEPTH];
>       u32 size_cells[DEVICE_TREE_MAX_DEPTH];
> -    int ret;
> +    int ret, min_depth = depth;
>   
> -    for ( node = 0, depth = 0;
> -          node >=0 && depth >= 0;
> +    for ( ; node >=0 && depth >= min_depth;

NIT: While you modify the code, can you please add the missing space 
between > and 0?

Also, the code below looks at {address, size}_cells[depth - 1]. On the 
first iteration, those entries will not have been initialized and will 
contain garbage. Note that with my suggestion of dropping the depth 
parameter, the address/size cells would still be wrongly initialized.

>             node = fdt_next_node(fdt, node, &depth) )
>       {
>           const char *name = fdt_get_name(fdt, node, NULL);
> @@ -357,7 +357,7 @@ size_t __init boot_fdt_info(const void *fdt, paddr_t paddr)
>   
>       add_boot_module(BOOTMOD_FDT, paddr, fdt_totalsize(fdt), false);
>   
> -    device_tree_for_each_node((void *)fdt, early_scan_node, NULL);
> +    device_tree_for_each_node((void *)fdt, 0, 0, early_scan_node, NULL);
>       early_print_info();
>   
>       return fdt_totalsize(fdt);
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 7408a6c..4ff78ba 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -159,8 +159,9 @@ typedef int (*device_tree_node_func)(const void *fdt,
>   extern const void *device_tree_flattened;
>   
>   int device_tree_for_each_node(const void *fdt,
> -                                     device_tree_node_func func,
> -                                     void *data);
> +                              int node, int depth,
> +                              device_tree_node_func func,
> +                              void *data);
>   
>   /**
>    * dt_unflatten_host_device_tree - Unflatten the host device tree
> 

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions
  2019-04-30 21:02 ` [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-01 10:03   ` Julien Grall
@ 2019-05-07 17:21   ` Julien Grall
  2019-05-07 17:21     ` [Xen-devel] " Julien Grall
  2 siblings, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-05-07 17:21 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini

Hi,

Some more review on this patch.

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> As we parse the device tree in Xen, keep track of the reserved-memory
> regions as they need special treatment (follow-up patches will make use
> of the stored information.)
> 
> Reuse process_memory_node to add reserved-memory regions to the
> bootinfo.reserved_mem array. Remove the warning if there is no reg in
> process_memory_node because it is a normal condition for
> reserved-memory.
> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> 
> ---
> 
> Not done: create an e820-like structure on ARM.
> 
> Changes in v2:
> - call process_memory_node from process_reserved_memory_node to avoid
>    duplication
> ---
>   xen/arch/arm/bootfdt.c      | 30 ++++++++++++++++++++++--------
>   xen/include/asm-arm/setup.h |  1 +
>   2 files changed, 23 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index b6600ab..9355a6e 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -135,6 +135,8 @@ static int __init process_memory_node(const void *fdt, int node,
>       const __be32 *cell;
>       paddr_t start, size;
>       u32 reg_cells = address_cells + size_cells;
> +    struct meminfo *mem;
> +    bool reserved = (bool)data;
>   
>       if ( address_cells < 1 || size_cells < 1 )
>       {
> @@ -143,29 +145,39 @@ static int __init process_memory_node(const void *fdt, int node,
>           return 0;
>       }
>   
> +    if ( reserved )
> +        mem = &bootinfo.reserved_mem;
> +    else
> +        mem = &bootinfo.mem;
> +
>       prop = fdt_get_property(fdt, node, "reg", NULL);
>       if ( !prop )
> -    {
> -        printk("fdt: node `%s': missing `reg' property\n", name);
>           return 0;
> -    }
>   
>       cell = (const __be32 *)prop->data;
>       banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
>   
> -    for ( i = 0; i < banks && bootinfo.mem.nr_banks < NR_MEM_BANKS; i++ )
> +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
>       {
>           device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
>           if ( !size )
>               continue;
> -        bootinfo.mem.bank[bootinfo.mem.nr_banks].start = start;
> -        bootinfo.mem.bank[bootinfo.mem.nr_banks].size = size;
> -        bootinfo.mem.nr_banks++;
> +        mem->bank[mem->nr_banks].start = start;
> +        mem->bank[mem->nr_banks].size = size;
> +        mem->nr_banks++;
>       }
>   
>       return 0;
>   }
>   
> +static int __init process_reserved_memory_node(const void *fdt, int node,
> +                                               const char *name, int depth,
> +                                               u32 address_cells, u32 size_cells)
> +{
> +    device_tree_for_each_node(fdt, node, depth, process_memory_node, (void*)true);
> +    return 0;
> +}
> +
>   static void __init process_multiboot_node(const void *fdt, int node,
>                                             const char *name,
>                                             u32 address_cells, u32 size_cells)
> @@ -297,7 +309,9 @@ static int __init early_scan_node(const void *fdt,
>   {
>       if ( device_tree_node_matches(fdt, node, "memory") )
>           process_memory_node(fdt, node, name, depth, address_cells, size_cells,
> -                            NULL);
> +                            (void*)false);
> +    else if ( device_tree_node_matches(fdt, node, "reserved-memory") )
> +        process_reserved_memory_node(fdt, node, name, depth, address_cells, size_cells);

This will match reserved-memory@... or even /foo/reserved-memory. Here, 
we only want to match /reserved-memory.

>       else if ( depth <= 3 && (device_tree_node_compatible(fdt, node, "xen,multiboot-module" ) ||
>                 device_tree_node_compatible(fdt, node, "multiboot,module" )))
>           process_multiboot_node(fdt, node, name, address_cells, size_cells);
> diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
> index 48187e1..5c3fc2d 100644
> --- a/xen/include/asm-arm/setup.h
> +++ b/xen/include/asm-arm/setup.h
> @@ -66,6 +66,7 @@ struct bootcmdlines {
>   
>   struct bootinfo {
>       struct meminfo mem;
> +    struct meminfo reserved_mem;
>       struct bootmodules modules;
>       struct bootcmdlines cmdlines;
>   #ifdef CONFIG_ACPI
> 

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 09/10] xen/arm: map reserved-memory regions as normal memory in dom0
  2019-04-30 21:02 ` [PATCH v2 09/10] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-07 19:52   ` Julien Grall
  2019-05-07 19:52     ` [Xen-devel] " Julien Grall
  1 sibling, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-05-07 19:52 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini

Hi,

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> reserved-memory regions should be mapped as normal memory. At the
> moment, they get remapped as device memory in dom0 because Xen doesn't
> know any better. Add an explicit check for it.

This part matches the title of the patch but...

> 
> reserved-memory regions overlap with memory nodes. The overlapping
> memory is reserved-memory and should be handled accordingly:
> consider_modules and dt_unreserved_regions should skip these regions the
> same way they are already skipping mem-reserve regions.

... this doesn't. They are actually two different things and should be 
handled in separate patches.

> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> ---
> Changes in v2:
> - fix commit message: full overlap
> - remove check_reserved_memory
> - extend consider_modules and dt_unreserved_regions
> ---
>   xen/arch/arm/domain_build.c |  7 +++++++
>   xen/arch/arm/setup.c        | 36 +++++++++++++++++++++++++++++++++---
>   2 files changed, 40 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 5e7f94c..e5d488d 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1408,6 +1408,13 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
>                  "WARNING: Path %s is reserved, skip the node as we may re-use the path.\n",
>                  path);
>   
> +    /*
> +     * reserved-memory ranges should be mapped as normal memory in the
> +     * p2m.
> +     */
> +    if ( !strcmp(dt_node_name(node), "reserved-memory") )
> +        p2mt = p2m_mmio_direct_c;

Do we really need this? The default type is already p2m_mmio_direct_c 
(see default_p2mt).

> +
>       res = handle_device(d, node, p2mt);
>       if ( res)
>           return res;
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index ccb0f18..908b52c 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -204,6 +204,19 @@ void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>           }
>       }
>   
> +    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )

It took me a bit of time to understand why you do i - nr. I think we 
need some comments explaining the new logic.

Longer term (i.e. I will not push for it today :)), I think this code 
would benefit from using an e820-like structure. It would make the code 
clearer and probably more efficient than what we currently have.
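As a standalone illustration (not the Xen code; the list contents are
invented), the single-counter scheme works like this: i first walks the
nr entries of the first list, then continues into the second list, which
is indexed with i - nr, so a restart at i+1 unambiguously identifies
which list, and which entry, to continue from:

```c
#include <assert.h>

/*
 * Model of the indexing in dt_unreserved_regions(): one counter i spans
 * two concatenated lists. Entries 0 <= i < nr come from the first list
 * (the FDT mem-reserve map); entries i >= nr come from the second list
 * (the reserved-memory banks), indexed with i - nr.
 */
#define NR_FIRST  2
#define NR_SECOND 3

static const int first_list[NR_FIRST]   = { 10, 20 };
static const int second_list[NR_SECOND] = { 30, 40, 50 };

/* Visit every entry once, in order, using a single counter. */
static int visit_all(int *out)
{
    const int nr = NR_FIRST;
    int i, n = 0;

    for ( i = 0; i < nr; i++ )            /* first list: plain index */
        out[n++] = first_list[i];

    for ( ; i - nr < NR_SECOND; i++ )     /* second list: i - nr */
        out[n++] = second_list[i - nr];

    return n;
}
```

The reason for one monotonically increasing counter is that the real
function recurses with i+1, so the resume position must work across both
lists.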

> +    {
> +        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
> +        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
> +
> +        if ( s < r_e && r_s < e )
> +        {
> +            dt_unreserved_regions(r_e, e, cb, i+1);
> +            dt_unreserved_regions(s, r_s, cb, i+1);
> +            return;
> +        }
> +    }
> +
>       cb(s, e);
>   }
>   
> @@ -390,7 +403,7 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
>   {
>       const struct bootmodules *mi = &bootinfo.modules;
>       int i;
> -    int nr_rsvd;
> +    int nr;
>   
>       s = (s+align-1) & ~(align-1);
>       e = e & ~(align-1);
> @@ -416,9 +429,9 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
>   
>       /* Now check any fdt reserved areas. */
>   
> -    nr_rsvd = fdt_num_mem_rsv(device_tree_flattened);
> +    nr = fdt_num_mem_rsv(device_tree_flattened);
>   
> -    for ( ; i < mi->nr_mods + nr_rsvd; i++ )
> +    for ( ; i < mi->nr_mods + nr; i++ )
>       {
>           paddr_t mod_s, mod_e;
>   
> @@ -440,6 +453,23 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
>               return consider_modules(s, mod_s, size, align, i+1);
>           }
>       }
> +
> +    /* Now check for reserved-memory regions */
> +    nr += mi->nr_mods;

Similar to the previous function, this needs to be documented.
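Both loops hinge on the same range arithmetic: the standard test for
whether two half-open ranges [s, e) and [r_s, r_e) intersect, followed
by carving the reserved range out of the candidate. A minimal standalone
sketch (types simplified from Xen's paddr_t; not the Xen function
itself):

```c
#include <assert.h>

typedef unsigned long addr_t;   /* stand-in for Xen's paddr_t */

/* [s, e) and [r_s, r_e) overlap iff each starts below the other's end. */
static int ranges_overlap(addr_t s, addr_t e, addr_t r_s, addr_t r_e)
{
    return s < r_e && r_s < e;
}

/*
 * On overlap, the patch retries the two pieces left over after carving
 * out the reserved range: the part below it and the part above it.
 * Empty pieces come out as zero-length ranges.
 */
static void carve(addr_t s, addr_t e, addr_t r_s, addr_t r_e,
                  addr_t *below_e, addr_t *above_s)
{
    *below_e = r_s > s ? r_s : s;   /* [s, below_e) lies below the bank */
    *above_s = r_e < e ? r_e : e;   /* [above_s, e) lies above the bank */
}
```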

> +    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
> +    {
> +        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
> +        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
> +
> +        if ( s < r_e && r_s < e )
> +        {
> +            r_e = consider_modules(r_e, e, size, align, i+1);

Coding style: space before and after the operator. Ideally, the rest of 
the function should be fixed.

> +            if ( r_e )
> +                return r_e;
> +
> +            return consider_modules(s, r_s, size, align, i+1);

Same here.

> +        }
> +    }
>       return e;
>   }
>   #endif
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-04-30 21:02 ` [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
@ 2019-05-07 20:15   ` Julien Grall
  2019-05-07 20:15     ` [Xen-devel] " Julien Grall
  2019-05-10 20:51     ` Stefano Stabellini
  1 sibling, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-07 20:15 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini

Hi Stefano,

On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> Reserved memory regions are automatically remapped to dom0. Their device
> tree nodes are also added to dom0 device tree. However, the dom0 memory
> node is not currently extended to cover the reserved memory regions
> ranges as required by the spec.  This commit fixes it.

AFAICT, this does not cover the problem mentioned by Amit in [1].

But I am still not happy with the approach taken for the reserved-memory 
regions in this series. As I pointed out before, they are just normal 
memory that was reserved for other purposes (CMA, framebuffer...).

Treating them as "device" from Xen POV is a clear abuse of the meaning 
and I don't believe it is a viable solution long term.

Indeed, some of the regions may have a property "reusable" allowing the 
OS to use them until they are claimed by the device driver owning the 
region. I don't know how Linux (or any other OS) is using it today, but 
I don't see what would prevent it from using them as hypercall buffers. 
This would obviously not work because they are not actual RAM from Xen's 
POV.
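For reference, a "reusable" region is described under the device tree's
reserved-memory node along these lines (illustrative fragment only; the
node name and size are made up, the properties come from the
reserved-memory binding):

```dts
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* CMA-style pool: the OS may reuse this memory until the owning
     * driver claims it, because of the "reusable" property. */
    linux,cma {
        compatible = "shared-dma-pool";
        reusable;
        size = <0x0 0x4000000>;     /* 64MB */
    };
};
```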

On a similar topic, because they are normal memory, I don't think 
XEN_DOMCTL_memory_mapping should be able to map reserved-regions. So the 
iomem rangeset should not contain them.

Cheers,

[1] <CABHD4K-z-x=3joJWcOb_x9m7zsjzhskDQweNBr+paLS=PFEY9Q@mail.gmail.com>

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-07 20:15   ` Julien Grall
  2019-05-07 20:15     ` [Xen-devel] " Julien Grall
@ 2019-05-10 20:51     ` Stefano Stabellini
  2019-05-10 20:51       ` [Xen-devel] " Stefano Stabellini
  2019-05-10 21:43       ` Julien Grall
  1 sibling, 2 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-05-10 20:51 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Stefano Stabellini

On Tue, 7 May 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 4/30/19 10:02 PM, Stefano Stabellini wrote:
> > Reserved memory regions are automatically remapped to dom0. Their device
> > tree nodes are also added to dom0 device tree. However, the dom0 memory
> > node is not currently extended to cover the reserved memory regions
> > ranges as required by the spec.  This commit fixes it.
> 
> AFAICT, this does not cover the problem mentioned by Amit in [1].

What do you think is required to fix Amit's problem?


> But I am still not happy with the approach taken for the reserved-memory
> regions in this series. As I pointed out before, they are just normal memory
> that was reserved for other purposes (CMA, framebuffer...).
> 
> Treating them as "device" from Xen POV is a clear abuse of the meaning and I
> don't believe it is a viable solution long term.

If we don't consider "reusable" memory regions as part of the
discussion, the distinction becomes more philosophical than practical:

- Xen is not supposed to use them for anything
- only given them to the VM configured for it

I don't see much of a difference with MMIO regions, except for the
expected pagetable attributes: i.e. cacheable, not-cacheable. But even
in that case, there could be reasonable use cases for non-cacheable
mappings of reserved-memory regions, even if reserved-memory regions are
"normal" memory.

Could you please help me understand why you see them so differently, as
far as to say that "treating them as "device" from Xen POV is a clear
abuse of the meaning"?


> Indeed, some of the regions may have a property "reusable" allowing the OS
> to use them until they are claimed by the device driver owning the region. I
> don't know how Linux (or any other OS) is using it today, but I don't see what
> would prevent it from using them as hypercall buffers. This would obviously not
> work because they are not actual RAM from Xen's POV.

I haven't attempted to handle "reusable" reserved-memory regions
because I don't have a test environment and/or a use-case for them. In
other words, I don't have any "reusable" reserved-memory regions in any
of the boards (Xilinx and not Xilinx) I have access to. I could add a
warning if we find a "reusable" reserved-memory region at boot.
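Such a boot-time warning could look roughly like this (standalone
sketch: the struct and dt_property_read_bool below are simplified
stand-ins for Xen's device tree API, so treat the exact types and
signature as assumptions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Minimal stand-ins for Xen's device tree node and property lookup. */
struct dt_node {
    const char *name;
    bool has_reusable;   /* whether a "reusable" property is present */
};

static bool dt_property_read_bool(const struct dt_node *np,
                                  const char *name)
{
    return !strcmp(name, "reusable") && np->has_reusable;
}

/* Warn when a reserved-memory child is "reusable", since Xen would not
 * let the OS reclaim the region. Returns 1 if a warning was printed. */
static int warn_if_reusable(const struct dt_node *np)
{
    if ( !dt_property_read_bool(np, "reusable") )
        return 0;

    printf("WARNING: reserved-memory region %s is marked reusable, "
           "which is not supported; treating it as fully reserved\n",
           np->name);
    return 1;
}
```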

Nonetheless, if you have a concrete suggestion which doesn't require a
complete rework of this series, I can try to put extra effort to handle
this case even if it is not a benefit to my employer. I am also open to
the possibility of dropping patches 6-10 from the series.

There is also the option of going to devicetree.org to request a new
binding different from reserved-memory. If reserved-memory regions are
expected to be treated as normal memory for all intents and purposes
except for being reserved sometimes, then they might not be the right
bindings to describe Xilinx hardware, which requires fully dedicated
memory regions with both cacheable and non-cacheable mappings for the
purpose of communicating with foreign CPUs.

As a maintainer, even if the approach might be considered not ideal, my
opinion is that this series is still an improvement over what we have
today.


> On a similar topic, because they are normal memory, I don't think
> XEN_DOMCTL_memory_mapping should be able to map reserved-regions. So the iomem
> rangeset should not contain them.

What hypercall do you suggest should be used instead?


> Cheers,
> 
> [1] <CABHD4K-z-x=3joJWcOb_x9m7zsjzhskDQweNBr+paLS=PFEY9Q@mail.gmail.com>
> 
> -- 
> Julien Grall
> 


^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-10 20:51     ` Stefano Stabellini
  2019-05-10 20:51       ` [Xen-devel] " Stefano Stabellini
@ 2019-05-10 21:43       ` Julien Grall
  2019-05-10 21:43         ` [Xen-devel] " Julien Grall
  2019-05-11 12:40         ` Julien Grall
  1 sibling, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-10 21:43 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Stefano Stabellini

Hi,

On 10/05/2019 21:51, Stefano Stabellini wrote:
> On Tue, 7 May 2019, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 4/30/19 10:02 PM, Stefano Stabellini wrote:
>>> Reserved memory regions are automatically remapped to dom0. Their device
>>> tree nodes are also added to dom0 device tree. However, the dom0 memory
>>> node is not currently extended to cover the reserved memory regions
>>> ranges as required by the spec.  This commit fixes it.
>>
>> AFAICT, this does not cover the problem mentioned by Amit in [1].
> 
> What do you think is required to fix Amit's problem?

I haven't fully investigated the problem to be able to answer the 
question here. Although I provided some insights in:

<b293d89c-9ed1-2033-44e5-227643ae1b0c@arm.com>

> 
> 
>> But I am still not happy with the approach taken for the reserved-memory
>> regions in this series. As I pointed out before, they are just normal memory
>> that was reserved for other purposes (CMA, framebuffer...).
>>
>> Treating them as "device" from Xen POV is a clear abuse of the meaning and I
>> don't believe it is a viable solution long term.
> 
> If we don't consider "reusable" memory regions as part of the
> discussion, the distinction becomes more philosophical than practical:
> 
> - Xen is not supposed to use them for anything
>> - Xen only gives them to the VM configured for them
> 
> I don't see much of a difference with MMIO regions, except for the
> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
> in that case, there could be reasonable use cases for non-cacheable
> mappings of reserved-memory regions, even if reserved-memory regions are
> "normal" memory.
> 
> Could you please help me understand why you see them so differently, as
> far as to say that "treating them as "device" from Xen POV is a clear
> abuse of the meaning"?

Obviously if you take half of the picture, then it makes things easier.
However, we are not here to discuss half of the picture but the full one 
(even if at the end you only implement half of it).

>> Indeed, some of the regions may have a property "reusable" allowing the OS
>> to use them until they are claimed by the device driver owning the region. I
>> don't know how Linux (or any other OS) is using it today, but I don't see what
>> would prevent it from using them as hypercall buffers. This would obviously not
>> work because they are not actual RAM from Xen's POV.
> 
> I haven't attempted to handle "reusable" reserved-memory regions
> because I don't have a test environment and/or a use-case for them. In
> other words, I don't have any "reusable" reserved-memory regions in any
> of the boards (Xilinx and not Xilinx) I have access to. I could add a
> warning if we find a "reusable" reserved-memory region at boot.

Don't get me wrong, I am not asking for the implementation now, so a 
warning would be fine here. However, you need at least to show me some 
grounds that re-usable memory can be implemented with your solution, or 
that it is not a concern for Xen at all.

> 
> Nonetheless, if you have a concrete suggestion which doesn't require a
> complete rework of this series, I can try to put extra effort to handle
> this case even if it is not a benefit to my employer. I am also open to
> the possibility of dropping patches 6-10 from the series.

I don't think the series as it is would allow us to support re-usable 
memory. However, I haven't spent enough time to understand how this 
could possibly be dealt with, so I am happy to be proved wrong.

> 
> There is also the option of going to devicetree.org to request a new
> binding different from reserved-memory. If reserved-memory regions are
> expected to be treated as normal memory for all intents and purposes
> except for being reserved sometimes, then they might not be the right
> bindings to describe Xilinx hardware, which requires fully dedicated
> memory regions with both cacheable and non-cacheable mappings for the
> purpose of communicating with foreign CPUs.
> 
>> As a maintainer, even if the approach might be considered not ideal, my
> opinion is that this series is still an improvement over what we have
> today.

Well, yes, it is an improvement compared to what we have today. However, 
I don't think the problem of the reserved-memory regions has been fully 
thought through so far. I am worried that your suggestion is going to 
put us into a corner, making it impossible to expand (e.g. for 
re-usable regions) in the future without breaking backward compatibility.

Maybe your solution is correct and we will be able to expand it for 
re-usable regions, or at least add that in a backward-compatible way. 
But for that, I need a solid explanation from your side that it would be 
possible.

>> On a similar topic, because they are normal memory, I don't think
>> XEN_DOMCTL_memory_mapping should be able to map reserved-regions. So the iomem
>> rangeset should not contain them.
> 
> What hypercall do you suggest should be used instead?

Let's talk about that once we agree on the overall approach for 
reserved-memory.

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-10 21:43       ` Julien Grall
  2019-05-10 21:43         ` [Xen-devel] " Julien Grall
@ 2019-05-11 12:40         ` Julien Grall
  2019-05-11 12:40           ` [Xen-devel] " Julien Grall
  2019-05-20 21:26           ` Stefano Stabellini
  1 sibling, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-11 12:40 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Stefano Stabellini

On 5/10/19 10:43 PM, Julien Grall wrote:
> Hi,
> 
> On 10/05/2019 21:51, Stefano Stabellini wrote:
>> On Tue, 7 May 2019, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 4/30/19 10:02 PM, Stefano Stabellini wrote:
>>>> Reserved memory regions are automatically remapped to dom0. Their 
>>>> device
>>>> tree nodes are also added to dom0 device tree. However, the dom0 memory
>>>> node is not currently extended to cover the reserved memory regions
>>>> ranges as required by the spec.  This commit fixes it.
>>>
>>> AFAICT, this does not cover the problem mentioned by Amit in [1].
>>
>> What do you think is required to fix Amit's problem?
> 
> I haven't fully investigated the problem to be able to answer the 
> question here. Although I provided some insights in:
> 
> <b293d89c-9ed1-2033-44e5-227643ae1b0c@arm.com>
> 
>>
>>
>>> But I am still not happy with the approach taken for the reserved-memory
>>> regions in this series. As I pointed out before, they are just normal
>>> memory that was reserved for other purposes (CMA, framebuffer...).
>>>
>>> Treating them as "device" from Xen's POV is a clear abuse of the
>>> meaning, and I don't believe it is a viable solution long term.
>>
>> If we don't consider "reusable" memory regions as part of the
>> discussion, the distinction becomes more philosophical than practical:
>>
>> - Xen is not supposed to use them for anything
>> - Xen only gives them to the VM configured for them
>>
>> I don't see much of a difference with MMIO regions, except for the
>> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
>> in that case, there could be reasonable use cases for non-cacheable
>> mappings of reserved-memory regions, even if reserved-memory regions are
>> "normal" memory.
>>
>> Could you please help me understand why you see them so differently, as
>> far as to say that "treating them as "device" from Xen POV is a clear
>> abuse of the meaning"?
> 
> Obviously if you take half of the picture, then it makes things easier.
> However, we are not here to discuss half of the picture but the full one 
> (even if at the end you only implement half of it).
> 
>>> Indeed, some of the regions may have a property "reusable" allowing
>>> the OS to use them until they are claimed by the device driver owning
>>> the region. I don't know how Linux (or any other OS) is using it today,
>>> but I don't see what would prevent it from using them as hypercall
>>> buffers. This would obviously not work because they are not actual RAM
>>> from Xen's POV.
>>
>> I haven't attempted to handle "reusable" reserved-memory regions
>> because I don't have a test environment and/or a use-case for them. In
>> other words, I don't have any "reusable" reserved-memory regions in any
>> of the boards (Xilinx and not Xilinx) I have access to. I could add a
>> warning if we find a "reusable" reserved-memory region at boot.
> 
> Don't get me wrong, I am not asking for the implementation now, so a
> warning would be fine here. However, you need at least to show me some
> grounds that reusable memory can be implemented with your solution, or
> that it is not a concern for Xen at all.
> 
>>
>> Nonetheless, if you have a concrete suggestion which doesn't require a
>> complete rework of this series, I can try to put in extra effort to
>> handle this case even if it is not of benefit to my employer. I am also
>> open to the possibility of dropping patches 6-10 from the series.
> I don't think the series as it is would allow us to support reusable 
> memory. However, I haven't spent enough time to understand how this 
> could possibly be dealt with, so I am happy to be proved wrong.

I thought a bit more about this series during the night. I do agree that 
we need to improve the support of reserved-memory today, as we may give 
memory to the allocator that could be exposed to a guest via a different 
method (iomem). So carving the reserved-memory regions out of the memory 
allocator is the right first step.

Now we have to differentiate the hardware domain from the other guests. 
I don't have any objection regarding the way reserved-memory regions are 
mapped to the hardware domain because this is completely internal to 
Xen. However, I have some objections to the current interface for DomU:
    1) It is still unclear how the "reusable" property would fit in that story.
    2) It is definitely not possible for a user to use 'iomem' for a 
reserved-memory region today because the partial device tree doesn't 
allow you to create a /reserved-memory node nor a /memory node.
    3) AFAIK, there is no way to prevent the hardware domain from using 
the reserved region (status = "disabled" would not work).

So, IMHO, the guest support for reserved-memory is not in shape yet. I 
think it would be best if we don't permit reserved-memory regions in the 
iomem rangeset. This would avoid tying us to an interface until we figure 
out the correct plan for guests.
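For illustration only, here is a minimal self-contained sketch of such a check, assuming a reserved-memory bank list along the lines of what this series tracks (the struct layout and function name are made up, not the actual Xen API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's bank-tracking structures. */
struct membank { uint64_t start, size; };
struct meminfo { unsigned int nr_banks; struct membank bank[4]; };

/* Return true if [start, end) overlaps any reserved-memory bank,
 * i.e. the range should be rejected for the iomem rangeset. */
static bool overlaps_reserved_mem(const struct meminfo *resv,
                                  uint64_t start, uint64_t end)
{
    for ( unsigned int i = 0; i < resv->nr_banks; i++ )
    {
        const struct membank *b = &resv->bank[i];

        /* Half-open interval overlap test. */
        if ( start < b->start + b->size && b->start < end )
            return true;
    }

    return false;
}
```

The real implementation would of course hook into wherever the iomem rangeset is populated.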

With that in place, I don't have a strong objection to patches 6-10.

In any case, I think you should clearly spell out in the commit message 
what kinds of reserved-memory regions are supported. For instance, just by 
going through the binding, I have the feeling that these properties are 
not actually supported:
     1) "no-map": it is used to tell the OS not to create a virtual 
mapping of the region as part of its standard mapping of system memory, 
nor to permit speculative access to it under any circumstances other than 
under the control of the device driver using the region. On Arm64, Xen 
will map reserved-memory as part of the xenheap (i.e. the direct map), 
and carving it out of the xenheap would not be sufficient because we use 
1GB blocks for the mapping, so the regions may still be covered. I would 
assume this property is used for memory that needs to be mapped 
non-cacheable, so it is potentially critical, as Xen would map it 
cacheable in the stage-1 hypervisor page tables.
     2) "alloc-ranges": it is used to specify regions of memory where it 
is acceptable to allocate from. This may not play well with the Dom0 
memory allocator.
     3) "reusable": I mention it here only for completeness. My 
understanding is that it could potentially be used for hypercall buffers. 
This needs to be investigated.
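To make the 1GB-block point concrete: even if a "no-map" region is carved out of the xenheap bank list, the 1GB block mapping that contains it still covers the whole region. A small self-contained arithmetic sketch (addresses are illustrative; the helper name is made up):

```c
#include <stdint.h>

#define GB (1ULL << 30)

/* Base address of the 1GB direct-map block containing addr. */
static uint64_t block_base(uint64_t addr)
{
    return addr & ~(GB - 1);
}
```

For example, a no-map region at 0x54000000-0x57000000 falls inside the 1GB block starting at 0x40000000, which the direct map would still map cacheable.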

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions
  2019-04-30 21:02 ` [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-02 14:59   ` Jan Beulich
@ 2019-05-15 13:39   ` Oleksandr
  2019-05-15 13:39     ` [Xen-devel] " Oleksandr
  2 siblings, 1 reply; 86+ messages in thread
From: Oleksandr @ 2019-05-15 13:39 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: Stefano Stabellini, julien.grall, JBeulich, andrew.cooper3


On 01.05.19 00:02, Stefano Stabellini wrote:

Hi, Stefano

> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> index efb6ca9..6adfa55 100644
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -52,7 +52,8 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>            * - {un}map_mmio_regions doesn't support preemption.
>            */
>   
> -        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
> +        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s),
> +                                         p2m_mmio_direct)

I am not really sure whether vPCI is used on ARM, but xen/drivers/vpci/ 
looks like common code.

But, according to the commit description, we should pass 
"p2m_mmio_direct" on x86 and "p2m_mmio_direct_dev" on ARM...

>                         : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
>           if ( rc == 0 )
>           {
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
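One way the common vPCI code could avoid hardcoding the x86 type would be a small per-architecture default. Below is only a sketch of the idea: the helper name is made up, and the enum is a stand-in for Xen's real p2m_type_t, not the actual Xen API.

```c
/* Stand-in for the relevant p2m types. */
enum p2m_type { p2m_mmio_direct, p2m_mmio_direct_dev };

/* Hypothetical per-arch default for direct MMIO mappings. */
static enum p2m_type p2m_default_mmio_type(void)
{
#if defined(__arm__) || defined(__aarch64__)
    return p2m_mmio_direct_dev;   /* Device memory attributes on Arm */
#else
    return p2m_mmio_direct;       /* UC on x86 */
#endif
}
```

Common callers such as map_range() could then pass p2m_default_mmio_type() instead of a hardcoded value.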

-- 
Regards,

Oleksandr Tyshchenko



* Re: [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-04-30 21:02 ` [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy Stefano Stabellini
                     ` (2 preceding siblings ...)
  2019-05-07 16:41   ` Julien Grall
@ 2019-05-15 14:40   ` Oleksandr
  2019-05-15 14:40     ` [Xen-devel] " Oleksandr
  3 siblings, 1 reply; 86+ messages in thread
From: Oleksandr @ 2019-05-15 14:40 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: Stefano Stabellini, julien.grall, JBeulich, andrew.cooper3


On 01.05.19 00:02, Stefano Stabellini wrote:

Hi, Stefano
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 140f979..9f62ead 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -928,6 +928,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           unsigned long mfn_end = mfn + nr_mfns - 1;
>           int add = op->u.memory_mapping.add_mapping;
>           p2m_type_t p2mt;
> +        uint32_t memory_policy = op->u.memory_mapping.memory_policy;
>   
>           ret = -EINVAL;
>           if ( mfn_end < mfn || /* wrap? */
> @@ -958,9 +959,27 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>           if ( add )
>           {
>               printk(XENLOG_G_DEBUG
> -                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> -                   d->domain_id, gfn, mfn, nr_mfns);
> +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx cache=%u\n",
> +                   d->domain_id, gfn, mfn, nr_mfns, memory_policy);
>   
> +            switch ( memory_policy )
> +            {
> +#ifdef CONFIG_ARM
> +                case MEMORY_POLICY_ARM_MEM_WB:
> +                    p2mt = p2m_mmio_direct_c;
> +                    break;
> +                case MEMORY_POLICY_ARM_DEV_nGRE:
> +                    p2mt = p2m_mmio_direct_dev;
> +                    break;
> +#endif
> +#ifdef CONFIG_X86
> +                case MEMORY_POLICY_X86_UC:
> +                    p2mt = p2m_mmio_direct;
> +                    break;
> +#endif
> +                default:
> +                    return -EOPNOTSUPP;

If I understand the code correctly, we can't just return an error here 
(domctl_lock is taken, etc.). It looks like we should store the error and 
modify the code to execute the exit path.
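A minimal stand-alone illustration of that pattern (the lock functions are stubs and the function is hypothetical; this is not the actual do_domctl() code): record the error and jump to a common exit path instead of returning with the lock held.

```c
#include <errno.h>

static int lock_held;
static void domctl_lock(void)   { lock_held = 1; }
static void domctl_unlock(void) { lock_held = 0; }

static int handle_memory_policy(unsigned int memory_policy)
{
    int ret = 0;

    domctl_lock();

    switch ( memory_policy )
    {
    case 0:                 /* a supported policy */
        break;
    default:
        ret = -EOPNOTSUPP;
        goto out;           /* don't return with the lock held */
    }

    /* ... perform the mapping ... */

 out:
    domctl_unlock();
    return ret;
}
```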


-- 
Regards,

Oleksandr Tyshchenko



* Re: [PATCH v2 0/10] iomem memory policy
  2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
                   ` (10 preceding siblings ...)
  2019-04-30 21:02 ` [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node Stefano Stabellini
@ 2019-05-16 16:52 ` Oleksandr
  2019-05-16 16:52   ` [Xen-devel] " Oleksandr
  2019-06-21 23:48   ` Stefano Stabellini
  11 siblings, 2 replies; 86+ messages in thread
From: Oleksandr @ 2019-05-16 16:52 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: andrew.cooper3, julien.grall, wei.liu2, JBeulich, ian.jackson


On 01.05.19 00:02, Stefano Stabellini wrote:
> Hi all,

Hi, Stefano


>
> This series introduces a memory policy parameter for the iomem option,
> so that we can map an iomem region into a guest as cacheable memory.
>
> Then, this series fixes the way Xen handles reserved memory regions on
> ARM: they should be mapped as normal memory, instead today they are
> treated as device memory.
>
> Cheers,
>
> Stefano
>
>
>
> The following changes since commit be3d5b30331d87e177744dbe23138b9ebcdc86f1:
>
>    x86/msr: Fix fallout from mostly c/s 832c180 (2019-04-15 17:51:30 +0100)
>
> are available in the git repository at:
>
>    http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git iomem_cache-v2
>
> for you to fetch changes up to 4979f8e2f1120b2c394be815b071c017e287cf33:
>
>    xen/arm: add reserved-memory regions to the dom0 memory node (2019-04-30 13:56:40 -0700)
>
> ----------------------------------------------------------------
> Stefano Stabellini (10):
>        xen: add a p2mt parameter to map_mmio_regions
>        xen: rename un/map_mmio_regions to un/map_regions
>        xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
>        libxc: introduce xc_domain_mem_map_policy
>        libxl/xl: add memory policy option to iomem
>        xen/arm: extend device_tree_for_each_node
>        xen/arm: make process_memory_node a device_tree_node_func
>        xen/arm: keep track of reserved-memory regions
>        xen/arm: map reserved-memory regions as normal memory in dom0
>        xen/arm: add reserved-memory regions to the dom0 memory node

Thank you for doing that. Support for reserved-memory in Xen on ARM is 
quite an important feature. We are interested in the possibility of 
providing reserved-memory regions to a DomU. Our system uses a *thin 
Dom0* which doesn't have any H/W IPs assigned that may require 
reserved-memory, unlike other domains, which could. So, I would be happy 
to test your patch series on R-Car Gen3 platforms if you have a plan to 
extend this support to cover domains other than the hwdom. There are a 
few quite different reserved-memory regions used in the Renesas BSP; I 
think it would be a good target to test on...

https://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas-bsp.git/tree/arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts#n37 


As for the current series, I have only tested Xen boot. It looks like the 
*real* reserved-memory regions were handled correctly, but a test 
"non-reserved-memory" node was interpreted as "reserved-memory" and was 
taken into account... Please see the details below.

--------------------
Host device tree contains the following nodes:

memory@48000000 {
     device_type = "memory";
     /* first 128MB is reserved for secure area. */
     reg = <0x0 0x48000000 0x0 0x78000000>,
           <0x5 0x00000000 0x0 0x80000000>,
           <0x6 0x00000000 0x0 0x80000000>,
           <0x7 0x00000000 0x0 0x80000000>;
};

reserved-memory {
     #address-cells = <2>;
     #size-cells = <2>;
     ranges;

     /* device specific region for Lossy Decompression */
     lossy_decompress: linux,lossy_decompress@54000000 {
         no-map;
         reg = <0x00000000 0x54000000 0x0 0x03000000>;
     };

     /* For Audio DSP */
     adsp_reserved: linux,adsp@57000000 {
         compatible = "shared-dma-pool";
         reusable;
         reg = <0x00000000 0x57000000 0x0 0x01000000>;
     };

     /* global autoconfigured region for contiguous allocations */
     linux,cma@58000000 {
         compatible = "shared-dma-pool";
         reusable;
         reg = <0x00000000 0x58000000 0x0 0x18000000>;
         linux,cma-default;
     };

     /* device specific region for contiguous allocations */
     mmp_reserved: linux,multimedia@70000000 {
         compatible = "shared-dma-pool";
         reusable;
         reg = <0x00000000 0x70000000 0x0 0x10000000>;
     };
};

/* test "non-reserved-memory" node */
sram: sram@47FFF000 {
     compatible = "mmio-sram";
     reg = <0x0 0x47FFF000 0x0 0x1000>;

     #address-cells = <1>;
     #size-cells = <1>;
     ranges = <0 0x0 0x47FFF000 0x1000>;

     scp_shmem: scp_shmem@0 {
         compatible = "mmio-sram";
         reg = <0x0 0x200>;
     };
};

--------------------

I added a print to see which memory regions were inserted:

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 9355a6e..23e68b0 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -162,6 +162,10 @@ static int __init process_memory_node(const void *fdt, int node,
         device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
         if ( !size )
             continue;
+
+        dt_dprintk("node %s: insert bank %d: %#"PRIx64"->%#"PRIx64" type: %s\n",
+                   name, i, start, start + size, reserved ? "reserved" : "normal");
+
+
          mem->bank[mem->nr_banks].start = start;
          mem->bank[mem->nr_banks].size = size;
          mem->nr_banks++;

--------------------

The Xen log shows that the test "non-reserved-memory" node (scp_shmem@0) is 
processed as "reserved-memory":

(XEN) Checking for initrd in /chosen
(XEN) Initrd 0000000076000040-0000000077c87e47
(XEN) node memory@48000000: insert bank 0: 0x48000000->0xc0000000 type: normal
(XEN) node memory@48000000: insert bank 1: 0x500000000->0x580000000 type: normal
(XEN) node memory@48000000: insert bank 2: 0x600000000->0x680000000 type: normal
(XEN) node memory@48000000: insert bank 3: 0x700000000->0x780000000 type: normal
(XEN) node linux,lossy_decompress@54000000: insert bank 0: 0x54000000->0x57000000 type: reserved
(XEN) node linux,adsp@57000000: insert bank 0: 0x57000000->0x58000000 type: reserved
(XEN) node linux,cma@58000000: insert bank 0: 0x58000000->0x70000000 type: reserved
(XEN) node linux,multimedia@70000000: insert bank 0: 0x70000000->0x80000000 type: reserved
(XEN) node scp_shmem@0: insert bank 0: 0->0x200 type: reserved   <----------- test "non-reserved-memory" node
(XEN) RAM: 0000000048000000 - 00000000bfffffff
(XEN) RAM: 0000000500000000 - 000000057fffffff
(XEN) RAM: 0000000600000000 - 000000067fffffff
(XEN) RAM: 0000000700000000 - 000000077fffffff
(XEN)
(XEN) MODULE[0]: 0000000048000000 - 0000000048014080 Device Tree
(XEN) MODULE[1]: 0000000076000040 - 0000000077c87e47 Ramdisk
(XEN) MODULE[2]: 000000007a000000 - 000000007c000000 Kernel
(XEN) MODULE[3]: 000000007c000000 - 000000007c010000 XSM
(XEN)  RESVD[0]: 0000000048000000 - 0000000048014000
(XEN)  RESVD[1]: 0000000076000040 - 0000000077c87e47

...

(XEN) handle /memory@48000000
(XEN)   Skip it (matched)
(XEN) handle /reserved-memory
(XEN) dt_irq_number: dev=/reserved-memory
(XEN) /reserved-memory passthrough = 1 nirq = 0 naddr = 0
(XEN) handle /reserved-memory/linux,lossy_decompress@54000000
(XEN) dt_irq_number: dev=/reserved-memory/linux,lossy_decompress@54000000
(XEN) /reserved-memory/linux,lossy_decompress@54000000 passthrough = 1 nirq = 0 naddr = 1
(XEN) DT: ** translation for device /reserved-memory/linux,lossy_decompress@54000000 **
(XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
(XEN) DT: translating address:<3> 00000000<3> 54000000<3>
(XEN) DT: parent bus is default (na=2, ns=2) on /
(XEN) DT: empty ranges; 1:1 translation
(XEN) DT: parent translation for:<3> 00000000<3> 00000000<3>
(XEN) DT: with offset: 54000000
(XEN) DT: one level translation:<3> 00000000<3> 54000000<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 0054000000 - 0057000000 P2MType=5
(XEN) handle /reserved-memory/linux,adsp@57000000
(XEN) dt_irq_number: dev=/reserved-memory/linux,adsp@57000000
(XEN) /reserved-memory/linux,adsp@57000000 passthrough = 1 nirq = 0 naddr = 1
(XEN) DT: ** translation for device /reserved-memory/linux,adsp@57000000 **
(XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
(XEN) DT: translating address:<3> 00000000<3> 57000000<3>
(XEN) DT: parent bus is default (na=2, ns=2) on /
(XEN) DT: empty ranges; 1:1 translation
(XEN) DT: parent translation for:<3> 00000000<3> 00000000<3>
(XEN) DT: with offset: 57000000
(XEN) DT: one level translation:<3> 00000000<3> 57000000<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 0057000000 - 0058000000 P2MType=5
(XEN) handle /reserved-memory/linux,cma@58000000
(XEN) dt_irq_number: dev=/reserved-memory/linux,cma@58000000
(XEN) /reserved-memory/linux,cma@58000000 passthrough = 1 nirq = 0 naddr = 1
(XEN) DT: ** translation for device /reserved-memory/linux,cma@58000000 **
(XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
(XEN) DT: translating address:<3> 00000000<3> 58000000<3>
(XEN) DT: parent bus is default (na=2, ns=2) on /
(XEN) DT: empty ranges; 1:1 translation
(XEN) DT: parent translation for:<3> 00000000<3> 00000000<3>
(XEN) DT: with offset: 58000000
(XEN) DT: one level translation:<3> 00000000<3> 58000000<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 0058000000 - 0070000000 P2MType=5
(XEN) handle /reserved-memory/linux,multimedia@70000000
(XEN) dt_irq_number: dev=/reserved-memory/linux,multimedia@70000000
(XEN) /reserved-memory/linux,multimedia@70000000 passthrough = 1 nirq = 0 naddr = 1
(XEN) DT: ** translation for device /reserved-memory/linux,multimedia@70000000 **
(XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
(XEN) DT: translating address:<3> 00000000<3> 70000000<3>
(XEN) DT: parent bus is default (na=2, ns=2) on /
(XEN) DT: empty ranges; 1:1 translation
(XEN) DT: parent translation for:<3> 00000000<3> 00000000<3>
(XEN) DT: with offset: 70000000
(XEN) DT: one level translation:<3> 00000000<3> 70000000<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 0070000000 - 0080000000 P2MType=5

...


(XEN) Create memory node (reg size 4, nr cells 24)
(XEN)   Bank 0: 0xb0000000->0xc0000000   <----------- Dom0 memory which is 256MB total
(XEN)   Bank 0: 0x54000000->0x57000000   <----------- linux,lossy_decompress@54000000
(XEN)   Bank 1: 0x57000000->0x58000000   <----------- linux,adsp@57000000
(XEN)   Bank 2: 0x58000000->0x70000000   <----------- linux,cma@58000000
(XEN)   Bank 3: 0x70000000->0x80000000   <----------- linux,multimedia@70000000
(XEN)   Bank 4: 0->0x200   <----------- test "non-reserved-memory" node
(XEN) Loading zImage from 000000007a000000 to 00000000b0080000-00000000b2080000
(XEN) Loading dom0 initrd from 0000000076000040 to 0x00000000b8200000-0x00000000b9e87e07
(XEN) Loading dom0 DTB to 0x00000000b8000000-0x00000000b8011b7f
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All

...
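[Editor's note] The Bank 4 entry above shows the scp_shmem@0 node being picked up even though it lives under /sram@47FFF000, not under /reserved-memory. A hedged sketch of the kind of path check that would exclude such nodes follows; the function and the idea of filtering on the parent path are illustrative only, not the actual Xen code from this series:

```c
/* Illustrative sketch only: the real logic lives in Xen's bootfdt.c
 * node traversal. The helper below shows one way to accept only direct
 * children of /reserved-memory, which would reject scp_shmem@0 (whose
 * full path is /sram@47FFF000/scp_shmem@0). */
#include <stdbool.h>
#include <string.h>

static bool is_reserved_memory_child(const char *path)
{
    static const char prefix[] = "/reserved-memory/";
    size_t plen = sizeof(prefix) - 1;

    /* Must start with "/reserved-memory/" and contain no further '/'
     * (i.e. be a direct child, not a grandchild). */
    return strncmp(path, prefix, plen) == 0 &&
           strchr(path + plen, '/') == NULL;
}
```

With such a check, the four linux,* nodes in the log above would qualify as reserved-memory banks, while scp_shmem@0 would not.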


-- 
Regards,

Oleksandr Tyshchenko


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread


* Re: [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-11 12:40         ` Julien Grall
  2019-05-11 12:40           ` [Xen-devel] " Julien Grall
@ 2019-05-20 21:26           ` Stefano Stabellini
  2019-05-20 21:26             ` [Xen-devel] " Stefano Stabellini
  2019-05-20 22:38             ` Julien Grall
  1 sibling, 2 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-05-20 21:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, nd, Stefano Stabellini, Stefano Stabellini

On Sat, 11 May 2019, Julien Grall wrote:
> > > > But I am still not happy with the approach taken for the reserved-memory
> > > > regions in this series. As I pointed out before, they are just normal
> > > > memory
> > > > that was reserved for other purpose (CMA, framebuffer...).
> > > > 
> > > > Treating them as "device" from Xen POV is a clear abuse of the meaning
> > > > and I
> > > > don't believe it is a viable solution long term.
> > > 
> > > If we don't consider "reusable" memory regions as part of the
> > > discussion, the distinction becomes more philosophical than practical:
> > > 
> > > - Xen is not supposed to use them for anything
> > > - only given them to the VM configured for it
> > > 
> > > I don't see much of a difference with MMIO regions, except for the
> > > expected pagetable attributes: i.e. cacheable, not-cacheable. But even
> > > in that case, there could be reasonable use cases for non-cacheable
> > > mappings of reserved-memory regions, even if reserved-memory regions are
> > > "normal" memory.
> > > 
> > > Could you please help me understand why you see them so differently, as
> > > far as to say that "treating them as "device" from Xen POV is a clear
> > > abuse of the meaning"?
> > 
> > Obviously if you take half of the picture, then it makes things easier.
> > However, we are not here to discuss half of the picture but the full one
> > (even if at the end you only implement half of it).
> > 
> > > > Indeed, some of the regions may have a property "reusable" allowing the OS
> > > > to use them until they are claimed by the device driver owning the
> > > > region. I
> > > > don't know how Linux (or any other OS) is using it today, but I don't
> > > > see what
> > > > would prevent it to use them as hypercall buffer. This would obviously
> > > > not
> > > > work because they are not actual RAM from Xen POV.
> > > 
> > > I haven't attempted to handle "reusable" reserved-memory regions
> > > because I don't have a test environment and/or a use-case for them. In
> > > other words, I don't have any "reusable" reserved-memory regions in any
> > > of the boards (Xilinx and not Xilinx) I have access to. I could add a
> > > warning if we find a "reusable" reserved-memory region at boot.
> > 
> > Don't get me wrong, I don't ask for the implementation now, so a warning
> > would be fine here. However, you need at least to show me some ground that
> > re-usable memory can be implemented with your solution or they are not a
> > concern for Xen at all.
> > 
> > > 
> > > Nonetheless, if you have a concrete suggestion which doesn't require a
> > > complete rework of this series, I can try to put extra effort to handle
> > > this case even if it is not a benefit to my employer. I am also open to
> > > the possibility of dropping patches 6-10 from the series.
> > I don't think the series as it is would allow us to support re-usable
> > memory. However, I haven't spent enough time to understand how this could
> > possibly be dealt with, so I am happy to be proved wrong.
> 
> I thought a bit more about this series during the night. I do agree that we
> need to improve the support of the reserved-memory today as we may give memory
> to the allocator that are could be exposed to a guest via a different method
> (iomem). So carving out the reserved-memory region from the memory allocator
> is the first step to go.
> 
> Now we have to differentiate the hardware domain from the other guests. I
> don't have any objection regarding the way to map reserved-memory region to
> the hardware domain because this is completely internal to Xen. However, I
> have some objections with the current interface for DomU:
>    1) It is still unclear how "reusable" property would fit in that story
>    2) It is definitely not possible for a user to use 'iomem' for
> reserved-memory region today because the partial Device-Tree doesn't allow you
> to create /reserved-memory node nor /memory
>    3) AFAIK, there is no way to prevent the hardware domain from using the
> reserved-region (status = "disabled" would not work).
> So, IMHO, the guest support for reserved-memory is not in shape. So I think it
> would be best if we don't permit the reserved-memory region in the iomem
> rangeset. This would avoid tying us to an interface until we figure out
> the correct plan for guests.

Wouldn't proper documentation be enough? (See below for where the
documentation should live.)

This is not about privilege over the system: whoever will make the
decision to ask the hypervisor to map the page will have all the
necessary rights to do it.  If the user wants to map a given region,
either because she knows what she is doing, because she is
experimenting, or for whatever reason, I think she should be allowed. In
fact, she can always do it by reverting the patch. So why make it
inconvenient for her?


> With that in place, I don't have a strong objection with patches 6-10.
> 
> In any case I think you should clearly spell out in the commit message what
> kind of reserved-memory region is supported.

Yes, this makes sense. I am thinking of adding a note to SUPPORT.md. Any
other places where I should write it down aside from commit messages?


> For instance, by just going through the binding, I have the feeling
> that those properties are not actually supported:
>     1) "no-map" - It is used to tell the OS to not create a virtual memory of
> the region as part of its standard mapping of system memory, nor permit
> speculative access to it under any circumstances other than under the control
> of the device driver using the region. On Arm64, Xen will map reserved-memory
> as part of xenheap (i.e the direct mapping), but carving out from xenheap
> would not be sufficient as we use 1GB block for the mapping. So they may still
> be covered. I would assume this is used for memory that needs to be mapped
> non-cacheable, so it is potentially critical as Xen would map them cacheable
> in the stage-1 hypervisor page-tables.
>     2) "alloc-ranges": it is used to specify regions of memory where it is
> acceptable to allocate memory from. This may not play well with the Dom0
> memory allocator.
>     3) "reusable": I mention here only for completeness. My understanding is
> it could potentially be used for hypercall buffer. This needs to be
> investigated.

Yes, you are right about these properties not being properly supported.
Do you think that I should list them in SUPPORT.md under a new iomem
section? Or do you prefer a longer document under docs/? Or both?
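[Editor's note] For reference, the three properties Julien lists appear in a reserved-memory node roughly as follows. This is an illustrative sketch of the standard binding; node names, labels, and addresses are made up and not taken from any board in this thread:

```
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    /* 1) "no-map": the OS must not create any mapping of the region */
    fb_reserved: framebuffer@60000000 {
        no-map;
        reg = <0x0 0x60000000 0x0 0x01000000>;
    };

    /* 2) "alloc-ranges": the region is allocated dynamically, anywhere
     *    within the given window, instead of at a fixed address */
    dyn_pool {
        compatible = "shared-dma-pool";
        size = <0x0 0x02000000>;
        alloc-ranges = <0x0 0x40000000 0x0 0x20000000>;
    };

    /* 3) "reusable": the OS may use the memory until the owning
     *    device driver claims it back */
    cma_pool: linux,cma@68000000 {
        compatible = "shared-dma-pool";
        reusable;
        reg = <0x0 0x68000000 0x0 0x08000000>;
    };
};
```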


^ permalink raw reply	[flat|nested] 86+ messages in thread


* Re: [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-20 21:26           ` Stefano Stabellini
  2019-05-20 21:26             ` [Xen-devel] " Stefano Stabellini
@ 2019-05-20 22:38             ` Julien Grall
  2019-05-20 22:38               ` [Xen-devel] " Julien Grall
  2019-06-05 16:30               ` Julien Grall
  1 sibling, 2 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-20 22:38 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Stefano Stabellini

Hi Stefano,

On 20/05/2019 22:26, Stefano Stabellini wrote:
> On Sat, 11 May 2019, Julien Grall wrote:
>>>>> But I am still not happy with the approach taken for the reserved-memory
>>>>> regions in this series. As I pointed out before, they are just normal
>>>>> memory
>>>>> that was reserved for other purpose (CMA, framebuffer...).
>>>>>
>>>>> Treating them as "device" from Xen POV is a clear abuse of the meaning
>>>>> and I
>>>>> don't believe it is a viable solution long term.
>>>>
>>>> If we don't consider "reusable" memory regions as part of the
>>>> discussion, the distinction becomes more philosophical than practical:
>>>>
>>>> - Xen is not supposed to use them for anything
>>>> - only given them to the VM configured for it
>>>>
>>>> I don't see much of a difference with MMIO regions, except for the
>>>> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
>>>> in that case, there could be reasonable use cases for non-cacheable
>>>> mappings of reserved-memory regions, even if reserved-memory regions are
>>>> "normal" memory.
>>>>
>>>> Could you please help me understand why you see them so differently, as
>>>> far as to say that "treating them as "device" from Xen POV is a clear
>>>> abuse of the meaning"?
>>>
>>> Obviously if you take half of the picture, then it makes things easier.
>>> However, we are not here to discuss half of the picture but the full one
>>> (even if at the end you only implement half of it).
>>>
>>>>> Indeed, some of the regions may have a property "reusable" allowing the OS
>>>>> to use them until they are claimed by the device driver owning the
>>>>> region. I
>>>>> don't know how Linux (or any other OS) is using it today, but I don't
>>>>> see what
>>>>> would prevent it to use them as hypercall buffer. This would obviously
>>>>> not
>>>>> work because they are not actual RAM from Xen POV.
>>>>
>>>> I haven't attempted to handle "reusable" reserved-memory regions
>>>> because I don't have a test environment and/or a use-case for them. In
>>>> other words, I don't have any "reusable" reserved-memory regions in any
>>>> of the boards (Xilinx and not Xilinx) I have access to. I could add a
>>>> warning if we find a "reusable" reserved-memory region at boot.
>>>
>>> Don't get me wrong, I don't ask for the implementation now, so a warning
>>> would be fine here. However, you need at least to show me some ground that
>>> re-usable memory can be implemented with your solution or they are not a
>>> concern for Xen at all.
>>>
>>>>
>>>> Nonetheless, if you have a concrete suggestion which doesn't require a
>>>> complete rework of this series, I can try to put extra effort to handle
>>>> this case even if it is not a benefit to my employer. I am also open to
>>>> the possibility of dropping patches 6-10 from the series.
>>> I don't think the series as it is would allow us to support re-usable
>>> memory. However, I haven't spent enough time to understand how this could
>>> possibly be dealt with, so I am happy to be proved wrong.
>>
>> I thought a bit more about this series during the night. I do agree that we
>> need to improve the support of the reserved-memory today as we may give memory
>> to the allocator that could be exposed to a guest via a different method
>> (iomem). So carving out the reserved-memory region from the memory allocator
>> is the first step to go.
>>
>> Now we have to differentiate the hardware domain from the other guests. I
>> don't have any objection regarding the way to map reserved-memory region to
>> the hardware domain because this is completely internal to Xen. However, I
>> have some objections with the current interface for DomU:
>>     1) It is still unclear how "reusable" property would fit in that story
>>     2) It is definitely not possible for a user to use 'iomem' for
>> reserved-memory region today because the partial Device-Tree doesn't allow you
>> to create /reserved-memory node nor /memory
>>     3) AFAIK, there is no way to prevent the hardware domain from using the
>> reserved-region (status = "disabled" would not work).
>> So, IMHO, the guest support for reserved-memory is not in shape. So I think it
>> would be best if we don't permit the reserved-memory region in the iomem
>> rangeset. This would avoid tying us to an interface until we figure out
>> the correct plan for guests.
> 
> Wouldn't proper documentation be enough? (See below for where the
> documentation should live.)
> 
> This is not about privilege over the system: whoever will make the
> decision to ask the hypervisor to map the page will have all the
> necessary rights to do it.  If the user wants to map a given region,
> either because she knows what she is doing, because she is
> experimenting, or for whatever reason, I think she should be allowed. In
> fact, she can always do it by reverting the patch. So why make it
> inconvenient for her?
TBH, I am getting very frustrated reviewing this series. We spent our 
previous f2f meetings discussing reserved-memory at length. We also 
agreed on a plan (see below), but now we are back at square one again...

Yes, a user will need to revert the patch. But then as you said the user 
would know what he/she is doing. So reverting a patch is not going to be 
a complication.

However, I have already pointed out multiple times that giving permission is 
not going to be enough. So I still don't see the value of having that in 
Xen without an easy way to use it.

As a reminder, you agreed to splitting the series into 3 parts:
    - Part 1: Extend iomem to support cacheability
    - Part 2: Partially support reserved-memory for Dom0 and don't give 
iomem permission on it
    - Part 3: reserved-memory for guests

I agreed to merge parts 1 and 2. Part 3 will be the start of a discussion on 
how this should be supported for guests. I also pointed out that Xilinx 
can carry part 3 in their tree if they feel like it.

> 
> 
>> With that in place, I don't have a strong objection with patches 6-10.
>>
>> In any case I think you should clearly spell out in the commit message what
>> kind of reserved-memory region is supported.
> 
> Yes, this makes sense. I am thinking of adding a note to SUPPORT.md. Any
> other places where I should write it down aside from commit messages?
> 
> 
>> For instance, just by going through the binding, I have the feeling
>> that those properties are not actually supported:
>>      1) "no-map" - It is used to tell the OS not to create a virtual mapping of
>> the region as part of its standard mapping of system memory, nor permit
>> speculative access to it under any circumstances other than under the control
>> of the device driver using the region. On Arm64, Xen will map reserved-memory
>> as part of the xenheap (i.e. the direct mapping), but carving it out of the
>> xenheap would not be sufficient as we use 1GB blocks for the mapping. So the
>> regions may still be covered. I would assume this is used for memory that
>> needs to be mapped non-cacheable, so it is potentially critical, as Xen would
>> map it cacheable in the stage-1 hypervisor page-tables.
>>      2) "alloc-ranges": it is used to specify regions of memory where it is
>> acceptable to allocate memory from. This may not play well with the Dom0
>> memory allocator.
>>      3) "reusable": I mention it here only for completeness. My understanding
>> is it could potentially be used for hypercall buffers. This needs to be
>> investigated.
> 
> Yes, you are right about these properties not being properly supported.
> Do you think that I should list them in SUPPORT.md under a new iomem
> section? Or do you prefer a longer document under docs/? Or both?

The properties have nothing to do with iomem, so that would clearly be the 
wrong place to put them. Instead, this should be a separate section.
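
To make the discussion concrete, the three properties appear in a binding 
fragment along these lines. This is purely an illustrative example -- the node 
names, addresses and sizes are made up, not taken from any board discussed in 
this thread:

```dts
/* Illustrative /reserved-memory fragment showing the three properties
 * under discussion; all names, addresses and sizes are invented. */
reserved-memory {
    #address-cells = <1>;
    #size-cells = <1>;
    ranges;

    /* Must never be mapped by the OS (nor, per the discussion above,
     * covered by Xen's 1GB xenheap block mappings). */
    secure-fw@48000000 {
        reg = <0x48000000 0x100000>;
        no-map;
    };

    /* The OS may use this region until the owning driver claims it. */
    cma-pool@50000000 {
        compatible = "shared-dma-pool";
        reg = <0x50000000 0x2000000>;
        reusable;
    };

    /* Dynamically allocated: only a size and acceptable ranges are
     * given, the OS picks the actual placement. */
    framebuffer {
        size = <0x800000>;
        alloc-ranges = <0x40000000 0x10000000>;
    };
};
```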

Cheers,

-- 
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [Xen-devel] [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-20 22:38             ` Julien Grall
@ 2019-05-20 22:38               ` " Julien Grall
  2019-06-05 16:30               ` Julien Grall
  1 sibling, 0 replies; 86+ messages in thread
From: Julien Grall @ 2019-05-20 22:38 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Stefano Stabellini

Hi Stefano,

On 20/05/2019 22:26, Stefano Stabellini wrote:
> On Sat, 11 May 2019, Julien Grall wrote:
>>>>> But I am still not happy with the approach taken for the reserved-memory
>>>>> regions in this series. As I pointed out before, they are just normal
>>>>> memory
>>>>> that was reserved for other purpose (CMA, framebuffer...).
>>>>>
>>>>> Treating them as "device" from Xen POV is a clear abuse of the meaning
>>>>> and I
>>>>> don't believe it is a viable solution long term.
>>>>
>>>> If we don't consider "reusable" memory regions as part of the
>>>> discussion, the distinction becomes more philosophical than practical:
>>>>
>>>> - Xen is not supposed to use them for anything
>>>> - only give them to the VM configured for them
>>>>
>>>> I don't see much of a difference with MMIO regions, except for the
>>>> expected pagetable attributes: i.e. cacheable, not-cacheable. But even
>>>> in that case, there could be reasonable use cases for non-cacheable
>>>> mappings of reserved-memory regions, even if reserved-memory regions are
>>>> "normal" memory.
>>>>
>>>> Could you please help me understand why you see them so differently, as
>>>> far as to say that "treating them as "device" from Xen POV is a clear
>>>> abuse of the meaning"?
>>>
>>> Obviously if you take half of the picture, then it makes things easier.
>>> However, we are not here to discuss half of the picture but the full one
>>> (even if at the end you only implement half of it).
>>>
>>>>> Indeed, some of the regions may have a "reusable" property allowing the
>>>>> OS to use them until they are claimed by the device driver owning the
>>>>> region. I don't know how Linux (or any other OS) is using it today, but
>>>>> I don't see what would prevent it from using them as hypercall buffers.
>>>>> This would obviously not work because they are not actual RAM from Xen's
>>>>> POV.
>>>>
>>>> I haven't attempted to handle "reusable" reserved-memory regions
>>>> because I don't have a test environment and/or a use-case for them. In
>>>> other words, I don't have any "reusable" reserved-memory regions in any
>>>> of the boards (Xilinx and not Xilinx) I have access to. I could add a
>>>> warning if we find a "reusable" reserved-memory region at boot.
>>>
>>> Don't get me wrong, I am not asking for the implementation now, so a warning
>>> would be fine here. However, you need at least to show me some grounds that
>>> re-usable memory can be implemented with your solution, or that it is not a
>>> concern for Xen at all.
>>>
>>>>
>>>> Nonetheless, if you have a concrete suggestion which doesn't require a
>>>> complete rework of this series, I can try to put extra effort to handle
>>>> this case even if it is not a benefit to my employer. I am also open to
>>>> the possibility of dropping patches 6-10 from the series.
>>> I don't think the series as it is would allow us to support re-usable
>>> memory. However, I haven't spent enough time to understand how this could
>>> possibly be dealt with, so I am happy to be proved wrong.
>>
>> I thought a bit more about this series during the night. I do agree that we
>> need to improve the support of reserved-memory today, as we may give memory
>> to the allocator that could be exposed to a guest via a different method
>> (iomem). So carving out the reserved-memory regions from the memory allocator
>> is the first step.
>>
>> Now we have to differentiate the hardware domain from the other guests. I
>> don't have any objection regarding the way reserved-memory regions are mapped
>> to the hardware domain because this is completely internal to Xen. However, I
>> have some objections to the current interface for DomU:
>>     1) It is still unclear how the "reusable" property would fit in that story
>>     2) It is definitely not possible for a user to use 'iomem' for a
>> reserved-memory region today because the partial Device-Tree doesn't allow you
>> to create a /reserved-memory node nor a /memory node
>>     3) AFAIK, there is no way to prevent the hardware domain from using the
>> reserved region (status = "disabled" would not work).
>> So, IMHO, the guest support for reserved-memory is not in shape. I think it
>> would be best if we don't permit reserved-memory regions in the iomem
>> rangeset. This would avoid tying us to an interface until we figure out
>> the correct plan for guests.
> 
> Wouldn't proper documentation be enough? (See below for where the
> documentation should live.)
> 
> This is not about privilege over the system: whoever will make the
> decision to ask the hypervisor to map the page will have all the
> necessary rights to do it.  If the user wants to map a given region,
> either because she knows what she is doing, because she is
> experimenting, or for whatever reason, I think she should be allowed. In
> fact, she can always do it by reverting the patch. So why make it
> inconvenient for her?
TBH, I am getting very frustrated reviewing this series. We spent our 
previous f2f meetings discussing reserved-memory at length. We also 
agreed on a plan (see below), but now we are back at square one again...

Yes, a user will need to revert the patch. But then as you said the user 
would know what he/she is doing. So reverting a patch is not going to be 
a complication.

However, I have already pointed out multiple times that giving permission is 
not going to be enough. So I still don't see the value of having that in 
Xen without an easy way to use it.

As a reminder, you agreed to splitting the series into 3 parts:
    - Part 1: Extend iomem to support cacheability
    - Part 2: Partially support reserved-memory for Dom0 and don't give 
iomem permission on it
    - Part 3: reserved-memory for guests

I agreed to merge parts 1 and 2. Part 3 will be the start of a discussion on 
how this should be supported for guests. I also pointed out that Xilinx 
can carry part 3 in their tree if they feel like it.

> 
> 
>> With that in place, I don't have a strong objection with patches 6-10.
>>
>> In any case I think you should clearly spell out in the commit message what
>> kind of reserved-memory region is supported.
> 
> Yes, this makes sense. I am thinking of adding a note to SUPPORT.md. Any
> other places where I should write it down aside from commit messages?
> 
> 
>> For instance, just by going through the binding, I have the feeling
>> that those properties are not actually supported:
>>      1) "no-map" - It is used to tell the OS not to create a virtual mapping of
>> the region as part of its standard mapping of system memory, nor permit
>> speculative access to it under any circumstances other than under the control
>> of the device driver using the region. On Arm64, Xen will map reserved-memory
>> as part of the xenheap (i.e. the direct mapping), but carving it out of the
>> xenheap would not be sufficient as we use 1GB blocks for the mapping. So the
>> regions may still be covered. I would assume this is used for memory that
>> needs to be mapped non-cacheable, so it is potentially critical, as Xen would
>> map it cacheable in the stage-1 hypervisor page-tables.
>>      2) "alloc-ranges": it is used to specify regions of memory where it is
>> acceptable to allocate memory from. This may not play well with the Dom0
>> memory allocator.
>>      3) "reusable": I mention it here only for completeness. My understanding
>> is it could potentially be used for hypercall buffers. This needs to be
>> investigated.
> 
> Yes, you are right about these properties not being properly supported.
> Do you think that I should list them in SUPPORT.md under a new iomem
> section? Or do you prefer a longer document under docs/? Or both?

The properties have nothing to do with iomem, so that would clearly be the 
wrong place to put them. Instead, this should be a separate section.

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-05-20 22:38             ` Julien Grall
  2019-05-20 22:38               ` [Xen-devel] " Julien Grall
@ 2019-06-05 16:30               ` Julien Grall
  2019-06-21 23:47                 ` Stefano Stabellini
  1 sibling, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-06-05 16:30 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Stefano Stabellini

Hi,

On 20/05/2019 23:38, Julien Grall wrote:
> On 20/05/2019 22:26, Stefano Stabellini wrote:
>> On Sat, 11 May 2019, Julien Grall wrote:
>> This is not about privilege over the system: whoever will make the
>> decision to ask the hypervisor to map the page will have all the
>> necessary rights to do it.  If the user wants to map a given region,
>> either because she knows what she is doing, because she is
>> experimenting, or for whatever reason, I think she should be allowed. In
>> fact, she can always do it by reverting the patch. So why make it
>> inconvenient for her?
> TBH, I am getting very frustrated reviewing this series. We spent our 
> previous f2f meetings discussing reserved-memory at length. We also agreed 
> on a plan (see below), but now we are back at square one again...
> 
> Yes, a user will need to revert the patch. But then as you said the user would 
> know what he/she is doing. So reverting a patch is not going to be a complication.
> 
> However, I have already pointed out multiple times that giving permission is 
> not going to be enough. So I still don't see the value of having that in Xen 
> without an easy way to use it.
> 
> As a reminder, you agreed to splitting the series into 3 parts:
>     - Part 1: Extend iomem to support cacheability
>     - Part 2: Partially support reserved-memory for Dom0 and don't give iomem 
> permission on it
>     - Part 3: reserved-memory for guests
> 
> I agreed to merge parts 1 and 2. Part 3 will be the start of a discussion on 
> how this should be supported for guests. I also pointed out that Xilinx can 
> carry part 3 in their tree if they feel like it.

I just wanted to bump this as I haven't got any feedback on the way forward here.
It should be possible to get parts 1 and 2 merged for Xen 4.13.

Cheers,

-- 
Julien Grall



* Re: [Xen-devel] [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-05-01  9:22   ` Julien Grall
  2019-05-01  9:22     ` [Xen-devel] " Julien Grall
@ 2019-06-17 21:24     ` Stefano Stabellini
  2019-06-18 11:05       ` Julien Grall
  1 sibling, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-17 21:24 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, andrew.cooper3, JBeulich,
	Stefano Stabellini

On Wed, 1 May 2019, Julien Grall wrote:
> Hi,
> 
> On 30/04/2019 22:02, Stefano Stabellini wrote:
> > Now that map_mmio_regions takes a p2mt parameter, there is no need to
> > keep "mmio" in the name. The p2mt parameter does a better job at
> > expressing what the mapping is about. Let's save the environment 5
> > characters at a time.
> 
> At least on Arm, what's the difference between guest_physmap_add_entry and
> this function now? On x86, how does the user know which function to use?
> 
> What actually tells the user it should not use this function for RAM?

Also taking into account Jan's comments, I'll drop this patch in the
next version, keeping the original name map_mmio_regions. If you have an
alternative suggestion let me know and I'll try to follow it.



* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-05-02 15:12   ` Jan Beulich
  2019-05-02 15:12     ` [Xen-devel] " Jan Beulich
@ 2019-06-17 21:28     ` Stefano Stabellini
  2019-06-18  8:59       ` Jan Beulich
  1 sibling, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-17 21:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini,
	Stefano Stabellini, xen-devel

On Thu, 2 May 2019, Jan Beulich wrote:
> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
> >  */
> >  #define DPCI_ADD_MAPPING         1
> >  #define DPCI_REMOVE_MAPPING      0
> > +/*
> > + * Default memory policy. Corresponds to:
> > + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
> > + * x86: MEMORY_POLICY_X86_UC
> > + */
> > +#define MEMORY_POLICY_DEFAULT    0
> > +/* x86 only. Memory type UNCACHABLE */
> > +#define MEMORY_POLICY_X86_UC     0
> 
> I'm afraid this may end up misleading, as on NPT and in
> shadow mode we use UC- instead of UC afaics. Andrew,
> do you have an opinion either way what exactly should
> be stated here?

Ping?

I am happy to use any naming scheme you prefer, please provide input.



* Re: [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-05-01  9:42   ` Julien Grall
  2019-05-01  9:42     ` [Xen-devel] " Julien Grall
@ 2019-06-17 22:32     ` Stefano Stabellini
  2019-06-18 11:09       ` Julien Grall
  1 sibling, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-17 22:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, ian.jackson,
	Jan Beulich, xen-devel

On Wed, 1 May 2019, Julien Grall wrote:
> Hi,
> 
> On 30/04/2019 22:02, Stefano Stabellini wrote:
> > Add a new memory policy option for the iomem parameter.
> > Possible values are:
> > - arm_devmem, device nGRE, the default on ARM
> > - arm_memory, WB cacheable memory
> > - x86_uc: uncacheable memory, the default on x86
> > 
> > Store the parameter in a new field in libxl_iomem_range.
> > 
> > Pass the memory policy option to xc_domain_mem_map_policy.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> > CC: ian.jackson@eu.citrix.com
> > CC: wei.liu2@citrix.com
> > ---
> > Changes in v2:
> > - add #define LIBXL_HAVE_MEMORY_POLICY
> > - ability to parse the memory policy parameter even if gfn is not passed
> > - rename cache_policy to memory policy
> > - rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
> > - rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
> > - rename memory to arm_memory and devmem to arm_devmem
> > - expand the non-security support status to non device passthrough iomem
> >    configurations
> > - rename iomem options
> > - add x86 specific iomem option
> > ---
> >   SUPPORT.md                  |  2 +-
> >   docs/man/xl.cfg.5.pod.in    |  7 ++++++-
> >   tools/libxl/libxl.h         |  5 +++++
> >   tools/libxl/libxl_create.c  | 21 +++++++++++++++++++--
> >   tools/libxl/libxl_types.idl |  9 +++++++++
> >   tools/xl/xl_parse.c         | 22 +++++++++++++++++++++-
> >   6 files changed, 61 insertions(+), 5 deletions(-)
> > 
> > diff --git a/SUPPORT.md b/SUPPORT.md
> > index e4fb15b..f29a299 100644
> > --- a/SUPPORT.md
> > +++ b/SUPPORT.md
> > @@ -649,7 +649,7 @@ to be used in addition to QEMU.
> >     	Status: Experimental
> >   -### ARM/Non-PCI device passthrough
> > +### ARM/Non-PCI device passthrough and other iomem configurations
> 
> I am not sure why iomem is added here?

It is added here to clarify that it is *not* security supported.


> Also, what was the security support beforehand? Was it supported?

In my view, it falls under the broader category of "Non-PCI device
passthrough" so it was already not security supported. But I thought it
would be good to make it explicit to avoid any misunderstandings.


> >         Status: Supported, not security supported
> >   diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> > index c7d70e6..c85857e 100644
> > --- a/docs/man/xl.cfg.5.pod.in
> > +++ b/docs/man/xl.cfg.5.pod.in
> > @@ -1222,7 +1222,7 @@ is given in hexadecimal format and may either be a
> > range, e.g. C<2f8-2ff>
> >   It is recommended to only use this option for trusted VMs under
> >   administrator's control.
> >   -=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN]",
> > "IOMEM_START,NUM_PAGES[@GFN]", ...]>
> > +=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN],MEMORY_POLICY",
> > "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", ...]>
> >     Allow auto-translated domains to access specific hardware I/O memory
> > pages.
> >   @@ -1233,6 +1233,11 @@ B<GFN> is not specified, the mapping will be
> > performed using B<IOMEM_START>
> >   as a start in the guest's address space, therefore performing a 1:1
> > mapping
> >   by default.
> >   All of these values must be given in hexadecimal format.
> > +B<MEMORY_POLICY> for ARM platforms:
> > +  - "arm_devmem" for Device nGRE, the default on ARM
> 
> This does not match the current default. At the moment, it is Device nGnRE.

I'll fix this throughout the whole series


> > +  - "arm_memory" for Outer Shareable Write-Back Cacheable Memory
> 
> The two names are quite confusing and will make it quite difficult to introduce
> any new ones. It also makes little sense to use different naming in xl and
> libxl. This only adds another level of confusion.

I'll change them to match the Xen names: arm_dev_ngnre and arm_mem_wb.


> Overall, this is not enough for a user to understand the memory policy. As I
> pointed out before, this is not straightforward on Arm, as the resulting
> memory attribute will be a combination of stage-2 and stage-1.
> 
> We need to explain the implications of using the memory policy and the
> consequences of misusing it, particularly as this is not security supported,
> so we don't end up in the future security supporting something that doesn't
> work.

I'll expand the text here to cover what you wrote.
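
For example, with the renamed options, an entry in the domain configuration 
would look something like the fragment below. The page-frame numbers are 
illustrative, and the policy name follows the renaming proposed in this 
message, so it may still change:

```
# Map 0x10 pages starting at machine pfn 0x47000 to guest pfn 0x3c000,
# using the Write-Back cacheable memory policy (all values hexadecimal,
# pfns and count as per the existing iomem syntax).
iomem = [ "47000,10@3c000,arm_mem_wb" ]
```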



* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-05-07 16:41   ` Julien Grall
  2019-05-07 16:41     ` [Xen-devel] " Julien Grall
@ 2019-06-17 22:43     ` Stefano Stabellini
  2019-06-18 11:13       ` Julien Grall
  1 sibling, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-17 22:43 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, andrew.cooper3, JBeulich,
	Stefano Stabellini

On Tue, 7 May 2019, Julien Grall wrote:
> > +#define MEMORY_POLICY_ARM_DEV_nGRE       0
> > +/* Arm only. Outer Shareable, Outer/Inner Write-Back Cacheable memory */
> > +#define MEMORY_POLICY_ARM_MEM_WB         1
> 
>> I am wondering whether we should put the Arm (resp. x86) defines under an
>> ifdef arm (resp. x86). Do you see any use of those defines in the common
>> toolstack code?

Yes, they are used in libxl_create.c. I would prefer to avoid
introducing #ifdef here because it will allow us to get away with no
#ifdef in libxl/xl.

OK to all other comments.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-06-17 21:28     ` Stefano Stabellini
@ 2019-06-18  8:59       ` Jan Beulich
  2019-06-18 20:32         ` Stefano Stabellini
  0 siblings, 1 reply; 86+ messages in thread
From: Jan Beulich @ 2019-06-18  8:59 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, xen-devel

>>> On 17.06.19 at 23:28, <sstabellini@kernel.org> wrote:
> On Thu, 2 May 2019, Jan Beulich wrote:
>> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
>> > --- a/xen/include/public/domctl.h
>> > +++ b/xen/include/public/domctl.h
>> > @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
>> >  */
>> >  #define DPCI_ADD_MAPPING         1
>> >  #define DPCI_REMOVE_MAPPING      0
>> > +/*
>> > + * Default memory policy. Corresponds to:
>> > + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
>> > + * x86: MEMORY_POLICY_X86_UC
>> > + */
>> > +#define MEMORY_POLICY_DEFAULT    0
>> > +/* x86 only. Memory type UNCACHABLE */
>> > +#define MEMORY_POLICY_X86_UC     0
>> 
>> I'm afraid this may end up misleading, as on NPT and in
>> shadow mode we use UC- instead of UC afaics. Andrew,
>> do you have an opinion either way what exactly should
>> be stated here?
> 
> Ping?

To me? I've stated my opinion.

Jan

> I am happy to use any naming scheme you prefer, please provide input.







* Re: [Xen-devel] [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-06-17 21:24     ` Stefano Stabellini
@ 2019-06-18 11:05       ` Julien Grall
  2019-06-18 20:19         ` Stefano Stabellini
  0 siblings, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-06-18 11:05 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, andrew.cooper3, JBeulich, Stefano Stabellini

Hi Stefano,

On 17/06/2019 22:24, Stefano Stabellini wrote:
> On Wed, 1 May 2019, Julien Grall wrote:
>> Hi,
>>
>> On 30/04/2019 22:02, Stefano Stabellini wrote:
>>> Now that map_mmio_regions takes a p2mt parameter, there is no need to
>>> keep "mmio" in the name. The p2mt parameter does a better job at
>>> expressing what the mapping is about. Let's save the environment 5
>>> characters at a time.
>>
>> At least on Arm, what's the difference between guest_physmap_add_entry and
>> this function now? On x86, how does the user know which function to use?
>>
>> What actually tells the user it should not use this function for RAM?
> 
> Also taking into account Jan's comments, I'll drop this patch in the
> next version, keeping the original name map_mmio_regions. If you have an
> alternative suggestion let me know and I'll try to follow it.

As long as only p2m_mmio_* types can be used here, I am fine with it.

Compared to x86, the P2M interface on Arm is pretty much a wild west so far. I 
have a TODO to rethink it and add more checks on Arm.

Cheers,

-- 
Julien Grall



* Re: [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-06-17 22:32     ` Stefano Stabellini
@ 2019-06-18 11:09       ` Julien Grall
  0 siblings, 0 replies; 86+ messages in thread
From: Julien Grall @ 2019-06-18 11:09 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, ian.jackson, wei.liu2, Jan Beulich, Stefano Stabellini

Hi,

On 17/06/2019 23:32, Stefano Stabellini wrote:
> On Wed, 1 May 2019, Julien Grall wrote:
>> Hi,
>>
>> On 30/04/2019 22:02, Stefano Stabellini wrote:
>>> Add a new memory policy option for the iomem parameter.
>>> Possible values are:
>>> - arm_devmem, device nGRE, the default on ARM
>>> - arm_memory, WB cacheable memory
>>> - x86_uc: uncacheable memory, the default on x86
>>>
>>> Store the parameter in a new field in libxl_iomem_range.
>>>
>>> Pass the memory policy option to xc_domain_mem_map_policy.
>>>
>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>> CC: ian.jackson@eu.citrix.com
>>> CC: wei.liu2@citrix.com
>>> ---
>>> Changes in v2:
>>> - add #define LIBXL_HAVE_MEMORY_POLICY
>>> - ability to parse the memory policy parameter even if gfn is not passed
>>> - rename cache_policy to memory policy
>>> - rename MEMORY_POLICY_DEVMEM to MEMORY_POLICY_ARM_DEV_nGRE
>>> - rename MEMORY_POLICY_MEMORY to MEMORY_POLICY_ARM_MEM_WB
>>> - rename memory to arm_memory and devmem to arm_devmem
>>> - expand the non-security support status to non device passthrough iomem
>>>     configurations
>>> - rename iomem options
>>> - add x86 specific iomem option
>>> ---
>>>    SUPPORT.md                  |  2 +-
>>>    docs/man/xl.cfg.5.pod.in    |  7 ++++++-
>>>    tools/libxl/libxl.h         |  5 +++++
>>>    tools/libxl/libxl_create.c  | 21 +++++++++++++++++++--
>>>    tools/libxl/libxl_types.idl |  9 +++++++++
>>>    tools/xl/xl_parse.c         | 22 +++++++++++++++++++++-
>>>    6 files changed, 61 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index e4fb15b..f29a299 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -649,7 +649,7 @@ to be used in addition to QEMU.
>>>      	Status: Experimental
>>>    -### ARM/Non-PCI device passthrough
>>> +### ARM/Non-PCI device passthrough and other iomem configurations
>>
>> I am not sure why iomem is added here?
> 
> It is added here to clarify that it is *not* security supported.
> 
> 
>> Also, what was the security support beforehand? Was it supported?
> 
> In my view, it falls under the broader category of "Non-PCI device
> passthrough" so it was already not security supported. But I thought it
> would be good to make it explicit to avoid any misunderstandings.

I am ok with this clarification. However, this should really be in a separate patch.

> 
> 
>>>          Status: Supported, not security supported
>>>    diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
>>> index c7d70e6..c85857e 100644
>>> --- a/docs/man/xl.cfg.5.pod.in
>>> +++ b/docs/man/xl.cfg.5.pod.in
>>> @@ -1222,7 +1222,7 @@ is given in hexadecimal format and may either be a
>>> range, e.g. C<2f8-2ff>
>>>    It is recommended to only use this option for trusted VMs under
>>>    administrator's control.
>>>    -=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN]",
>>> "IOMEM_START,NUM_PAGES[@GFN]", ...]>
>>> +=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN],MEMORY_POLICY",
>>> "IOMEM_START,NUM_PAGES[@GFN][,MEMORY_POLICY]", ...]>
>>>      Allow auto-translated domains to access specific hardware I/O memory
>>> pages.
>>>    @@ -1233,6 +1233,11 @@ B<GFN> is not specified, the mapping will be
>>> performed using B<IOMEM_START>
>>>    as a start in the guest's address space, therefore performing a 1:1
>>> mapping
>>>    by default.
>>>    All of these values must be given in hexadecimal format.
>>> +B<MEMORY_POLICY> for ARM platforms:
>>> +  - "arm_devmem" for Device nGRE, the default on ARM
>>
>> This does not match the current default. At the moment, it is Device nGnRE.
> 
> I'll fix this throughout the whole series
> 
> 
>>> +  - "arm_memory" for Outer Shareable Write-Back Cacheable Memory
>>
>> The two names are quite confusing and will make it quite difficult to introduce
>> any new ones. It also makes little sense to use different naming in xl and
>> libxl. This only adds another level of confusion.
> 
> I'll change them to match the Xen names: arm_dev_ngnre and arm_mem_wb.

I would prefer if we used uppercase for G, R, E and WB. That makes the names a 
bit more readable.

> 
> 
>> Overall, this is not enough for a user to understand the memory policy. As I
>> pointed out before, this is not straightforward on Arm, as the resulting
>> memory attribute will be a combination of stage-2 and stage-1.
>>
>> We need to explain the implications of using the memory policy and the
>> consequences of misusing it, particularly as this is not security supported,
>> so we don't end up in the future security supporting something that doesn't
>> work.
> 
> I'll expand the text here to cover what you wrote.

Thank you!

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 86+ messages in thread

* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-06-17 22:43     ` Stefano Stabellini
@ 2019-06-18 11:13       ` Julien Grall
  0 siblings, 0 replies; 86+ messages in thread
From: Julien Grall @ 2019-06-18 11:13 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, andrew.cooper3, JBeulich, Stefano Stabellini



On 17/06/2019 23:43, Stefano Stabellini wrote:
> On Tue, 7 May 2019, Julien Grall wrote:
>>> +#define MEMORY_POLICY_ARM_DEV_nGRE       0
>>> +/* Arm only. Outer Shareable, Outer/Inner Write-Back Cacheable memory */
>>> +#define MEMORY_POLICY_ARM_MEM_WB         1
>>
>> I am wondering whether we should put the Arm (resp. x86) defines under an
>> #ifdef for arm (resp. x86). Do you see any use of those defines in the
>> common toolstack code?
> 
> Yes, they are used in libxl_create.c. I would prefer to avoid
> introducing an #ifdef here because that allows us to get away with no
> #ifdef in libxl/xl.

Well, you could introduce an arch-specific function that converts the memory
policy. But I will leave this decision to the tools maintainers.

Cheers,

-- 
Julien Grall

* Re: [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-04-30 21:02 ` [PATCH v2 05/10] libxl/xl: add memory policy option to iomem Stefano Stabellini
  2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
  2019-05-01  9:42   ` Julien Grall
@ 2019-06-18 11:15   ` Julien Grall
  2019-06-18 22:07     ` Stefano Stabellini
  2 siblings, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-06-18 11:15 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: Stefano Stabellini, ian.jackson, wei.liu2



On 30/04/2019 22:02, Stefano Stabellini wrote:
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 89fe80f..a6c5e30 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -415,6 +415,21 @@ static void init_console_info(libxl__gc *gc,
>          Only 'channels' when mapped to consoles have a string name. */
>   }
>   
> +static uint32_t libxl__memory_policy_to_xc(libxl_memory_policy c)
> +{
> +    switch (c) {
> +    case LIBXL_MEMORY_POLICY_ARM_MEM_WB:
> +        return MEMORY_POLICY_ARM_MEM_WB;
> +    case LIBXL_MEMORY_POLICY_ARM_DEV_NGRE:
> +        return MEMORY_POLICY_ARM_DEV_nGRE;
> +    case LIBXL_MEMORY_POLICY_X86_UC:
> +        return MEMORY_POLICY_X86_UC;
> +    case LIBXL_MEMORY_POLICY_DEFAULT:
> +    default:

Looking at this again, don't we want to bail out if the policy is unknown? My 
concern here is the user may configure with something it didn't expect. The risk 
is the problem will be hard to debug.

I also believe this could be part of libxl_{arm,x86}.c allowing us to filter 
misuse early. Ian, Wei, any opinion?

Cheers,

-- 
Julien Grall

* Re: [Xen-devel] [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions
  2019-06-18 11:05       ` Julien Grall
@ 2019-06-18 20:19         ` Stefano Stabellini
  0 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-18 20:19 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, andrew.cooper3, JBeulich,
	Stefano Stabellini

On Tue, 18 Jun 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 17/06/2019 22:24, Stefano Stabellini wrote:
> > On Wed, 1 May 2019, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 30/04/2019 22:02, Stefano Stabellini wrote:
> > > > Now that map_mmio_regions takes a p2mt parameter, there is no need to
> > > > keep "mmio" in the name. The p2mt parameter does a better job at
> > > > expressing what the mapping is about. Let's save the environment 5
> > > > characters at a time.
> > > 
> > > At least on Arm, what's the difference with guest_physmap_add_entry and
> > > this
> > > function now? On x86, how does the user now which function to use?
> > > 
> > > What actually tell the users it should not use this function for RAM?
> > 
> > Also taking into account Jan's comments, I'll drop this patch in the
> > next version, keeping the original name map_mmio_regions. If you have an
> > alternative suggestion let me know and I'll try to follow it.
> 
> As long as only p2m_mmio_* types can be used here, then I am fine with it.

Only the p2m_mmio_* types are used, but there are no checks. I'll add an
ASSERT.


> Compared to x86, the P2M interface on Arm is pretty much a wild west so far. I
> have a TODO to rethink it and add more checks on Arm.

Yes, thank you

* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-06-18  8:59       ` Jan Beulich
@ 2019-06-18 20:32         ` Stefano Stabellini
  2019-06-18 23:15           ` Stefano Stabellini
  0 siblings, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-18 20:32 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini,
	Stefano Stabellini, xen-devel

On Tue, 18 Jun 2019, Jan Beulich wrote:
> >>> On 17.06.19 at 23:28, <sstabellini@kernel.org> wrote:
> > On Thu, 2 May 2019, Jan Beulich wrote:
> >> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> >> > --- a/xen/include/public/domctl.h
> >> > +++ b/xen/include/public/domctl.h
> >> > @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
> >> >  */
> >> >  #define DPCI_ADD_MAPPING         1
> >> >  #define DPCI_REMOVE_MAPPING      0
> >> > +/*
> >> > + * Default memory policy. Corresponds to:
> >> > + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
> >> > + * x86: MEMORY_POLICY_X86_UC
> >> > + */
> >> > +#define MEMORY_POLICY_DEFAULT    0
> >> > +/* x86 only. Memory type UNCACHABLE */
> >> > +#define MEMORY_POLICY_X86_UC     0
> >> 
> >> I'm afraid this may end up misleading, as on NPT and in
> >> shadow mode we use UC- instead of UC afaics. Andrew,
> >> do you have an opinion either way what exactly should
> >> be stated here?
> > 
> > Ping?
> 
> To me? I've stated my opinion.

I cannot name the macro "MEMORY_POLICY_X86_UC-" because it cannot end
with a "-". Instead, I can name it MEMORY_POLICY_X86_UC_MINUS that seems
to be what Linux does. I'll rename the optional xl parameter too from
"x86_uc" to "x86_uc_minus".

* Re: [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-06-18 11:15   ` Julien Grall
@ 2019-06-18 22:07     ` Stefano Stabellini
  2019-06-18 22:20       ` Julien Grall
  0 siblings, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-18 22:07 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, ian.jackson, wei.liu2, Stefano Stabellini

On Tue, 18 Jun 2019, Julien Grall wrote:
> On 30/04/2019 22:02, Stefano Stabellini wrote:
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 89fe80f..a6c5e30 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -415,6 +415,21 @@ static void init_console_info(libxl__gc *gc,
> >          Only 'channels' when mapped to consoles have a string name. */
> >   }
> >   +static uint32_t libxl__memory_policy_to_xc(libxl_memory_policy c)
> > +{
> > +    switch (c) {
> > +    case LIBXL_MEMORY_POLICY_ARM_MEM_WB:
> > +        return MEMORY_POLICY_ARM_MEM_WB;
> > +    case LIBXL_MEMORY_POLICY_ARM_DEV_NGRE:
> > +        return MEMORY_POLICY_ARM_DEV_nGRE;
> > +    case LIBXL_MEMORY_POLICY_X86_UC:
> > +        return MEMORY_POLICY_X86_UC;
> > +    case LIBXL_MEMORY_POLICY_DEFAULT:
> > +    default:
> 
> Looking at this again, don't we want to bail out if the policy is unknown? My
> concern here is the user may configure with something it didn't expect. The
> risk is the problem will be hard to debug.
> 
> I also believe this could be part of libxl_{arm,x86}.c allowing us to filter
> misuse early.

This sounds like a good idea, I can do that. Then, I can also #ifdef the
hypercalls defines, although for some reason today libxl doesn't have
CONFIG_X86 or CONFIG_ARM set so I would also have to do the following in
the libxl Makefile:

ifeq ($(CONFIG_X86),y)
CFLAGS_LIBXL += -DCONFIG_X86
else
CFLAGS_LIBXL += -DCONFIG_ARM
endif


> Ian, Wei, any opinion?

* Re: [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-06-18 22:07     ` Stefano Stabellini
@ 2019-06-18 22:20       ` Julien Grall
  2019-06-18 22:46         ` Stefano Stabellini
  0 siblings, 1 reply; 86+ messages in thread
From: Julien Grall @ 2019-06-18 22:20 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, Julien Grall, ian.jackson, wei.liu2, Stefano Stabellini

Sorry for the formatting.

On Tue, 18 Jun 2019, 23:09 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Tue, 18 Jun 2019, Julien Grall wrote:
> > On 30/04/2019 22:02, Stefano Stabellini wrote:
> > > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > > index 89fe80f..a6c5e30 100644
> > > --- a/tools/libxl/libxl_create.c
> > > +++ b/tools/libxl/libxl_create.c
> > > @@ -415,6 +415,21 @@ static void init_console_info(libxl__gc *gc,
> > >          Only 'channels' when mapped to consoles have a string name. */
> > >   }
> > >   +static uint32_t libxl__memory_policy_to_xc(libxl_memory_policy c)
> > > +{
> > > +    switch (c) {
> > > +    case LIBXL_MEMORY_POLICY_ARM_MEM_WB:
> > > +        return MEMORY_POLICY_ARM_MEM_WB;
> > > +    case LIBXL_MEMORY_POLICY_ARM_DEV_NGRE:
> > > +        return MEMORY_POLICY_ARM_DEV_nGRE;
> > > +    case LIBXL_MEMORY_POLICY_X86_UC:
> > > +        return MEMORY_POLICY_X86_UC;
> > > +    case LIBXL_MEMORY_POLICY_DEFAULT:
> > > +    default:
> >
> > Looking at this again, don't we want to bail out if the policy is
> unknown? My
> > concern here is the user may configure with something it didn't expect.
> The
> > risk is the problem will be hard to debug.
> >
> > I also believe this could be part of libxl_{arm,x86}.c allowing us to
> filter
> > misuse early.
>
> This sounds like a good idea, I can do that. Then, I can also #ifdef the
> hypercalls defines, although for some reason today libxl doesn't have
> CONFIG_X86 or CONFIG_ARM set so I would also have to do the following in
> the libxl Makefile:
>
> ifeq ($(CONFIG_X86),y)
> CFLAGS_LIBXL += -DCONFIG_X86
> else
> CFLAGS_LIBXL += -DCONFIG_ARM
> endif
>

Or just follow what we do today in other public headers:

#if defined(__arm__) || defined(__aarch64__)

You need to double-check the exact syntax, as I wrote it from memory.

Cheers,


>
> > Ian, Wei, any opinion?
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [Xen-devel] [PATCH v2 05/10] libxl/xl: add memory policy option to iomem
  2019-06-18 22:20       ` Julien Grall
@ 2019-06-18 22:46         ` Stefano Stabellini
  0 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-18 22:46 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, ian.jackson,
	Julien Grall, xen-devel

On Tue, 18 Jun 2019, Julien Grall wrote:
> Sorry for the formatting.
> 
> On Tue, 18 Jun 2019, 23:09 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>       On Tue, 18 Jun 2019, Julien Grall wrote:
>       > On 30/04/2019 22:02, Stefano Stabellini wrote:
>       > > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>       > > index 89fe80f..a6c5e30 100644
>       > > --- a/tools/libxl/libxl_create.c
>       > > +++ b/tools/libxl/libxl_create.c
>       > > @@ -415,6 +415,21 @@ static void init_console_info(libxl__gc *gc,
>       > >          Only 'channels' when mapped to consoles have a string name. */
>       > >   }
>       > >   +static uint32_t libxl__memory_policy_to_xc(libxl_memory_policy c)
>       > > +{
>       > > +    switch (c) {
>       > > +    case LIBXL_MEMORY_POLICY_ARM_MEM_WB:
>       > > +        return MEMORY_POLICY_ARM_MEM_WB;
>       > > +    case LIBXL_MEMORY_POLICY_ARM_DEV_NGRE:
>       > > +        return MEMORY_POLICY_ARM_DEV_nGRE;
>       > > +    case LIBXL_MEMORY_POLICY_X86_UC:
>       > > +        return MEMORY_POLICY_X86_UC;
>       > > +    case LIBXL_MEMORY_POLICY_DEFAULT:
>       > > +    default:
>       >
>       > Looking at this again, don't we want to bail out if the policy is unknown? My
>       > concern here is the user may configure with something it didn't expect. The
>       > risk is the problem will be hard to debug.
>       >
>       > I also believe this could be part of libxl_{arm,x86}.c allowing us to filter
>       > misuse early.
> 
>       This sounds like a good idea, I can do that. Then, I can also #ifdef the
>       hypercalls defines, although for some reason today libxl doesn't have
>       CONFIG_X86 or CONFIG_ARM set so I would also have to do the following in
>       the libxl Makefile:
> 
>       ifeq ($(CONFIG_X86),y)
>       CFLAGS_LIBXL += -DCONFIG_X86
>       else
>       CFLAGS_LIBXL += -DCONFIG_ARM
>       endif
> 
> 
> Or just follow what we do today in other public headers:
> 
> #if defined(__arm__) || defined(__aarch64__)
> 
> You need to double-check the exact syntax, as I wrote it from memory.

Doh! Thank you

* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-06-18 20:32         ` Stefano Stabellini
@ 2019-06-18 23:15           ` Stefano Stabellini
  2019-06-19  6:53             ` Jan Beulich
  0 siblings, 1 reply; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-18 23:15 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andrew Cooper, Julien Grall, Stefano Stabellini, Jan Beulich, xen-devel

On Tue, 18 Jun 2019, Stefano Stabellini wrote:
> On Tue, 18 Jun 2019, Jan Beulich wrote:
> > >>> On 17.06.19 at 23:28, <sstabellini@kernel.org> wrote:
> > > On Thu, 2 May 2019, Jan Beulich wrote:
> > >> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
> > >> > --- a/xen/include/public/domctl.h
> > >> > +++ b/xen/include/public/domctl.h
> > >> > @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
> > >> >  */
> > >> >  #define DPCI_ADD_MAPPING         1
> > >> >  #define DPCI_REMOVE_MAPPING      0
> > >> > +/*
> > >> > + * Default memory policy. Corresponds to:
> > >> > + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
> > >> > + * x86: MEMORY_POLICY_X86_UC
> > >> > + */
> > >> > +#define MEMORY_POLICY_DEFAULT    0
> > >> > +/* x86 only. Memory type UNCACHABLE */
> > >> > +#define MEMORY_POLICY_X86_UC     0
> > >> 
> > >> I'm afraid this may end up misleading, as on NPT and in
> > >> shadow mode we use UC- instead of UC afaics. Andrew,
> > >> do you have an opinion either way what exactly should
> > >> be stated here?
> > > 
> > > Ping?
> > 
> > To me? I've stated my opinion.
> 
> I cannot name the macro "MEMORY_POLICY_X86_UC-" because it cannot end
> with a "-". Instead, I can name it MEMORY_POLICY_X86_UC_MINUS that seems
> to be what Linux does. I'll rename the optional xl parameter too from
> "x86_uc" to "x86_uc_minus".

I chatted with Andrew on IRC and he suggested getting rid of the option
entirely -- there is just one policy on x86, and it doesn't necessarily need
to be explicitly visible. We could only have MEMORY_POLICY_DEFAULT, and also
remove the x86_uc setting from libxl/xl.

I am OK with this. However, given that I have already made all the
changes to have MEMORY_POLICY_X86_UC_MINUS and x86_uc_minus everywhere,
I'll send an update of the series with them.

Then you can decide whether you want to keep things like that or get rid
of it. Of course removing code is easy -- I am always happy to do it if
that's what you want.

* Re: [Xen-devel] [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
  2019-06-18 23:15           ` Stefano Stabellini
@ 2019-06-19  6:53             ` Jan Beulich
  0 siblings, 0 replies; 86+ messages in thread
From: Jan Beulich @ 2019-06-19  6:53 UTC (permalink / raw)
  To: Andrew Cooper, Stefano Stabellini
  Cc: xen-devel, Julien Grall, Stefano Stabellini

>>> On 19.06.19 at 01:15, <sstabellini@kernel.org> wrote:
> On Tue, 18 Jun 2019, Stefano Stabellini wrote:
>> On Tue, 18 Jun 2019, Jan Beulich wrote:
>> > >>> On 17.06.19 at 23:28, <sstabellini@kernel.org> wrote:
>> > > On Thu, 2 May 2019, Jan Beulich wrote:
>> > >> >>> On 30.04.19 at 23:02, <sstabellini@kernel.org> wrote:
>> > >> > --- a/xen/include/public/domctl.h
>> > >> > +++ b/xen/include/public/domctl.h
>> > >> > @@ -571,12 +571,24 @@ struct xen_domctl_bind_pt_irq {
>> > >> >  */
>> > >> >  #define DPCI_ADD_MAPPING         1
>> > >> >  #define DPCI_REMOVE_MAPPING      0
>> > >> > +/*
>> > >> > + * Default memory policy. Corresponds to:
>> > >> > + * Arm: MEMORY_POLICY_ARM_DEV_nGRE
>> > >> > + * x86: MEMORY_POLICY_X86_UC
>> > >> > + */
>> > >> > +#define MEMORY_POLICY_DEFAULT    0
>> > >> > +/* x86 only. Memory type UNCACHABLE */
>> > >> > +#define MEMORY_POLICY_X86_UC     0
>> > >> 
>> > >> I'm afraid this may end up misleading, as on NPT and in
>> > >> shadow mode we use UC- instead of UC afaics. Andrew,
>> > >> do you have an opinion either way what exactly should
>> > >> be stated here?
>> > > 
>> > > Ping?
>> > 
>> > To me? I've stated my opinion.
>> 
>> I cannot name the macro "MEMORY_POLICY_X86_UC-" because it cannot end
>> with a "-". Instead, I can name it MEMORY_POLICY_X86_UC_MINUS that seems
>> to be what Linux does. I'll rename the optional xl parameter too from
>> "x86_uc" to "x86_uc_minus".
> 
> I chatted with Andrew on IRC and he suggested to get rid of the option
> entirely -- there is just one on x86 and doesn't necessarily need to be
> explicitly visible. We could only have MEMORY_POLICY_DEFAULT, and also
> remove the x86_uc setting from libxl/xl.
> 
> I am OK with this. However, given that I have already made all the
> changes to have MEMORY_POLICY_X86_UC_MINUS and x86_uc_minus everywhere,
> I'll send an update of the series with them.

Aren't we back then to the question whether to make this an Arm-only
interface? I'm having trouble seeing the value of an interface which allows
one to only "switch" from default to default.

Jan



* Re: [Xen-devel] [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node
  2019-06-05 16:30               ` Julien Grall
@ 2019-06-21 23:47                 ` Stefano Stabellini
  0 siblings, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-21 23:47 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, nd, Stefano Stabellini, Stefano Stabellini

On Wed, 5 Jun 2019, Julien Grall wrote:
> Hi,
> 
> On 20/05/2019 23:38, Julien Grall wrote:
> > On 20/05/2019 22:26, Stefano Stabellini wrote:
> > > On Sat, 11 May 2019, Julien Grall wrote:
> > > This is not about privilege over the system: whoever will make the
> > > decision to ask the hypervisor to map the page will have all the
> > > necessary rights to do it.  If the user wants to map a given region,
> > > either because she knows what she is doing, because she is
> > > experimenting, or for whatever reason, I think she should be allowed. In
> > > fact, she can always do it by reverting the patch. So why make it
> > > inconvenient for her?
> > TBH, I am getting very frustrated on reviewing this series. We spent our
> > previous f2f meetings discussing reserved-memory in lengthy way. We also
> > agreed on a plan (see below), but now we are back on square one again...
> > 
> > Yes, a user will need to revert the patch. But then as you said the user
> > would know what he/she is doing. So reverting a patch is not going to be a
> > complication.
> > 
> > However, I already pointed out multiple time that giving permission is not
> > going to be enough. So I still don't see the value of having that in Xen
> > without an easy way to use it.
> > 
> > For reminder, you agreed on the following splitting the series in 3 parts:
> >     - Part 1: Extend iomem to support cacheability
> >     - Part 2: Partially support reserved-memory for Dom0 and don't give
> > iomem permission on them
> >     - Part 3: reserved-memory for guest
> > 
> > I agreed to merge part 1 and 2. Part 3 will be a start for a discussion how
> > this should be supported for guest. I also pointed out that Xilinx can carry
> > part 3 in their tree if they feel like too.
> 
> I just wanted to bump this as I haven't got any feedback on the way forward
> here.
> It should be possible get part 1 and 2 merged for Xen 4.13.

I am about to send an update with Part 2 only. I tried to address all
comments; the only one I didn't address (splitting a function into two)
I mentioned explicitly. Apologies if I missed anything, it wasn't
intentional.

* Re: [Xen-devel] [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions
  2019-05-01 10:03   ` Julien Grall
  2019-05-01 10:03     ` [Xen-devel] " Julien Grall
@ 2019-06-21 23:47     ` Stefano Stabellini
  1 sibling, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-21 23:47 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Stefano Stabellini

On Wed, 1 May 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 30/04/2019 22:02, Stefano Stabellini wrote:
> > As we parse the device tree in Xen, keep track of the reserved-memory
> > regions as they need special treatment (follow-up patches will make use
> > of the stored information.)
> > 
> > Reuse process_memory_node to add reserved-memory regions to the
> > bootinfo.reserved_mem array. Remove the warning if there is no reg in
> > process_memory_node because it is a normal condition for
> > reserved-memory.
> 
> And it is not a normal condition for /memory... So your argument here is not
> enough to convince me to drop the warning for /memory.
 
You are right, I'll put the warning back in place.


> Rather than trying to re-purpose process_memory_node, I would prefer if you
> move out the parsing of "reg" and then provide 2 functions (one for /memory
> and one for /reserved-memory).
> 
> The parsing function will return an error if "reg" is not present, but it can
> be ignored by /reserved-memory and a warning is added for /memory.

I am OK with making this change, but I gave a look at the code for some
time and I cannot exactly figure out the interface you have in mind. I
understand completely separating the functions as I did in v1, but not
the partial split you are suggesting here.

I managed to address your other comments while keeping a single function. I
suggest you take a look at the new version, and then maybe write some
pseudo-code to help me figure out what you would like me to do?
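
To make the request concrete, one possible pseudo-code shape of the split
being discussed (all function names and error-handling choices here are
invented for illustration, not the actual patch):

```c
/* Shared "reg" parser: fails if the property is absent, fills mem otherwise. */
static int device_tree_get_meminfo(const void *fdt, int node,
                                   struct meminfo *mem)
{
    const struct fdt_property *prop = fdt_get_property(fdt, node, "reg", NULL);

    if ( !prop )
        return -ENOENT;                 /* caller decides how fatal this is */
    /* ... fill mem->bank[] from the reg cells, erroring out (rather than
     * silently stopping) if NR_MEM_BANKS is exceeded ... */
    return 0;
}

/* /memory: a missing "reg" deserves a warning but is not fatal. */
static int process_memory_node(const void *fdt, int node,
                               const char *name, void *data)
{
    if ( device_tree_get_meminfo(fdt, node, &bootinfo.mem) == -ENOENT )
        printk("fdt: node `%s': missing `reg' property\n", name);
    return 0;
}

/* /reserved-memory: a missing "reg" is normal, but any other failure
 * (e.g. running out of banks) must stop the boot. */
static int process_reserved_memory_node(const void *fdt, int node,
                                        const char *name, void *data)
{
    int rc = device_tree_get_meminfo(fdt, node, &bootinfo.reserved_mem);

    return ( rc == -ENOENT ) ? 0 : rc;
}
```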


> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> > 
> > ---
> > 
> > Not done: create an e820-like structure on ARM.
> > 
> > Changes in v2:
> > - call process_memory_node from process_reserved_memory_node to avoid
> >    duplication
> > ---
> >   xen/arch/arm/bootfdt.c      | 30 ++++++++++++++++++++++--------
> >   xen/include/asm-arm/setup.h |  1 +
> >   2 files changed, 23 insertions(+), 8 deletions(-)
> > 
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> > index b6600ab..9355a6e 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -135,6 +135,8 @@ static int __init process_memory_node(const void *fdt,
> > int node,
> >       const __be32 *cell;
> >       paddr_t start, size;
> >       u32 reg_cells = address_cells + size_cells;
> > +    struct meminfo *mem;
> > +    bool reserved = (bool)data;
> >         if ( address_cells < 1 || size_cells < 1 )
> >       {
> > @@ -143,29 +145,39 @@ static int __init process_memory_node(const void *fdt,
> > int node,
> >           return 0;
> >       }
> >   +    if ( reserved )
> > +        mem = &bootinfo.reserved_mem;
> > +    else
> > +        mem = &bootinfo.mem;
> > +
> >       prop = fdt_get_property(fdt, node, "reg", NULL);
> >       if ( !prop )
> > -    {
> > -        printk("fdt: node `%s': missing `reg' property\n", name);
> >           return 0;
> > -    }
> >         cell = (const __be32 *)prop->data;
> >       banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
> >   -    for ( i = 0; i < banks && bootinfo.mem.nr_banks < NR_MEM_BANKS; i++ )
> > +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
> 
> As I pointed out on v1, this is pretty fragile. While ignoring a /memory bank
> is fine if we have no more space, for /reserved-memory this may mean handing
> those regions to the Xen allocator, with the consequences we all know.

Yeah, we don't want that.


> If you split the function properly, then you will be able to treat
> reserved-regions and memory differently.

I did so, but without splitting the functions.

* Re: [Xen-devel] [PATCH v2 0/10] iomem memory policy
  2019-05-16 16:52 ` [PATCH v2 0/10] iomem memory policy Oleksandr
  2019-05-16 16:52   ` [Xen-devel] " Oleksandr
@ 2019-06-21 23:48   ` Stefano Stabellini
  1 sibling, 0 replies; 86+ messages in thread
From: Stefano Stabellini @ 2019-06-21 23:48 UTC (permalink / raw)
  To: Oleksandr
  Cc: Stefano Stabellini, wei.liu2, andrew.cooper3, julien.grall,
	JBeulich, ian.jackson, xen-devel

Hi Oleksandr,

Thanks for testing! Give a look at the latest version (v3). I don't
think this error will happen there.

Cheers,

Stefano

On Thu, 16 May 2019, Oleksandr wrote:
> 
> On 01.05.19 00:02, Stefano Stabellini wrote:
> > Hi all,
> 
> Hi, Stefano
> 
> 
> > 
> > This series introduces a memory policy parameter for the iomem option,
> > so that we can map an iomem region into a guest as cacheable memory.
> > 
> > Then, this series fixes the way Xen handles reserved memory regions on
> > ARM: they should be mapped as normal memory, instead today they are
> > treated as device memory.
> > 
> > Cheers,
> > 
> > Stefano
> > 
> > 
> > 
> > The following changes since commit be3d5b30331d87e177744dbe23138b9ebcdc86f1:
> > 
> >    x86/msr: Fix fallout from mostly c/s 832c180 (2019-04-15 17:51:30 +0100)
> > 
> > are available in the git repository at:
> > 
> >    http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git iomem_cache-v2
> > 
> > for you to fetch changes up to 4979f8e2f1120b2c394be815b071c017e287cf33:
> > 
> >    xen/arm: add reserved-memory regions to the dom0 memory node (2019-04-30 13:56:40 -0700)
> > 
> > ----------------------------------------------------------------
> > Stefano Stabellini (10):
> >        xen: add a p2mt parameter to map_mmio_regions
> >        xen: rename un/map_mmio_regions to un/map_regions
> >        xen: extend XEN_DOMCTL_memory_mapping to handle memory policy
> >        libxc: introduce xc_domain_mem_map_policy
> >        libxl/xl: add memory policy option to iomem
> >        xen/arm: extend device_tree_for_each_node
> >        xen/arm: make process_memory_node a device_tree_node_func
> >        xen/arm: keep track of reserved-memory regions
> >        xen/arm: map reserved-memory regions as normal memory in dom0
> >        xen/arm: add reserved-memory regions to the dom0 memory node
> 
> Thank you for doing that. Support for reserved-memory in Xen on ARM is quite
> an important feature. We are interested in the possibility of providing
> reserved-memory regions to a DomU. Our system uses a *thin Dom0*, which has no
> H/W IPs assigned that might require reserved-memory, unlike other domains,
> which could. So, I would be happy to test your patch series on R-Car Gen3
> platforms if you plan to extend this support to cover domains other than the
> hwdom. There are a few quite different reserved-memory regions used in the
> Renesas BSP, so I think it would be a good target to test on...
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas-bsp.git/tree/arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts#n37 
> 
> As for the current series, I have only tested the Xen boot. It looks like the
> *real* reserved-memory regions were handled correctly, but a test
> "non-reserved-memory" node was interpreted as "reserved-memory" and was
> taken into account... Please see the details below.
> 
> --------------------
> Host device tree contains the following nodes:
> 
> memory@48000000 {
>     device_type = "memory";
>     /* first 128MB is reserved for secure area. */
>     reg = <0x0 0x48000000 0x0 0x78000000>,
>           <0x5 0x00000000 0x0 0x80000000>,
>           <0x6 0x00000000 0x0 0x80000000>,
>           <0x7 0x00000000 0x0 0x80000000>;
> };
> 
> reserved-memory {
>     #address-cells = <2>;
>     #size-cells = <2>;
>     ranges;
> 
>     /* device specific region for Lossy Decompression */
>     lossy_decompress: linux,lossy_decompress@54000000 {
>         no-map;
>         reg = <0x00000000 0x54000000 0x0 0x03000000>;
>     };
> 
>     /* For Audio DSP */
>     adsp_reserved: linux,adsp@57000000 {
>         compatible = "shared-dma-pool";
>         reusable;
>         reg = <0x00000000 0x57000000 0x0 0x01000000>;
>     };
> 
>     /* global autoconfigured region for contiguous allocations */
>     linux,cma@58000000 {
>         compatible = "shared-dma-pool";
>         reusable;
>         reg = <0x00000000 0x58000000 0x0 0x18000000>;
>         linux,cma-default;
>     };
> 
>     /* device specific region for contiguous allocations */
>     mmp_reserved: linux,multimedia@70000000 {
>         compatible = "shared-dma-pool";
>         reusable;
>         reg = <0x00000000 0x70000000 0x0 0x10000000>;
>     };
> };
> 
> /* test "non-reserved-memory" node */
> sram: sram@47FFF000 {
>     compatible = "mmio-sram";
>     reg = <0x0 0x47FFF000 0x0 0x1000>;
> 
>     #address-cells = <1>;
>     #size-cells = <1>;
>     ranges = <0 0x0 0x47FFF000 0x1000>;
> 
>     scp_shmem: scp_shmem@0 {
>         compatible = "mmio-sram";
>         reg = <0x0 0x200>;
>     };
> };
> 
> --------------------
> 
> I added a print to see which memory regions were inserted:
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 9355a6e..23e68b0 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -162,6 +162,10 @@ static int __init process_memory_node(const void *fdt, int node,
>          device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
>          if ( !size )
>              continue;
> +
> +        dt_dprintk("node %s: insert bank %d: %#"PRIx64"->%#"PRIx64" type: %s\n",
> +                   name, i, start, start + size, reserved ? "reserved" : "normal");
> +
>          mem->bank[mem->nr_banks].start = start;
>          mem->bank[mem->nr_banks].size = size;
>          mem->nr_banks++;
> 
> --------------------
> 
> The Xen log shows that the test "non-reserved-memory" node (scp_shmem@0) is
> processed as "reserved-memory":
> 
> (XEN) Checking for initrd in /chosen
> (XEN) Initrd 0000000076000040-0000000077c87e47
> (XEN) node memory@48000000: insert bank 0: 0x48000000->0xc0000000 type: normal
> (XEN) node memory@48000000: insert bank 1: 0x500000000->0x580000000 type: normal
> (XEN) node memory@48000000: insert bank 2: 0x600000000->0x680000000 type: normal
> (XEN) node memory@48000000: insert bank 3: 0x700000000->0x780000000 type: normal
> (XEN) node linux,lossy_decompress@54000000: insert bank 0: 0x54000000->0x57000000 type: reserved
> (XEN) node linux,adsp@57000000: insert bank 0: 0x57000000->0x58000000 type: reserved
> (XEN) node linux,cma@58000000: insert bank 0: 0x58000000->0x70000000 type: reserved
> (XEN) node linux,multimedia@70000000: insert bank 0: 0x70000000->0x80000000 type: reserved
> (XEN) node scp_shmem@0: insert bank 0: 0->0x200 type: reserved   <----------- test "non-reserved-memory" node
> (XEN) RAM: 0000000048000000 - 00000000bfffffff
> (XEN) RAM: 0000000500000000 - 000000057fffffff
> (XEN) RAM: 0000000600000000 - 000000067fffffff
> (XEN) RAM: 0000000700000000 - 000000077fffffff
> (XEN)
> (XEN) MODULE[0]: 0000000048000000 - 0000000048014080 Device Tree
> (XEN) MODULE[1]: 0000000076000040 - 0000000077c87e47 Ramdisk
> (XEN) MODULE[2]: 000000007a000000 - 000000007c000000 Kernel
> (XEN) MODULE[3]: 000000007c000000 - 000000007c010000 XSM
> (XEN)  RESVD[0]: 0000000048000000 - 0000000048014000
> (XEN)  RESVD[1]: 0000000076000040 - 0000000077c87e47
> 
> ...
> 
> (XEN) handle /memory@48000000
> (XEN)   Skip it (matched)
> (XEN) handle /reserved-memory
> (XEN) dt_irq_number: dev=/reserved-memory
> (XEN) /reserved-memory passthrough = 1 nirq = 0 naddr = 0
> (XEN) handle /reserved-memory/linux,lossy_decompress@54000000
> (XEN) dt_irq_number: dev=/reserved-memory/linux,lossy_decompress@54000000
> (XEN) /reserved-memory/linux,lossy_decompress@54000000 passthrough = 1 nirq = 0 naddr = 1
> (XEN) DT: ** translation for device /reserved-memory/linux,lossy_decompress@54000000 **
> (XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
> (XEN) DT: translating address: 00000000 54000000
> (XEN) DT: parent bus is default (na=2, ns=2) on /
> (XEN) DT: empty ranges; 1:1 translation
> (XEN) DT: parent translation for: 00000000 00000000
> (XEN) DT: with offset: 54000000
> (XEN) DT: one level translation: 00000000 54000000
> (XEN) DT: reached root node
> (XEN)   - MMIO: 0054000000 - 0057000000 P2MType=5
> (XEN) handle /reserved-memory/linux,adsp@57000000
> (XEN) dt_irq_number: dev=/reserved-memory/linux,adsp@57000000
> (XEN) /reserved-memory/linux,adsp@57000000 passthrough = 1 nirq = 0 naddr = 1
> (XEN) DT: ** translation for device /reserved-memory/linux,adsp@57000000 **
> (XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
> (XEN) DT: translating address: 00000000 57000000
> (XEN) DT: parent bus is default (na=2, ns=2) on /
> (XEN) DT: empty ranges; 1:1 translation
> (XEN) DT: parent translation for: 00000000 00000000
> (XEN) DT: with offset: 57000000
> (XEN) DT: one level translation: 00000000 57000000
> (XEN) DT: reached root node
> (XEN)   - MMIO: 0057000000 - 0058000000 P2MType=5
> (XEN) handle /reserved-memory/linux,cma@58000000
> (XEN) dt_irq_number: dev=/reserved-memory/linux,cma@58000000
> (XEN) /reserved-memory/linux,cma@58000000 passthrough = 1 nirq = 0 naddr = 1
> (XEN) DT: ** translation for device /reserved-memory/linux,cma@58000000 **
> (XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
> (XEN) DT: translating address: 00000000 58000000
> (XEN) DT: parent bus is default (na=2, ns=2) on /
> (XEN) DT: empty ranges; 1:1 translation
> (XEN) DT: parent translation for: 00000000 00000000
> (XEN) DT: with offset: 58000000
> (XEN) DT: one level translation: 00000000 58000000
> (XEN) DT: reached root node
> (XEN)   - MMIO: 0058000000 - 0070000000 P2MType=5
> (XEN) handle /reserved-memory/linux,multimedia@70000000
> (XEN) dt_irq_number: dev=/reserved-memory/linux,multimedia@70000000
> (XEN) /reserved-memory/linux,multimedia@70000000 passthrough = 1 nirq = 0 naddr = 1
> (XEN) DT: ** translation for device /reserved-memory/linux,multimedia@70000000 **
> (XEN) DT: bus is default (na=2, ns=2) on /reserved-memory
> (XEN) DT: translating address: 00000000 70000000
> (XEN) DT: parent bus is default (na=2, ns=2) on /
> (XEN) DT: empty ranges; 1:1 translation
> (XEN) DT: parent translation for: 00000000 00000000
> (XEN) DT: with offset: 70000000
> (XEN) DT: one level translation: 00000000 70000000
> (XEN) DT: reached root node
> (XEN)   - MMIO: 0070000000 - 0080000000 P2MType=5
> 
> ...
> 
> 
> (XEN) Create memory node (reg size 4, nr cells 24)
> (XEN)   Bank 0: 0xb0000000->0xc0000000   <----------- Dom0 memory which is 256MB total
> (XEN)   Bank 0: 0x54000000->0x57000000   <----------- linux,lossy_decompress@54000000
> (XEN)   Bank 1: 0x57000000->0x58000000   <----------- linux,adsp@57000000
> (XEN)   Bank 2: 0x58000000->0x70000000   <----------- linux,cma@58000000
> (XEN)   Bank 3: 0x70000000->0x80000000   <----------- linux,multimedia@70000000
> (XEN)   Bank 4: 0->0x200   <----------- test "non-reserved-memory" node
> (XEN) Loading zImage from 000000007a000000 to 00000000b0080000-00000000b2080000
> (XEN) Loading dom0 initrd from 0000000076000040 to 0x00000000b8200000-0x00000000b9e87e07
> (XEN) Loading dom0 DTB to 0x00000000b8000000-0x00000000b8011b7f
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> 
> ...
> 
> 
> -- 
> Regards,
> 
> Oleksandr Tyshchenko
> 


Thread overview: 86+ messages
2019-04-30 21:02 [PATCH v2 0/10] iomem memory policy Stefano Stabellini
2019-04-30 21:02 ` [Xen-devel] " Stefano Stabellini
2019-04-30 21:02 ` [PATCH v2 01/10] xen: add a p2mt parameter to map_mmio_regions Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-02 14:59   ` Jan Beulich
2019-05-02 14:59     ` [Xen-devel] " Jan Beulich
2019-05-02 18:49     ` Stefano Stabellini
2019-05-02 18:49       ` [Xen-devel] " Stefano Stabellini
2019-05-15 13:39   ` Oleksandr
2019-05-15 13:39     ` [Xen-devel] " Oleksandr
2019-04-30 21:02 ` [PATCH v2 02/10] xen: rename un/map_mmio_regions to un/map_regions Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-01  9:22   ` Julien Grall
2019-05-01  9:22     ` [Xen-devel] " Julien Grall
2019-06-17 21:24     ` Stefano Stabellini
2019-06-18 11:05       ` Julien Grall
2019-06-18 20:19         ` Stefano Stabellini
2019-05-02 15:03   ` Jan Beulich
2019-05-02 15:03     ` [Xen-devel] " Jan Beulich
2019-05-02 18:55     ` Stefano Stabellini
2019-05-02 18:55       ` [Xen-devel] " Stefano Stabellini
2019-04-30 21:02 ` [PATCH v2 03/10] xen: extend XEN_DOMCTL_memory_mapping to handle memory policy Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-02 15:12   ` Jan Beulich
2019-05-02 15:12     ` [Xen-devel] " Jan Beulich
2019-06-17 21:28     ` Stefano Stabellini
2019-06-18  8:59       ` Jan Beulich
2019-06-18 20:32         ` Stefano Stabellini
2019-06-18 23:15           ` Stefano Stabellini
2019-06-19  6:53             ` Jan Beulich
2019-05-07 16:41   ` Julien Grall
2019-05-07 16:41     ` [Xen-devel] " Julien Grall
2019-06-17 22:43     ` Stefano Stabellini
2019-06-18 11:13       ` Julien Grall
2019-05-15 14:40   ` Oleksandr
2019-05-15 14:40     ` [Xen-devel] " Oleksandr
2019-04-30 21:02 ` [PATCH v2 04/10] libxc: introduce xc_domain_mem_map_policy Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-04-30 21:02 ` [PATCH v2 05/10] libxl/xl: add memory policy option to iomem Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-01  9:42   ` Julien Grall
2019-05-01  9:42     ` [Xen-devel] " Julien Grall
2019-06-17 22:32     ` Stefano Stabellini
2019-06-18 11:09       ` Julien Grall
2019-06-18 11:15   ` Julien Grall
2019-06-18 22:07     ` Stefano Stabellini
2019-06-18 22:20       ` Julien Grall
2019-06-18 22:46         ` Stefano Stabellini
2019-04-30 21:02 ` [PATCH v2 06/10] xen/arm: extend device_tree_for_each_node Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-07 17:12   ` Julien Grall
2019-05-07 17:12     ` [Xen-devel] " Julien Grall
2019-04-30 21:02 ` [PATCH v2 07/10] xen/arm: make process_memory_node a device_tree_node_func Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-01  9:47   ` Julien Grall
2019-05-01  9:47     ` [Xen-devel] " Julien Grall
2019-04-30 21:02 ` [PATCH v2 08/10] xen/arm: keep track of reserved-memory regions Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-01 10:03   ` Julien Grall
2019-05-01 10:03     ` [Xen-devel] " Julien Grall
2019-06-21 23:47     ` Stefano Stabellini
2019-05-07 17:21   ` Julien Grall
2019-05-07 17:21     ` [Xen-devel] " Julien Grall
2019-04-30 21:02 ` [PATCH v2 09/10] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-07 19:52   ` Julien Grall
2019-05-07 19:52     ` [Xen-devel] " Julien Grall
2019-04-30 21:02 ` [PATCH v2 10/10] xen/arm: add reserved-memory regions to the dom0 memory node Stefano Stabellini
2019-04-30 21:02   ` [Xen-devel] " Stefano Stabellini
2019-05-07 20:15   ` Julien Grall
2019-05-07 20:15     ` [Xen-devel] " Julien Grall
2019-05-10 20:51     ` Stefano Stabellini
2019-05-10 20:51       ` [Xen-devel] " Stefano Stabellini
2019-05-10 21:43       ` Julien Grall
2019-05-10 21:43         ` [Xen-devel] " Julien Grall
2019-05-11 12:40         ` Julien Grall
2019-05-11 12:40           ` [Xen-devel] " Julien Grall
2019-05-20 21:26           ` Stefano Stabellini
2019-05-20 21:26             ` [Xen-devel] " Stefano Stabellini
2019-05-20 22:38             ` Julien Grall
2019-05-20 22:38               ` [Xen-devel] " Julien Grall
2019-06-05 16:30               ` Julien Grall
2019-06-21 23:47                 ` Stefano Stabellini
2019-05-16 16:52 ` [PATCH v2 0/10] iomem memory policy Oleksandr
2019-05-16 16:52   ` [Xen-devel] " Oleksandr
2019-06-21 23:48   ` Stefano Stabellini
