* [PATCH v2 0/5] DOMCTL-based guest magic regions allocation for dom0less
@ 2024-03-08  1:54 Henry Wang
  2024-03-08  1:54 ` [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency Henry Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-08  1:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Michal Orzel, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Jan Beulich, Wei Liu, Shawn Anastasio, Alistair Francis,
	Bob Eshleman, Connor Davis, Roger Pau Monné,
	Anthony PERARD, Juergen Gross

An error message can be seen from the init-dom0less application on
1:1 direct-mapped domains:
```
Allocating magic pages
memory.c:238:d0v0 mfn 0x39000 doesn't belong to d1
Error on alloc magic pages
```

This is because populate_physmap() automatically assumes gfn == mfn
for direct-mapped domains. This cannot be true for the magic pages
that are allocated later for Dom0less DomUs from the init-dom0less
helper application executed in Dom0. For domains using statically
allocated memory but not 1:1 direct-mapped, a similar error, "failed
to retrieve a reserved page", can be seen as the reserved memory list
is empty at that time.
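
For context, this is the relevant branch in populate_physmap()
(simplified sketch; see xen/common/memory.c for the exact code):
```
/* Simplified: direct-mapped path of populate_physmap(). */
if ( is_domain_direct_mapped(d) )
{
    mfn_t mfn = _mfn(gpfn);                /* gfn == mfn assumed */
    struct page_info *page = mfn_to_page(mfn);

    if ( !get_page(page, d) )
    {
        /* The "mfn ... doesn't belong to d1" path: the MFN equal to
           the magic GFN is not owned by the DomU. */
        goto out;
    }
    put_page(page);
}
```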

This series tries to fix this issue using a DOMCTL-based approach,
because for 1:1 direct-mapped domUs, we need to avoid the RAM regions
and inform the toolstack about the region found by the hypervisor for
mapping the magic pages. The first 2 patches are simple clean-ups.
Patch 3 introduces a new DOMCTL to get the guest memory map, currently
only used for the magic page regions. Patch 4 uses the same approach
as finding the extended regions to find the guest magic page regions
for direct-mapped DomUs. Patch 5 makes the init-dom0less application
consume the DOMCTL to avoid hardcoding the guest magic base address.

Henry Wang (5):
  xen/arm: Rename assign_static_memory_11() for consistency
  xen/domain.h: Centrialize is_domain_direct_mapped()
  xen/domctl, tools: Introduce a new domctl to get guest memory map
  xen/arm: Find unallocated spaces for magic pages of direct-mapped domU
  xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map

 tools/helpers/init-dom0less.c            | 22 +++++++++---
 tools/include/xenctrl.h                  |  4 +++
 tools/libs/ctrl/xc_domain.c              | 32 +++++++++++++++++
 xen/arch/arm/dom0less-build.c            | 45 +++++++++++++++++++++++-
 xen/arch/arm/domain.c                    |  6 ++++
 xen/arch/arm/domain_build.c              | 30 ++++++++++------
 xen/arch/arm/domctl.c                    | 19 +++++++++-
 xen/arch/arm/include/asm/domain.h        | 10 ++++--
 xen/arch/arm/include/asm/domain_build.h  |  2 ++
 xen/arch/arm/include/asm/static-memory.h |  8 ++---
 xen/arch/arm/static-memory.c             |  5 +--
 xen/arch/ppc/include/asm/domain.h        |  2 --
 xen/arch/riscv/include/asm/domain.h      |  2 --
 xen/arch/x86/include/asm/domain.h        |  1 -
 xen/common/memory.c                      | 10 ++++--
 xen/include/public/arch-arm.h            |  4 +++
 xen/include/public/domctl.h              | 21 +++++++++++
 xen/include/public/memory.h              |  5 +++
 xen/include/xen/domain.h                 |  3 ++
 xen/include/xen/mm.h                     |  2 ++
 20 files changed, 201 insertions(+), 32 deletions(-)

-- 
2.34.1




* [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency
  2024-03-08  1:54 [PATCH v2 0/5] DOMCTL-based guest magic regions allocation for dom0less Henry Wang
@ 2024-03-08  1:54 ` Henry Wang
  2024-03-08  8:18   ` Michal Orzel
  2024-03-08  1:54 ` [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped() Henry Wang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-03-08  1:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Michal Orzel, Volodymyr Babchuk

Currently on Arm there are 4 functions to allocate memory as domain
RAM at boot time for different types of domains:
(1) allocate_memory(): To allocate memory for Dom0less DomUs that
    do not use static memory.
(2) allocate_static_memory(): To allocate memory for Dom0less DomUs
    that use static memory.
(3) allocate_memory_11(): To allocate memory for Dom0.
(4) assign_static_memory_11(): To allocate memory for Dom0less DomUs
    that use static memory and are direct-mapped.

To keep consistency between the names and the in-code comment on top
of the functions, rename assign_static_memory_11() to
allocate_static_memory_11(). No functional change intended.

Signed-off-by: Henry Wang <xin.wang2@amd.com>
---
v2:
- New patch
---
 xen/arch/arm/dom0less-build.c            | 2 +-
 xen/arch/arm/include/asm/static-memory.h | 8 ++++----
 xen/arch/arm/static-memory.c             | 5 +++--
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index fb63ec6fd1..1e1c8d83ae 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -806,7 +806,7 @@ static int __init construct_domU(struct domain *d,
     else if ( !is_domain_direct_mapped(d) )
         allocate_static_memory(d, &kinfo, node);
     else
-        assign_static_memory_11(d, &kinfo, node);
+        allocate_static_memory_11(d, &kinfo, node);
 
     rc = process_shm(d, &kinfo, node);
     if ( rc < 0 )
diff --git a/xen/arch/arm/include/asm/static-memory.h b/xen/arch/arm/include/asm/static-memory.h
index 3e3efd70c3..667e6d3804 100644
--- a/xen/arch/arm/include/asm/static-memory.h
+++ b/xen/arch/arm/include/asm/static-memory.h
@@ -9,7 +9,7 @@
 
 void allocate_static_memory(struct domain *d, struct kernel_info *kinfo,
                             const struct dt_device_node *node);
-void assign_static_memory_11(struct domain *d, struct kernel_info *kinfo,
+void allocate_static_memory_11(struct domain *d, struct kernel_info *kinfo,
                              const struct dt_device_node *node);
 void init_staticmem_pages(void);
 
@@ -22,9 +22,9 @@ static inline void allocate_static_memory(struct domain *d,
     ASSERT_UNREACHABLE();
 }
 
-static inline void assign_static_memory_11(struct domain *d,
-                                           struct kernel_info *kinfo,
-                                           const struct dt_device_node *node)
+static inline void allocate_static_memory_11(struct domain *d,
+                                             struct kernel_info *kinfo,
+                                             const struct dt_device_node *node)
 {
     ASSERT_UNREACHABLE();
 }
diff --git a/xen/arch/arm/static-memory.c b/xen/arch/arm/static-memory.c
index cffbab7241..20333a7f94 100644
--- a/xen/arch/arm/static-memory.c
+++ b/xen/arch/arm/static-memory.c
@@ -187,8 +187,9 @@ void __init allocate_static_memory(struct domain *d, struct kernel_info *kinfo,
  * The static memory will be directly mapped in the guest(Guest Physical
  * Address == Physical Address).
  */
-void __init assign_static_memory_11(struct domain *d, struct kernel_info *kinfo,
-                                    const struct dt_device_node *node)
+void __init allocate_static_memory_11(struct domain *d,
+                                      struct kernel_info *kinfo,
+                                      const struct dt_device_node *node)
 {
     u32 addr_cells, size_cells, reg_cells;
     unsigned int nr_banks, bank = 0;
-- 
2.34.1




* [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped()
  2024-03-08  1:54 [PATCH v2 0/5] DOMCTL-based guest magic regions allocation for dom0less Henry Wang
  2024-03-08  1:54 ` [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency Henry Wang
@ 2024-03-08  1:54 ` Henry Wang
  2024-03-08  8:59   ` Michal Orzel
  2024-03-11 18:02   ` Shawn Anastasio
  2024-03-08  1:54 ` [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map Henry Wang
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-08  1:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Michal Orzel, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
	Jan Beulich, Wei Liu, Shawn Anastasio, Alistair Francis,
	Bob Eshleman, Connor Davis, Roger Pau Monné

Currently a direct-mapped domain is only supported by the Arm
architecture, set at domain creation time via the CDF_directmap flag.
There is no need for every non-Arm architecture, i.e. x86, RISC-V and
PPC, to define a stub is_domain_direct_mapped() in its arch header.

Move is_domain_direct_mapped() to a centralized place in xen/domain.h
and define CDF_directmap as 0 for non-Arm architectures.

Signed-off-by: Henry Wang <xin.wang2@amd.com>
---
v2:
- New patch
---
 xen/arch/arm/include/asm/domain.h   | 2 --
 xen/arch/ppc/include/asm/domain.h   | 2 --
 xen/arch/riscv/include/asm/domain.h | 2 --
 xen/arch/x86/include/asm/domain.h   | 1 -
 xen/include/xen/domain.h            | 3 +++
 5 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 8218afb862..f1d72c6e48 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -29,8 +29,6 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
-#define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
-
 /*
  * Is the domain using the host memory layout?
  *
diff --git a/xen/arch/ppc/include/asm/domain.h b/xen/arch/ppc/include/asm/domain.h
index 573276d0a8..3a447272c6 100644
--- a/xen/arch/ppc/include/asm/domain.h
+++ b/xen/arch/ppc/include/asm/domain.h
@@ -10,8 +10,6 @@ struct hvm_domain
     uint64_t              params[HVM_NR_PARAMS];
 };
 
-#define is_domain_direct_mapped(d) ((void)(d), 0)
-
 /* TODO: Implement */
 #define guest_mode(r) ({ (void)(r); BUG_ON("unimplemented"); 0; })
 
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index 0f5dc2be40..027bfa8a93 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -10,8 +10,6 @@ struct hvm_domain
     uint64_t              params[HVM_NR_PARAMS];
 };
 
-#define is_domain_direct_mapped(d) ((void)(d), 0)
-
 struct arch_vcpu_io {
 };
 
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 622d22bef2..4bd78e3a6d 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -22,7 +22,6 @@
 #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
         ((d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector || \
          (d)->vcpu[0]->arch.hvm.evtchn_upcall_vector))
-#define is_domain_direct_mapped(d) ((void)(d), 0)
 
 #define VCPU_TRAP_NONE         0
 #define VCPU_TRAP_NMI          1
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index b1a4aa6f38..3de5635291 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -41,6 +41,8 @@ void arch_get_domain_info(const struct domain *d,
 #ifdef CONFIG_ARM
 /* Should domain memory be directly mapped? */
 #define CDF_directmap            (1U << 1)
+#else
+#define CDF_directmap            0
 #endif
 /* Is domain memory on static allocation? */
 #ifdef CONFIG_STATIC_MEMORY
@@ -49,6 +51,7 @@ void arch_get_domain_info(const struct domain *d,
 #define CDF_staticmem            0
 #endif
 
+#define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
 
 /*
-- 
2.34.1




* [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map
  2024-03-08  1:54 [PATCH v2 0/5] DOMCTL-based guest magic regions allocation for dom0less Henry Wang
  2024-03-08  1:54 ` [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency Henry Wang
  2024-03-08  1:54 ` [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped() Henry Wang
@ 2024-03-08  1:54 ` Henry Wang
  2024-03-11  9:10   ` Michal Orzel
  2024-03-11 16:58   ` Jan Beulich
  2024-03-08  1:54 ` [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU Henry Wang
  2024-03-08  1:54 ` [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map Henry Wang
  4 siblings, 2 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-08  1:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry Wang, Wei Liu, Anthony PERARD, Juergen Gross,
	Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall,
	Stefano Stabellini, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Alec Kwapis

There are some use cases where the toolstack needs to know the guest
memory map. For example, the toolstack helper application
"init-dom0less" needs to know the guest magic page regions for 1:1
direct-mapped dom0less DomUs to allocate magic pages.

To address such needs, add the XEN_DOMCTL_get_mem_map hypercall and
related data structures to query the hypervisor for the guest memory
map. The guest memory map is recorded in the domain structure;
currently only the guest magic page region is recorded in it. The
guest magic page region is initialized at domain creation time
according to the layout in the public header, and it is updated for
1:1 dom0less DomUs (see the following commit) to avoid conflicts
with RAM.

Take the opportunity to drop an unnecessary empty line to keep the
coding style consistent in the file.

Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
Signed-off-by: Henry Wang <xin.wang2@amd.com>
---
v2:
- New patch
RFC: I think the newly introduced "struct xen_domctl_mem_map" largely
duplicates "struct xen_memory_map"; any comments on reusing "struct
xen_memory_map" for simplicity?
---
 tools/include/xenctrl.h           |  4 ++++
 tools/libs/ctrl/xc_domain.c       | 32 +++++++++++++++++++++++++++++++
 xen/arch/arm/domain.c             |  6 ++++++
 xen/arch/arm/domctl.c             | 19 +++++++++++++++++-
 xen/arch/arm/include/asm/domain.h |  8 ++++++++
 xen/include/public/arch-arm.h     |  4 ++++
 xen/include/public/domctl.h       | 21 ++++++++++++++++++++
 7 files changed, 93 insertions(+), 1 deletion(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 2ef8b4e054..b25e9772a2 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1195,6 +1195,10 @@ int xc_domain_setmaxmem(xc_interface *xch,
                         uint32_t domid,
                         uint64_t max_memkb);
 
+int xc_get_domain_mem_map(xc_interface *xch, uint32_t domid,
+                          struct xen_mem_region mem_regions[],
+                          uint32_t *nr_regions);
+
 int xc_domain_set_memmap_limit(xc_interface *xch,
                                uint32_t domid,
                                unsigned long map_limitkb);
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index f2d9d14b4d..64b46bdfb4 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -697,6 +697,38 @@ int xc_domain_setmaxmem(xc_interface *xch,
     return do_domctl(xch, &domctl);
 }
 
+int xc_get_domain_mem_map(xc_interface *xch, uint32_t domid,
+                          struct xen_mem_region mem_regions[],
+                          uint32_t *nr_regions)
+{
+    int rc;
+    struct xen_domctl domctl = {
+        .cmd         = XEN_DOMCTL_get_mem_map,
+        .domain      = domid,
+        .u.mem_map = {
+            .nr_mem_regions = *nr_regions,
+        },
+    };
+
+    DECLARE_HYPERCALL_BOUNCE(mem_regions,
+                             sizeof(xen_mem_region_t) * (*nr_regions),
+                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( !mem_regions || xc_hypercall_bounce_pre(xch, mem_regions) ||
+         (*nr_regions) < 1 )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.mem_map.buffer, mem_regions);
+
+    rc = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, mem_regions);
+
+    *nr_regions = domctl.u.mem_map.nr_mem_regions;
+
+    return rc;
+}
+
 #if defined(__i386__) || defined(__x86_64__)
 int xc_domain_set_memory_map(xc_interface *xch,
                                uint32_t domid,
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5e7a7f3e7e..54f3601ab0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -696,6 +696,7 @@ int arch_domain_create(struct domain *d,
 {
     unsigned int count = 0;
     int rc;
+    struct mem_map_domain *mem_map = &d->arch.mem_map;
 
     BUILD_BUG_ON(GUEST_MAX_VCPUS < MAX_VIRT_CPUS);
 
@@ -785,6 +786,11 @@ int arch_domain_create(struct domain *d,
     d->arch.sve_vl = config->arch.sve_vl;
 #endif
 
+    mem_map->regions[mem_map->nr_mem_regions].start = GUEST_MAGIC_BASE;
+    mem_map->regions[mem_map->nr_mem_regions].size = GUEST_MAGIC_SIZE;
+    mem_map->regions[mem_map->nr_mem_regions].type = GUEST_MEM_REGION_MAGIC;
+    mem_map->nr_mem_regions++;
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index ad56efb0f5..92024bcaa0 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -148,7 +148,6 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 
         return 0;
     }
-
     case XEN_DOMCTL_vuart_op:
     {
         int rc;
@@ -176,6 +175,24 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 
         return rc;
     }
+    case XEN_DOMCTL_get_mem_map:
+    {
+        int rc;
+        /*
+         * Cap the number of regions to the minimum value between toolstack and
+         * hypervisor to avoid overflowing the buffer.
+         */
+        uint32_t nr_regions = min(d->arch.mem_map.nr_mem_regions,
+                                  domctl->u.mem_map.nr_mem_regions);
+
+        if ( copy_to_guest(domctl->u.mem_map.buffer,
+                           d->arch.mem_map.regions,
+                           nr_regions) ||
+             __copy_to_guest(u_domctl, domctl, 1) )
+            rc = -EFAULT;
+
+        return rc;
+    }
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index f1d72c6e48..a559a9e499 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -10,6 +10,7 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 #include <asm/vpl011.h>
+#include <public/domctl.h>
 #include <public/hvm/params.h>
 
 struct hvm_domain
@@ -59,6 +60,11 @@ struct paging_domain {
     unsigned long p2m_total_pages;
 };
 
+struct mem_map_domain {
+    unsigned int nr_mem_regions;
+    struct xen_mem_region regions[XEN_MAX_MEM_REGIONS];
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -77,6 +83,8 @@ struct arch_domain
 
     struct paging_domain paging;
 
+    struct mem_map_domain mem_map;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index a25e87dbda..a06eaf2dab 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -420,6 +420,10 @@ typedef uint64_t xen_callback_t;
  * should instead use the FDT.
  */
 
+/* Guest memory region types */
+#define GUEST_MEM_REGION_DEFAULT    0
+#define GUEST_MEM_REGION_MAGIC      1
+
 /* Physical Address Space */
 
 /* Virtio MMIO mappings */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index a33f9ec32b..77bf999651 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -946,6 +946,25 @@ struct xen_domctl_paging_mempool {
     uint64_aligned_t size; /* Size in bytes. */
 };
 
+#define XEN_MAX_MEM_REGIONS 1
+
+struct xen_mem_region {
+    uint64_t start;
+    uint64_t size;
+    unsigned int type;
+};
+typedef struct xen_mem_region xen_mem_region_t;
+DEFINE_XEN_GUEST_HANDLE(xen_mem_region_t);
+
+struct xen_domctl_mem_map {
+    /* IN & OUT */
+    uint32_t nr_mem_regions;
+    /* OUT */
+    XEN_GUEST_HANDLE(xen_mem_region_t) buffer;
+};
+typedef struct xen_domctl_mem_map xen_domctl_mem_map_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_map_t);
+
 #if defined(__i386__) || defined(__x86_64__)
 struct xen_domctl_vcpu_msr {
     uint32_t         index;
@@ -1277,6 +1296,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_vmtrace_op                    84
 #define XEN_DOMCTL_get_paging_mempool_size       85
 #define XEN_DOMCTL_set_paging_mempool_size       86
+#define XEN_DOMCTL_get_mem_map                   87
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1339,6 +1359,7 @@ struct xen_domctl {
         struct xen_domctl_vuart_op          vuart_op;
         struct xen_domctl_vmtrace_op        vmtrace_op;
         struct xen_domctl_paging_mempool    paging_mempool;
+        struct xen_domctl_mem_map           mem_map;
         uint8_t                             pad[128];
     } u;
 };
-- 
2.34.1




* [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU
  2024-03-08  1:54 [PATCH v2 0/5] DOMCTL-based guest magic regions allocation for dom0less Henry Wang
                   ` (2 preceding siblings ...)
  2024-03-08  1:54 ` [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map Henry Wang
@ 2024-03-08  1:54 ` Henry Wang
  2024-03-11 13:46   ` Michal Orzel
  2024-03-08  1:54 ` [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map Henry Wang
  4 siblings, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-03-08  1:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Michal Orzel, Volodymyr Babchuk, Alec Kwapis

For 1:1 direct-mapped dom0less DomUs, the magic pages should not clash
with any RAM region. To find a proper region for guest magic pages,
we can reuse the logic of finding domain extended regions.

Extract the logic of finding domain extended regions to a helper
function named find_unused_memory() and use it to find unallocated
spaces for magic pages before make_hypervisor_node(). The resulting
magic page region is added to the reserved memory section of the
bootinfo so that it is carved out from the extended regions.

Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
Signed-off-by: Henry Wang <xin.wang2@amd.com>
---
v2:
- New patch
---
 xen/arch/arm/dom0less-build.c           | 43 +++++++++++++++++++++++++
 xen/arch/arm/domain_build.c             | 30 ++++++++++-------
 xen/arch/arm/include/asm/domain_build.h |  2 ++
 3 files changed, 64 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index 1e1c8d83ae..99447bfb0c 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -682,6 +682,49 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
 
     if ( kinfo->dom0less_feature & DOM0LESS_ENHANCED_NO_XS )
     {
+        if ( is_domain_direct_mapped(d) )
+        {
+            struct meminfo *avail_magic_regions = xzalloc(struct meminfo);
+            struct meminfo *rsrv_mem = &bootinfo.reserved_mem;
+            struct mem_map_domain *mem_map = &d->arch.mem_map;
+            uint64_t magic_region_start = INVALID_PADDR;
+            uint64_t magic_region_size = GUEST_MAGIC_SIZE;
+            unsigned int i;
+
+            if ( !avail_magic_regions )
+                return -ENOMEM;
+
+            ret = find_unused_memory(d, kinfo, avail_magic_regions);
+            if ( ret )
+            {
+                printk(XENLOG_WARNING
+                       "%pd: failed to find a region for domain magic pages\n",
+                      d);
+                goto err;
+            }
+
+            magic_region_start = avail_magic_regions->bank[0].start;
+
+            /*
+             * Register the magic region as reserved mem to make sure this
+             * region will not be counted when allocating extended regions.
+             */
+            rsrv_mem->bank[rsrv_mem->nr_banks].start = magic_region_start;
+            rsrv_mem->bank[rsrv_mem->nr_banks].size = magic_region_size;
+            rsrv_mem->bank[rsrv_mem->nr_banks].type = MEMBANK_DEFAULT;
+            rsrv_mem->nr_banks++;
+
+            /* Update the domain memory map. */
+            for ( i = 0; i < mem_map->nr_mem_regions; i++ )
+            {
+                if ( mem_map->regions[i].type == GUEST_MEM_REGION_MAGIC )
+                {
+                    mem_map->regions[i].start = magic_region_start;
+                    mem_map->regions[i].size = magic_region_size;
+                }
+            }
+        }
+
         ret = make_hypervisor_node(d, kinfo, addrcells, sizecells);
         if ( ret )
             goto err;
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 085d88671e..b36b98ee7d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1110,6 +1110,24 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
     return res;
 }
 
+int __init find_unused_memory(struct domain *d, const struct kernel_info *kinfo,
+                              struct meminfo *mem_region)
+{
+    int res;
+
+    if ( is_domain_direct_mapped(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            res = find_unallocated_memory(kinfo, mem_region);
+        else
+            res = find_memory_holes(kinfo, mem_region);
+    }
+    else
+        res = find_domU_holes(kinfo, mem_region);
+
+    return res;
+}
+
 int __init make_hypervisor_node(struct domain *d,
                                 const struct kernel_info *kinfo,
                                 int addrcells, int sizecells)
@@ -1161,17 +1179,7 @@ int __init make_hypervisor_node(struct domain *d,
         if ( !ext_regions )
             return -ENOMEM;
 
-        if ( is_domain_direct_mapped(d) )
-        {
-            if ( !is_iommu_enabled(d) )
-                res = find_unallocated_memory(kinfo, ext_regions);
-            else
-                res = find_memory_holes(kinfo, ext_regions);
-        }
-        else
-        {
-            res = find_domU_holes(kinfo, ext_regions);
-        }
+        res = find_unused_memory(d, kinfo, ext_regions);
 
         if ( res )
             printk(XENLOG_WARNING "%pd: failed to allocate extended regions\n",
diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h
index da9e6025f3..4458012644 100644
--- a/xen/arch/arm/include/asm/domain_build.h
+++ b/xen/arch/arm/include/asm/domain_build.h
@@ -10,6 +10,8 @@ bool allocate_bank_memory(struct domain *d, struct kernel_info *kinfo,
                           gfn_t sgfn, paddr_t tot_size);
 int construct_domain(struct domain *d, struct kernel_info *kinfo);
 int domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit);
+int find_unused_memory(struct domain *d, const struct kernel_info *kinfo,
+                       struct meminfo *mem_region);
 int make_chosen_node(const struct kernel_info *kinfo);
 int make_cpus_node(const struct domain *d, void *fdt);
 int make_hypervisor_node(struct domain *d, const struct kernel_info *kinfo,
-- 
2.34.1




* [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-08  1:54 [PATCH v2 0/5] DOMCTL-based guest magic regions allocation for dom0less Henry Wang
                   ` (3 preceding siblings ...)
  2024-03-08  1:54 ` [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU Henry Wang
@ 2024-03-08  1:54 ` Henry Wang
  2024-03-11 17:07   ` Jan Beulich
  2024-03-25 15:35   ` Anthony PERARD
  4 siblings, 2 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-08  1:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Henry Wang, Wei Liu, Anthony PERARD, Andrew Cooper,
	George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini,
	Alec Kwapis

Previous commits enable the toolstack to get the domain memory map,
therefore instead of hardcoding the guest magic pages region, use
the XEN_DOMCTL_get_mem_map domctl to get the start address of the
guest magic pages region. Add the (XEN)MEMF_force_heap_alloc memory
flags to force populate_physmap() to allocate pages from the domheap
instead of using 1:1 or statically allocated pages to map the magic
pages.

Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
Signed-off-by: Henry Wang <xin.wang2@amd.com>
---
v2:
- New patch
---
 tools/helpers/init-dom0less.c | 22 ++++++++++++++++++----
 xen/common/memory.c           | 10 ++++++++--
 xen/include/public/memory.h   |  5 +++++
 xen/include/xen/mm.h          |  2 ++
 4 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/tools/helpers/init-dom0less.c b/tools/helpers/init-dom0less.c
index fee93459c4..92c612f6da 100644
--- a/tools/helpers/init-dom0less.c
+++ b/tools/helpers/init-dom0less.c
@@ -23,16 +23,30 @@ static int alloc_xs_page(struct xc_interface_core *xch,
                          libxl_dominfo *info,
                          uint64_t *xenstore_pfn)
 {
-    int rc;
-    const xen_pfn_t base = GUEST_MAGIC_BASE >> XC_PAGE_SHIFT;
-    xen_pfn_t p2m = (GUEST_MAGIC_BASE >> XC_PAGE_SHIFT) + XENSTORE_PFN_OFFSET;
+    int rc, i;
+    xen_pfn_t base = ((xen_pfn_t)-1);
+    xen_pfn_t p2m = ((xen_pfn_t)-1);
+    uint32_t nr_regions = XEN_MAX_MEM_REGIONS;
+    struct xen_mem_region mem_regions[XEN_MAX_MEM_REGIONS] = {0};
+
+    rc = xc_get_domain_mem_map(xch, info->domid, mem_regions, &nr_regions);
+
+    for ( i = 0; i < nr_regions; i++ )
+    {
+        if ( mem_regions[i].type == GUEST_MEM_REGION_MAGIC )
+        {
+            base = mem_regions[i].start >> XC_PAGE_SHIFT;
+            p2m = (mem_regions[i].start >> XC_PAGE_SHIFT) + XENSTORE_PFN_OFFSET;
+        }
+    }
 
     rc = xc_domain_setmaxmem(xch, info->domid,
                              info->max_memkb + (XC_PAGE_SIZE/1024));
     if (rc < 0)
         return rc;
 
-    rc = xc_domain_populate_physmap_exact(xch, info->domid, 1, 0, 0, &p2m);
+    rc = xc_domain_populate_physmap_exact(xch, info->domid, 1, 0,
+                                          XENMEMF_force_heap_alloc, &p2m);
     if (rc < 0)
         return rc;
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b3b05c2ec0..18b6c16aed 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -219,7 +219,8 @@ static void populate_physmap(struct memop_args *a)
         }
         else
         {
-            if ( is_domain_direct_mapped(d) )
+            if ( is_domain_direct_mapped(d) &&
+                 !(a->memflags & MEMF_force_heap_alloc) )
             {
                 mfn = _mfn(gpfn);
 
@@ -246,7 +247,8 @@ static void populate_physmap(struct memop_args *a)
 
                 mfn = _mfn(gpfn);
             }
-            else if ( is_domain_using_staticmem(d) )
+            else if ( is_domain_using_staticmem(d) &&
+                      !(a->memflags & MEMF_force_heap_alloc) )
             {
                 /*
                  * No easy way to guarantee the retrieved pages are contiguous,
@@ -1433,6 +1435,10 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
              && (reservation.mem_flags & XENMEMF_populate_on_demand) )
             args.memflags |= MEMF_populate_on_demand;
 
+        if ( op == XENMEM_populate_physmap
+             && (reservation.mem_flags & XENMEMF_force_heap_alloc) )
+            args.memflags |= MEMF_force_heap_alloc;
+
         if ( xsm_memory_adjust_reservation(XSM_TARGET, curr_d, d) )
         {
             rcu_unlock_domain(d);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 5e545ae9a4..2a1bfa5bfa 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -41,6 +41,11 @@
 #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
 /* Flag to indicate the node specified is virtual node */
 #define XENMEMF_vnode  (1<<18)
+/*
+ * Flag to force populate physmap to use pages from domheap instead of 1:1
+ * or static allocation.
+ */
+#define XENMEMF_force_heap_alloc  (1<<19)
 #endif
 
 struct xen_memory_reservation {
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index bb29b352ec..a4554f730d 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -205,6 +205,8 @@ struct npfec {
 #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
 #define _MEMF_no_scrub    8
 #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
+#define _MEMF_force_heap_alloc 9
+#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
 #define _MEMF_node        16
 #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
 #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
-- 
2.34.1




* Re: [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency
  2024-03-08  1:54 ` [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency Henry Wang
@ 2024-03-08  8:18   ` Michal Orzel
  2024-03-08  8:22     ` Henry Wang
  0 siblings, 1 reply; 31+ messages in thread
From: Michal Orzel @ 2024-03-08  8:18 UTC (permalink / raw)
  To: Henry Wang, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk

Hi Henry,

On 08/03/2024 02:54, Henry Wang wrote:
> Currently on Arm there are 4 functions to allocate memory as domain
> RAM at boot time for different types of domains:
> (1) allocate_memory(): To allocate memory for Dom0less DomUs that
>     do not use static memory.
> (2) allocate_static_memory(): To allocate memory for Dom0less DomUs
>     that use static memory.
> (3) allocate_memory_11(): To allocate memory for Dom0.
> (4) assign_static_memory_11(): To allocate memory for Dom0less DomUs
>     that use static memory and are direct-mapped.
> 
> To keep consistency between the names and the in-code comment on top
> of the functions, rename assign_static_memory_11() to
> allocate_static_memory_11(). No functional change intended.
There was a reason for this naming. The function is called assign_ and not allocate_ because
there is no allocation done inside. The function maps specified host regions to guest regions.
Refer:
https://lore.kernel.org/xen-devel/20220214031956.3726764-6-penny.zheng@arm.com/

~Michal



* Re: [PATCH v2 1/5] xen/arm: Rename assign_static_memory_11() for consistency
  2024-03-08  8:18   ` Michal Orzel
@ 2024-03-08  8:22     ` Henry Wang
  0 siblings, 0 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-08  8:22 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk

Hi Michal,

On 3/8/2024 4:18 PM, Michal Orzel wrote:
> Hi Henry,
>
> On 08/03/2024 02:54, Henry Wang wrote:
>> Currently on Arm there are 4 functions to allocate memory as domain
>> RAM at boot time for different types of domains:
>> (1) allocate_memory(): To allocate memory for Dom0less DomUs that
>>      do not use static memory.
>> (2) allocate_static_memory(): To allocate memory for Dom0less DomUs
>>      that use static memory.
>> (3) allocate_memory_11(): To allocate memory for Dom0.
>> (4) assign_static_memory_11(): To allocate memory for Dom0less DomUs
>>      that use static memory and are direct-mapped.
>>
>> To keep consistency between the names and the in-code comment on top
>> of the functions, rename assign_static_memory_11() to
>> allocate_static_memory_11(). No functional change intended.
> There was a reason for this naming. The function is called assign_ and not allocate_ because
> there is no allocation done inside. The function maps specified host regions to guest regions.
> Refer:
> https://lore.kernel.org/xen-devel/20220214031956.3726764-6-penny.zheng@arm.com/

Emmm, I indeed had the same idea and thought there should be a reason
for the naming, but at the same time I was still misled by the in-code
comment on top of the function saying "Allocate static memory as RAM
for one specific domain d." :/

I guess I will either simply drop this patch or correct the above
in-code comment (which I am not sure is worthwhile as an independent
patch). Anyway, thanks for the info!

Kind regards,
Henry

> ~Michal




* Re: [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped()
  2024-03-08  1:54 ` [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped() Henry Wang
@ 2024-03-08  8:59   ` Michal Orzel
  2024-03-08  9:06     ` Henry Wang
  2024-03-11 18:02   ` Shawn Anastasio
  1 sibling, 1 reply; 31+ messages in thread
From: Michal Orzel @ 2024-03-08  8:59 UTC (permalink / raw)
  To: Henry Wang, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Shawn Anastasio, Alistair Francis, Bob Eshleman,
	Connor Davis, Roger Pau Monné

Hi Henry,

On 08/03/2024 02:54, Henry Wang wrote:
> Currently a direct-mapped domain is only supported by the Arm
> architecture, set at domain creation time via the CDF_directmap flag.
> There is no need for every non-Arm architecture, i.e. x86, RISC-V and
> PPC, to define a stub is_domain_direct_mapped() in its arch header.
> 
> Move is_domain_direct_mapped() to a centralized place in xen/domain.h
> and define CDF_directmap as 0 for non-Arm architectures.
> 
> Signed-off-by: Henry Wang <xin.wang2@amd.com>
Shouldn't you add Suggested-by: Jan?

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Another alternative would be to let the arch header define it if needed and use a centralized stub in xen/domain.h:

#ifndef is_domain_direct_mapped
#define is_domain_direct_mapped(d) ((void)(d), 0)
#endif

I'm not sure which solution is better.

~Michal



* Re: [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped()
  2024-03-08  8:59   ` Michal Orzel
@ 2024-03-08  9:06     ` Henry Wang
  2024-03-08  9:41       ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-03-08  9:06 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Shawn Anastasio, Alistair Francis, Bob Eshleman,
	Connor Davis, Roger Pau Monné

Hi Michal,

On 3/8/2024 4:59 PM, Michal Orzel wrote:
> Hi Henry,
>
> On 08/03/2024 02:54, Henry Wang wrote:
>> Currently a direct-mapped domain is only supported by the Arm
>> architecture, set at domain creation time via the CDF_directmap flag.
>> There is no need for every non-Arm architecture, i.e. x86, RISC-V and
>> PPC, to define a stub is_domain_direct_mapped() in its arch header.
>>
>> Move is_domain_direct_mapped() to a centralized place in xen/domain.h
>> and define CDF_directmap as 0 for non-Arm architectures.
>>
>> Signed-off-by: Henry Wang <xin.wang2@amd.com>
> Shouldn't you add Suggested-by: Jan?

Yeah, indeed I should have added it. I always have a hard time
determining whether I should add "Suggested-by"/"Reported-by" tags,
sorry for missing it this time.

I will add it in the next version.

> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thanks!

> Another alternative would be to let the arch header define it if needed and use a centralized stub in xen/domain.h:
>
> #ifndef is_domain_direct_mapped
> #define is_domain_direct_mapped(d) ((void)(d), 0)
> #endif
>
> I'm not sure which solution is better.

Thanks for the suggestion. I am fine with either way, so let's see
what the others say, and I will update this patch accordingly.

Kind regards,
Henry

> ~Michal




* Re: [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped()
  2024-03-08  9:06     ` Henry Wang
@ 2024-03-08  9:41       ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2024-03-08  9:41 UTC (permalink / raw)
  To: Henry Wang
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Shawn Anastasio, Alistair Francis, Bob Eshleman, Connor Davis,
	Roger Pau Monné,
	xen-devel, Michal Orzel

On 08.03.2024 10:06, Henry Wang wrote:
> On 3/8/2024 4:59 PM, Michal Orzel wrote:
>> Another alternative would be to let the arch header define it if needed and use a centralized stub in xen/domain.h:
>>
>> #ifndef is_domain_direct_mapped
>> #define is_domain_direct_mapped(d) ((void)(d), 0)
>> #endif
>>
>> I'm not sure which solution is better.
> 
> Thanks for the suggestion. I am fine with either way, so let's see
> what the others say, and I will update this patch accordingly.

While I wouldn't strictly mind the addition of an #ifdef, I'd prefer it to
be omitted until such time as an arch really wants to override it.
IOW patch as-is
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map
  2024-03-08  1:54 ` [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map Henry Wang
@ 2024-03-11  9:10   ` Michal Orzel
  2024-03-11  9:46     ` Henry Wang
  2024-03-11 16:58   ` Jan Beulich
  1 sibling, 1 reply; 31+ messages in thread
From: Michal Orzel @ 2024-03-11  9:10 UTC (permalink / raw)
  To: Henry Wang, xen-devel
  Cc: Wei Liu, Anthony PERARD, Juergen Gross, Andrew Cooper,
	George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Alec Kwapis

Hi Henry,

On 08/03/2024 02:54, Henry Wang wrote:
> There are some use cases where the toolstack needs to know the guest
> memory map. For example, the toolstack helper application
> "init-dom0less" needs to know the guest magic page regions for 1:1
> direct-mapped dom0less DomUs to allocate magic pages.
> 
> To address such needs, add the XEN_DOMCTL_get_mem_map hypercall and
> related data structures to query the hypervisor for the guest memory
> map. The guest memory map is recorded in the domain structure;
> currently only the guest magic page region is recorded in it. The
> guest magic page region is initialized at domain creation time
> according to the layout in the public header, and it is updated for
> 1:1 dom0less DomUs (see the following commit) to avoid conflicts
> with RAM.
> 
> Take the opportunity to drop an unnecessary empty line to keep the
> coding style consistent in the file.
> 
> Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
> Signed-off-by: Henry Wang <xin.wang2@amd.com>
> ---
> v2:
> - New patch
> RFC: I think the newly introduced "struct xen_domctl_mem_map" largely
> duplicates "struct xen_memory_map"; any comments on reusing "struct
> xen_memory_map" for simplicity?
> ---
>  tools/include/xenctrl.h           |  4 ++++
>  tools/libs/ctrl/xc_domain.c       | 32 +++++++++++++++++++++++++++++++
>  xen/arch/arm/domain.c             |  6 ++++++
>  xen/arch/arm/domctl.c             | 19 +++++++++++++++++-
>  xen/arch/arm/include/asm/domain.h |  8 ++++++++
>  xen/include/public/arch-arm.h     |  4 ++++
>  xen/include/public/domctl.h       | 21 ++++++++++++++++++++
>  7 files changed, 93 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 2ef8b4e054..b25e9772a2 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1195,6 +1195,10 @@ int xc_domain_setmaxmem(xc_interface *xch,
>                          uint32_t domid,
>                          uint64_t max_memkb);
>  
> +int xc_get_domain_mem_map(xc_interface *xch, uint32_t domid,
> +                          struct xen_mem_region mem_regions[],
> +                          uint32_t *nr_regions);
> +
>  int xc_domain_set_memmap_limit(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long map_limitkb);
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index f2d9d14b4d..64b46bdfb4 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -697,6 +697,38 @@ int xc_domain_setmaxmem(xc_interface *xch,
>      return do_domctl(xch, &domctl);
>  }
>  
> +int xc_get_domain_mem_map(xc_interface *xch, uint32_t domid,
> +                          struct xen_mem_region mem_regions[],
> +                          uint32_t *nr_regions)
> +{
> +    int rc;
> +    struct xen_domctl domctl = {
> +        .cmd         = XEN_DOMCTL_get_mem_map,
> +        .domain      = domid,
> +        .u.mem_map = {
> +            .nr_mem_regions = *nr_regions,
> +        },
> +    };
> +
> +    DECLARE_HYPERCALL_BOUNCE(mem_regions,
> +                             sizeof(xen_mem_region_t) * (*nr_regions),
> +                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
> +
> +    if ( !mem_regions || xc_hypercall_bounce_pre(xch, mem_regions) ||
> +         (*nr_regions) < 1 )
> +        return -1;
> +
> +    set_xen_guest_handle(domctl.u.mem_map.buffer, mem_regions);
> +
> +    rc = do_domctl(xch, &domctl);
> +
> +    xc_hypercall_bounce_post(xch, mem_regions);
> +
> +    *nr_regions = domctl.u.mem_map.nr_mem_regions;
> +
> +    return rc;
> +}
> +
>  #if defined(__i386__) || defined(__x86_64__)
>  int xc_domain_set_memory_map(xc_interface *xch,
>                                 uint32_t domid,
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 5e7a7f3e7e..54f3601ab0 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -696,6 +696,7 @@ int arch_domain_create(struct domain *d,
>  {
>      unsigned int count = 0;
>      int rc;
> +    struct mem_map_domain *mem_map = &d->arch.mem_map;
>  
>      BUILD_BUG_ON(GUEST_MAX_VCPUS < MAX_VIRT_CPUS);
>  
> @@ -785,6 +786,11 @@ int arch_domain_create(struct domain *d,
>      d->arch.sve_vl = config->arch.sve_vl;
>  #endif
>  
> +    mem_map->regions[mem_map->nr_mem_regions].start = GUEST_MAGIC_BASE;
You don't check for exceeding the max number of regions. Is the expectation that nr_mem_regions
is 0 at this stage? Maybe add an ASSERT here.

> +    mem_map->regions[mem_map->nr_mem_regions].size = GUEST_MAGIC_SIZE;
> +    mem_map->regions[mem_map->nr_mem_regions].type = GUEST_MEM_REGION_MAGIC;
> +    mem_map->nr_mem_regions++;
> +
>      return 0;
>  
>  fail:
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index ad56efb0f5..92024bcaa0 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -148,7 +148,6 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  
>          return 0;
>      }
> -
>      case XEN_DOMCTL_vuart_op:
>      {
>          int rc;
> @@ -176,6 +175,24 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  
>          return rc;
>      }
> +    case XEN_DOMCTL_get_mem_map:
> +    {
> +        int rc;
Without initialization, what will be the rc value on success?

> +        /*
> +         * Cap the number of regions to the minimum value between toolstack and
> +         * hypervisor to avoid overflowing the buffer.
> +         */
> +        uint32_t nr_regions = min(d->arch.mem_map.nr_mem_regions,
> +                                  domctl->u.mem_map.nr_mem_regions);
> +
> +        if ( copy_to_guest(domctl->u.mem_map.buffer,
> +                           d->arch.mem_map.regions,
> +                           nr_regions) ||
> +             __copy_to_guest(u_domctl, domctl, 1) )
In domctl.h, you wrote that nr_regions is IN/OUT but you don't seem to write back the actual number
of regions.

> +            rc = -EFAULT;
> +
> +        return rc;
> +    }
>      default:
>          return subarch_do_domctl(domctl, d, u_domctl);
>      }
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index f1d72c6e48..a559a9e499 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -10,6 +10,7 @@
>  #include <asm/gic.h>
>  #include <asm/vgic.h>
>  #include <asm/vpl011.h>
> +#include <public/domctl.h>
>  #include <public/hvm/params.h>
>  
>  struct hvm_domain
> @@ -59,6 +60,11 @@ struct paging_domain {
>      unsigned long p2m_total_pages;
>  };
>  
> +struct mem_map_domain {
> +    unsigned int nr_mem_regions;
> +    struct xen_mem_region regions[XEN_MAX_MEM_REGIONS];
> +};
> +
>  struct arch_domain
>  {
>  #ifdef CONFIG_ARM_64
> @@ -77,6 +83,8 @@ struct arch_domain
>  
>      struct paging_domain paging;
>  
> +    struct mem_map_domain mem_map;
> +
>      struct vmmio vmmio;
>  
>      /* Continuable domain_relinquish_resources(). */
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index a25e87dbda..a06eaf2dab 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -420,6 +420,10 @@ typedef uint64_t xen_callback_t;
>   * should instead use the FDT.
>   */
>  
> +/* Guest memory region types */
> +#define GUEST_MEM_REGION_DEFAULT    0
What's the purpose of this default type? It seems unused.

> +#define GUEST_MEM_REGION_MAGIC      1
> +
>  /* Physical Address Space */
>  
>  /* Virtio MMIO mappings */
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index a33f9ec32b..77bf999651 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -946,6 +946,25 @@ struct xen_domctl_paging_mempool {
>      uint64_aligned_t size; /* Size in bytes. */
>  };
>  
> +#define XEN_MAX_MEM_REGIONS 1
The max number of regions can differ between arches. How are you going to handle it?

~Michal



* Re: [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map
  2024-03-11  9:10   ` Michal Orzel
@ 2024-03-11  9:46     ` Henry Wang
  0 siblings, 0 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-11  9:46 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Wei Liu, Anthony PERARD, Juergen Gross, Andrew Cooper,
	George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Alec Kwapis

Hi Michal,

On 3/11/2024 5:10 PM, Michal Orzel wrote:
> Hi Henry,
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 5e7a7f3e7e..54f3601ab0 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -696,6 +696,7 @@ int arch_domain_create(struct domain *d,
>   {
>       unsigned int count = 0;
>       int rc;
> +    struct mem_map_domain *mem_map = &d->arch.mem_map;
>   
>       BUILD_BUG_ON(GUEST_MAX_VCPUS < MAX_VIRT_CPUS);
>   
> @@ -785,6 +786,11 @@ int arch_domain_create(struct domain *d,
>       d->arch.sve_vl = config->arch.sve_vl;
>   #endif
>   
> +    mem_map->regions[mem_map->nr_mem_regions].start = GUEST_MAGIC_BASE;
> You don't check for exceeding the max number of regions. Is the expectation that nr_mem_regions
> is 0 at this stage? Maybe add an ASSERT here.

Sure, I will add the checking.
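
Something along these lines, I think (sketch; exact form to be
settled in v3):
```
ASSERT(mem_map->nr_mem_regions < XEN_MAX_MEM_REGIONS);
```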

>> +    mem_map->regions[mem_map->nr_mem_regions].size = GUEST_MAGIC_SIZE;
>> +    mem_map->regions[mem_map->nr_mem_regions].type = GUEST_MEM_REGION_MAGIC;
>> +    mem_map->nr_mem_regions++;
>> +
>>       return 0;
>>   
>>   fail:
>> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>> index ad56efb0f5..92024bcaa0 100644
>> --- a/xen/arch/arm/domctl.c
>> +++ b/xen/arch/arm/domctl.c
>> @@ -148,7 +148,6 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>   
>>           return 0;
>>       }
>> -
>>       case XEN_DOMCTL_vuart_op:
>>       {
>>           int rc;
>> @@ -176,6 +175,24 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>   
>>           return rc;
>>       }
>> +    case XEN_DOMCTL_get_mem_map:
>> +    {
>> +        int rc;
> Without initialization, what will be the rc value on success?

Thanks for catching this (and the copy-back issue below). I made a
silly mistake here and didn't catch it, as I also missed checking the
rc on the toolstack side... I will fix both sides.

>> +        /*
>> +         * Cap the number of regions to the minimum value between toolstack and
>> +         * hypervisor to avoid overflowing the buffer.
>> +         */
>> +        uint32_t nr_regions = min(d->arch.mem_map.nr_mem_regions,
>> +                                  domctl->u.mem_map.nr_mem_regions);
>> +
>> +        if ( copy_to_guest(domctl->u.mem_map.buffer,
>> +                           d->arch.mem_map.regions,
>> +                           nr_regions) ||
>> +             __copy_to_guest(u_domctl, domctl, 1) )
> In domctl.h, you wrote that nr_regions is IN/OUT but you don't seem to write back the actual number
> of regions.

Thanks. Added "domctl->u.mem_map.nr_mem_regions = nr_regions;" locally.

>> +/* Guest memory region types */
>> +#define GUEST_MEM_REGION_DEFAULT    0
> What's the purpose of this default type? It seems unused.

I added it because struct arch_domain (or rather struct domain) is
zalloc-ed, so the default type field in struct xen_mem_region is 0.
Without a named default, we might (mistakenly) define a real region
type as 0, which could lead to mistakes.
>> +#define GUEST_MEM_REGION_MAGIC      1
>> +
>>   /* Physical Address Space */
>>   
>>   /* Virtio MMIO mappings */
>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>> index a33f9ec32b..77bf999651 100644
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -946,6 +946,25 @@ struct xen_domctl_paging_mempool {
>>       uint64_aligned_t size; /* Size in bytes. */
>>   };
>>   
>> +#define XEN_MAX_MEM_REGIONS 1
> The max number of regions can differ between arches. How are you going to handle it?

I think we can add
```
#ifndef XEN_MAX_MEM_REGIONS
#define XEN_MAX_MEM_REGIONS 1
#endif
```
here and define the arch-specific XEN_MAX_MEM_REGIONS in
public/arch-*.h. I will fix this in v3.

Kind regards,
Henry

> ~Michal




* Re: [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU
  2024-03-08  1:54 ` [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU Henry Wang
@ 2024-03-11 13:46   ` Michal Orzel
  2024-03-11 13:50     ` Michal Orzel
  2024-03-12  3:25     ` Henry Wang
  0 siblings, 2 replies; 31+ messages in thread
From: Michal Orzel @ 2024-03-11 13:46 UTC (permalink / raw)
  To: Henry Wang, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Alec Kwapis

Hi Henry,

On 08/03/2024 02:54, Henry Wang wrote:
> For 1:1 direct-mapped dom0less DomUs, the magic pages should not clash
> with any RAM region. To find a proper region for guest magic pages,
> we can reuse the logic of finding domain extended regions.
> 
> Extract the logic of finding domain extended regions to a helper
> function named find_unused_memory() and use it to find unallocated
> spaces for magic pages before make_hypervisor_node(). The resulting
> magic page region is added to the reserved memory section of the
> bootinfo so that it is carved out from the extended regions.
> 
> Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
> Signed-off-by: Henry Wang <xin.wang2@amd.com>
> ---
> v2:
> - New patch
> ---
>  xen/arch/arm/dom0less-build.c           | 43 +++++++++++++++++++++++++
>  xen/arch/arm/domain_build.c             | 30 ++++++++++-------
>  xen/arch/arm/include/asm/domain_build.h |  2 ++
>  3 files changed, 64 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
> index 1e1c8d83ae..99447bfb0c 100644
> --- a/xen/arch/arm/dom0less-build.c
> +++ b/xen/arch/arm/dom0less-build.c
> @@ -682,6 +682,49 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
>  
>      if ( kinfo->dom0less_feature & DOM0LESS_ENHANCED_NO_XS )
>      {
> +        if ( is_domain_direct_mapped(d) )
> +        {
This whole block depends on the static memory feature, which is compiled out by default.
Shouldn't you move it to static-memory.c?

> +            struct meminfo *avail_magic_regions = xzalloc(struct meminfo);
I can't see a corresponding xfree(avail_magic_regions). It's not going to be used after the unused memory
regions are retrieved.

> +            struct meminfo *rsrv_mem = &bootinfo.reserved_mem;
> +            struct mem_map_domain *mem_map = &d->arch.mem_map;
> +            uint64_t magic_region_start = INVALID_PADDR;
What's the purpose of this initialization? magic_region_start is going to be re-assigned before making use of this value.

> +            uint64_t magic_region_size = GUEST_MAGIC_SIZE;
Why not paddr_t?

> +            unsigned int i;
> +
> +            if ( !avail_magic_regions )
> +                return -ENOMEM;
What about memory allocated for kinfo->fdt? You should goto err;

> +
> +            ret = find_unused_memory(d, kinfo, avail_magic_regions);
> +            if ( ret )
> +            {
> +                printk(XENLOG_WARNING
> +                       "%pd: failed to find a region for domain magic pages\n",
> +                      d);
> +                goto err;
What about memory allocated for avail_magic_regions? You should free it.

> +            }
> +
> +            magic_region_start = avail_magic_regions->bank[0].start;
> +
> +            /*
> +             * Register the magic region as reserved mem to make sure this
> +             * region will not be counted when allocating extended regions.
Well, this is only true if find_unallocated_memory() is used to retrieve free regions.
What if our direct-mapped domU uses a partial dtb and the IOMMU is in use? In that case,
find_memory_holes() will be used and the behavior will be different.

Also, I'm not sure if it is a good idea to call find_unused_memory() twice (with lots of steps inside)
just to retrieve a 16MB region for magic pages (btw. add_ext_regions() will only return 64MB+ regions).
I'll let other maintainers share their opinion.

Also, CCing Carlo since he was in a need of retrieving free memory regions as well for cache coloring with dom0.

~Michal



* Re: [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU
  2024-03-11 13:46   ` Michal Orzel
@ 2024-03-11 13:50     ` Michal Orzel
  2024-03-12  3:25     ` Henry Wang
  1 sibling, 0 replies; 31+ messages in thread
From: Michal Orzel @ 2024-03-11 13:50 UTC (permalink / raw)
  To: Henry Wang, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Alec Kwapis, Carlo Nonato



On 11/03/2024 14:46, Michal Orzel wrote:
> 
> 
> Hi Henry,
> 
> On 08/03/2024 02:54, Henry Wang wrote:
>> For 1:1 direct-mapped dom0less DomUs, the magic pages should not clash
>> with any RAM region. To find a proper region for guest magic pages,
>> we can reuse the logic of finding domain extended regions.
>>
>> Extract the logic of finding domain extended regions to a helper
>> function named find_unused_memory() and use it to find unallocated
>> spaces for magic pages before make_hypervisor_node(). The result magic
>> page region is added to the reserved memory section of the bootinfo so
>> that it is carved out from the extended regions.
>>
>> Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
>> Signed-off-by: Henry Wang <xin.wang2@amd.com>
>> ---
>> v2:
>> - New patch
>> ---
>>  xen/arch/arm/dom0less-build.c           | 43 +++++++++++++++++++++++++
>>  xen/arch/arm/domain_build.c             | 30 ++++++++++-------
>>  xen/arch/arm/include/asm/domain_build.h |  2 ++
>>  3 files changed, 64 insertions(+), 11 deletions(-)
>>
>> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
>> index 1e1c8d83ae..99447bfb0c 100644
>> --- a/xen/arch/arm/dom0less-build.c
>> +++ b/xen/arch/arm/dom0less-build.c
>> @@ -682,6 +682,49 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
>>
>>      if ( kinfo->dom0less_feature & DOM0LESS_ENHANCED_NO_XS )
>>      {
>> +        if ( is_domain_direct_mapped(d) )
>> +        {
> This whole block is dependent on static memory feature that is compiled out by default.
> Shouldn't you move it to static-memory.c ?
> 
>> +            struct meminfo *avail_magic_regions = xzalloc(struct meminfo);
> I can't see corresponding xfree(avail_magic_regions). It's not going to be used after unused memory
> regions are retrieved.
> 
>> +            struct meminfo *rsrv_mem = &bootinfo.reserved_mem;
>> +            struct mem_map_domain *mem_map = &d->arch.mem_map;
>> +            uint64_t magic_region_start = INVALID_PADDR;
> What's the purpose of this initialization? magic_region_start is going to be re-assigned before making use of this value.
> 
>> +            uint64_t magic_region_size = GUEST_MAGIC_SIZE;
> Why not paddr_t?
> 
>> +            unsigned int i;
>> +
>> +            if ( !avail_magic_regions )
>> +                return -ENOMEM;
> What about memory allocated for kinfo->fdt? You should goto err;
> 
>> +
>> +            ret = find_unused_memory(d, kinfo, avail_magic_regions);
>> +            if ( ret )
>> +            {
>> +                printk(XENLOG_WARNING
>> +                       "%pd: failed to find a region for domain magic pages\n",
>> +                      d);
>> +                goto err;
> What about memory allocated for avail_magic_regions? You should free it.
> 
>> +            }
>> +
>> +            magic_region_start = avail_magic_regions->bank[0].start;
>> +
>> +            /*
>> +             * Register the magic region as reserved mem to make sure this
>> +             * region will not be counted when allocating extended regions.
> Well, this is only true in case find_unallocated_memory() is used to retrieve free regions.
> What if our direct mapped domU used partial dtb and IOMMU is in use? In this case,
> find_memory_holes() will be used and the behavior will be different.
> 
> Also, I'm not sure if it is a good idea to call find_unused_memory twice (with lots of steps inside)
> just to retrieve 16MB (btw. add_ext_regions will only return 64MB+ regions) region for magic pages.
> I'll let other maintainers share their opinion.
> 
> Also, CCing Carlo since he was in a need of retrieving free memory regions as well for cache coloring with dom0.
In the end, I forgot to CC Carlo. Adding him now.

~Michal



* Re: [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map
  2024-03-08  1:54 ` [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map Henry Wang
  2024-03-11  9:10   ` Michal Orzel
@ 2024-03-11 16:58   ` Jan Beulich
  2024-03-12  3:06     ` Henry Wang
  1 sibling, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2024-03-11 16:58 UTC (permalink / raw)
  To: Henry Wang
  Cc: Wei Liu, Anthony PERARD, Juergen Gross, Andrew Cooper,
	George Dunlap, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Alec Kwapis,
	xen-devel

On 08.03.2024 02:54, Henry Wang wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -946,6 +946,25 @@ struct xen_domctl_paging_mempool {
>      uint64_aligned_t size; /* Size in bytes. */
>  };
>  
> +#define XEN_MAX_MEM_REGIONS 1
> +
> +struct xen_mem_region {
> +    uint64_t start;
> +    uint64_t size;

uint64_aligned_t?

> +    unsigned int type;

uint32_t and explicit padding (incl checking thereof) please.

> +};
> +typedef struct xen_mem_region xen_mem_region_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_mem_region_t);
> +
> +struct xen_domctl_mem_map {
> +    /* IN & OUT */
> +    uint32_t nr_mem_regions;
> +    /* OUT */
> +    XEN_GUEST_HANDLE(xen_mem_region_t) buffer;

XEN_GUEST_HANDLE_64() and explicit padding (+checking) again please.

Jan



* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-08  1:54 ` [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map Henry Wang
@ 2024-03-11 17:07   ` Jan Beulich
  2024-03-12  3:44     ` Henry Wang
  2024-03-29  5:11     ` Henry Wang
  2024-03-25 15:35   ` Anthony PERARD
  1 sibling, 2 replies; 31+ messages in thread
From: Jan Beulich @ 2024-03-11 17:07 UTC (permalink / raw)
  To: Henry Wang
  Cc: Wei Liu, Anthony PERARD, Andrew Cooper, George Dunlap,
	Julien Grall, Stefano Stabellini, Alec Kwapis, xen-devel

I'm afraid the title doesn't really say what the patch actually means
to achieve.

On 08.03.2024 02:54, Henry Wang wrote:
> Previous commits enable the toolstack to get the domain memory map,
> therefore instead of hardcoding the guest magic pages region, use
> the XEN_DOMCTL_get_mem_map domctl to get the start address of the
> guest magic pages region. Add the (XEN)MEMF_force_heap_alloc memory
> flags to force populate_physmap() to allocate page from domheap
> instead of using 1:1 or static allocated pages to map the magic pages.

A patch description wants to be (largely) self-contained. "Previous
commits" shouldn't be mentioned; recall that the sequence in which
patches go in is unknown to you up front. (In fact the terms "commit"
or "patch" should be avoided altogether when describing what a patch
does. The only valid use I can think of is when referring to commits
already in the tree, and then typically by quoting their hash and
title.)

> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -41,6 +41,11 @@
>  #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
>  /* Flag to indicate the node specified is virtual node */
>  #define XENMEMF_vnode  (1<<18)
> +/*
> + * Flag to force populate physmap to use pages from domheap instead of 1:1
> + * or static allocation.
> + */
> +#define XENMEMF_force_heap_alloc  (1<<19)
>  #endif

If this is for populate_physmap only, then other sub-ops need to reject
its use.
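E.g. something along these lines (a sketch only; exact placement and
error code are your call):
```
/* In do_memory_op(), once the reservation was copied in: */
if ( unlikely(reservation.mem_flags & XENMEMF_force_heap_alloc) &&
     op != XENMEM_populate_physmap )
    return -EINVAL;
```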

I have to admit I'm a little wary of allocating another flag here and ...

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -205,6 +205,8 @@ struct npfec {
>  #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>  #define _MEMF_no_scrub    8
>  #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
> +#define _MEMF_force_heap_alloc 9
> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>  #define _MEMF_node        16
>  #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>  #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)

... here - we don't have that many left. Since other sub-ops aren't
intended to support this flag, did you consider adding another (perhaps
even arch-specific) sub-op instead?

Jan



* Re: [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped()
  2024-03-08  1:54 ` [PATCH v2 2/5] xen/domain.h: Centrialize is_domain_direct_mapped() Henry Wang
  2024-03-08  8:59   ` Michal Orzel
@ 2024-03-11 18:02   ` Shawn Anastasio
  1 sibling, 0 replies; 31+ messages in thread
From: Shawn Anastasio @ 2024-03-11 18:02 UTC (permalink / raw)
  To: Henry Wang, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Alistair Francis, Bob Eshleman, Connor Davis,
	Roger Pau Monné

Hi Henry,

On 3/7/24 7:54 PM, Henry Wang wrote:
> Currently a direct mapped domain is only supported by the Arm
> architecture, at domain creation time, by setting the CDF_directmap
> flag. There is no need for every non-Arm architecture, i.e. x86,
> RISC-V and PPC, to define a stub is_domain_direct_mapped() in its
> arch header.
> 
> Move is_domain_direct_mapped() to a centralized place at xen/domain.h
> and evaluate CDF_directmap for non-Arm architecture to 0.
> 
> Signed-off-by: Henry Wang <xin.wang2@amd.com>

Regardless of whether or not you decide to go with the centralized ifdef
approach suggested by Michal, consider the PPC parts:

Acked-by: Shawn Anastasio <sanastasio@raptorengineering.com>

Thanks,
Shawn



* Re: [PATCH v2 3/5] xen/domctl, tools: Introduce a new domctl to get guest memory map
  2024-03-11 16:58   ` Jan Beulich
@ 2024-03-12  3:06     ` Henry Wang
  0 siblings, 0 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-12  3:06 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Anthony PERARD, Juergen Gross, Andrew Cooper,
	George Dunlap, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Alec Kwapis,
	xen-devel

Hi Jan,

On 3/12/2024 12:58 AM, Jan Beulich wrote:
> On 08.03.2024 02:54, Henry Wang wrote:
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -946,6 +946,25 @@ struct xen_domctl_paging_mempool {
>>       uint64_aligned_t size; /* Size in bytes. */
>>   };
>>   
>> +#define XEN_MAX_MEM_REGIONS 1
>> +
>> +struct xen_mem_region {
>> +    uint64_t start;
>> +    uint64_t size;
> uint64_aligned_t?

Yes this makes great sense, thanks for catching it here and ...

>> +    unsigned int type;
> uint32_t and explicit padding (incl checking thereof) please.

...here and ...

>> +};
>> +typedef struct xen_mem_region xen_mem_region_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_mem_region_t);
>> +
>> +struct xen_domctl_mem_map {
>> +    /* IN & OUT */
>> +    uint32_t nr_mem_regions;
>> +    /* OUT */
>> +    XEN_GUEST_HANDLE(xen_mem_region_t) buffer;
> XEN_GUEST_HANDLE_64() and explicit padding (+checking) again please.

...here. I will update the patch accordingly and add the padding + 
checking in v3.
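Concretely, I expect the structures to end up roughly like this (a
sketch of my tentative v3; the handler would reject non-zero pad
fields):
```
struct xen_mem_region {
    uint64_aligned_t start;
    uint64_aligned_t size;
    uint32_t type;
    /* Must be zero */
    uint32_t pad;
};
typedef struct xen_mem_region xen_mem_region_t;
DEFINE_XEN_GUEST_HANDLE(xen_mem_region_t);

struct xen_domctl_mem_map {
    /* IN & OUT */
    uint32_t nr_mem_regions;
    /* Must be zero */
    uint32_t pad;
    /* OUT */
    XEN_GUEST_HANDLE_64(xen_mem_region_t) buffer;
};
```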

Kind regards,
Henry

> Jan




* Re: [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU
  2024-03-11 13:46   ` Michal Orzel
  2024-03-11 13:50     ` Michal Orzel
@ 2024-03-12  3:25     ` Henry Wang
  2024-03-13 11:09       ` Carlo Nonato
  1 sibling, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-03-12  3:25 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis,
	Volodymyr Babchuk, Alec Kwapis, Carlo Nonato

Hi Michal,

On 3/11/2024 9:46 PM, Michal Orzel wrote:
> Hi Henry,
>
> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
> index 1e1c8d83ae..99447bfb0c 100644
> --- a/xen/arch/arm/dom0less-build.c
> +++ b/xen/arch/arm/dom0less-build.c
> @@ -682,6 +682,49 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
>   
>       if ( kinfo->dom0less_feature & DOM0LESS_ENHANCED_NO_XS )
>       {
> +        if ( is_domain_direct_mapped(d) )
> +        {
> This whole block is dependent on static memory feature that is compiled out by default.
> Shouldn't you move it to static-memory.c ?

This makes sense. I will convert this block to a function and then move
it to static-memory.c in v3.
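Something like the below is what I have in mind (the function name is
tentative), so that prepare_dtb_domU() keeps a single call site with a
stub when the feature is compiled out:
```
/* xen/arch/arm/include/asm/static-memory.h (sketch) */
#ifdef CONFIG_STATIC_MEMORY
int allocate_magic_region(struct domain *d, struct kernel_info *kinfo);
#else
static inline int allocate_magic_region(struct domain *d,
                                        struct kernel_info *kinfo)
{
    return 0;
}
#endif
```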

>> +            struct meminfo *avail_magic_regions = xzalloc(struct meminfo);
> I can't see corresponding xfree(avail_magic_regions). It's not going to be used after unused memory
> regions are retrieved.

Hmmm, I realized I've made a mess here and below. I will do proper
error handling in v3.

>> +            struct meminfo *rsrv_mem = &bootinfo.reserved_mem;
>> +            struct mem_map_domain *mem_map = &d->arch.mem_map;
>> +            uint64_t magic_region_start = INVALID_PADDR;
> What's the purpose of this initialization? magic_region_start is going to be re-assigned before making use of this value.

Personally, for variables holding an address, I like to initialize the
local variable to a poison value before use. But you are right, I don't
think it makes a difference here. I can drop the initialization if you
prefer not having it, sure.

>> +            uint64_t magic_region_size = GUEST_MAGIC_SIZE;
> Why not paddr_t?

Good catch, I mixed up struct meminfo with the newly added struct. Will
use paddr_t.
>> +
>> +            magic_region_start = avail_magic_regions->bank[0].start;
>> +
>> +            /*
>> +             * Register the magic region as reserved mem to make sure this
>> +             * region will not be counted when allocating extended regions.
> Well, this is only true in case find_unallocated_memory() is used to retrieve free regions.
> What if our direct mapped domU used partial dtb and IOMMU is in use? In this case,
> find_memory_holes() will be used and the behavior will be different.
>
> Also, I'm not sure if it is a good idea to call find_unused_memory twice (with lots of steps inside)
> just to retrieve 16MB (btw. add_ext_regions will only return 64MB+ regions) region for magic pages.
> I'll let other maintainers share their opinion.

I agree with your point. Let's wait a bit longer for more
ideas/comments. If there is no other input, I think I will drop the
"adding to reserved_mem" part of the logic, record the found unused
memory in kinfo, and then use rangeset_remove_range() to remove this
range in both find_unallocated_memory() and find_memory_holes().
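i.e. roughly (the kinfo fields below are tentative names for the
recorded region):
```
/* In both find_unallocated_memory() and find_memory_holes(): */
res = rangeset_remove_range(unalloc_mem /* resp. mem_holes */,
                            PFN_DOWN(kinfo->magic_region_start),
                            PFN_DOWN(kinfo->magic_region_start +
                                     kinfo->magic_region_size - 1));
if ( res )
{
    printk(XENLOG_ERR "Failed to exclude the magic page region\n");
    goto out;
}
```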

> Also, CCing Carlo since he was in a need of retrieving free memory regions as well for cache coloring with dom0.

(+ Carlo)
Any inputs from your side for this topic Carlo?

Kind regards,
Henry
> ~Michal




* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-11 17:07   ` Jan Beulich
@ 2024-03-12  3:44     ` Henry Wang
  2024-03-12  7:34       ` Jan Beulich
  2024-03-29  5:11     ` Henry Wang
  1 sibling, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-03-12  3:44 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Anthony PERARD, Andrew Cooper, George Dunlap,
	Julien Grall, Stefano Stabellini, Alec Kwapis, xen-devel

Hi Jan,

On 3/12/2024 1:07 AM, Jan Beulich wrote:
> I'm afraid the title doesn't really say what the patch actually means
> to achieve.
>
> On 08.03.2024 02:54, Henry Wang wrote:
>> Previous commits enable the toolstack to get the domain memory map,
>> therefore instead of hardcoding the guest magic pages region, use
>> the XEN_DOMCTL_get_mem_map domctl to get the start address of the
>> guest magic pages region. Add the (XEN)MEMF_force_heap_alloc memory
>> flags to force populate_physmap() to allocate page from domheap
>> instead of using 1:1 or static allocated pages to map the magic pages.
> A patch description wants to be (largely) self-contained. "Previous
> commits" shouldn't be mentioned; recall that the sequence in which
> patches go in is unknown to you up front. (In fact the terms "commit"
> or "patch" should be avoided altogether when describing what a patch
> does. The only valid use I can think of is when referring to commits
> already in the tree, and then typically by quoting their hash and
> title.)

Thanks for the detailed explanation. I will rewrite the title and part 
of the commit message in v3 to make it clear.

>> --- a/xen/include/public/memory.h
>> +++ b/xen/include/public/memory.h
>> @@ -41,6 +41,11 @@
>>   #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
>>   /* Flag to indicate the node specified is virtual node */
>>   #define XENMEMF_vnode  (1<<18)
>> +/*
>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>> + * or static allocation.
>> + */
>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>   #endif
> If this is for populate_physmap only, then other sub-ops need to reject
> its use.
>
> I have to admit I'm a little wary of allocating another flag here and ...
>
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -205,6 +205,8 @@ struct npfec {
>>   #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>   #define _MEMF_no_scrub    8
>>   #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>> +#define _MEMF_force_heap_alloc 9
>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>   #define _MEMF_node        16
>>   #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>   #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
> ... here - we don't have that many left. Since other sub-ops aren't
> intended to support this flag, did you consider adding another (perhaps
> even arch-specific) sub-op instead?

Not really, I basically followed the discussion from [1] to implement
this patch. However, I understand your concern. Just to make sure I
understand your suggestion correctly: by "adding another sub-op" you
mean adding a sub-op similar to XENMEM_populate_physmap but only
executing the "else" (non-1:1 allocation) part I want, so we can drop
the use of these two added flags? Thanks!

[1] 
https://lore.kernel.org/xen-devel/3982ba47-6709-47e3-a9c2-e2d3b4a2d8e3@xen.org/

Kind regards,
Henry

> Jan




* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-12  3:44     ` Henry Wang
@ 2024-03-12  7:34       ` Jan Beulich
  2024-03-12  7:36         ` Henry Wang
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2024-03-12  7:34 UTC (permalink / raw)
  To: Henry Wang
  Cc: Wei Liu, Anthony PERARD, Andrew Cooper, George Dunlap,
	Julien Grall, Stefano Stabellini, Alec Kwapis, xen-devel

On 12.03.2024 04:44, Henry Wang wrote:
> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>> On 08.03.2024 02:54, Henry Wang wrote:
>>> --- a/xen/include/public/memory.h
>>> +++ b/xen/include/public/memory.h
>>> @@ -41,6 +41,11 @@
>>>   #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
>>>   /* Flag to indicate the node specified is virtual node */
>>>   #define XENMEMF_vnode  (1<<18)
>>> +/*
>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>> + * or static allocation.
>>> + */
>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>   #endif
>> If this is for populate_physmap only, then other sub-ops need to reject
>> its use.
>>
>> I have to admit I'm a little wary of allocating another flag here and ...
>>
>>> --- a/xen/include/xen/mm.h
>>> +++ b/xen/include/xen/mm.h
>>> @@ -205,6 +205,8 @@ struct npfec {
>>>   #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>   #define _MEMF_no_scrub    8
>>>   #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>> +#define _MEMF_force_heap_alloc 9
>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>   #define _MEMF_node        16
>>>   #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>   #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>> ... here - we don't have that many left. Since other sub-ops aren't
>> intended to support this flag, did you consider adding another (perhaps
>> even arch-specific) sub-op instead?
> 
> Not really, I basically followed the discussion from [1] to implement 
> this patch. However I understand your concern. Just want to make sure if 
> I understand your suggestion correctly, by "adding another sub-op" you 
> mean adding a sub-op similar as "XENMEM_populate_physmap" but only with 
> executing the "else" part I want, so we can drop the use of these two 
> added flags? Thanks!
> 
> [1] 
> https://lore.kernel.org/xen-devel/3982ba47-6709-47e3-a9c2-e2d3b4a2d8e3@xen.org/

In which case please check with Julien (and perhaps other Arm maintainers)
before deciding on whether to go this alternative route.

Jan



* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-12  7:34       ` Jan Beulich
@ 2024-03-12  7:36         ` Henry Wang
  0 siblings, 0 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-12  7:36 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Anthony PERARD, Andrew Cooper, George Dunlap,
	Julien Grall, Stefano Stabellini, Alec Kwapis, xen-devel

Hi Jan,

On 3/12/2024 3:34 PM, Jan Beulich wrote:
> On 12.03.2024 04:44, Henry Wang wrote:
>> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>>> On 08.03.2024 02:54, Henry Wang wrote:
>>>> --- a/xen/include/public/memory.h
>>>> +++ b/xen/include/public/memory.h
>>>> @@ -41,6 +41,11 @@
>>>>    #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
>>>>    /* Flag to indicate the node specified is virtual node */
>>>>    #define XENMEMF_vnode  (1<<18)
>>>> +/*
>>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>>> + * or static allocation.
>>>> + */
>>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>>    #endif
>>> If this is for populate_physmap only, then other sub-ops need to reject
>>> its use.
>>>
>>> I have to admit I'm a little wary of allocating another flag here and ...
>>>
>>>> --- a/xen/include/xen/mm.h
>>>> +++ b/xen/include/xen/mm.h
>>>> @@ -205,6 +205,8 @@ struct npfec {
>>>>    #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>>    #define _MEMF_no_scrub    8
>>>>    #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>>> +#define _MEMF_force_heap_alloc 9
>>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>>    #define _MEMF_node        16
>>>>    #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>>    #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>>> ... here - we don't have that many left. Since other sub-ops aren't
>>> intended to support this flag, did you consider adding another (perhaps
>>> even arch-specific) sub-op instead?
>> Not really, I basically followed the discussion from [1] to implement
>> this patch. However I understand your concern. Just want to make sure if
>> I understand your suggestion correctly, by "adding another sub-op" you
>> mean adding a sub-op similar as "XENMEM_populate_physmap" but only with
>> executing the "else" part I want, so we can drop the use of these two
>> added flags? Thanks!
>>
>> [1]
>> https://lore.kernel.org/xen-devel/3982ba47-6709-47e3-a9c2-e2d3b4a2d8e3@xen.org/
> In which case please check with Julien (and perhaps other Arm maintainers)
> before deciding on whether to go this alternative route.

Yes, sure. I will wait a bit longer for the discussion to reach
agreement before implementing the code.

Kind regards,
Henry

> Jan




* Re: [PATCH v2 4/5] xen/arm: Find unallocated spaces for magic pages of direct-mapped domU
  2024-03-12  3:25     ` Henry Wang
@ 2024-03-13 11:09       ` Carlo Nonato
  0 siblings, 0 replies; 31+ messages in thread
From: Carlo Nonato @ 2024-03-13 11:09 UTC (permalink / raw)
  To: Henry Wang
  Cc: Michal Orzel, xen-devel, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Alec Kwapis

Hi Henry,

On Tue, Mar 12, 2024 at 4:25 AM Henry Wang <xin.wang2@amd.com> wrote:
>
> Hi Michal,
>
> On 3/11/2024 9:46 PM, Michal Orzel wrote:
> > Hi Henry,
> >
> > diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
> > index 1e1c8d83ae..99447bfb0c 100644
> > --- a/xen/arch/arm/dom0less-build.c
> > +++ b/xen/arch/arm/dom0less-build.c
> > @@ -682,6 +682,49 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
> >
> >       if ( kinfo->dom0less_feature & DOM0LESS_ENHANCED_NO_XS )
> >       {
> > +        if ( is_domain_direct_mapped(d) )
> > +        {
> > This whole block is dependent on static memory feature that is compiled out by default.
> > Shouldn't you move it to static-memory.c ?
>
> This makes sense. I will convert this block to a function then move to
> static-memory.c in v3.
>
> >> +            struct meminfo *avail_magic_regions = xzalloc(struct meminfo);
> > I can't see corresponding xfree(avail_magic_regions). It's not going to be used after unused memory
> > regions are retrieved.
>
> Hmmm I realized I've made a mess here and below. I will do the proper
> error handling in v3.
>
> >> +            struct meminfo *rsrv_mem = &bootinfo.reserved_mem;
> >> +            struct mem_map_domain *mem_map = &d->arch.mem_map;
> >> +            uint64_t magic_region_start = INVALID_PADDR;
> > What's the purpose of this initialization? magic_region_start is going to be re-assigned before making use of this value.
>
> Personally for variables holding an address, I would like to init the
> local variable to a poison value before use. But you are right it does
> not make a difference here I think. I can drop the initialization if you
> prefer not having it, sure.
>
> >> +            uint64_t magic_region_size = GUEST_MAGIC_SIZE;
> > Why not paddr_t?
>
> Good catch, I mixed struct meminfo with the newly added struct. Will use
> paddr_t.
> >> +
> >> +            magic_region_start = avail_magic_regions->bank[0].start;
> >> +
> >> +            /*
> >> +             * Register the magic region as reserved mem to make sure this
> >> +             * region will not be counted when allocating extended regions.
> > Well, this is only true in case find_unallocated_memory() is used to retrieve free regions.
> > What if our direct mapped domU used partial dtb and IOMMU is in use? In this case,
> > find_memory_holes() will be used and the behavior will be different.
> >
> > Also, I'm not sure if it is a good idea to call find_unused_memory twice (with lots of steps inside)
> > just to retrieve 16MB (btw. add_ext_regions will only return 64MB+ regions) region for magic pages.
> > I'll let other maintainers share their opinion.
>
> I agree with your point. Let's wait a bit longer for more
> ideas/comments. If no other inputs, I think I will drop the
> "adding to reserved_mem" part of logic and record the found unused
> memory in kinfo, then use rangeset_remove_range() to remove this range
> in both
>
> find_unallocated_memory() and find_memory_holes().
>
> > Also, CCing Carlo since he was in a need of retrieving free memory regions as well for cache coloring with dom0.
>
> (+ Carlo)
> Any inputs from your side for this topic Carlo?

Nothing at the moment.

Thanks.

> Kind regards,
> Henry
> > ~Michal
>



* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-08  1:54 ` [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map Henry Wang
  2024-03-11 17:07   ` Jan Beulich
@ 2024-03-25 15:35   ` Anthony PERARD
  2024-03-26  1:21     ` Henry Wang
  1 sibling, 1 reply; 31+ messages in thread
From: Anthony PERARD @ 2024-03-25 15:35 UTC (permalink / raw)
  To: Henry Wang
  Cc: xen-devel, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Alec Kwapis

On Fri, Mar 08, 2024 at 09:54:35AM +0800, Henry Wang wrote:
> diff --git a/tools/helpers/init-dom0less.c b/tools/helpers/init-dom0less.c
> index fee93459c4..92c612f6da 100644
> --- a/tools/helpers/init-dom0less.c
> +++ b/tools/helpers/init-dom0less.c
> @@ -23,16 +23,30 @@ static int alloc_xs_page(struct xc_interface_core *xch,
>                           libxl_dominfo *info,
>                           uint64_t *xenstore_pfn)
>  {
> -    int rc;
> -    const xen_pfn_t base = GUEST_MAGIC_BASE >> XC_PAGE_SHIFT;
> -    xen_pfn_t p2m = (GUEST_MAGIC_BASE >> XC_PAGE_SHIFT) + XENSTORE_PFN_OFFSET;
> +    int rc, i;
> +    xen_pfn_t base = ((xen_pfn_t)-1);
> +    xen_pfn_t p2m = ((xen_pfn_t)-1);
> +    uint32_t nr_regions = XEN_MAX_MEM_REGIONS;
> +    struct xen_mem_region mem_regions[XEN_MAX_MEM_REGIONS] = {0};
> +
> +    rc = xc_get_domain_mem_map(xch, info->domid, mem_regions, &nr_regions);

Shouldn't you check the value in `rc`?
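E.g. a minimal version (the exact error handling is up to you):
```
rc = xc_get_domain_mem_map(xch, info->domid, mem_regions, &nr_regions);
if ( rc )
{
    fprintf(stderr, "Failed to get the memory map for domain %u\n",
            info->domid);
    return rc;
}
```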

> +    for ( i = 0; i < nr_regions; i++ )
> +    {
> +        if ( mem_regions[i].type == GUEST_MEM_REGION_MAGIC )
> +        {
> +            base = mem_regions[i].start >> XC_PAGE_SHIFT;
> +            p2m = (mem_regions[i].start >> XC_PAGE_SHIFT) + XENSTORE_PFN_OFFSET;
> +        }
> +    }

Thanks,

-- 
Anthony PERARD



* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-25 15:35   ` Anthony PERARD
@ 2024-03-26  1:21     ` Henry Wang
  0 siblings, 0 replies; 31+ messages in thread
From: Henry Wang @ 2024-03-26  1:21 UTC (permalink / raw)
  To: Anthony PERARD
  Cc: xen-devel, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich,
	Julien Grall, Stefano Stabellini, Alec Kwapis

Hi Anthony,

On 3/25/2024 11:35 PM, Anthony PERARD wrote:
> On Fri, Mar 08, 2024 at 09:54:35AM +0800, Henry Wang wrote:
>> diff --git a/tools/helpers/init-dom0less.c b/tools/helpers/init-dom0less.c
>> index fee93459c4..92c612f6da 100644
>> --- a/tools/helpers/init-dom0less.c
>> +++ b/tools/helpers/init-dom0less.c
>> @@ -23,16 +23,30 @@ static int alloc_xs_page(struct xc_interface_core *xch,
>>                            libxl_dominfo *info,
>>                            uint64_t *xenstore_pfn)
>>   {
>> -    int rc;
>> -    const xen_pfn_t base = GUEST_MAGIC_BASE >> XC_PAGE_SHIFT;
>> -    xen_pfn_t p2m = (GUEST_MAGIC_BASE >> XC_PAGE_SHIFT) + XENSTORE_PFN_OFFSET;
>> +    int rc, i;
>> +    xen_pfn_t base = ((xen_pfn_t)-1);
>> +    xen_pfn_t p2m = ((xen_pfn_t)-1);
>> +    uint32_t nr_regions = XEN_MAX_MEM_REGIONS;
>> +    struct xen_mem_region mem_regions[XEN_MAX_MEM_REGIONS] = {0};
>> +
>> +    rc = xc_get_domain_mem_map(xch, info->domid, mem_regions, &nr_regions);
> Shouldn't you check the value of in `rc`?
Yes, I should have done so. I will do that in the next version. Thanks!

Kind regards,
Henry



* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-11 17:07   ` Jan Beulich
  2024-03-12  3:44     ` Henry Wang
@ 2024-03-29  5:11     ` Henry Wang
  2024-04-02  7:05       ` Jan Beulich
  1 sibling, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-03-29  5:11 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Anthony PERARD, Andrew Cooper, George Dunlap,
	Julien Grall, Stefano Stabellini, Alec Kwapis, xen-devel

Hi Jan,

On 3/12/2024 1:07 AM, Jan Beulich wrote:
>> +/*
>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>> + * or static allocation.
>> + */
>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>   #endif
> If this is for populate_physmap only, then other sub-ops need to reject
> its use.
>
> I have to admit I'm a little wary of allocating another flag here and ...
>
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -205,6 +205,8 @@ struct npfec {
>>   #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>   #define _MEMF_no_scrub    8
>>   #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>> +#define _MEMF_force_heap_alloc 9
>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>   #define _MEMF_node        16
>>   #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>   #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
> ... here - we don't have that many left. Since other sub-ops aren't
> intended to support this flag, did you consider adding another (perhaps
> even arch-specific) sub-op instead?

While revisiting this comment when trying to come up with a v3, I
realized that adding a sub-op at the same level as
XENMEM_populate_physmap will basically duplicate the function
populate_physmap() with just the "else" (the non-1:1 allocation) part;
also, a counterpart of xc_domain_populate_physmap_exact() & co will be
needed on the toolstack side to call the new sub-op. So I am concerned
about the code duplication and not sure if I understand you correctly.
Would you please elaborate a bit more? Thanks!

Kind regards,
Henry

> Jan




* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-03-29  5:11     ` Henry Wang
@ 2024-04-02  7:05       ` Jan Beulich
  2024-04-02  8:43         ` Henry Wang
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2024-04-02  7:05 UTC (permalink / raw)
  To: Henry Wang
  Cc: Wei Liu, Anthony PERARD, Andrew Cooper, George Dunlap,
	Julien Grall, Stefano Stabellini, Alec Kwapis, xen-devel

On 29.03.2024 06:11, Henry Wang wrote:
> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>>> +/*
>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>> + * or static allocation.
>>> + */
>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>   #endif
>> If this is for populate_physmap only, then other sub-ops need to reject
>> its use.
>>
>> I have to admit I'm a little wary of allocating another flag here and ...
>>
>>> --- a/xen/include/xen/mm.h
>>> +++ b/xen/include/xen/mm.h
>>> @@ -205,6 +205,8 @@ struct npfec {
>>>   #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>   #define _MEMF_no_scrub    8
>>>   #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>> +#define _MEMF_force_heap_alloc 9
>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>   #define _MEMF_node        16
>>>   #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>   #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>> ... here - we don't have that many left. Since other sub-ops aren't
>> intended to support this flag, did you consider adding another (perhaps
>> even arch-specific) sub-op instead?
> 
> While revisiting this comment when trying to come up with a V3, I 
> realized adding a sub-op here in the same level as 
> XENMEM_populate_physmap will basically duplicate the function 
> populate_physmap() with just the "else" (the non-1:1 allocation) part, 
> also a similar xc_domain_populate_physmap_exact() & co will be needed 
> from the toolstack side to call the new sub-op. So I am having the 
> concern of the duplication of code and not sure if I understand you 
> correctly. Would you please elaborate a bit more or clarify if I 
> understand you correctly? Thanks!

Well, the goal is to avoid both code duplication and introduction of a new,
single-use flag. The new sub-op suggestion, I realize now, would mainly have
helped with avoiding the new flag in the public interface. That's still
desirable imo. Internally, have you checked which MEMF_* are actually used
by populate_physmap()? Briefly looking, e.g. MEMF_no_dma and MEMF_no_refcount
aren't. It therefore would be possible to consider re-purposing one that
isn't (likely to be) used there. Of course doing so requires care to avoid
passing that flag down to other code (page_alloc.c functions in particular),
where the meaning would be the original one.
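To illustrate (a sketch only, using MEMF_no_refcount as the re-purposed
bit; whether that's the best candidate would need checking):
```
static void populate_physmap(struct memop_args *a)
{
    bool force_heap = a->memflags & MEMF_no_refcount; /* re-purposed bit */

    /* Don't let the bit reach page_alloc.c, where it has its old meaning. */
    a->memflags &= ~MEMF_no_refcount;

    /* ... existing logic, with force_heap steering 1:1 vs heap ... */
}
```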

Jan



* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-04-02  7:05       ` Jan Beulich
@ 2024-04-02  8:43         ` Henry Wang
  2024-04-02  8:51           ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Henry Wang @ 2024-04-02  8:43 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper, Julien Grall, Stefano Stabellini
  Cc: Wei Liu, Anthony PERARD, George Dunlap, Alec Kwapis, xen-devel

Hi Jan,

On 4/2/2024 3:05 PM, Jan Beulich wrote:
> On 29.03.2024 06:11, Henry Wang wrote:
>> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>>>> +/*
>>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>>> + * or static allocation.
>>>> + */
>>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>>    #endif
>>> If this is for populate_physmap only, then other sub-ops need to reject
>>> its use.
>>>
>>> I have to admit I'm a little wary of allocating another flag here and ...
>>>
>>>> --- a/xen/include/xen/mm.h
>>>> +++ b/xen/include/xen/mm.h
>>>> @@ -205,6 +205,8 @@ struct npfec {
>>>>    #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>>    #define _MEMF_no_scrub    8
>>>>    #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>>> +#define _MEMF_force_heap_alloc 9
>>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>>    #define _MEMF_node        16
>>>>    #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>>    #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>>> ... here - we don't have that many left. Since other sub-ops aren't
>>> intended to support this flag, did you consider adding another (perhaps
>>> even arch-specific) sub-op instead?
>> While revisiting this comment when trying to come up with a V3, I
>> realized adding a sub-op here in the same level as
>> XENMEM_populate_physmap will basically duplicate the function
>> populate_physmap() with just the "else" (the non-1:1 allocation) part,
>> also a similar xc_domain_populate_physmap_exact() & co will be needed
>> from the toolstack side to call the new sub-op. So I am having the
>> concern of the duplication of code and not sure if I understand you
>> correctly. Would you please elaborate a bit more or clarify if I
>> understand you correctly? Thanks!
> Well, the goal is to avoid both code duplication and introduction of a new,
> single-use flag. The new sub-op suggestion, I realize now, would mainly have
> helped with avoiding the new flag in the public interface. That's still
> desirable imo. Internally, have you checked which MEMF_* are actually used
> by populate_physmap()? Briefly looking, e.g. MEMF_no_dma and MEMF_no_refcount
> aren't. It therefore would be possible to consider re-purposing one that
> isn't (likely to be) used there. Of course doing so requires care to avoid
> passing that flag down to other code (page_alloc.c functions in particular),
> where the meaning would be the original one.

I think you made a good point; however, to be honest I am not sure
about repurposing flags such as MEMF_no_dma and MEMF_no_refcount,
because I think the name and the purpose of a flag should be clear and
not misleading. Reusing either one for another meaning, namely forcing
a heap (instead of 1:1 or static) allocation in populate_physmap(),
would be confusing in the future. Also, if one day these flags are
needed in populate_physmap(), the current repurposing approach will
lead to an even more confusing code base.

I also very much agree that we need to avoid code duplication, so
compared to the other two suggested approaches, adding a new MEMF_*
flag would be the cleanest solution IMHO, as it is just one bit and
MEMF_* flags are not added very often.

I would also be curious what the other common code maintainers say on
this issue: @Andrew, @Stefano, @Julien, any ideas? Thanks!

Kind regards,
Henry

> Jan




* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-04-02  8:43         ` Henry Wang
@ 2024-04-02  8:51           ` Jan Beulich
  2024-04-02  9:03             ` Henry Wang
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2024-04-02  8:51 UTC (permalink / raw)
  To: Henry Wang
  Cc: Wei Liu, Anthony PERARD, George Dunlap, Alec Kwapis, xen-devel,
	Andrew Cooper, Julien Grall, Stefano Stabellini

On 02.04.2024 10:43, Henry Wang wrote:
> On 4/2/2024 3:05 PM, Jan Beulich wrote:
>> On 29.03.2024 06:11, Henry Wang wrote:
>>> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>>>>> +/*
>>>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>>>> + * or static allocation.
>>>>> + */
>>>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>>>    #endif
>>>> If this is for populate_physmap only, then other sub-ops need to reject
>>>> its use.
>>>>
>>>> I have to admit I'm a little wary of allocating another flag here and ...
>>>>
>>>>> --- a/xen/include/xen/mm.h
>>>>> +++ b/xen/include/xen/mm.h
>>>>> @@ -205,6 +205,8 @@ struct npfec {
>>>>>    #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>>>    #define _MEMF_no_scrub    8
>>>>>    #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>>>> +#define _MEMF_force_heap_alloc 9
>>>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>>>    #define _MEMF_node        16
>>>>>    #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>>>    #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>>>> ... here - we don't have that many left. Since other sub-ops aren't
>>>> intended to support this flag, did you consider adding another (perhaps
>>>> even arch-specific) sub-op instead?
>>> While revisiting this comment when trying to come up with a V3, I
>>> realized adding a sub-op here in the same level as
>>> XENMEM_populate_physmap will basically duplicate the function
>>> populate_physmap() with just the "else" (the non-1:1 allocation) part,
>>> also a similar xc_domain_populate_physmap_exact() & co will be needed
>>> from the toolstack side to call the new sub-op. So I am having the
>>> concern of the duplication of code and not sure if I understand you
>>> correctly. Would you please elaborate a bit more or clarify if I
>>> understand you correctly? Thanks!
>> Well, the goal is to avoid both code duplication and introduction of a new,
>> single-use flag. The new sub-op suggestion, I realize now, would mainly have
>> helped with avoiding the new flag in the public interface. That's still
>> desirable imo. Internally, have you checked which MEMF_* are actually used
>> by populate_physmap()? Briefly looking, e.g. MEMF_no_dma and MEMF_no_refcount
>> aren't. It therefore would be possible to consider re-purposing one that
>> isn't (likely to be) used there. Of course doing so requires care to avoid
>> passing that flag down to other code (page_alloc.c functions in particular),
>> where the meaning would be the original one.
> 
> I think you made a good point, however, to be honest I am not sure about 
> the repurposing flags such as MEMF_no_dma and MEMF_no_refcount, because 
> I think the name and the purpose of the flag should be clear and 
> less-misleading. Reusing either one for another meaning, namely forcing 
> a non-heap allocation in populate_physmap() would be confusing in the 
> future. Also if one day these flags will be needed in 
> populate_physmap(), current repurposing approach will lead to a even 
> confusing code base.

For the latter - hence "(likely to be)" in my earlier reply.

For the naming - of course an aliasing #define ought to be used then, to
make the purpose clear at the use sites.
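I.e. something like:
```
/* Consumed by populate_physmap() only; never handed to page_alloc.c. */
#define MEMF_force_heap_alloc MEMF_no_refcount
```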

Jan

> I also do agree very much that we need to also avoid the code 
> duplication, so compared to other two suggested approach, adding a new 
> MEMF would be the cleanest solution IMHO, as it is just one bit and MEMF 
> flags are not added very often.
> 
> I would also curious what the other common code maintainers will say on 
> this issue: @Andrew, @Stefano, @Julien, any ideas? Thanks!
> 
> Kind regards,
> Henry




* Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map
  2024-04-02  8:51           ` Jan Beulich
@ 2024-04-02  9:03             ` Henry Wang
  0 siblings, 0 replies; 31+ messages in thread
From: Henry Wang @ 2024-04-02  9:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Anthony PERARD, George Dunlap, Alec Kwapis, xen-devel,
	Andrew Cooper, Julien Grall, Stefano Stabellini

Hi Jan,

On 4/2/2024 4:51 PM, Jan Beulich wrote:
> On 02.04.2024 10:43, Henry Wang wrote:
>> On 4/2/2024 3:05 PM, Jan Beulich wrote:
>>> On 29.03.2024 06:11, Henry Wang wrote:
>>>> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>>>>>> +/*
>>>>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>>>>> + * or static allocation.
>>>>>> + */
>>>>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>>>>     #endif
>>>>> If this is for populate_physmap only, then other sub-ops need to reject
>>>>> its use.
>>>>>
>>>>> I have to admit I'm a little wary of allocating another flag here and ...
>>>>>
>>>>>> --- a/xen/include/xen/mm.h
>>>>>> +++ b/xen/include/xen/mm.h
>>>>>> @@ -205,6 +205,8 @@ struct npfec {
>>>>>>     #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>>>>     #define _MEMF_no_scrub    8
>>>>>>     #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>>>>> +#define _MEMF_force_heap_alloc 9
>>>>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>>>>     #define _MEMF_node        16
>>>>>>     #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>>>>     #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>>>>> ... here - we don't have that many left. Since other sub-ops aren't
>>>>> intended to support this flag, did you consider adding another (perhaps
>>>>> even arch-specific) sub-op instead?
>>>> While revisiting this comment when trying to come up with a V3, I
>>>> realized adding a sub-op here in the same level as
>>>> XENMEM_populate_physmap will basically duplicate the function
>>>> populate_physmap() with just the "else" (the non-1:1 allocation) part,
>>>> also a similar xc_domain_populate_physmap_exact() & co will be needed
>>>> from the toolstack side to call the new sub-op. So I am having the
>>>> concern of the duplication of code and not sure if I understand you
>>>> correctly. Would you please elaborate a bit more or clarify if I
>>>> understand you correctly? Thanks!
>>> Well, the goal is to avoid both code duplication and introduction of a new,
>>> single-use flag. The new sub-op suggestion, I realize now, would mainly have
>>> helped with avoiding the new flag in the public interface. That's still
>>> desirable imo. Internally, have you checked which MEMF_* are actually used
>>> by populate_physmap()? Briefly looking, e.g. MEMF_no_dma and MEMF_no_refcount
>>> aren't. It therefore would be possible to consider re-purposing one that
>>> isn't (likely to be) used there. Of course doing so requires care to avoid
>>> passing that flag down to other code (page_alloc.c functions in particular),
>>> where the meaning would be the original one.
>> I think you made a good point, however, to be honest I am not sure about
>> the repurposing flags such as MEMF_no_dma and MEMF_no_refcount, because
>> I think the name and the purpose of the flag should be clear and
>> less-misleading. Reusing either one for another meaning, namely forcing
>> a non-heap allocation in populate_physmap() would be confusing in the
>> future. Also if one day these flags will be needed in
>> populate_physmap(), current repurposing approach will lead to a even
>> confusing code base.
> For the latter - hence "(likely to be)" in my earlier reply.

Agreed.

> For the naming - of course an aliasing #define ought to be used then, to
> make the purpose clear at the use sites.

Well, I have to admit the alias #define approach is clever (thanks) and
I am gradually getting persuaded, as there is also the benefit of fewer
modifications on my side :) I will first try this approach in v3 if ...

>> I also do agree very much that we need to also avoid the code
>> duplication, so compared to other two suggested approach, adding a new
>> MEMF would be the cleanest solution IMHO, as it is just one bit and MEMF
>> flags are not added very often.
>>
>> I would also curious what the other common code maintainers will say on
>> this issue: @Andrew, @Stefano, @Julien, any ideas? Thanks!

...no other input arrives, and we can continue the discussion in v3.
Thanks!

Kind regards,
Henry

> Jan



