* [RFC PATCH 0/2] Introduce reserved Xenheap
@ 2022-02-24  1:30 Henry Wang
  2022-02-24  1:30 ` [RFC PATCH 1/2] docs, xen/arm: Introduce reserved Xenheap memory Henry Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Henry Wang @ 2022-02-24  1:30 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, wei.chen, penny.zheng, Henry.Wang

The reserved Xenheap, or statically configured Xenheap, refers to parts
of RAM reserved at boot time for Xenheap. Like the static memory
allocation, such reserved Xenheap regions are reserved by configuration
in the device tree using physical address ranges.

This feature is useful for running Xen on Arm MPU systems, where only a
finite number of memory protection regions are available. The limited
number of protection regions places requirements on planning the use of
MPU protection regions, and one or more MPU protection regions need to
be reserved solely for Xenheap.

Therefore, this patch series is sent as an RFC for comments from the
community. The first patch introduces the reserved Xenheap and the
device tree processing code. The second patch adds the handling of
reserved Xenheap pages in the boot and heap allocators on Arm64.

Henry Wang (2):
  docs, xen/arm: Introduce reserved Xenheap memory
  xen/arm: Handle reserved Xenheap pages in boot/heap allocator

 docs/misc/arm/device-tree/booting.txt | 43 ++++++++++++++++++++++
 xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++------
 xen/arch/arm/include/asm/setup.h      |  3 ++
 xen/arch/arm/setup.c                  | 52 +++++++++++++++++++--------
 4 files changed, 125 insertions(+), 25 deletions(-)

-- 
2.17.1




* [RFC PATCH 1/2] docs, xen/arm: Introduce reserved Xenheap memory
  2022-02-24  1:30 [RFC PATCH 0/2] Introduce reserved Xenheap Henry Wang
@ 2022-02-24  1:30 ` Henry Wang
  2022-02-24  1:30 ` [RFC PATCH 2/2] xen/arm: Handle reserved Xenheap pages in boot/heap allocator Henry Wang
  2022-02-25 20:08 ` [RFC PATCH 0/2] Introduce reserved Xenheap Julien Grall
  2 siblings, 0 replies; 10+ messages in thread
From: Henry Wang @ 2022-02-24  1:30 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, wei.chen, penny.zheng, Henry.Wang

This commit introduces the reserved Xenheap memory, which consists of
parts of RAM reserved at the beginning of boot time for Xenheap.

A new boolean field `xen_heap` in `struct membank` is added to record
whether the memory bank is reserved as Xenheap through `xen,static-mem`
in the device tree `chosen` node.

Also, this commit introduces the logic to parse the reserved Xenheap
configuration in the device tree by reusing the device tree entry
definition of the static memory allocation feature:

- Add a boolean parameter `xen_heap` to `device_tree_get_meminfo`
to reflect whether the memory bank is reserved as Xenheap.

- Use `device_tree_get_meminfo` to parse the reserved Xenheap
configuration in the `chosen` node of the device tree.

- In order to reuse the function `device_tree_get_meminfo`, the
return type of `process_chosen_node` is changed from void to int.

A documentation section is added, describing the definition of
reserved Xenheap memory and the method of enabling it through the
device tree on AArch64 at boot time.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
For reviewers:
The name of the device tree property was chosen because we want to
reuse as many of the device tree parsing helpers from the static
memory allocation feature as possible, but we would like to hear the
upstream reviewers' opinion on whether using "xen,static-heap" would
be better, although this would add another checking path in
`early_scan_node`.
---
 docs/misc/arm/device-tree/booting.txt | 43 +++++++++++++++++++++++
 xen/arch/arm/bootfdt.c                | 50 +++++++++++++++++++++------
 xen/arch/arm/include/asm/setup.h      |  1 +
 3 files changed, 84 insertions(+), 10 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e..c8c4a90bf4 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -355,3 +355,46 @@ device-tree:
 
 This will reserve a 512MB region starting at the host physical address
 0x30000000 to be exclusively used by DomU1.
+
+
+Reserved Xenheap Memory
+=======================
+
+The reserved Xenheap memory (also known as statically configured Xenheap)
+refers to parts of RAM reserved at boot time for Xenheap. The memory is
+reserved by configuration in the device tree using physical address ranges.
+
+The definition of reserved Xenheap memory in the device tree indicates:
+
+(1) Only the defined reserved memory areas will be used as Xenheap.
+
+(2) The defined reserved Xenheap areas shall always go to the heap allocator
+and only be used as Xenheap.
+
+The reserved Xenheap memory is an optional feature and can be enabled by
+adding a device tree property in the `chosen` node. Currently, this feature
+is only supported on AArch64, and it reuses the static memory
+allocation device tree configuration.
+
+The DTB property should look as follows:
+
+- property name
+
+    "xen,static-mem" (Should be used in the `chosen` node)
+
+- cells
+
+    Specify the start address and the length of the reserved Xenheap memory.
+    The number of cells for the address and the size should be defined
+    using the properties `#xen,static-mem-address-cells` and
+    `#xen,static-mem-size-cells` respectively.
+
+Below is an example of how to specify the reserved Xenheap in the device tree:
+
+        chosen {
+                #xen,static-mem-address-cells = <0x2>;
+                #xen,static-mem-size-cells = <0x2>;
+                xen,static-mem = <0x0 0x30000000 0x0 0x40000000>;
+        };
+
+This will reserve a 1GB RAM region starting at 0x30000000 as Xenheap.
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index e318ef9603..cc360cfd07 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -66,7 +66,8 @@ void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
 static int __init device_tree_get_meminfo(const void *fdt, int node,
                                           const char *prop_name,
                                           u32 address_cells, u32 size_cells,
-                                          void *data, bool xen_domain)
+                                          void *data, bool xen_domain,
+                                          bool xen_heap)
 {
     const struct fdt_property *prop;
     unsigned int i, banks;
@@ -98,6 +99,7 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
         mem->bank[mem->nr_banks].start = start;
         mem->bank[mem->nr_banks].size = size;
         mem->bank[mem->nr_banks].xen_domain = xen_domain;
+        mem->bank[mem->nr_banks].xen_heap = xen_heap;
         mem->nr_banks++;
     }
 
@@ -187,7 +189,7 @@ static int __init process_memory_node(const void *fdt, int node,
                                       void *data)
 {
     return device_tree_get_meminfo(fdt, node, "reg", address_cells, size_cells,
-                                   data, false);
+                                   data, false, false);
 }
 
 static int __init process_reserved_memory_node(const void *fdt, int node,
@@ -295,7 +297,7 @@ static void __init process_multiboot_node(const void *fdt, int node,
                      kind, start, domU);
 }
 
-static void __init process_chosen_node(const void *fdt, int node,
+static int __init process_chosen_node(const void *fdt, int node,
                                        const char *name,
                                        u32 address_cells, u32 size_cells)
 {
@@ -303,16 +305,41 @@ static void __init process_chosen_node(const void *fdt, int node,
     paddr_t start, end;
     int len;
 
+    printk("Checking for reserved Xenheap in /chosen\n");
+
+    if ( fdt_get_property(fdt, node, "xen,static-mem", NULL) )
+    {
+        u32 address_cells = device_tree_get_u32(fdt, node,
+                                                "#xen,static-mem-address-cells",
+                                                0);
+        u32 size_cells = device_tree_get_u32(fdt, node,
+                                             "#xen,static-mem-size-cells", 0);
+        int rc;
+
+        if ( address_cells < 1 || size_cells < 1 )
+        {
+            printk("fdt: node `%s': invalid #xen,static-mem-address-cells or #xen,static-mem-size-cells\n",
+                   name);
+            return -EINVAL;
+        }
+
+        rc = device_tree_get_meminfo(fdt, node, "xen,static-mem", address_cells,
+                                     size_cells, &bootinfo.reserved_mem, false,
+                                     true);
+        if ( rc )
+            return rc;
+    }
+
     printk("Checking for initrd in /chosen\n");
 
     prop = fdt_get_property(fdt, node, "linux,initrd-start", &len);
     if ( !prop )
         /* No initrd present. */
-        return;
+        return 0;
     if ( len != sizeof(u32) && len != sizeof(u64) )
     {
         printk("linux,initrd-start property has invalid length %d\n", len);
-        return;
+        return -EINVAL;
     }
     start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
 
@@ -320,12 +347,12 @@ static void __init process_chosen_node(const void *fdt, int node,
     if ( !prop )
     {
         printk("linux,initrd-end not present but -start was\n");
-        return;
+        return -EINVAL;
     }
     if ( len != sizeof(u32) && len != sizeof(u64) )
     {
         printk("linux,initrd-end property has invalid length %d\n", len);
-        return;
+        return -EINVAL;
     }
     end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
 
@@ -333,12 +360,14 @@ static void __init process_chosen_node(const void *fdt, int node,
     {
         printk("linux,initrd limits invalid: %"PRIpaddr" >= %"PRIpaddr"\n",
                   start, end);
-        return;
+        return -EINVAL;
     }
 
     printk("Initrd %"PRIpaddr"-%"PRIpaddr"\n", start, end);
 
     add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
+
+    return 0;
 }
 
 static int __init process_domain_node(const void *fdt, int node,
@@ -360,7 +389,8 @@ static int __init process_domain_node(const void *fdt, int node,
                                      "#xen,static-mem-size-cells", 0);
 
     return device_tree_get_meminfo(fdt, node, "xen,static-mem", address_cells,
-                                   size_cells, &bootinfo.reserved_mem, true);
+                                   size_cells, &bootinfo.reserved_mem, true,
+                                   false);
 }
 
 static int __init early_scan_node(const void *fdt,
@@ -380,7 +410,7 @@ static int __init early_scan_node(const void *fdt,
               device_tree_node_compatible(fdt, node, "multiboot,module" )))
         process_multiboot_node(fdt, node, name, address_cells, size_cells);
     else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
-        process_chosen_node(fdt, node, name, address_cells, size_cells);
+        rc = process_chosen_node(fdt, node, name, address_cells, size_cells);
     else if ( depth == 2 && device_tree_node_compatible(fdt, node, "xen,domain") )
         rc = process_domain_node(fdt, node, name, address_cells, size_cells);
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 7a1e1d6798..188a85c51c 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -27,6 +27,7 @@ struct membank {
     paddr_t start;
     paddr_t size;
     bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
+    bool xen_heap;   /* whether the memory bank is reserved as Xenheap. */
 };
 
 struct meminfo {
-- 
2.17.1




* [RFC PATCH 2/2] xen/arm: Handle reserved Xenheap pages in boot/heap allocator
  2022-02-24  1:30 [RFC PATCH 0/2] Introduce reserved Xenheap Henry Wang
  2022-02-24  1:30 ` [RFC PATCH 1/2] docs, xen/arm: Introduce reserved Xenheap memory Henry Wang
@ 2022-02-24  1:30 ` Henry Wang
  2022-02-25 20:08 ` [RFC PATCH 0/2] Introduce reserved Xenheap Julien Grall
  2 siblings, 0 replies; 10+ messages in thread
From: Henry Wang @ 2022-02-24  1:30 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, wei.chen, penny.zheng, Henry.Wang

This commit first adds a global variable `reserved_heap`. This newly
introduced global variable is set at device tree parsing time if
reserved Xenheap ranges are defined in the device tree `chosen` node.

In `setup_mm`, if the reserved Xenheap is enabled and used, we make
sure that these reserved Xenheap pages are added to the boot allocator.
Otherwise, if all available memory were used for the reserved Xenheap,
the boot allocator would contain zero pages at boot time and eventually
cause failures.

These reserved Xenheap pages in the boot allocator are added to the
heap allocator at `end_boot_allocator()`.

If the reserved Xenheap is disabled, we stick to the current page
allocation strategy at boot time.

Also, this commit removes some unneeded braces in `setup_mm`,
following the Xen coding style.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/bootfdt.c           |  2 ++
 xen/arch/arm/include/asm/setup.h |  2 ++
 xen/arch/arm/setup.c             | 52 +++++++++++++++++++++++---------
 3 files changed, 41 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index cc360cfd07..588f1e3aeb 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -328,6 +328,8 @@ static int __init process_chosen_node(const void *fdt, int node,
                                      true);
         if ( rc )
             return rc;
+
+        reserved_heap = true;
     }
 
     printk("Checking for initrd in /chosen\n");
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 188a85c51c..fef9d0f378 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -92,6 +92,8 @@ extern struct bootinfo bootinfo;
 
 extern domid_t max_init_domid;
 
+extern bool reserved_heap;
+
 void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len);
 
 size_t estimate_efi_size(int mem_nr_banks);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index d5d0792ed4..2bb20deac5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -73,6 +73,8 @@ integer_param("xenheap_megabytes", opt_xenheap_megabytes);
 
 domid_t __read_mostly max_init_domid;
 
+bool reserved_heap;
+
 static __used void init_done(void)
 {
     /* Must be done past setting system_state. */
@@ -772,41 +774,61 @@ static void __init setup_mm(void)
     paddr_t ram_start = ~0;
     paddr_t ram_end = 0;
     paddr_t ram_size = 0;
+    paddr_t bank_start = ~0;
+    paddr_t bank_size = 0;
+    paddr_t bank_end = 0;
     int bank;
 
     init_pdx();
 
+    if ( reserved_heap )
+    {
+        for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
+        {
+            if ( bootinfo.reserved_mem.bank[bank].xen_heap )
+            {
+                bank_start = bootinfo.reserved_mem.bank[bank].start;
+                bank_size = bootinfo.reserved_mem.bank[bank].size;
+                bank_end = bank_start + bank_size;
+
+                init_boot_pages(bank_start, bank_end);
+            }
+        }
+    }
+
     total_pages = 0;
     for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
     {
-        paddr_t bank_start = bootinfo.mem.bank[bank].start;
-        paddr_t bank_size = bootinfo.mem.bank[bank].size;
-        paddr_t bank_end = bank_start + bank_size;
         paddr_t s, e;
 
+        bank_start = bootinfo.mem.bank[bank].start;
+        bank_size = bootinfo.mem.bank[bank].size;
+        bank_end = bank_start + bank_size;
+
         ram_size = ram_size + bank_size;
         ram_start = min(ram_start,bank_start);
         ram_end = max(ram_end,bank_end);
 
         setup_xenheap_mappings(bank_start>>PAGE_SHIFT, bank_size>>PAGE_SHIFT);
 
-        s = bank_start;
-        while ( s < bank_end )
+        if ( !reserved_heap )
         {
-            paddr_t n = bank_end;
+            s = bank_start;
+            while ( s < bank_end )
+            {
+                paddr_t n = bank_end;
 
-            e = next_module(s, &n);
+                e = next_module(s, &n);
 
-            if ( e == ~(paddr_t)0 )
-            {
-                e = n = bank_end;
-            }
+                if ( e == ~(paddr_t)0 )
+                    e = n = bank_end;
 
-            if ( e > bank_end )
-                e = bank_end;
+                if ( e > bank_end )
+                    e = bank_end;
 
-            fw_unreserved_regions(s, e, init_boot_pages, 0);
-            s = n;
+                fw_unreserved_regions(s, e, init_boot_pages, 0);
+                s = n;
+            }
         }
     }
 
-- 
2.17.1




* Re: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-02-24  1:30 [RFC PATCH 0/2] Introduce reserved Xenheap Henry Wang
  2022-02-24  1:30 ` [RFC PATCH 1/2] docs, xen/arm: Introduce reserved Xenheap memory Henry Wang
  2022-02-24  1:30 ` [RFC PATCH 2/2] xen/arm: Handle reserved Xenheap pages in boot/heap allocator Henry Wang
@ 2022-02-25 20:08 ` Julien Grall
  2022-02-28  7:12   ` Henry Wang
  2 siblings, 1 reply; 10+ messages in thread
From: Julien Grall @ 2022-02-25 20:08 UTC (permalink / raw)
  To: Henry Wang, xen-devel, sstabellini
  Cc: Bertrand.Marquis, wei.chen, penny.zheng

Hi Henry,

On 24/02/2022 01:30, Henry Wang wrote:
> The reserved Xenheap, or statically configured Xenheap, refers to parts
> of RAM reserved at boot time for Xenheap. Like the static memory
> allocation, such reserved Xenheap regions are reserved by configuration
> in the device tree using physical address ranges.

In Xen, we have the concept of domheap and xenheap. For Arm64 and x86 
they would be the same. But for Arm32, they would be different: xenheap 
is always mapped whereas domheap is separate.
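
To illustrate the difference, here is a minimal, purely illustrative
sketch of how the two heaps are consumed in Xen code (not part of this
series): xenheap allocations come back already mapped, while domheap
pages must be mapped on demand.

    #include <xen/lib.h>
    #include <xen/mm.h>
    #include <xen/domain_page.h>

    static void heap_example(void)
    {
        /* Xenheap: the returned pointer is already mapped. */
        void *x = alloc_xenheap_page();
        /* Domheap: we only get a struct page_info and must map it. */
        struct page_info *pg = alloc_domheap_page(NULL, 0);

        if ( x && pg )
        {
            void *d = map_domain_page(page_to_mfn(pg));

            memset(x, 0, PAGE_SIZE); /* usable directly */
            memset(d, 0, PAGE_SIZE); /* usable only while mapped */
            unmap_domain_page(d);
        }

        if ( pg )
            free_domheap_page(pg);
        if ( x )
            free_xenheap_page(x);
    }

On Arm64 (and x86) map_domain_page() is essentially free because the
domheap is always mapped; on Arm32 it creates a transient mapping.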

Skimming through the series, I think you want to use the region for both 
domheap and xenheap. Is that correct?

Furthermore, now that we are introducing more static regions, it will
get easier to overlap the regions by mistake. I think we want to have
some logic in Xen (or outside) to ensure that none of them overlaps. Do
you have any plan for that?

> 
> This feature is useful for running Xen on Arm MPU systems, where only a
> finite number of memory protection regions are available. The limited
> number of protection regions places requirements on planning the use of
> MPU protection regions, and one or more MPU protection regions need to
> be reserved solely for Xenheap.
> 
> Therefore, this patch series is sent as an RFC for comments from the
> community. The first patch introduces the reserved Xenheap and the
> device tree processing code. The second patch adds the handling of
> reserved Xenheap pages in the boot and heap allocators on Arm64.
> 
> Henry Wang (2):
>    docs, xen/arm: Introduce reserved Xenheap memory
>    xen/arm: Handle reserved Xenheap pages in boot/heap allocator
> 
>   docs/misc/arm/device-tree/booting.txt | 43 ++++++++++++++++++++++
>   xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++------
>   xen/arch/arm/include/asm/setup.h      |  3 ++
>   xen/arch/arm/setup.c                  | 52 +++++++++++++++++++--------
>   4 files changed, 125 insertions(+), 25 deletions(-)
> 

-- 
Julien Grall



* RE: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-02-25 20:08 ` [RFC PATCH 0/2] Introduce reserved Xenheap Julien Grall
@ 2022-02-28  7:12   ` Henry Wang
  2022-02-28 18:51     ` Julien Grall
  0 siblings, 1 reply; 10+ messages in thread
From: Henry Wang @ 2022-02-28  7:12 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini
  Cc: Bertrand Marquis, Wei Chen, Penny Zheng

Hi Julien,

Thanks very much for taking the time to read the series, and for your
feedback. Please find the inline reply below.

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, February 26, 2022 4:09 AM
> To: Henry Wang <Henry.Wang@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
> 
> Hi Henry,
> 
> On 24/02/2022 01:30, Henry Wang wrote:
> > The reserved Xenheap, or statically configured Xenheap, refers to parts
> > of RAM reserved at boot time for Xenheap. Like the static memory
> > allocation, such reserved Xenheap regions are reserved by configuration
> > in the device tree using physical address ranges.
> 
> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
> they would be the same. But for Arm32, they would be different: xenheap
> is always mapped whereas domheap is separate.
> 
> Skimming through the series, I think you want to use the region for both
> domheap and xenheap. Is that correct?

Yes, I think that would be correct: for Arm32, instead of using the full
`ram_pages` as the initial value of `heap_pages`, we want to use the
region specified in the device tree. But we are unsure whether this is
the correct (or preferred) way for Arm32, so in this series we only
implemented the reserved heap for Arm64.

Could you please share your opinion on this? Thanks!
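
For concreteness, a rough sketch of the kind of helper we have in mind
for sizing `heap_pages` from the reserved banks (the helper itself is
hypothetical; the `xen_heap` flag is the one this series adds):

    /* Hypothetical sketch: sum the banks flagged as reserved Xenheap,
     * so heap_pages can be seeded from them instead of ram_pages. */
    static paddr_t __init reserved_heap_pages(void)
    {
        paddr_t bytes = 0;
        unsigned int i;

        for ( i = 0; i < bootinfo.reserved_mem.nr_banks; i++ )
            if ( bootinfo.reserved_mem.bank[i].xen_heap )
                bytes += bootinfo.reserved_mem.bank[i].size;

        return bytes >> PAGE_SHIFT;
    }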

> 
> Furthermore, now that we are introducing more static regions, it will
> get easier to overlap the regions by mistake. I think we want to have
> some logic in Xen (or outside) to ensure that none of them overlaps. Do
> you have any plan for that?

Totally agree with this idea, but before we actually implement the
code, we would like to first share our thoughts on this: one option
could be to add data structures to note down these static memory
regions when the device tree is parsed, and then we can check whether
they overlap. Over the long term (and this long-term option is
currently not in our plan), maybe we can add something to the Xen
toolstack for this purpose?
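
As a starting point, here is a minimal sketch of what such an overlap
check could look like (all names below are hypothetical and not part of
this series):

    /* Hypothetical sketch: record every statically configured range at
     * device tree parsing time, then reject any pair that intersects. */
    struct static_region {
        paddr_t start;
        paddr_t size;
    };

    static int __init check_static_regions(const struct static_region *r,
                                           unsigned int nr)
    {
        unsigned int i, j;

        for ( i = 0; i < nr; i++ )
            for ( j = i + 1; j < nr; j++ )
                /* [start, start + size) ranges overlap iff each one
                 * starts before the other one ends. */
                if ( r[i].start < r[j].start + r[j].size &&
                     r[j].start < r[i].start + r[i].size )
                    return -EINVAL;

        return 0;
    }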

Also, I am wondering if the overlapping check logic should be introduced
in this series. WDYT?

> 
> >
> > This feature is useful for running Xen on Arm MPU systems, where only a
> > finite number of memory protection regions are available. The limited
> > number of protection regions places requirements on planning the use of
> > MPU protection regions, and one or more MPU protection regions need to
> > be reserved solely for Xenheap.
> >
> > Therefore, this patch series is sent as an RFC for comments from the
> > community. The first patch introduces the reserved Xenheap and the
> > device tree processing code. The second patch adds the handling of
> > reserved Xenheap pages in the boot and heap allocators on Arm64.
> >
> > Henry Wang (2):
> >    docs, xen/arm: Introduce reserved Xenheap memory
> >    xen/arm: Handle reserved Xenheap pages in boot/heap allocator
> >
> >   docs/misc/arm/device-tree/booting.txt | 43 ++++++++++++++++++++++
> >   xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++------
> >   xen/arch/arm/include/asm/setup.h      |  3 ++
> >   xen/arch/arm/setup.c                  | 52 +++++++++++++++++++--------
> >   4 files changed, 125 insertions(+), 25 deletions(-)
> >
> 
> --
> Julien Grall

Kind regards,

Henry


* Re: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-02-28  7:12   ` Henry Wang
@ 2022-02-28 18:51     ` Julien Grall
  2022-03-01  2:11       ` Henry Wang
  2022-03-01 13:39       ` Bertrand Marquis
  0 siblings, 2 replies; 10+ messages in thread
From: Julien Grall @ 2022-02-28 18:51 UTC (permalink / raw)
  To: Henry Wang, xen-devel, sstabellini
  Cc: Bertrand Marquis, Wei Chen, Penny Zheng



On 28/02/2022 07:12, Henry Wang wrote:
> Hi Julien,

Hi Henry,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Saturday, February 26, 2022 4:09 AM
>> To: Henry Wang <Henry.Wang@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
>> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
>>
>> Hi Henry,
>>
>> On 24/02/2022 01:30, Henry Wang wrote:
>>> The reserved Xenheap, or statically configured Xenheap, refers to parts
>>> of RAM reserved at boot time for Xenheap. Like the static memory
>>> allocation, such reserved Xenheap regions are reserved by configuration
>>> in the device tree using physical address ranges.
>>
>> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
>> they would be the same. But for Arm32, they would be different: xenheap
>> is always mapped whereas domheap is separate.
>>
>> Skimming through the series, I think you want to use the region for both
>> domheap and xenheap. Is that correct?
> 
> Yes, I think that would be correct: for Arm32, instead of using the full
> `ram_pages` as the initial value of `heap_pages`, we want to use the
> region specified in the device tree. But we are unsure whether this is
> the correct (or preferred) way for Arm32, so in this series we only
> implemented the reserved heap for Arm64.

That's an interesting point. When I skimmed through the series on 
Friday, my first thought was that for arm32 it would be only xenheap (so
all the rest of memory is domheap).

However, Xen can allocate memory from domheap for its own purposes
(e.g. when we don't need contiguous memory, or for page-tables).

In a fully static environment, the domheap and xenheap are both going
to be quite small. It would also be somewhat difficult for a user to
size them. So I think it would be easier to use the region you
introduce for both domheap and xenheap.

Stefano, Bertrand, any opinions?

On a separate topic, I think we need some documentation explaining how
a user can size the xenheap. How did you figure it out for your setup?

>>
>> Furthermore, now that we are introducing more static regions, it will
>> get easier to overlap the regions by mistake. I think we want to have
>> some logic in Xen (or outside) to ensure that none of them overlaps. Do
>> you have any plan for that?
> 
> Totally agree with this idea, but before we actually implement the
> code, we would like to first share our thoughts on this: one option
> could be to add data structures to note down these static memory
> regions when the device tree is parsed, and then we can check whether
> they overlap.

This should work.

> Over the long term (and this long-term option is currently not in our
> plan), maybe we can add something to the Xen toolstack for this
> purpose?

When I read "Xen toolstack", I read the tools that will run in dom0. Is
that what you meant?

> 
> Also, I am wondering if the overlapping check logic should be introduced
> in this series. WDYT?

I would do that in a separate series.

Cheers,

-- 
Julien Grall



* RE: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-02-28 18:51     ` Julien Grall
@ 2022-03-01  2:11       ` Henry Wang
  2022-03-01  2:23         ` Wei Chen
  2022-03-01 13:39       ` Bertrand Marquis
  1 sibling, 1 reply; 10+ messages in thread
From: Henry Wang @ 2022-03-01  2:11 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini
  Cc: Bertrand Marquis, Wei Chen, Penny Zheng

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> On 28/02/2022 07:12, Henry Wang wrote:
> > Hi Julien,
> 
> Hi Henry,
> 
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Hi Henry,
> >>
> >> On 24/02/2022 01:30, Henry Wang wrote:
> >>> The reserved Xenheap, or statically configured Xenheap, refers to
> >>> parts of RAM reserved at boot time for Xenheap. Like the static
> >>> memory allocation, such reserved Xenheap regions are reserved by
> >>> configuration in the device tree using physical address ranges.
> >>
> >> In Xen, we have the concept of domheap and xenheap. For Arm64 and
> >> x86 they would be the same. But for Arm32, they would be different:
> >> xenheap is always mapped whereas domheap is separate.
> >>
> >> Skimming through the series, I think you want to use the region for both
> >> domheap and xenheap. Is that correct?
> >
> > Yes, I think that would be correct: for Arm32, instead of using the
> > full `ram_pages` as the initial value of `heap_pages`, we want to use
> > the region specified in the device tree. But we are unsure whether
> > this is the correct (or preferred) way for Arm32, so in this series
> > we only implemented the reserved heap for Arm64.
> 
> That's an interesting point. When I skimmed through the series on
> Friday, my first thought was that for arm32 it would be only xenheap (so
> all the rest of memory is domheap).
> 
> However, Xen can allocate memory from domheap for its own purposes
> (e.g. when we don't need contiguous memory, or for page-tables).
> 
> In a fully static environment, the domheap and xenheap are both going
> to be quite small. It would also be somewhat difficult for a user to
> size them. So I think it would be easier to use the region you
> introduce for both domheap and xenheap.
> 
> Stefano, Bertrand, any opinions?
> 
> On a separate topic, I think we need some documentation explaining how
> a user can size the xenheap. How did you figure it out for your setup?

Not sure if I fully understand the question, so I will explain in two
parts. I tested this series on a dom0less (static mem) system on
FVP_Base.
(1) For configuring the system, I followed the documentation I added in
the first patch in this series (docs/misc/arm/device-tree/booting.txt).
The idea is to add some static mem regions under the chosen node.

     chosen {
+        #xen,static-mem-address-cells = <0x2>;
+        #xen,static-mem-size-cells = <0x2>;
+        xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000 0x0 0x08000000>;
     [...]
     }

(2) For verifying this series, what I did was basically playing with the
size and number of the regions, adding printks, and checking whether the
guests can boot as expected when I change the Xenheap size.

> 
> >>
> >> Furthermore, now that we are introducing more static regions, it
> >> will get easier to overlap the regions by mistake. I think we want
> >> to have some logic in Xen (or outside) to ensure that none of them
> >> overlaps. Do you have any plan for that?
> >
> > Totally agree with this idea, but before we actually implement the
> > code, we would like to first share our thoughts on this: one option
> > could be to add data structures to note down these static memory
> > regions when the device tree is parsed, and then we can check
> > whether they overlap.
> 
> This should work.

Ack.

> 
> > Over the long term (and this long-term option is currently not in
> > our plan), maybe we can add something to the Xen toolstack for this
> > purpose?
> 
> When I read "Xen toolstack", I read the tools that will run in dom0.
> Is that what you meant?

Nonono, sorry for the confusion. I mean a build-time tool that can run
on the host (build machine) to generate/configure the Xen DTS for
statically allocated memory. But maybe this tool could be placed in the
Xen tools, or it could be a separate tool that is out of Xen's scope.

Anyway, this is just an idea, as we find it is not easy for users to
configure so many static items manually.

> 
> >
> > Also, I am wondering if the overlapping check logic should be introduced
> > in this series. WDYT?
> 
> I would do that in a separate series.

Ack.

Kind regards,

Henry

> 
> Cheers,
> 
> --
> Julien Grall


* RE: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-03-01  2:11       ` Henry Wang
@ 2022-03-01  2:23         ` Wei Chen
  2022-03-01 23:32           ` Stefano Stabellini
  0 siblings, 1 reply; 10+ messages in thread
From: Wei Chen @ 2022-03-01  2:23 UTC (permalink / raw)
  To: Henry Wang, Julien Grall, xen-devel, sstabellini
  Cc: Bertrand Marquis, Penny Zheng

Hi Julien,

> -----Original Message-----
> From: Henry Wang <Henry.Wang@arm.com>
> Sent: 1 March 2022 10:11
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
> Subject: RE: [RFC PATCH 0/2] Introduce reserved Xenheap
> 
> Hi Julien,
> 
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > On 28/02/2022 07:12, Henry Wang wrote:
> > > Hi Julien,
> >
> > Hi Henry,
> >
> > >> -----Original Message-----
> > >> From: Julien Grall <julien@xen.org>
> > >> Hi Henry,
> > >>
> > >> On 24/02/2022 01:30, Henry Wang wrote:
> > >>> The reserved Xenheap, or statically configured Xenheap, refers
> > >>> to parts of RAM reserved at boot time for Xenheap. Like the
> > >>> static memory allocation, such reserved Xenheap regions are
> > >>> reserved by configuration in the device tree using physical
> > >>> address ranges.
> > >>
> > >> In Xen, we have the concept of domheap and xenheap. For Arm64 and
> > >> x86 they would be the same. But for Arm32, they would be different:
> > >> xenheap is always mapped whereas domheap is separate.
> > >>
> > >> Skimming through the series, I think you want to use the region
> > >> for both domheap and xenheap. Is that correct?
> > >
> > > Yes, I think that would be correct: for Arm32, instead of using
> > > the full `ram_pages` as the initial value of `heap_pages`, we want
> > > to use the region specified in the device tree. But we are unsure
> > > whether this is the correct (or preferred) way for Arm32, so in
> > > this series we only implemented the reserved heap for Arm64.
> >
> > That's an interesting point. When I skimmed through the series on
> > Friday, my first thought was that for arm32 it would be only xenheap (so
> > all the rest of memory is domheap).
> >
> > However, Xen can allocate memory from domheap for its own purposes
> > (e.g. when we don't need contiguous memory, or for page-tables).
> >
> > In a fully static environment, the domheap and xenheap are both
> > going to be quite small. It would also be somewhat difficult for a
> > user to size them. So I think it would be easier to use the region
> > you introduce for both domheap and xenheap.
> >
> > Stefano, Bertrand, any opinions?
> >
> > On a separate topic, I think we need some documentation explaining
> > how a user can size the xenheap. How did you figure it out for your
> > setup?
> 
> Not sure if I fully understand the question, so I will explain in two
> parts. I tested this series on a dom0less (static mem) system on
> FVP_Base.
> (1) For configuring the system, I followed the documentation I added
> in the first patch in this series
> (docs/misc/arm/device-tree/booting.txt). The idea is to add some
> static mem regions under the chosen node.
> 
>      chosen {
> +        #xen,static-mem-address-cells = <0x2>;
> +        #xen,static-mem-size-cells = <0x2>;
> +        xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000
> 0x0 0x08000000>;
>      [...]
>      }
> 
> (2) For verifying this series, what I did was basically playing with
> the size and number of the regions, adding printks, and checking
> whether the guests can boot as expected when I change the Xenheap
> size.
> 
> >
> > >>
> > >> Furthermore, now that we are introducing more static regions, it
> > >> will get easier to overlap the regions by mistake. I think we
> > >> want to have some logic in Xen (or outside) to ensure that none
> > >> of them overlaps. Do you have any plan for that?
> > >
> > > Totally agree with this idea, but before we actually implement the
> > > code, we would like to first share our thoughts on this: one
> > > option could be to add data structures to note down these static
> > > memory regions when the device tree is parsed, and then we can
> > > check whether they overlap.
> >
> > This should work.
> 
> Ack.
> 
> >
> > > Over the long term (and this long-term option is currently not in
> > > our plan), maybe we can add something to the Xen toolstack for
> > > this purpose?
> >
> > When I read "Xen toolstack", I read the tools that will run in dom0.
> > Is that what you meant?
> 
> Nonono, sorry for the confusion. I mean a build-time tool that can run
> on the host (build machine) to generate/configure the Xen DTS for
> statically allocated memory. But maybe this tool could be placed in
> the Xen tools, or it could be a separate tool that is out of Xen's
> scope.
> 
> Anyway, this is just an idea, as we find it is not easy for users to
> configure so many static items manually.

Not only for this one: as the v8R64 support code also includes lots of
statically allocated items, it will encounter this user configuration
issue as well. So this would be a long-term consideration. We can
discuss this topic after the Xen v8R64 support upstreaming work is done.

And this tool does not necessarily need to be provided by the community.
Vendors that want to use Xen can also do it themselves. IMO, it would be
better if the community could provide it. Anyway, let's defer this
topic :)

Thanks,
Wei Chen

> 
> >
> > >
> > > Also, I am wondering if the overlapping check logic should be
> introduced
> > > in this series. WDYT?
> >
> > I would do that in a separate series.
> 
> Ack.
> 
> Kind regards,
> 
> Henry
> 
> >
> > Cheers,
> >
> > --
> > Julien Grall



* Re: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-02-28 18:51     ` Julien Grall
  2022-03-01  2:11       ` Henry Wang
@ 2022-03-01 13:39       ` Bertrand Marquis
  1 sibling, 0 replies; 10+ messages in thread
From: Bertrand Marquis @ 2022-03-01 13:39 UTC (permalink / raw)
  To: Julien Grall; +Cc: Henry Wang, xen-devel, sstabellini, Wei Chen, Penny Zheng

Hi,

> On 28 Feb 2022, at 18:51, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 28/02/2022 07:12, Henry Wang wrote:
>> Hi Julien,
> 
> Hi Henry,
> 
>>> -----Original Message-----
>>> From: Julien Grall <julien@xen.org>
>>> Sent: Saturday, February 26, 2022 4:09 AM
>>> To: Henry Wang <Henry.Wang@arm.com>; xen-devel@lists.xenproject.org;
>>> sstabellini@kernel.org
>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
>>> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
>>> 
>>> Hi Henry,
>>> 
>>> On 24/02/2022 01:30, Henry Wang wrote:
>>>> The reserved Xenheap, or statically configured Xenheap, refers to parts
>>>> of RAM reserved at boot time for Xenheap. Like the static memory
>>>> allocation, such reserved Xenheap regions are reserved by configuration
>>>> in the device tree using physical address ranges.
>>> 
>>> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
>>> they would be the same. But for Arm32, they would be different: xenheap
>>> is always mapped whereas domheap is separate.
>>> 
>>> Skimming through the series, I think you want to use the region for both
>>> domheap and xenheap. Is that correct?
>> Yes, I think that would be correct: for Arm32, instead of using the full
>> `ram_pages` as the initial value of `heap_pages`, we want to use the
>> region specified in the device tree. But we are unsure whether this is
>> the correct (or preferred) way for Arm32, so in this series we only
>> implemented the reserved heap for Arm64.
> 
> That's an interesting point. When I skimmed through the series on Friday, my first thought was that for arm32 it would be only xenheap (so
> all the rest of memory is domheap).
> 
> However, Xen can allocate memory from domheap for its own purposes (e.g. when we don't need contiguous memory, or for page-tables).
> 
> In a fully static environment, the domheap and xenheap are both going to be quite small. It would also be somewhat difficult for a user to size them. So I think it would be easier to use the region you introduce for both domheap and xenheap.
> 
> Stefano, Bertrand, any opinions?

Only one region is easier to configure, and I think in this case it will also prevent a lot of over-allocation.
So in a fully static case, having only one heap is a good strategy for now.

There might be some cases where someone would want to fully control the memory allocated by Xen per domain, and in this case be able to size it for each guest (to make sure one guest cannot be impacted by another at all).
But this is definitely something that could be done later, if needed.

Cheers
Bertrand

> 
> On a separate topic, I think we need some documentation explaining how a user can size the xenheap. How did you figure it out for your setup?
> 
>>> 
>>> Furthermore, now that we are introducing more static regions, it will
>>> get easier to overlap the regions by mistake. I think we want to have
>>> some logic in Xen (or outside) to ensure that none of them overlaps. Do
>>> you have any plan for that?
>> Totally agree with this idea, but before we actually implement the
>> code, we would like to first share our thoughts on this: one option
>> could be to add data structures to note down these static memory
>> regions when the device tree is parsed, and then we can check whether
>> they overlap.
> 
> This should work.
> 
>> Over the long term (and this long-term option is currently not in our
>> plan), maybe we can add something to the Xen toolstack for this
>> purpose?
> 
> When I read "Xen toolstack", I read the tools that will run in dom0. Is that what you meant?
> 
>> Also, I am wondering if the overlapping check logic should be introduced
>> in this series. WDYT?
> 
> I would do that in a separate series.
> 
> Cheers,
> 
> -- 
> Julien Grall




* RE: [RFC PATCH 0/2] Introduce reserved Xenheap
  2022-03-01  2:23         ` Wei Chen
@ 2022-03-01 23:32           ` Stefano Stabellini
  0 siblings, 0 replies; 10+ messages in thread
From: Stefano Stabellini @ 2022-03-01 23:32 UTC (permalink / raw)
  To: Wei Chen
  Cc: Henry Wang, Julien Grall, xen-devel, sstabellini,
	Bertrand Marquis, Penny Zheng

On Tue, 1 Mar 2022, Wei Chen wrote:
> > Hi Julien,
> > 
> > > -----Original Message-----
> > > From: Julien Grall <julien@xen.org>
> > > On 28/02/2022 07:12, Henry Wang wrote:
> > > > Hi Julien,
> > >
> > > Hi Henry,
> > >
> > > >> -----Original Message-----
> > > >> From: Julien Grall <julien@xen.org>
> > > >> Hi Henry,
> > > >>
> > > >> On 24/02/2022 01:30, Henry Wang wrote:
> > > >>> The reserved Xenheap, or statically configured Xenheap, refers
> > > >>> to parts of RAM reserved at boot time for Xenheap. Like the
> > > >>> static memory allocation, such reserved Xenheap regions are
> > > >>> reserved by configuration in the device tree using physical
> > > >>> address ranges.
> > > >>
> > > >> In Xen, we have the concept of domheap and xenheap. For Arm64
> > > >> and x86 they would be the same. But for Arm32, they would be
> > > >> different: xenheap is always mapped whereas domheap is separate.
> > > >>
> > > >> Skimming through the series, I think you want to use the region
> > > >> for both domheap and xenheap. Is that correct?
> > > >
> > > > Yes, I think that would be correct: for Arm32, instead of using
> > > > the full `ram_pages` as the initial value of `heap_pages`, we
> > > > want to use the region specified in the device tree. But we are
> > > > unsure whether this is the correct (or preferred) way for Arm32,
> > > > so in this series we only implemented the reserved heap for Arm64.
> > >
> > > That's an interesting point. When I skimmed through the series on
> > > Friday, my first thought was that for arm32 it would be only xenheap (so
> > > all the rest of memory is domheap).
> > >
> > > However, Xen can allocate memory from domheap for its own purposes
> > > (e.g. when we don't need contiguous memory, or for page-tables).
> > >
> > > In a fully static environment, the domheap and xenheap are both
> > > going to be quite small. It would also be somewhat difficult for a
> > > user to size them. So I think it would be easier to use the region
> > > you introduce for both domheap and xenheap.
> > >
> > > Stefano, Bertrand, any opinions?
> > >
> > > On a separate topic, I think we need some documentation explaining
> > > how a user can size the xenheap. How did you figure it out for
> > > your setup?
> > 
> > Not sure if I fully understand the question, so I will explain in
> > two parts. I tested this series on a dom0less (static mem) system on
> > FVP_Base.
> > (1) For configuring the system, I followed the documentation I added
> > in the first patch in this series
> > (docs/misc/arm/device-tree/booting.txt). The idea is to add some
> > static mem regions under the chosen node.
> > 
> >      chosen {
> > +        #xen,static-mem-address-cells = <0x2>;
> > +        #xen,static-mem-size-cells = <0x2>;
> > +        xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000
> > 0x0 0x08000000>;
> >      [...]
> >      }
> > 
> > (2) For verifying this series, what I did was basically playing with
> > the size and number of the regions, adding printks, and checking
> > whether the guests can boot as expected when I change the Xenheap
> > size.
> > 
> > >
> > > >>
> > > >> Furthermore, now that we are introducing more static regions,
> > > >> it will get easier to overlap the regions by mistake. I think
> > > >> we want to have some logic in Xen (or outside) to ensure that
> > > >> none of them overlaps. Do you have any plan for that?
> > > >
> > > > Totally agree with this idea, but before we actually implement
> > > > the code, we would like to first share our thoughts on this: one
> > > > option could be to add data structures to note down these static
> > > > memory regions when the device tree is parsed, and then we can
> > > > check whether they overlap.
> > >
> > > This should work.
> > 
> > Ack.
> > 
> > >
> > > > Over the long term (and this long-term option is currently not
> > > > in our plan), maybe we can add something to the Xen toolstack
> > > > for this purpose?
> > >
> > > When I read "Xen toolstack", I read the tools that will run in
> > > dom0. Is that what you meant?
> > 
> > Nonono, sorry for the confusion. I mean a build-time tool that can
> > run on the host (build machine) to generate/configure the Xen DTS
> > for statically allocated memory. But maybe this tool could be placed
> > in the Xen tools, or it could be a separate tool that is out of
> > Xen's scope.
> > 
> > Anyway, this is just an idea, as we find it is not easy for users to
> > configure so many static items manually.
> 
> Not only for this one: as the v8R64 support code also includes lots of
> statically allocated items, it will encounter this user configuration
> issue as well. So this would be a long-term consideration. We can
> discuss this topic after the Xen v8R64 support upstreaming work is
> done.
> 
> And this tool does not necessarily need to be provided by the
> community. Vendors that want to use Xen can also do it themselves.
> IMO, it would be better if the community could provide it. Anyway,
> let's defer this topic :)

Yes, I agree with you that it would be best if this tool was provided by
the community. I'll continue the conversation on the Armv8-R64 thread.




Thread overview: 10+ messages
2022-02-24  1:30 [RFC PATCH 0/2] Introduce reserved Xenheap Henry Wang
2022-02-24  1:30 ` [RFC PATCH 1/2] docs, xen/arm: Introduce reserved Xenheap memory Henry Wang
2022-02-24  1:30 ` [RFC PATCH 2/2] xen/arm: Handle reserved Xenheap pages in boot/heap allocator Henry Wang
2022-02-25 20:08 ` [RFC PATCH 0/2] Introduce reserved Xenheap Julien Grall
2022-02-28  7:12   ` Henry Wang
2022-02-28 18:51     ` Julien Grall
2022-03-01  2:11       ` Henry Wang
2022-03-01  2:23         ` Wei Chen
2022-03-01 23:32           ` Stefano Stabellini
2022-03-01 13:39       ` Bertrand Marquis
