All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 00/10] Domain on Static Allocation
@ 2021-05-18  5:21 Penny Zheng
  2021-05-18  5:21 ` [PATCH 01/10] xen/arm: introduce domain " Penny Zheng
                   ` (9 more replies)
  0 siblings, 10 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

Static allocation refers to system or sub-system(domains) for which memory
areas are pre-defined by configuration using physical address ranges.
Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the
beginning, shall never go to heap allocator or boot allocator for any use.

This Patch Serie only talks about Domain on Static Allocation.

Domain on Static Allocation is supported through device tree property
`xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.
By default, they shall be mapped to the fixed guest RAM address
`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.

Looking into related [design link](
https://lists.xenproject.org/archives/html/xen-devel/2021-05/msg00882.html)
for more details.

The whole design is about Static Allocation and 1:1 direct-map, and this
Patch Serie only covers parts of it, which are Domain on Static Allocation.
Other features will be delievered through different patch series.

Penny Zheng (10):
  xen/arm: introduce domain on Static Allocation
  xen/arm: handle static memory in dt_unreserved_regions
  xen/arm: introduce PGC_reserved
  xen/arm: static memory initialization
  xen/arm: introduce alloc_staticmem_pages
  xen: replace order with nr_pfns in assign_pages for better
    compatibility
  xen/arm: intruduce alloc_domstatic_pages
  xen/arm: introduce reserved_page_list
  xen/arm: parse `xen,static-mem` info during domain construction
  xen/arm: introduce allocate_static_memory

 docs/misc/arm/device-tree/booting.txt |  33 ++++
 xen/arch/arm/bootfdt.c                |  52 +++++++
 xen/arch/arm/domain_build.c           | 211 +++++++++++++++++++++++++-
 xen/arch/arm/setup.c                  |  41 ++++-
 xen/arch/x86/pv/dom0_build.c          |   2 +-
 xen/common/domain.c                   |   1 +
 xen/common/grant_table.c              |   2 +-
 xen/common/memory.c                   |   4 +-
 xen/common/page_alloc.c               | 210 +++++++++++++++++++++++--
 xen/include/asm-arm/domain.h          |   3 +
 xen/include/asm-arm/mm.h              |  16 +-
 xen/include/asm-arm/setup.h           |   2 +
 xen/include/xen/mm.h                  |   9 +-
 xen/include/xen/sched.h               |   5 +
 14 files changed, 564 insertions(+), 27 deletions(-)

-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  8:58   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions Penny Zheng
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

Static Allocation refers to system or sub-system(domains) for which memory
areas are pre-defined by configuration using physical address ranges.
Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the
beginning, shall never go to heap allocator or boot allocator for any use.

Domains on Static Allocation is supported through device tree property
`xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.
By default, they shall be mapped to the fixed guest RAM address
`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.

This patch introduces this new `xen,static-mem` property to define static
memory nodes in device tree file.
This patch also documents and parses this new attribute at boot time and
stores related info in static_mem for later initialization.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 docs/misc/arm/device-tree/booting.txt | 33 +++++++++++++++++
 xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++++++++
 xen/include/asm-arm/setup.h           |  2 ++
 3 files changed, 87 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..d209149d71 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -268,3 +268,36 @@ The DTB fragment is loaded at 0xc000000 in the example above. It should
 follow the convention explained in docs/misc/arm/passthrough.txt. The
 DTB fragment will be added to the guest device tree, so that the guest
 kernel will be able to discover the device.
+
+
+Static Allocation
+=============
+
+Static Allocation refers to system or sub-system(domains) for which memory
+areas are pre-defined by configuration using physical address ranges.
+Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the
+beginning, shall never go to heap allocator or boot allocator for any use.
+
+Domains on Static Allocation is supported through device tree property
+`xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.
+By default, they shall be mapped to the fixed guest RAM address
+`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
+
+Static Allocation is only supported on AArch64 for now.
+
+The dtb property should look like as follows:
+
+        chosen {
+            domU1 {
+                compatible = "xen,domain";
+                #address-cells = <0x2>;
+                #size-cells = <0x2>;
+                cpus = <2>;
+                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
+
+                ...
+            };
+        };
+
+DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size
+as guest RAM.
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index dcff512648..e9f14e6a44 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -327,6 +327,55 @@ static void __init process_chosen_node(const void *fdt, int node,
     add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
 }
 
+static int __init process_static_memory(const void *fdt, int node,
+                                        const char *name,
+                                        u32 address_cells, u32 size_cells,
+                                        void *data)
+{
+    int i;
+    int banks;
+    const __be32 *cell;
+    paddr_t start, size;
+    u32 reg_cells = address_cells + size_cells;
+    struct meminfo *mem = data;
+    const struct fdt_property *prop;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        printk("fdt: invalid #address-cells or #size-cells for static memory");
+        return -EINVAL;
+    }
+
+    /*
+     * Check if static memory property belongs to a specific domain, that is,
+     * its node `domUx` has compatible string "xen,domain".
+     */
+    if ( fdt_node_check_compatible(fdt, node, "xen,domain") != 0 )
+        printk("xen,static-mem property can only locate under /domUx node.\n");
+
+    prop = fdt_get_property(fdt, node, name, NULL);
+    if ( !prop )
+        return -ENOENT;
+
+    cell = (const __be32 *)prop->data;
+    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
+
+    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
+    {
+        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+        /* Some DT may describe empty bank, ignore them */
+        if ( !size )
+            continue;
+        mem->bank[mem->nr_banks].start = start;
+        mem->bank[mem->nr_banks].size = size;
+        mem->nr_banks++;
+    }
+
+    if ( i < banks )
+        return -ENOSPC;
+    return 0;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
         process_multiboot_node(fdt, node, name, address_cells, size_cells);
     else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
         process_chosen_node(fdt, node, name, address_cells, size_cells);
+    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
+        process_static_memory(fdt, node, "xen,static-mem", address_cells,
+                              size_cells, &bootinfo.static_mem);
 
     if ( rc < 0 )
         printk("fdt: node `%s': parsing failed\n", name);
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 5283244015..5e9f296760 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -74,6 +74,8 @@ struct bootinfo {
 #ifdef CONFIG_ACPI
     struct meminfo acpi;
 #endif
+    /* Static Memory */
+    struct meminfo static_mem;
 };
 
 extern struct bootinfo bootinfo;
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
  2021-05-18  5:21 ` [PATCH 01/10] xen/arm: introduce domain " Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  9:04   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 03/10] xen/arm: introduce PGC_reserved Penny Zheng
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

static memory regions overlap with memory nodes. The
overlapping memory is reserved-memory and should be
handled accordingly:
dt_unreserved_regions should skip these regions the
same way they are already skipping mem-reserved regions.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/setup.c | 39 +++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 00aad1c194..444dbbd676 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -201,7 +201,7 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
                                          void (*cb)(paddr_t, paddr_t),
                                          int first)
 {
-    int i, nr = fdt_num_mem_rsv(device_tree_flattened);
+    int i, nr_reserved, nr_static, nr = fdt_num_mem_rsv(device_tree_flattened);
 
     for ( i = first; i < nr ; i++ )
     {
@@ -222,18 +222,45 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     }
 
     /*
-     * i is the current bootmodule we are evaluating across all possible
-     * kinds.
+     * i is the current reserved RAM banks we are evaluating across all
+     * possible kinds.
      *
      * When retrieving the corresponding reserved-memory addresses
      * below, we need to index the bootinfo.reserved_mem bank starting
      * from 0, and only counting the reserved-memory modules. Hence,
      * we need to use i - nr.
      */
-    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
+    i = i - nr;
+    nr_reserved = bootinfo.reserved_mem.nr_banks;
+    for ( ; i < nr_reserved; i++ )
     {
-        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
-        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
+        paddr_t r_s = bootinfo.reserved_mem.bank[i].start;
+        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i].size;
+
+        if ( s < r_e && r_s < e )
+        {
+            dt_unreserved_regions(r_e, e, cb, i + 1);
+            dt_unreserved_regions(s, r_s, cb, i + 1);
+            return;
+        }
+    }
+
+    /*
+     * i is the current reserved RAM banks we are evaluating across all
+     * possible kinds.
+     *
+     * When retrieving the corresponding static-memory bank address
+     * below, we need to index the bootinfo.static_mem starting
+     * from 0, and only counting the static-memory bank. Hence,
+     * we need to use i - nr_reserved.
+     */
+
+    i = i - nr_reserved;
+    nr_static = bootinfo.static_mem.nr_banks;
+    for ( ; i < nr_static; i++ )
+    {
+        paddr_t r_s = bootinfo.static_mem.bank[i].start;
+        paddr_t r_e = r_s + bootinfo.static_mem.bank[i].size;
 
         if ( s < r_e && r_s < e )
         {
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
  2021-05-18  5:21 ` [PATCH 01/10] xen/arm: introduce domain " Penny Zheng
  2021-05-18  5:21 ` [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  9:45   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 04/10] xen/arm: static memory initialization Penny Zheng
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

In order to differentiate pages of static memory, from those allocated from
heap, this patch introduces a new page flag PGC_reserved to tell.

New struct reserved in struct page_info is to describe reserved page info,
that is, which specific domain this page is reserved to.

Helper page_get_reserved_owner and page_set_reserved_owner are
designated to get/set reserved page's owner.

Struct domain is enlarged to more than PAGE_SIZE, due to newly-imported
struct reserved in struct page_info.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/include/asm-arm/mm.h | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 0b7de3102e..d8922fd5db 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -88,7 +88,15 @@ struct page_info
          */
         u32 tlbflush_timestamp;
     };
-    u64 pad;
+
+    /* Page is reserved. */
+    struct {
+        /*
+         * Reserved Owner of this page,
+         * if this page is reserved to a specific domain.
+         */
+        struct domain *domain;
+    } reserved;
 };
 
 #define PG_shift(idx)   (BITS_PER_LONG - (idx))
@@ -108,6 +116,9 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+  /* Page is reserved, referring static memory */
+#define _PGC_reserved     PG_shift(3)
+#define PGC_reserved      PG_mask(1, 3)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
@@ -161,6 +172,9 @@ extern unsigned long xenheap_base_pdx;
 #define page_get_owner(_p)    (_p)->v.inuse.domain
 #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
 
+#define page_get_reserved_owner(_p)    (_p)->reserved.domain
+#define page_set_reserved_owner(_p,_d) ((_p)->reserved.domain = (_d))
+
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (2 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 03/10] xen/arm: introduce PGC_reserved Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  7:15   ` Jan Beulich
  2021-05-18 10:00   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages Penny Zheng
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

This patch introduces static memory initialization, during system RAM boot up.

New func init_staticmem_pages is the equivalent of init_heap_pages, responsible
for static memory initialization.

Helper func free_staticmem_pages is the equivalent of free_heap_pages, to free
nr_pfns pages of static memory.
For each page, it includes the following steps:
1. change page state from in-use(also initialization state) to free state and
grant PGC_reserved.
2. set its owner NULL and make sure this page is not a guest frame any more
3. follow the same cache coherency policy in free_heap_pages
4. scrub the page in need

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/setup.c    |  2 ++
 xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/mm.h    |  3 ++
 3 files changed, 75 insertions(+)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 444dbbd676..f80162c478 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -818,6 +818,8 @@ static void __init setup_mm(void)
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
+
+    init_staticmem_pages();
 }
 #endif
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ace6333c18..58b53c6ac2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -150,6 +150,9 @@
 #define p2m_pod_offline_or_broken_hit(pg) 0
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
+#ifdef CONFIG_ARM_64
+#include <asm/setup.h>
+#endif
 
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
@@ -1512,6 +1515,49 @@ static void free_heap_pages(
     spin_unlock(&heap_lock);
 }
 
+/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
+static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
+                                 bool need_scrub)
+{
+    mfn_t mfn = page_to_mfn(pg);
+    int i;
+
+    for ( i = 0; i < nr_pfns; i++ )
+    {
+        switch ( pg[i].count_info & PGC_state )
+        {
+        case PGC_state_inuse:
+            BUG_ON(pg[i].count_info & PGC_broken);
+            /* Make it free and reserved. */
+            pg[i].count_info = PGC_state_free | PGC_reserved;
+            break;
+
+        default:
+            printk(XENLOG_ERR
+                   "Page state shall be only in PGC_state_inuse. "
+                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
+                   i, mfn_x(mfn) + i,
+                   pg[i].count_info,
+                   pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        /*
+         * Follow the same cache coherence scheme in free_heap_pages.
+         * If a page has no owner it will need no safety TLB flush.
+         */
+        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
+        if ( pg[i].u.free.need_tlbflush )
+            page_set_tlbflush_timestamp(&pg[i]);
+
+        /* This page is not a guest frame any more. */
+        page_set_owner(&pg[i], NULL);
+        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
+
+        if ( need_scrub )
+            scrub_one_page(&pg[i]);
+    }
+}
 
 /*
  * Following rules applied for page offline:
@@ -1828,6 +1874,30 @@ static void init_heap_pages(
     }
 }
 
+/* Equivalent of init_heap_pages to do static memory initialization */
+void __init init_staticmem_pages(void)
+{
+    int bank;
+
+    /*
+     * TODO: Considering NUMA-support scenario.
+     */
+    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
+        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        bank_start = round_pgup(bank_start);
+        bank_end = round_pgdown(bank_end);
+        if ( bank_end <= bank_start )
+            return;
+
+        free_staticmem_pages(maddr_to_page(bank_start),
+                            (bank_end - bank_start) >> PAGE_SHIFT, false);
+    }
+}
+
 static unsigned long avail_heap_pages(
     unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
 {
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 667f9dac83..8b1a2207b2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,6 +85,9 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
+/* Static Memory */
+void init_staticmem_pages(void);
+
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
     unsigned long virt,
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (3 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 04/10] xen/arm: static memory initialization Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  7:24   ` Jan Beulich
  2021-05-18 10:15   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility Penny Zheng
                   ` (4 subsequent siblings)
  9 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

alloc_staticmem_pages is designated to allocate nr_pfns contiguous
pages of static memory. And it is the equivalent of alloc_heap_pages
for static memory.
This commit only covers allocating at specified starting address.

For each page, it shall check if the page is reserved
(PGC_reserved) and free. It shall also do a set of necessary
initialization, which are mostly the same ones in alloc_heap_pages,
like, following the same cache-coherency policy and turning page
status into PGC_state_used, etc.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 58b53c6ac2..adf2889e76 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
     return pg;
 }
 
+/*
+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
+ * It is the equivalent of alloc_heap_pages for static memory
+ */
+static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
+                                                paddr_t start,
+                                                unsigned int memflags)
+{
+    bool need_tlbflush = false;
+    uint32_t tlbflush_timestamp = 0;
+    unsigned int i;
+    struct page_info *pg;
+    mfn_t s_mfn;
+
+    /* For now, it only supports allocating at specified address. */
+    s_mfn = maddr_to_mfn(start);
+    pg = mfn_to_page(s_mfn);
+    if ( !pg )
+        return NULL;
+
+    for ( i = 0; i < nr_pfns; i++)
+    {
+        /*
+         * Reference count must continuously be zero for free pages
+         * of static memory(PGC_reserved).
+         */
+        ASSERT(pg[i].count_info & PGC_reserved);
+        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+        {
+            printk(XENLOG_ERR
+                    "Reference count must continuously be zero for free pages"
+                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+                    i, mfn_x(page_to_mfn(pg + i)),
+                    pg[i].count_info, pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        if ( !(memflags & MEMF_no_tlbflush) )
+            accumulate_tlbflush(&need_tlbflush, &pg[i],
+                                &tlbflush_timestamp);
+
+        /*
+         * Reserve flag PGC_reserved and change page state
+         * to PGC_state_inuse.
+         */
+        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+        /* Initialise fields which have other uses for free pages. */
+        pg[i].u.inuse.type_info = 0;
+        page_set_owner(&pg[i], NULL);
+
+        /*
+         * Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+                            !(memflags & MEMF_no_icache_flush));
+    }
+
+    if ( need_tlbflush )
+        filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    return pg;
+}
+
 /* Remove any offlined page in the buddy pointed to by head. */
 static int reserve_offlined_page(struct page_info *head)
 {
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (4 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  7:27   ` Jan Beulich
  2021-05-18 10:20   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages Penny Zheng
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

Function parameter order in assign_pages is always used as 1ul << order,
referring to 2@order pages.

Now, for better compatibility with new static memory, order shall
be replaced with nr_pfns pointing to page count with no constraint,
like 250MB.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/x86/pv/dom0_build.c |  2 +-
 xen/common/grant_table.c     |  2 +-
 xen/common/memory.c          |  4 ++--
 xen/common/page_alloc.c      | 16 ++++++++--------
 xen/include/xen/mm.h         |  2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index e0801a9e6d..4e57836763 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -556,7 +556,7 @@ int __init dom0_construct_pv(struct domain *d,
         else
         {
             while ( count-- )
-                if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 0, 0) )
+                if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 1, 0) )
                     BUG();
         }
         initrd->mod_end = 0;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ab30e2e8cf..925bf924bd 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2354,7 +2354,7 @@ gnttab_transfer(
          * is respected and speculative execution is blocked accordingly
          */
         if ( unlikely(!evaluate_nospec(okay)) ||
-            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
+            unlikely(assign_pages(e, page, 1, MEMF_no_refcount)) )
         {
             bool drop_dom_ref;
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b5c70c4b85..2dca23aa7f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -722,7 +722,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         /* Assign each output page to the domain. */
         for ( j = 0; (page = page_list_remove_head(&out_chunk_list)); ++j )
         {
-            if ( assign_pages(d, page, exch.out.extent_order,
+            if ( assign_pages(d, page, 1UL << exch.out.extent_order,
                               MEMF_no_refcount) )
             {
                 unsigned long dec_count;
@@ -791,7 +791,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
      * cleared PGC_allocated.
      */
     while ( (page = page_list_remove_head(&in_chunk_list)) )
-        if ( assign_pages(d, page, 0, MEMF_no_refcount) )
+        if ( assign_pages(d, page, 1, MEMF_no_refcount) )
         {
             BUG_ON(!d->is_dying);
             free_domheap_page(page);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index adf2889e76..0eb9f22a00 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2388,7 +2388,7 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
 int assign_pages(
     struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned long nr_pfns,
     unsigned int memflags)
 {
     int rc = 0;
@@ -2408,7 +2408,7 @@ int assign_pages(
     {
         unsigned int extra_pages = 0;
 
-        for ( i = 0; i < (1ul << order); i++ )
+        for ( i = 0; i < nr_pfns; i++ )
         {
             ASSERT(!(pg[i].count_info & ~PGC_extra));
             if ( pg[i].count_info & PGC_extra )
@@ -2417,18 +2417,18 @@ int assign_pages(
 
         ASSERT(!extra_pages ||
                ((memflags & MEMF_no_refcount) &&
-                extra_pages == 1u << order));
+                extra_pages == nr_pfns));
     }
 #endif
 
     if ( pg[0].count_info & PGC_extra )
     {
-        d->extra_pages += 1u << order;
+        d->extra_pages += nr_pfns;
         memflags &= ~MEMF_no_refcount;
     }
     else if ( !(memflags & MEMF_no_refcount) )
     {
-        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+        unsigned int tot_pages = domain_tot_pages(d) + nr_pfns;
 
         if ( unlikely(tot_pages > d->max_pages) )
         {
@@ -2440,10 +2440,10 @@ int assign_pages(
     }
 
     if ( !(memflags & MEMF_no_refcount) &&
-         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
+         unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
         get_knownalive_domain(d);
 
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < nr_pfns; i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
@@ -2499,7 +2499,7 @@ struct page_info *alloc_domheap_pages(
                 pg[i].count_info = PGC_extra;
             }
         }
-        if ( assign_pages(d, pg, order, memflags) )
+        if ( assign_pages(d, pg, 1ul << order, memflags) )
         {
             free_heap_pages(pg, order, memflags & MEMF_no_scrub);
             return NULL;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 8b1a2207b2..dcf9daaa46 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -131,7 +131,7 @@ void heap_init_late(void);
 int assign_pages(
     struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned long nr_pfns,
     unsigned int memflags);
 
 /* Dump info to serial console */
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (5 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  7:34   ` Jan Beulich
  2021-05-18 10:30   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 08/10] xen/arm: introduce reserved_page_list Penny Zheng
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
static mmeory, and it is to allocate nr_pfns pages of static memory
and assign them to one specific domain.

It uses alloc_staticmen_pages to get nr_pages pages of static memory,
then on success, it will use assign_pages to assign those pages to
one specific domain, including using page_set_reserved_owner to set its
reserved domain owner.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/page_alloc.c | 53 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/mm.h    |  4 ++++
 2 files changed, 57 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 0eb9f22a00..f1f1296a61 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2447,6 +2447,9 @@ int assign_pages(
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
+        /* use page_set_reserved_owner to set its reserved domain owner. */
+        if ( (pg[i].count_info & PGC_reserved) )
+            page_set_reserved_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
             (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
@@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
     return pg;
 }
 
+/*
+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory,
+ * then assign them to one specific domain #d.
+ * It is the equivalent of alloc_domheap_pages for static memory.
+ */
+struct page_info *alloc_domstatic_pages(
+        struct domain *d, unsigned long nr_pfns, paddr_t start,
+        unsigned int memflags)
+{
+    struct page_info *pg = NULL;
+    unsigned long dma_size;
+
+    ASSERT(!in_irq());
+
+    if ( memflags & MEMF_no_owner )
+        memflags |= MEMF_no_refcount;
+
+    if ( !dma_bitsize )
+        memflags &= ~MEMF_no_dma;
+    else
+    {
+        dma_size = 1ul << bits_to_zone(dma_bitsize);
+        /* Starting address shall meet the DMA limitation. */
+        if ( dma_size && start < dma_size )
+            return NULL;
+    }
+
+    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
+    if ( !pg )
+        return NULL;
+
+    if ( d && !(memflags & MEMF_no_owner) )
+    {
+        if ( memflags & MEMF_no_refcount )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < nr_pfns; i++ )
+                pg[i].count_info = PGC_extra;
+        }
+        if ( assign_pages(d, pg, nr_pfns, memflags) )
+        {
+            free_staticmem_pages(pg, nr_pfns, memflags & MEMF_no_scrub);
+            return NULL;
+        }
+    }
+
+    return pg;
+}
+
 void free_domheap_pages(struct page_info *pg, unsigned int order)
 {
     struct domain *d = page_get_owner(pg);
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index dcf9daaa46..e45987f0ed 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -111,6 +111,10 @@ unsigned long __must_check domain_adjust_tot_pages(struct domain *d,
 int domain_set_outstanding_pages(struct domain *d, unsigned long pages);
 void get_outstanding_claims(uint64_t *free_pages, uint64_t *outstanding_pages);
 
+/* Static Memory */
+struct page_info *alloc_domstatic_pages(struct domain *d,
+        unsigned long nr_pfns, paddr_t start, unsigned int memflags);
+
 /* Domain suballocator. These functions are *not* interrupt-safe.*/
 void init_domheap_pages(paddr_t ps, paddr_t pe);
 struct page_info *alloc_domheap_pages(
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (6 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18  7:39   ` Jan Beulich
  2021-05-18 11:02   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction Penny Zheng
  2021-05-18  5:21 ` [PATCH 10/10] xen/arm: introduce allocate_static_memory Penny Zheng
  9 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

Since page_list under struct domain refers to linked pages as gueast RAM
allocated from heap, it should not include reserved pages of static memory.

The number of PGC_reserved pages assigned to a domain is tracked in
a new 'reserved_pages' counter. Also introduce a new reserved_page_list
to link pages of static memory. Let page_to_list return reserved_page_list,
when flag is PGC_reserved.

Later, when domain get destroyed or restarted, those new values will help
relinquish memory to proper place, not been given back to heap.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/domain.c     | 1 +
 xen/common/page_alloc.c | 7 +++++--
 xen/include/xen/sched.h | 5 +++++
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6b71c6d6a9..c38afd2969 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -578,6 +578,7 @@ struct domain *domain_create(domid_t domid,
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
+    INIT_PAGE_LIST_HEAD(&d->reserved_page_list);
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index f1f1296a61..e3f07ec6c5 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2410,7 +2410,7 @@ int assign_pages(
 
         for ( i = 0; i < nr_pfns; i++ )
         {
-            ASSERT(!(pg[i].count_info & ~PGC_extra));
+            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2439,6 +2439,9 @@ int assign_pages(
         }
     }
 
+    if ( pg[0].count_info & PGC_reserved )
+        d->reserved_pages += nr_pfns;
+
     if ( !(memflags & MEMF_no_refcount) &&
          unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
         get_knownalive_domain(d);
@@ -2452,7 +2455,7 @@ int assign_pages(
             page_set_reserved_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
-            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
+            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3982167144..b6333ed8bb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -368,6 +368,7 @@ struct domain
     struct page_list_head page_list;  /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
+    struct page_list_head reserved_page_list; /* linked list (size reserved pages) */
 
     /*
      * This field should only be directly accessed by domain_adjust_tot_pages()
@@ -379,6 +380,7 @@ struct domain
     unsigned int     outstanding_pages; /* pages claimed but not possessed */
     unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
     unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
+    unsigned int     reserved_pages;    /* pages of static memory */
     atomic_t         shr_pages;         /* shared pages */
     atomic_t         paged_pages;       /* paged-out pages */
 
@@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
     if ( pg->count_info & PGC_extra )
         return &d->extra_page_list;
 
+    if ( pg->count_info & PGC_reserved )
+        return &d->reserved_page_list;
+
     return &d->page_list;
 }
 
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (7 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 08/10] xen/arm: introduce reserved_page_list Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18 12:09   ` Julien Grall
  2021-05-18  5:21 ` [PATCH 10/10] xen/arm: introduce allocate_static_memory Penny Zheng
  9 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

This commit parses `xen,static-mem` device tree property, to acquire
static memory info reserved for this domain, when constructing domain
during boot-up.

Related info shall be stored in new static_mem value under per domain
struct arch_domain.

Right now, the implementation of allocate_static_memory is missing, and
will be introduced later. It just BUG() out at the moment.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c  | 58 ++++++++++++++++++++++++++++++++----
 xen/include/asm-arm/domain.h |  3 ++
 2 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 282416e74d..30b55588b7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2424,17 +2424,61 @@ static int __init construct_domU(struct domain *d,
 {
     struct kernel_info kinfo = {};
     int rc;
-    u64 mem;
+    u64 mem, static_mem_size = 0;
+    const struct dt_property *prop;
+    u32 static_mem_len;
+    bool static_mem = false;
+
+    /*
+     * Guest RAM could be of static memory from static allocation,
+     * which will be specified through "xen,static-mem" property.
+     */
+    prop = dt_find_property(node, "xen,static-mem", &static_mem_len);
+    if ( prop )
+    {
+        const __be32 *cell;
+        u32 addr_cells = 2, size_cells = 2, reg_cells;
+        u64 start, size;
+        int i, banks;
+        static_mem = true;
+
+        dt_property_read_u32(node, "#address-cells", &addr_cells);
+        dt_property_read_u32(node, "#size-cells", &size_cells);
+        BUG_ON(size_cells > 2 || addr_cells > 2);
+        reg_cells = addr_cells + size_cells;
+
+        cell = (const __be32 *)prop->value;
+        banks = static_mem_len / (reg_cells * sizeof (u32));
+        BUG_ON(banks > NR_MEM_BANKS);
+
+        for ( i = 0; i < banks; i++ )
+        {
+            device_tree_get_reg(&cell, addr_cells, size_cells, &start, &size);
+            d->arch.static_mem.bank[i].start = start;
+            d->arch.static_mem.bank[i].size = size;
+            static_mem_size += size;
+
+            printk(XENLOG_INFO
+                    "Static Memory Bank[%d] for Domain %pd:"
+                    "0x%"PRIx64"-0x%"PRIx64"\n",
+                    i, d,
+                    d->arch.static_mem.bank[i].start,
+                    d->arch.static_mem.bank[i].start +
+                    d->arch.static_mem.bank[i].size);
+        }
+        d->arch.static_mem.nr_banks = banks;
+    }
 
     rc = dt_property_read_u64(node, "memory", &mem);
-    if ( !rc )
+    if ( !static_mem && !rc )
     {
         printk("Error building DomU: cannot read \"memory\" property\n");
         return -EINVAL;
     }
-    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
+    kinfo.unassigned_mem = static_mem ? static_mem_size : (paddr_t)mem * SZ_1K;
 
-    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
+    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
+            d->max_vcpus, (kinfo.unassigned_mem) >> 10);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
 
@@ -2452,7 +2496,11 @@ static int __init construct_domU(struct domain *d,
     /* type must be set before allocate memory */
     d->arch.type = kinfo.type;
 #endif
-    allocate_memory(d, &kinfo);
+    if ( static_mem )
+        /* allocate_static_memory(d, &kinfo); */
+        BUG();
+    else
+        allocate_memory(d, &kinfo);
 
     rc = prepare_dtb_domU(d, &kinfo);
     if ( rc < 0 )
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index c9277b5c6d..81b8eb453c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -10,6 +10,7 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 #include <asm/vpl011.h>
+#include <asm/setup.h>
 #include <public/hvm/params.h>
 
 struct hvm_domain
@@ -89,6 +90,8 @@ struct arch_domain
 #ifdef CONFIG_TEE
     void *tee;
 #endif
+
+    struct meminfo static_mem;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* [PATCH 10/10] xen/arm: introduce allocate_static_memory
  2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
                   ` (8 preceding siblings ...)
  2021-05-18  5:21 ` [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction Penny Zheng
@ 2021-05-18  5:21 ` Penny Zheng
  2021-05-18 12:05   ` Julien Grall
  9 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  5:21 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien
  Cc: Bertrand.Marquis, Penny.Zheng, Wei.Chen, nd

This commit introduces allocate_static_memory to allocate static memory as
guest RAM for domain on Static Allocation.

It uses alloc_domstatic_pages to allocate pre-defined static memory banks
for this domain, and uses guest_physmap_add_page to set up P2M table,
guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c | 157 +++++++++++++++++++++++++++++++++++-
 1 file changed, 155 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 30b55588b7..9f662313ad 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct domain *d,
     return true;
 }
 
+/*
+ * #ram_index and #ram_index refer to the index and starting address of guest
+ * memory kank stored in kinfo->mem.
+ * Static memory at #smfn of #tot_size shall be mapped #sgfn, and
+ * #sgfn will be next guest address to map when returning.
+ */
+static bool __init allocate_static_bank_memory(struct domain *d,
+                                               struct kernel_info *kinfo,
+                                               int ram_index,
+                                               paddr_t ram_addr,
+                                               gfn_t* sgfn,
+                                               mfn_t smfn,
+                                               paddr_t tot_size)
+{
+    int res;
+    struct membank *bank;
+    paddr_t _size = tot_size;
+
+    bank = &kinfo->mem.bank[ram_index];
+    bank->start = ram_addr;
+    bank->size = bank->size + tot_size;
+
+    while ( tot_size > 0 )
+    {
+        unsigned int order = get_allocation_size(tot_size);
+
+        res = guest_physmap_add_page(d, *sgfn, smfn, order);
+        if ( res )
+        {
+            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+            return false;
+        }
+
+        *sgfn = gfn_add(*sgfn, 1UL << order);
+        smfn = mfn_add(smfn, 1UL << order);
+        tot_size -= (1ULL << (PAGE_SHIFT + order));
+    }
+
+    kinfo->mem.nr_banks = ram_index + 1;
+    kinfo->unassigned_mem -= _size;
+
+    return true;
+}
+
 static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -480,6 +524,116 @@ fail:
           (unsigned long)kinfo->unassigned_mem >> 10);
 }
 
+/* Allocate memory from static memory as RAM for one specific domain d. */
+static void __init allocate_static_memory(struct domain *d,
+                                            struct kernel_info *kinfo)
+{
+    int nr_banks, _banks = 0;
+    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
+    paddr_t bank_start, bank_size;
+    gfn_t sgfn;
+    mfn_t smfn;
+
+    kinfo->mem.nr_banks = 0;
+    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
+    nr_banks = d->arch.static_mem.nr_banks;
+    ASSERT(nr_banks >= 0);
+
+    if ( kinfo->unassigned_mem <= 0 )
+        goto fail;
+
+    while ( _banks < nr_banks )
+    {
+        bank_start = d->arch.static_mem.bank[_banks].start;
+        smfn = maddr_to_mfn(bank_start);
+        bank_size = d->arch.static_mem.bank[_banks].size;
+
+        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start, 0) )
+        {
+            printk(XENLOG_ERR
+                    "%pd: cannot allocate static memory"
+                    "(0x%"PRIx64" - 0x%"PRIx64")",
+                    d, bank_start, bank_start + bank_size);
+            goto fail;
+        }
+
+        /*
+         * By default, it shall be mapped to the fixed guest RAM address
+         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
+         * Starting from RAM0(GUEST_RAM0_BASE).
+         */
+        if ( ram0_size )
+        {
+            /* RAM0 at most holds GUEST_RAM0_SIZE. */
+            if ( ram0_size >= bank_size )
+            {
+                if ( !allocate_static_bank_memory(d, kinfo,
+                                                  0, GUEST_RAM0_BASE,
+                                                  &sgfn, smfn, bank_size) )
+                    goto fail;
+
+                ram0_size = ram0_size - bank_size;
+                _banks++;
+                continue;
+            }
+            else /* bank_size > ram0_size */
+            {
+                if ( !allocate_static_bank_memory(d, kinfo,
+                                                  0, GUEST_RAM0_BASE,
+                                                  &sgfn, smfn, ram0_size) )
+                    goto fail;
+
+                /* The whole RAM0 is consumed. */
+                ram0_size -= ram0_size;
+                /* This bank hasn't been totally mapped, seeking to RAM1. */
+                bank_size = bank_size - ram0_size;
+                smfn = mfn_add(smfn, ram0_size >> PAGE_SHIFT);
+                sgfn = gaddr_to_gfn(GUEST_RAM1_BASE);
+            }
+        }
+
+        if ( ram1_size >= bank_size )
+        {
+            if ( !allocate_static_bank_memory(d, kinfo,
+                                              1, GUEST_RAM1_BASE,
+                                              &sgfn, smfn, bank_size) )
+            goto fail;
+
+            ram1_size = ram1_size - bank_size;
+            _banks++;
+            continue;
+        }
+        else
+            /*
+             * If RAM1 still couldn't meet the requirement,
+             * no way to seek for now.
+             */
+            goto fail;
+    }
+
+    if ( kinfo->unassigned_mem )
+        goto fail;
+
+    for( int i = 0; i < kinfo->mem.nr_banks; i++ )
+    {
+        printk(XENLOG_INFO "%pd BANK[%d] %#"PRIpaddr"-%#"PRIpaddr" (%ldMB)\n",
+               d,
+               i,
+               kinfo->mem.bank[i].start,
+               kinfo->mem.bank[i].start + kinfo->mem.bank[i].size,
+               /* Don't want format this as PRIpaddr (16 digit hex) */
+               (unsigned long)(kinfo->mem.bank[i].size >> 20));
+    }
+
+    return;
+
+fail:
+    panic("Failed to allocate requested domain memory."
+          /* Don't want format this as PRIpaddr (16 digit hex) */
+          " %ldKB unallocated. Fix the VMs configurations.\n",
+          (unsigned long)kinfo->unassigned_mem >> 10);
+}
+
 static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                                    const struct dt_device_node *node)
 {
@@ -2497,8 +2651,7 @@ static int __init construct_domU(struct domain *d,
     d->arch.type = kinfo.type;
 #endif
     if ( static_mem )
-        /* allocate_static_memory(d, &kinfo); */
-        BUG();
+        allocate_static_memory(d, &kinfo);
     else
         allocate_memory(d, &kinfo);
 
-- 
2.25.1



^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18  5:21 ` [PATCH 04/10] xen/arm: static memory initialization Penny Zheng
@ 2021-05-18  7:15   ` Jan Beulich
  2021-05-18  9:51     ` Penny Zheng
  2021-05-18 10:00   ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18  7:15 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand.Marquis, Wei.Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 07:21, Penny Zheng wrote:
> This patch introduces static memory initialization, during system RAM boot up.
> 
> New func init_staticmem_pages is the equivalent of init_heap_pages, responsible
> for static memory initialization.
> 
> Helper func free_staticmem_pages is the equivalent of free_heap_pages, to free
> nr_pfns pages of static memory.
> For each page, it includes the following steps:
> 1. change page state from in-use(also initialization state) to free state and
> grant PGC_reserved.
> 2. set its owner NULL and make sure this page is not a guest frame any more

But isn't the goal (as per the previous patch) to associate such pages
with a _specific_ domain?

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,9 @@
>  #define p2m_pod_offline_or_broken_hit(pg) 0
>  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>  #endif
> +#ifdef CONFIG_ARM_64
> +#include <asm/setup.h>
> +#endif

Whatever it is that's needed from this header suggests the code won't
build for other architectures. I think init_staticmem_pages() in its
current shape shouldn't live in this (common) file.

> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>      spin_unlock(&heap_lock);
>  }
>  
> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
> +                                 bool need_scrub)

Right now this function gets called only from an __init one. Unless
it is intended to gain further callers, it should be marked __init
itself then. Otherwise it should be made sure that other
architectures don't include this (dead there) code.

> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    int i;

This type doesn't fit nr_pfns'es.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-18  5:21 ` [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages Penny Zheng
@ 2021-05-18  7:24   ` Jan Beulich
  2021-05-18  9:30     ` Penny Zheng
  2021-05-18 10:09     ` Julien Grall
  2021-05-18 10:15   ` Julien Grall
  1 sibling, 2 replies; 82+ messages in thread
From: Jan Beulich @ 2021-05-18  7:24 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand.Marquis, Wei.Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 07:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. And it is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at specified starting address.
> 
> For each page, it shall check if the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initialization, which are mostly the same ones in alloc_heap_pages,
> like, following the same cache-coherency policy and turning page
> status into PGC_state_used, etc.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>  xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>      return pg;
>  }
>  
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> +                                                paddr_t start,
> +                                                unsigned int memflags)

This is surely breaking the build (at this point in the series -
recall that a series should build fine at every patch boundary),
for introducing an unused static function, which most compilers
will warn about.

Also again - please avoid introducing code that's always dead for
certain architectures. Quite likely you want a Kconfig option to
put a suitable #ifdef around such functions.

And a nit: Please correct the apparently off-by-one indentation.

> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned int i;

This variable's type should (again) match nr_pfns'es (albeit I
think that parameter really wants to be nr_mfns).

> +    struct page_info *pg;
> +    mfn_t s_mfn;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    s_mfn = maddr_to_mfn(start);
> +    pg = mfn_to_page(s_mfn);
> +    if ( !pg )
> +        return NULL;

Under what conditions would mfn_to_page() return NULL?

> +    for ( i = 0; i < nr_pfns; i++)
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);
> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                    "Reference count must continuously be zero for free pages"
> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                    i, mfn_x(page_to_mfn(pg + i)),
> +                    pg[i].count_info, pg[i].tlbflush_timestamp);

Nit: Indentation again.

> +            BUG();
> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Reserve flag PGC_reserved and change page state

DYM "Preserve ..."?

> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);

With reserved pages dedicated to a specific domain, in how far is it
possible that stale mappings from a prior use can still be around,
making such TLB flushing necessary?

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility
  2021-05-18  5:21 ` [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility Penny Zheng
@ 2021-05-18  7:27   ` Jan Beulich
  2021-05-18  9:11     ` Penny Zheng
  2021-05-18 10:20   ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18  7:27 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand.Marquis, Wei.Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 07:21, Penny Zheng wrote:
> Function parameter order in assign_pages is always used as 1ul << order,
> referring to 2@order pages.
> 
> Now, for better compatibility with new static memory, order shall
> be replaced with nr_pfns pointing to page count with no constraint,
> like 250MB.

While I'm not entirely opposed, I'm also not convinced. The new
user could as well break up the range into suitable power-of-2
chunks. In no case do I view the wording "compatibility" here as
appropriate. There's no incompatibility at present.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18  5:21 ` [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages Penny Zheng
@ 2021-05-18  7:34   ` Jan Beulich
  2021-05-18  8:57     ` Penny Zheng
  2021-05-18 10:30   ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18  7:34 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand.Marquis, Wei.Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 07:21, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2447,6 +2447,9 @@ int assign_pages(
>      {
>          ASSERT(page_get_owner(&pg[i]) == NULL);
>          page_set_owner(&pg[i], d);
> +        /* use page_set_reserved_owner to set its reserved domain owner. */
> +        if ( (pg[i].count_info & PGC_reserved) )
> +            page_set_reserved_owner(&pg[i], d);

Now this is puzzling: What's the point of setting two owner fields
to the same value? I also don't recall you having introduced
page_set_reserved_owner() for x86, so how is this going to build
there?

> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>      return pg;
>  }
>  
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory,
> + * then assign them to one specific domain #d.
> + * It is the equivalent of alloc_domheap_pages for static memory.
> + */
> +struct page_info *alloc_domstatic_pages(
> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> +        unsigned int memflags)
> +{
> +    struct page_info *pg = NULL;
> +    unsigned long dma_size;
> +
> +    ASSERT(!in_irq());
> +
> +    if ( memflags & MEMF_no_owner )
> +        memflags |= MEMF_no_refcount;
> +
> +    if ( !dma_bitsize )
> +        memflags &= ~MEMF_no_dma;
> +    else
> +    {
> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> +        /* Starting address shall meet the DMA limitation. */
> +        if ( dma_size && start < dma_size )
> +            return NULL;

It is the entire range (i.e. in particular the last byte) which needs
to meet such a restriction. I'm not convinced though that DMA width
restrictions and static allocation are sensible to coexist.

> +    }
> +
> +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> +    if ( !pg )
> +        return NULL;
> +
> +    if ( d && !(memflags & MEMF_no_owner) )
> +    {
> +        if ( memflags & MEMF_no_refcount )
> +        {
> +            unsigned long i;
> +
> +            for ( i = 0; i < nr_pfns; i++ )
> +                pg[i].count_info = PGC_extra;
> +        }

Is this as well as the MEMF_no_owner case actually meaningful for
statically allocated pages?

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18  5:21 ` [PATCH 08/10] xen/arm: introduce reserved_page_list Penny Zheng
@ 2021-05-18  7:39   ` Jan Beulich
  2021-05-18  8:38     ` Penny Zheng
  2021-05-18 11:02   ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18  7:39 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand.Marquis, Wei.Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 07:21, Penny Zheng wrote:
> Since page_list under struct domain refers to linked pages as gueast RAM
> allocated from heap, it should not include reserved pages of static memory.
> 
> The number of PGC_reserved pages assigned to a domain is tracked in
> a new 'reserved_pages' counter. Also introduce a new reserved_page_list
> to link pages of static memory. Let page_to_list return reserved_page_list,
> when flag is PGC_reserved.
> 
> Later, when domain get destroyed or restarted, those new values will help
> relinquish memory to proper place, not been given back to heap.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>  xen/common/domain.c     | 1 +
>  xen/common/page_alloc.c | 7 +++++--
>  xen/include/xen/sched.h | 5 +++++
>  3 files changed, 11 insertions(+), 2 deletions(-)

This contradicts the title's prefix: There's no Arm-specific change
here at all. But imo the title is correct, and the changes should
be Arm-specific. There's no point having struct domain fields on
e.g. x86 which aren't used there at all.

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2410,7 +2410,7 @@ int assign_pages(
>  
>          for ( i = 0; i < nr_pfns; i++ )
>          {
> -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> +            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
>              if ( pg[i].count_info & PGC_extra )
>                  extra_pages++;
>          }
> @@ -2439,6 +2439,9 @@ int assign_pages(
>          }
>      }
>  
> +    if ( pg[0].count_info & PGC_reserved )
> +        d->reserved_pages += nr_pfns;

I guess this again will fail to build on x86.

> @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
>      if ( pg->count_info & PGC_extra )
>          return &d->extra_page_list;
>  
> +    if ( pg->count_info & PGC_reserved )
> +        return &d->reserved_page_list;

Same here.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18  7:39   ` Jan Beulich
@ 2021-05-18  8:38     ` Penny Zheng
  2021-05-18 11:24       ` Jan Beulich
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  8:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:39 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > Since page_list under struct domain refers to linked pages as gueast
> > RAM allocated from heap, it should not include reserved pages of static
> memory.
> >
> > The number of PGC_reserved pages assigned to a domain is tracked in a
> > new 'reserved_pages' counter. Also introduce a new reserved_page_list
> > to link pages of static memory. Let page_to_list return
> > reserved_page_list, when flag is PGC_reserved.
> >
> > Later, when domain get destroyed or restarted, those new values will
> > help relinquish memory to proper place, not been given back to heap.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >  xen/common/domain.c     | 1 +
> >  xen/common/page_alloc.c | 7 +++++--
> >  xen/include/xen/sched.h | 5 +++++
> >  3 files changed, 11 insertions(+), 2 deletions(-)
> 
> This contradicts the title's prefix: There's no Arm-specific change here at all.
> But imo the title is correct, and the changes should be Arm-specific. There's
> no point having struct domain fields on e.g. x86 which aren't used there at all.
> 

Yep, you're right.
I'll add ifdefs in the following changes.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2410,7 +2410,7 @@ int assign_pages(
> >
> >          for ( i = 0; i < nr_pfns; i++ )
> >          {
> > -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> > +            ASSERT(!(pg[i].count_info & ~(PGC_extra |
> > + PGC_reserved)));
> >              if ( pg[i].count_info & PGC_extra )
> >                  extra_pages++;
> >          }
> > @@ -2439,6 +2439,9 @@ int assign_pages(
> >          }
> >      }
> >
> > +    if ( pg[0].count_info & PGC_reserved )
> > +        d->reserved_pages += nr_pfns;
> 
> I guess this again will fail to build on x86.
> 
> > @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
> >      if ( pg->count_info & PGC_extra )
> >          return &d->extra_page_list;
> >
> > +    if ( pg->count_info & PGC_reserved )
> > +        return &d->reserved_page_list;
> 
> Same here.
> 
> Jan

Thanks
Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18  7:34   ` Jan Beulich
@ 2021-05-18  8:57     ` Penny Zheng
  2021-05-18 11:23       ` Jan Beulich
  2021-05-18 12:13       ` Julien Grall
  0 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  8:57 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:35 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2447,6 +2447,9 @@ int assign_pages(
> >      {
> >          ASSERT(page_get_owner(&pg[i]) == NULL);
> >          page_set_owner(&pg[i], d);
> > +        /* use page_set_reserved_owner to set its reserved domain owner.
> */
> > +        if ( (pg[i].count_info & PGC_reserved) )
> > +            page_set_reserved_owner(&pg[i], d);
> 
> Now this is puzzling: What's the point of setting two owner fields to the same
> value? I also don't recall you having introduced
> page_set_reserved_owner() for x86, so how is this going to build there?
> 

Thanks for pointing out that it will fail on x86.
As for the same value, sure, I shall change it to domid_t domid to record its reserved owner.
Only domid is enough for differentiate. 
And even when domain get rebooted, struct domain may be destroyed, but domid will stays
The same.
Major user cases for domain on static allocation are referring to the whole system are static,
No runtime creation.

> > @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >      return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static
> > +memory,
> > + * then assign them to one specific domain #d.
> > + * It is the equivalent of alloc_domheap_pages for static memory.
> > + */
> > +struct page_info *alloc_domstatic_pages(
> > +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> > +        unsigned int memflags)
> > +{
> > +    struct page_info *pg = NULL;
> > +    unsigned long dma_size;
> > +
> > +    ASSERT(!in_irq());
> > +
> > +    if ( memflags & MEMF_no_owner )
> > +        memflags |= MEMF_no_refcount;
> > +
> > +    if ( !dma_bitsize )
> > +        memflags &= ~MEMF_no_dma;
> > +    else
> > +    {
> > +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> > +        /* Starting address shall meet the DMA limitation. */
> > +        if ( dma_size && start < dma_size )
> > +            return NULL;
> 
> It is the entire range (i.e. in particular the last byte) which needs to meet such
> a restriction. I'm not convinced though that DMA width restrictions and static
> allocation are sensible to coexist.
> 

FWIT, if starting address meets the limitation, the last byte, greater than starting
address shall meet it too.

> > +    }
> > +
> > +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> > +    if ( !pg )
> > +        return NULL;
> > +
> > +    if ( d && !(memflags & MEMF_no_owner) )
> > +    {
> > +        if ( memflags & MEMF_no_refcount )
> > +        {
> > +            unsigned long i;
> > +
> > +            for ( i = 0; i < nr_pfns; i++ )
> > +                pg[i].count_info = PGC_extra;
> > +        }
> 
> Is this as well as the MEMF_no_owner case actually meaningful for statically
> allocated pages?
> 

Thanks for pointing out. Truly, we do not need to take it considered.

> Jan

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-18  5:21 ` [PATCH 01/10] xen/arm: introduce domain " Penny Zheng
@ 2021-05-18  8:58   ` Julien Grall
  2021-05-19  2:22     ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18  8:58 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Static Allocation refers to system or sub-system(domains) for which memory
> areas are pre-defined by configuration using physical address ranges.
> Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the

s/Momery/Memory/

> beginning, shall never go to heap allocator or boot allocator for any use.
> 
> Domains on Static Allocation is supported through device tree property
> `xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.
> By default, they shall be mapped to the fixed guest RAM address
> `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> 
> This patch introduces this new `xen,static-mem` property to define static
> memory nodes in device tree file.
> This patch also documents and parses this new attribute at boot time and
> stores related info in static_mem for later initialization.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   docs/misc/arm/device-tree/booting.txt | 33 +++++++++++++++++
>   xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++++++++
>   xen/include/asm-arm/setup.h           |  2 ++
>   3 files changed, 87 insertions(+)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 5243bc7fd3..d209149d71 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -268,3 +268,36 @@ The DTB fragment is loaded at 0xc000000 in the example above. It should
>   follow the convention explained in docs/misc/arm/passthrough.txt. The
>   DTB fragment will be added to the guest device tree, so that the guest
>   kernel will be able to discover the device.
> +
> +
> +Static Allocation
> +=============
> +
> +Static Allocation refers to system or sub-system(domains) for which memory
> +areas are pre-defined by configuration using physical address ranges.
> +Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the

s/Momery/Memory/

> +beginning, shall never go to heap allocator or boot allocator for any use.
> +
> +Domains on Static Allocation is supported through device tree property
> +`xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.

I would suggest to use "physical RAM" when you refer to the host memory.

> +By default, they shall be mapped to the fixed guest RAM address
> +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.

There are a few bits that needs to clarified or part of the description:
   1) "By default" suggests there is an alternative possibility. 
However, I don't see any.
   2) Will the first region of xen,static-mem be mapped to 
GUEST_RAM0_BASE and the second to GUEST_RAM1_BASE? What if a third 
region is specificed?
   3) We don't guarantee the base address and the size of the banks. 
Wouldn't it be better to let the admin select the region he/she wants?
   4) How do you determine the number of cells for the address and the size?

> +Static Allocation is only supported on AArch64 for now.

The code doesn't seem to be AArch64 specific. So why can't this be used 
for 32-bit Arm?

> +
> +The dtb property should look like as follows:
> +
> +        chosen {
> +            domU1 {
> +                compatible = "xen,domain";
> +                #address-cells = <0x2>;
> +                #size-cells = <0x2>;
> +                cpus = <2>;
> +                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> +
> +                ...
> +            };
> +        };
> +
> +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size

Do you mean "DomU1 will have a static memory of 512MB reserved from the 
physical address..."?

> +as guest RAM.
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index dcff512648..e9f14e6a44 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -327,6 +327,55 @@ static void __init process_chosen_node(const void *fdt, int node,
>       add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
>   }
>   
> +static int __init process_static_memory(const void *fdt, int node,
> +                                        const char *name,
> +                                        u32 address_cells, u32 size_cells,
> +                                        void *data)
> +{
> +    int i;
> +    int banks;
> +    const __be32 *cell;
> +    paddr_t start, size;
> +    u32 reg_cells = address_cells + size_cells;
> +    struct meminfo *mem = data;
> +    const struct fdt_property *prop;
> +
> +    if ( address_cells < 1 || size_cells < 1 )
> +    {
> +        printk("fdt: invalid #address-cells or #size-cells for static memory");
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Check if static memory property belongs to a specific domain, that is,
> +     * its node `domUx` has compatible string "xen,domain".
> +     */
> +    if ( fdt_node_check_compatible(fdt, node, "xen,domain") != 0 )
> +        printk("xen,static-mem property can only locate under /domUx node.\n");
> +
> +    prop = fdt_get_property(fdt, node, name, NULL);
> +    if ( !prop )
> +        return -ENOENT;
> +
> +    cell = (const __be32 *)prop->data;
> +    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
> +
> +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
> +    {
> +        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +        /* Some DT may describe empty bank, ignore them */
> +        if ( !size )
> +            continue;
> +        mem->bank[mem->nr_banks].start = start;
> +        mem->bank[mem->nr_banks].size = size;
> +        mem->nr_banks++;
> +    }
> +
> +    if ( i < banks )
> +        return -ENOSPC;
> +    return 0;
> +}
> +
>   static int __init early_scan_node(const void *fdt,
>                                     int node, const char *name, int depth,
>                                     u32 address_cells, u32 size_cells,
> @@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
>           process_multiboot_node(fdt, node, name, address_cells, size_cells);
>       else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
>           process_chosen_node(fdt, node, name, address_cells, size_cells);
> +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> +                              size_cells, &bootinfo.static_mem);

I am a bit concerned to add yet another method to parse the DT and all 
the extra code it will add like in patch #2.

 From the host PoV, they are memory reserved for a specific purpose. 
Would it be possible to consider the reserve-memory binding for that 
purpose? This will happen outside of chosen, but we could use a phandle 
to refer the region.

>   
>       if ( rc < 0 )
>           printk("fdt: node `%s': parsing failed\n", name);
> diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
> index 5283244015..5e9f296760 100644
> --- a/xen/include/asm-arm/setup.h
> +++ b/xen/include/asm-arm/setup.h
> @@ -74,6 +74,8 @@ struct bootinfo {
>   #ifdef CONFIG_ACPI
>       struct meminfo acpi;
>   #endif
> +    /* Static Memory */
> +    struct meminfo static_mem;
>   };
>   
>   extern struct bootinfo bootinfo;
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions
  2021-05-18  5:21 ` [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions Penny Zheng
@ 2021-05-18  9:04   ` Julien Grall
  0 siblings, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-18  9:04 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> static memory regions overlap with memory nodes. The
> overlapping memory is reserved-memory and should be
> handled accordingly:
> dt_unreserved_regions should skip these regions the
> same way they are already skipping mem-reserved regions.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/setup.c | 39 +++++++++++++++++++++++++++++++++------
>   1 file changed, 33 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 00aad1c194..444dbbd676 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -201,7 +201,7 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>                                            void (*cb)(paddr_t, paddr_t),
>                                            int first)
>   {
> -    int i, nr = fdt_num_mem_rsv(device_tree_flattened);
> +    int i, nr_reserved, nr_static, nr = fdt_num_mem_rsv(device_tree_flattened);
>   
>       for ( i = first; i < nr ; i++ )
>       {
> @@ -222,18 +222,45 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>       }
>   
>       /*
> -     * i is the current bootmodule we are evaluating across all possible
> -     * kinds.
> +     * i is the current reserved RAM banks we are evaluating across all
> +     * possible kinds.
>        *
>        * When retrieving the corresponding reserved-memory addresses
>        * below, we need to index the bootinfo.reserved_mem bank starting
>        * from 0, and only counting the reserved-memory modules. Hence,
>        * we need to use i - nr.
>        */
> -    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
> +    i = i - nr;
> +    nr_reserved = bootinfo.reserved_mem.nr_banks;
> +    for ( ; i < nr_reserved; i++ )
>       {
> -        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
> -        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
> +        paddr_t r_s = bootinfo.reserved_mem.bank[i].start;
> +        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i].size;
> +
> +        if ( s < r_e && r_s < e )
> +        {
> +            dt_unreserved_regions(r_e, e, cb, i + 1);
> +            dt_unreserved_regions(s, r_s, cb, i + 1);
> +            return;
> +        }
> +    }
> +
> +    /*
> +     * i is the current reserved RAM banks we are evaluating across all
> +     * possible kinds.
> +     *
> +     * When retrieving the corresponding static-memory bank address
> +     * below, we need to index the bootinfo.static_mem starting
> +     * from 0, and only counting the static-memory bank. Hence,
> +     * we need to use i - nr_reserved.
> +     */
> +
> +    i = i - nr_reserved;
> +    nr_static = bootinfo.static_mem.nr_banks;
> +    for ( ; i < nr_static; i++ )
> +    {
> +        paddr_t r_s = bootinfo.static_mem.bank[i].start; > +        paddr_t r_e = r_s + bootinfo.static_mem.bank[i].size;

This is the 3rd loop we are adding in dt_unreserved_regions(). Each loop 
are doing pretty much the same thing except with a different array. I'd 
like to avoid the new loop if possible.

As mentionned in patch#1, the static memory is another kind of reserved 
memory. So could we describe the static memory using the reserved-memory?

>   
>           if ( s < r_e && r_s < e )
>           {
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility
  2021-05-18  7:27   ` Jan Beulich
@ 2021-05-18  9:11     ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  9:11 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:28 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages
> for better compatibility
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > Function parameter order in assign_pages is always used as 1ul <<
> > order, referring to 2@order pages.
> >
> > Now, for better compatibility with new static memory, order shall be
> > replaced with nr_pfns pointing to page count with no constraint, like
> > 250MB.
> 
> While I'm not entirely opposed, I'm also not convinced. The new user could
> as well break up the range into suitable power-of-2 chunks. In no case do I
> view the wording "compatibility" here as appropriate. There's no
> incompatibility at present.
> 

Yes, maybe the incompatibility is not the good choice here.
Sure, the new user definitely could choose the workaround to break up the range, while
it may cost extra time. 
And while considering MPU system,  memory range size is often not in the power-of-2.  

> Jan

Thanks
Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-18  7:24   ` Jan Beulich
@ 2021-05-18  9:30     ` Penny Zheng
  2021-05-18 10:09     ` Julien Grall
  1 sibling, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  9:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:24 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >  xen/common/page_alloc.c | 64
> > +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >      return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory  */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> > +                                                paddr_t start,
> > +                                                unsigned int
> > +memflags)
> 
> This is surely breaking the build (at this point in the series - recall that a series
> should build fine at every patch boundary), for introducing an unused static
> function, which most compilers will warn about.
>

Sure, I'll combine it with other commits

> Also again - please avoid introducing code that's always dead for certain
> architectures. Quite likely you want a Kconfig option to put a suitable #ifdef
> around such functions.
> 

Sure, sorry for all the missing #ifdefs.

> And a nit: Please correct the apparently off-by-one indentation.
>

Sure, I'll check through the code more carefully.

> > +{
> > +    bool need_tlbflush = false;
> > +    uint32_t tlbflush_timestamp = 0;
> > +    unsigned int i;
> 
> This variable's type should (again) match nr_pfns'es (albeit I think that
> parameter really wants to be nr_mfns).
> 

Correct if I understand you wrongly, you mean that parameters in alloc_staticmem_pages
are better be named after unsigned long nr_mfns, right?

> > +    struct page_info *pg;
> > +    mfn_t s_mfn;
> > +
> > +    /* For now, it only supports allocating at specified address. */
> > +    s_mfn = maddr_to_mfn(start);
> > +    pg = mfn_to_page(s_mfn);
> > +    if ( !pg )
> > +        return NULL;
> 
> Under what conditions would mfn_to_page() return NULL?

Right, my mistake.

>
> > +    for ( i = 0; i < nr_pfns; i++)
> > +    {
> > +        /*
> > +         * Reference count must continuously be zero for free pages
> > +         * of static memory(PGC_reserved).
> > +         */
> > +        ASSERT(pg[i].count_info & PGC_reserved);
> > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +        {
> > +            printk(XENLOG_ERR
> > +                    "Reference count must continuously be zero for free pages"
> > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +                    i, mfn_x(page_to_mfn(pg + i)),
> > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> 
> Nit: Indentation again.
>
 
Thx

> > +            BUG();
> > +        }
> > +
> > +        if ( !(memflags & MEMF_no_tlbflush) )
> > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +                                &tlbflush_timestamp);
> > +
> > +        /*
> > +         * Reserve flag PGC_reserved and change page state
> 
> DYM "Preserve ..."?
> 

Sure, thx

> > +         * to PGC_state_inuse.
> > +         */
> > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> PGC_state_inuse;
> > +        /* Initialise fields which have other uses for free pages. */
> > +        pg[i].u.inuse.type_info = 0;
> > +        page_set_owner(&pg[i], NULL);
> > +
> > +        /*
> > +         * Ensure cache and RAM are consistent for platforms where the
> > +         * guest can control its own visibility of/through the cache.
> > +         */
> > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +                            !(memflags & MEMF_no_icache_flush));
> > +    }
> > +
> > +    if ( need_tlbflush )
> > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it possible
> that stale mappings from a prior use can still be around, making such TLB
> flushing necessary?
> 

Yes, you're right.

> Jan

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-18  5:21 ` [PATCH 03/10] xen/arm: introduce PGC_reserved Penny Zheng
@ 2021-05-18  9:45   ` Julien Grall
  2021-05-19  3:16     ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18  9:45 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd



On 18/05/2021 06:21, Penny Zheng wrote:
> In order to differentiate pages of static memory, from those allocated from
> heap, this patch introduces a new page flag PGC_reserved to tell.
> 
> New struct reserved in struct page_info is to describe reserved page info,
> that is, which specific domain this page is reserved to. >
> Helper page_get_reserved_owner and page_set_reserved_owner are
> designated to get/set reserved page's owner.
> 
> Struct domain is enlarged to more than PAGE_SIZE, due to newly-imported
> struct reserved in struct page_info.

struct domain may embed a pointer to a struct page_info but never 
directly embed the structure. So can you clarify what you mean?

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/include/asm-arm/mm.h | 16 +++++++++++++++-
>   1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 0b7de3102e..d8922fd5db 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -88,7 +88,15 @@ struct page_info
>            */
>           u32 tlbflush_timestamp;
>       };
> -    u64 pad;
> +
> +    /* Page is reserved. */
> +    struct {
> +        /*
> +         * Reserved Owner of this page,
> +         * if this page is reserved to a specific domain.
> +         */
> +        struct domain *domain;
> +    } reserved;

The space in page_info is quite tight, so I would like to avoid 
introducing new fields unless we can't get away from it.

In this case, it is not clear why we need to differentiate the "Owner" 
vs the "Reserved Owner". It might be clearer if this change is folded in 
the first user of the field.

As an aside, for 32-bit Arm, you need to add a 4-byte padding.

>   };
>   
>   #define PG_shift(idx)   (BITS_PER_LONG - (idx))
> @@ -108,6 +116,9 @@ struct page_info
>     /* Page is Xen heap? */
>   #define _PGC_xen_heap     PG_shift(2)
>   #define PGC_xen_heap      PG_mask(1, 2)
> +  /* Page is reserved, referring static memory */

I would drop the second part of the sentence because the flag could be 
used for other purpose. One example is reserved memory when Live Updating.

> +#define _PGC_reserved     PG_shift(3)
> +#define PGC_reserved      PG_mask(1, 3)
>   /* ... */
>   /* Page is broken? */
>   #define _PGC_broken       PG_shift(7)
> @@ -161,6 +172,9 @@ extern unsigned long xenheap_base_pdx;
>   #define page_get_owner(_p)    (_p)->v.inuse.domain
>   #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
>   
> +#define page_get_reserved_owner(_p)    (_p)->reserved.domain
> +#define page_set_reserved_owner(_p,_d) ((_p)->reserved.domain = (_d))
> +
>   #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
>   
>   #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18  7:15   ` Jan Beulich
@ 2021-05-18  9:51     ` Penny Zheng
  2021-05-18 10:43       ` Jan Beulich
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-18  9:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:16 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > This patch introduces static memory initialization, during system RAM boot
> up.
> >
> > New func init_staticmem_pages is the equivalent of init_heap_pages,
> > responsible for static memory initialization.
> >
> > Helper func free_staticmem_pages is the equivalent of free_heap_pages,
> > to free nr_pfns pages of static memory.
> > For each page, it includes the following steps:
> > 1. change page state from in-use(also initialization state) to free
> > state and grant PGC_reserved.
> > 2. set its owner NULL and make sure this page is not a guest frame any
> > more
> 
> But isn't the goal (as per the previous patch) to associate such pages with a
> _specific_ domain?
> 

Free_staticmem_pages are alike free_heap_pages, are not used only for initialization.
Freeing used pages to unused are also included.
Here, setting its owner NULL is to set owner in used field NULL.
Still, I need to add more explanation here.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -150,6 +150,9 @@
> >  #define p2m_pod_offline_or_broken_hit(pg) 0  #define
> > p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)  #endif
> > +#ifdef CONFIG_ARM_64
> > +#include <asm/setup.h>
> > +#endif
> 
> Whatever it is that's needed from this header suggests the code won't build
> for other architectures. I think init_staticmem_pages() in its current shape
> shouldn't live in this (common) file.
> 

Yes, I should include them all both under one specific config, maybe like
CONFIG_STATIC_MEM, and this config is arm-specific.

> > @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >      spin_unlock(&heap_lock);
> >  }
> >
> > +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> > +memory. */ static void free_staticmem_pages(struct page_info *pg,
> unsigned long nr_pfns,
> > +                                 bool need_scrub)
> 
> Right now this function gets called only from an __init one. Unless it is
> intended to gain further callers, it should be marked __init itself then.
> Otherwise it should be made sure that other architectures don't include this
> (dead there) code.
> 

Sure, I'll add __init. Thx.

> > +{
> > +    mfn_t mfn = page_to_mfn(pg);
> > +    int i;
> 
> This type doesn't fit nr_pfns'es.
> 

Sure, nr_mfns is better in also many other places.

> Jan

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18  5:21 ` [PATCH 04/10] xen/arm: static memory initialization Penny Zheng
  2021-05-18  7:15   ` Jan Beulich
@ 2021-05-18 10:00   ` Julien Grall
  2021-05-18 10:01     ` Julien Grall
  2021-05-19  5:02     ` Penny Zheng
  1 sibling, 2 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-18 10:00 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This patch introduces static memory initialization, during system RAM boot up.
> 
> New func init_staticmem_pages is the equivalent of init_heap_pages, responsible
> for static memory initialization.
> 
> Helper func free_staticmem_pages is the equivalent of free_heap_pages, to free
> nr_pfns pages of static memory.
> For each page, it includes the following steps:
> 1. change page state from in-use(also initialization state) to free state and
> grant PGC_reserved.
> 2. set its owner NULL and make sure this page is not a guest frame any more
> 3. follow the same cache coherency policy in free_heap_pages
> 4. scrub the page in need
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/setup.c    |  2 ++
>   xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
>   xen/include/xen/mm.h    |  3 ++
>   3 files changed, 75 insertions(+)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 444dbbd676..f80162c478 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -818,6 +818,8 @@ static void __init setup_mm(void)
>   
>       setup_frametable_mappings(ram_start, ram_end);
>       max_page = PFN_DOWN(ram_end);
> +
> +    init_staticmem_pages();
>   }
>   #endif
>   
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index ace6333c18..58b53c6ac2 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,9 @@
>   #define p2m_pod_offline_or_broken_hit(pg) 0
>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>   #endif
> +#ifdef CONFIG_ARM_64
> +#include <asm/setup.h>
> +#endif
>   
>   /*
>    * Comma-separated list of hexadecimal page numbers containing bad bytes.
> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>       spin_unlock(&heap_lock);
>   }
>   
> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,

This function is dealing with MFNs, so the second parameter should be 
called nr_mfns.

> +                                 bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    int i;
> +
> +    for ( i = 0; i < nr_pfns; i++ )
> +    {
> +        switch ( pg[i].count_info & PGC_state )
> +        {
> +        case PGC_state_inuse:
> +            BUG_ON(pg[i].count_info & PGC_broken);
> +            /* Make it free and reserved. */
> +            pg[i].count_info = PGC_state_free | PGC_reserved;
> +            break;
> +
> +        default:
> +            printk(XENLOG_ERR
> +                   "Page state shall be only in PGC_state_inuse. "
> +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
> +                   i, mfn_x(mfn) + i,
> +                   pg[i].count_info,
> +                   pg[i].tlbflush_timestamp);
> +            BUG();
> +        }
> +
> +        /*
> +         * Follow the same cache coherence scheme in free_heap_pages.
> +         * If a page has no owner it will need no safety TLB flush.
> +         */
> +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> +        if ( pg[i].u.free.need_tlbflush )
> +            page_set_tlbflush_timestamp(&pg[i]);
> +
> +        /* This page is not a guest frame any more. */
> +        page_set_owner(&pg[i], NULL);
> +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);

The code looks quite similar to free_heap_pages(). Could we possibly 
create an helper which can be called from both?

> +
> +        if ( need_scrub )
> +            scrub_one_page(&pg[i]);

So the scrubbing will be synchronous. Is it what we want?

You also seem to miss the flush the call to flush_page_to_ram().

> +    }
> +}
>   
>   /*
>    * Following rules applied for page offline:
> @@ -1828,6 +1874,30 @@ static void init_heap_pages(
>       }
>   }
>   
> +/* Equivalent of init_heap_pages to do static memory initialization */
> +void __init init_staticmem_pages(void)
> +{
> +    int bank;
> +
> +    /*
> +     * TODO: Considering NUMA-support scenario.
> +     */
> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )

bootinfo is arm specific, so this code should live in arch/arm rather 
than common/.

> +    {
> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> +        paddr_t bank_end = bank_start + bank_size;
> +
> +        bank_start = round_pgup(bank_start);
> +        bank_end = round_pgdown(bank_end);
> +        if ( bank_end <= bank_start )
> +            return;
> +
> +        free_staticmem_pages(maddr_to_page(bank_start),
> +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> +    }
> +}
> +
>   static unsigned long avail_heap_pages(
>       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
>   {
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 667f9dac83..8b1a2207b2 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
>   } while ( false )
>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>   
> +/* Static Memory */
> +void init_staticmem_pages(void);
> +
>   /* Map machine page range in Xen virtual address space. */
>   int map_pages_to_xen(
>       unsigned long virt,
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18 10:00   ` Julien Grall
@ 2021-05-18 10:01     ` Julien Grall
  2021-05-19  5:02     ` Penny Zheng
  1 sibling, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-18 10:01 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd



On 18/05/2021 11:00, Julien Grall wrote:
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
>> This patch introduces static memory initialization, during system RAM 
>> boot up.
>>
>> New func init_staticmem_pages is the equivalent of init_heap_pages, 
>> responsible
>> for static memory initialization.
>>
>> Helper func free_staticmem_pages is the equivalent of free_heap_pages, 
>> to free
>> nr_pfns pages of static memory.
>> For each page, it includes the following steps:
>> 1. change page state from in-use(also initialization state) to free 
>> state and
>> grant PGC_reserved.
>> 2. set its owner NULL and make sure this page is not a guest frame any 
>> more
>> 3. follow the same cache coherency policy in free_heap_pages
>> 4. scrub the page in need
>>
>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>> ---
>>   xen/arch/arm/setup.c    |  2 ++
>>   xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
>>   xen/include/xen/mm.h    |  3 ++
>>   3 files changed, 75 insertions(+)
>>
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 444dbbd676..f80162c478 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -818,6 +818,8 @@ static void __init setup_mm(void)
>>       setup_frametable_mappings(ram_start, ram_end);
>>       max_page = PFN_DOWN(ram_end);
>> +
>> +    init_staticmem_pages();
>>   }
>>   #endif
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index ace6333c18..58b53c6ac2 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -150,6 +150,9 @@
>>   #define p2m_pod_offline_or_broken_hit(pg) 0
>>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>>   #endif
>> +#ifdef CONFIG_ARM_64
>> +#include <asm/setup.h>
>> +#endif
>>   /*
>>    * Comma-separated list of hexadecimal page numbers containing bad 
>> bytes.
>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>>       spin_unlock(&heap_lock);
>>   }
>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static 
>> memory. */
>> +static void free_staticmem_pages(struct page_info *pg, unsigned long 
>> nr_pfns,
> 
> This function is dealing with MFNs, so the second parameter should be 
> called nr_mfns.
> 
>> +                                 bool need_scrub)
>> +{
>> +    mfn_t mfn = page_to_mfn(pg);
>> +    int i;
>> +
>> +    for ( i = 0; i < nr_pfns; i++ )
>> +    {
>> +        switch ( pg[i].count_info & PGC_state )
>> +        {
>> +        case PGC_state_inuse:
>> +            BUG_ON(pg[i].count_info & PGC_broken);
>> +            /* Make it free and reserved. */
>> +            pg[i].count_info = PGC_state_free | PGC_reserved;
>> +            break;
>> +
>> +        default:
>> +            printk(XENLOG_ERR
>> +                   "Page state shall be only in PGC_state_inuse. "
>> +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx 
>> tlbflush_timestamp=%#x.\n",
>> +                   i, mfn_x(mfn) + i,
>> +                   pg[i].count_info,
>> +                   pg[i].tlbflush_timestamp);
>> +            BUG();
>> +        }
>> +
>> +        /*
>> +         * Follow the same cache coherence scheme in free_heap_pages.
>> +         * If a page has no owner it will need no safety TLB flush.
>> +         */
>> +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
>> +        if ( pg[i].u.free.need_tlbflush )
>> +            page_set_tlbflush_timestamp(&pg[i]);
>> +
>> +        /* This page is not a guest frame any more. */
>> +        page_set_owner(&pg[i], NULL);
>> +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> 
> The code looks quite similar to free_heap_pages(). Could we possibly 
> create an helper which can be called from both?
> 
>> +
>> +        if ( need_scrub )
>> +            scrub_one_page(&pg[i]);
> 
> So the scrubbing will be synchronous. Is it what we want?
> 
> You also seem to miss the flush the call to flush_page_to_ram().

Hmmmm... Sorry I looked at the wrong function. This is not necessary for 
the free part.

> 
>> +    }
>> +}
>>   /*
>>    * Following rules applied for page offline:
>> @@ -1828,6 +1874,30 @@ static void init_heap_pages(
>>       }
>>   }
>> +/* Equivalent of init_heap_pages to do static memory initialization */
>> +void __init init_staticmem_pages(void)
>> +{
>> +    int bank;
>> +
>> +    /*
>> +     * TODO: Considering NUMA-support scenario.
>> +     */
>> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
> 
> bootinfo is arm specific, so this code should live in arch/arm rather 
> than common/.
> 
>> +    {
>> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
>> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
>> +        paddr_t bank_end = bank_start + bank_size;
>> +
>> +        bank_start = round_pgup(bank_start);
>> +        bank_end = round_pgdown(bank_end);
>> +        if ( bank_end <= bank_start )
>> +            return;
>> +
>> +        free_staticmem_pages(maddr_to_page(bank_start),
>> +                            (bank_end - bank_start) >> PAGE_SHIFT, 
>> false);
>> +    }
>> +}
>> +
>>   static unsigned long avail_heap_pages(
>>       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
>>   {
>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>> index 667f9dac83..8b1a2207b2 100644
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
>>   } while ( false )
>>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>> +/* Static Memory */
>> +void init_staticmem_pages(void);
>> +
>>   /* Map machine page range in Xen virtual address space. */
>>   int map_pages_to_xen(
>>       unsigned long virt,
>>
> 
> Cheers,
> 

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-18  7:24   ` Jan Beulich
  2021-05-18  9:30     ` Penny Zheng
@ 2021-05-18 10:09     ` Julien Grall
  1 sibling, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-18 10:09 UTC (permalink / raw)
  To: Jan Beulich, Penny Zheng
  Cc: Bertrand.Marquis, Wei.Chen, nd, xen-devel, sstabellini

Hi Jan,

On 18/05/2021 08:24, Jan Beulich wrote:
> On 18.05.2021 07:21, Penny Zheng wrote:
>> +         * to PGC_state_inuse.
>> +         */
>> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
>> +        /* Initialise fields which have other uses for free pages. */
>> +        pg[i].u.inuse.type_info = 0;
>> +        page_set_owner(&pg[i], NULL);
>> +
>> +        /*
>> +         * Ensure cache and RAM are consistent for platforms where the
>> +         * guest can control its own visibility of/through the cache.
>> +         */
>> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
>> +                            !(memflags & MEMF_no_icache_flush));
>> +    }
>> +
>> +    if ( need_tlbflush )
>> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it
> possible that stale mappings from a prior use can still be around,
> making such TLB flushing necessary?

I would rather not make the assumption. I can see future where we just 
want to allocate memory from a static pool that may be shared with 
multiple domains.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-18  5:21 ` [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages Penny Zheng
  2021-05-18  7:24   ` Jan Beulich
@ 2021-05-18 10:15   ` Julien Grall
  2021-05-19  5:23     ` Penny Zheng
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 10:15 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. And it is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at specified starting address.
> 
> For each page, it shall check if the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initialization, which are mostly the same ones in alloc_heap_pages,
> like, following the same cache-coherency policy and turning page
> status into PGC_state_used, etc.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>       return pg;
>   }
>   
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,

This wants to be nr_mfns.

> +                                                paddr_t start,

I would prefer if this helper takes an mfn_t in parameter.

> +                                                unsigned int memflags)
> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned int i;
> +    struct page_info *pg;
> +    mfn_t s_mfn;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    s_mfn = maddr_to_mfn(start);
> +    pg = mfn_to_page(s_mfn);

We should avoid to make the assumption the start address will be valid. 
So you want to call mfn_valid() first.

At the same time, there is no guarantee that if the first page is valid, 
then the next nr_pfns will be. So the check should be performed for all 
of them.

> +    if ( !pg )
> +        return NULL;
> +
> +    for ( i = 0; i < nr_pfns; i++)
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);
> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                    "Reference count must continuously be zero for free pages"
> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                    i, mfn_x(page_to_mfn(pg + i)),
> +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> +            BUG();

So we would crash Xen if the caller pass a wrong range. Is it what we want?

Also, who is going to prevent concurrent access?

> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Reserve flag PGC_reserved and change page state
> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> +
> +    return pg;
> +}
> +
>   /* Remove any offlined page in the buddy pointed to by head. */
>   static int reserve_offlined_page(struct page_info *head)
>   {
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility
  2021-05-18  5:21 ` [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility Penny Zheng
  2021-05-18  7:27   ` Jan Beulich
@ 2021-05-18 10:20   ` Julien Grall
  2021-05-19  5:35     ` Penny Zheng
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 10:20 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Function parameter order in assign_pages is always used as 1ul << order,
> referring to 2@order pages.
> 
> Now, for better compatibility with new static memory, order shall
> be replaced with nr_pfns pointing to page count with no constraint,
> like 250MB.

We have similar requirements for LiveUpdate because are preserving the 
memory with a number of pages (rather than a power-of-two). With the 
current interface would be need to split the range in a power of 2 which 
is a bit of pain.

However, I think I would prefer if we introduce a new interface (maybe 
assign_pages_nr()) rather than change the meaning of the field. This is 
for two reasons:
   1) We limit the risk to make mistake when backporting a patch touch 
assign_pages().
   2) Adding (1UL << order) for pretty much all the caller is not nice.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18  5:21 ` [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages Penny Zheng
  2021-05-18  7:34   ` Jan Beulich
@ 2021-05-18 10:30   ` Julien Grall
  2021-05-19  6:03     ` Penny Zheng
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 10:30 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

Title: s/intruduce/introduce/

On 18/05/2021 06:21, Penny Zheng wrote:
> alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
> static mmeory, and it is to allocate nr_pfns pages of static memory
> and assign them to one specific domain.
> 
> It uses alloc_staticmen_pages to get nr_pages pages of static memory,
> then on success, it will use assign_pages to assign those pages to
> one specific domain, including using page_set_reserved_owner to set its
> reserved domain owner.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/page_alloc.c | 53 +++++++++++++++++++++++++++++++++++++++++
>   xen/include/xen/mm.h    |  4 ++++
>   2 files changed, 57 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 0eb9f22a00..f1f1296a61 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2447,6 +2447,9 @@ int assign_pages(
>       {
>           ASSERT(page_get_owner(&pg[i]) == NULL);
>           page_set_owner(&pg[i], d);
> +        /* use page_set_reserved_owner to set its reserved domain owner. */
> +        if ( (pg[i].count_info & PGC_reserved) )
> +            page_set_reserved_owner(&pg[i], d);

I have skimmed through the rest of the series and couldn't find anyone 
calling page_get_reserved_owner(). The value is also going to be the 
exact same as page_set_owner().

So why do we need it?

>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>           pg[i].count_info =
>               (pg[i].count_info & PGC_extra) | PGC_allocated | 1;

This will clobber PGC_reserved.

> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>       return pg;
>   }
>   
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory,

s/nr_pfns/nr_mfns/

> + * then assign them to one specific domain #d.
> + * It is the equivalent of alloc_domheap_pages for static memory.
> + */
> +struct page_info *alloc_domstatic_pages(
> +        struct domain *d, unsigned long nr_pfns, paddr_t start,

s/nr_pfns/nf_mfns/. Also, I would the third parameter to be an mfn_t.

> +        unsigned int memflags)
> +{
> +    struct page_info *pg = NULL;
> +    unsigned long dma_size;
> +
> +    ASSERT(!in_irq());
> +
> +    if ( memflags & MEMF_no_owner )
> +        memflags |= MEMF_no_refcount;
> +
> +    if ( !dma_bitsize )
> +        memflags &= ~MEMF_no_dma;
> +    else
> +    {
> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> +        /* Starting address shall meet the DMA limitation. */
> +        if ( dma_size && start < dma_size )
> +            return NULL;
> +    }
> +
> +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> +    if ( !pg )
> +        return NULL;
> +
> +    if ( d && !(memflags & MEMF_no_owner) )
> +    {
> +        if ( memflags & MEMF_no_refcount )
> +        {
> +            unsigned long i;
> +
> +            for ( i = 0; i < nr_pfns; i++ )
> +                pg[i].count_info = PGC_extra;
> +        }
> +        if ( assign_pages(d, pg, nr_pfns, memflags) )
> +        {
> +            free_staticmem_pages(pg, nr_pfns, memflags & MEMF_no_scrub);
> +            return NULL;
> +        }
> +    }
> +
> +    return pg;
> +}
> +
>   void free_domheap_pages(struct page_info *pg, unsigned int order)
>   {
>       struct domain *d = page_get_owner(pg);
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index dcf9daaa46..e45987f0ed 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -111,6 +111,10 @@ unsigned long __must_check domain_adjust_tot_pages(struct domain *d,
>   int domain_set_outstanding_pages(struct domain *d, unsigned long pages);
>   void get_outstanding_claims(uint64_t *free_pages, uint64_t *outstanding_pages);
>   
> +/* Static Memory */
> +struct page_info *alloc_domstatic_pages(struct domain *d,
> +        unsigned long nr_pfns, paddr_t start, unsigned int memflags);
> +
>   /* Domain suballocator. These functions are *not* interrupt-safe.*/
>   void init_domheap_pages(paddr_t ps, paddr_t pe);
>   struct page_info *alloc_domheap_pages(
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18  9:51     ` Penny Zheng
@ 2021-05-18 10:43       ` Jan Beulich
  2021-05-20  9:04         ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18 10:43 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 11:51, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:16 PM
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> This patch introduces static memory initialization, during system RAM boot
>> up.
>>>
>>> New func init_staticmem_pages is the equivalent of init_heap_pages,
>>> responsible for static memory initialization.
>>>
>>> Helper func free_staticmem_pages is the equivalent of free_heap_pages,
>>> to free nr_pfns pages of static memory.
>>> For each page, it includes the following steps:
>>> 1. change page state from in-use(also initialization state) to free
>>> state and grant PGC_reserved.
>>> 2. set its owner NULL and make sure this page is not a guest frame any
>>> more
>>
>> But isn't the goal (as per the previous patch) to associate such pages with a
>> _specific_ domain?
>>
> 
> Free_staticmem_pages are alike free_heap_pages, are not used only for initialization.
> Freeing used pages to unused are also included.
> Here, setting its owner NULL is to set owner in used field NULL.

I'm afraid I still don't understand.

> Still, I need to add more explanation here.

Yes please.

>>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>>>      spin_unlock(&heap_lock);
>>>  }
>>>
>>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static
>>> +memory. */ static void free_staticmem_pages(struct page_info *pg,
>> unsigned long nr_pfns,
>>> +                                 bool need_scrub)
>>
>> Right now this function gets called only from an __init one. Unless it is
>> intended to gain further callers, it should be marked __init itself then.
>> Otherwise it should be made sure that other architectures don't include this
>> (dead there) code.
>>
> 
> Sure, I'll add __init. Thx.

Didn't I see a 2nd call to the function later in the series? That
one doesn't permit the function to be __init, iirc.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18  5:21 ` [PATCH 08/10] xen/arm: introduce reserved_page_list Penny Zheng
  2021-05-18  7:39   ` Jan Beulich
@ 2021-05-18 11:02   ` Julien Grall
  2021-05-19  6:43     ` Penny Zheng
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 11:02 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Since page_list under struct domain refers to linked pages as gueast RAM

s/gueast/guest/

> allocated from heap, it should not include reserved pages of static memory.

You already have PGC_reserved to indicate they are "static memory". So 
why do you need yet another list?

> 
> The number of PGC_reserved pages assigned to a domain is tracked in
> a new 'reserved_pages' counter. Also introduce a new reserved_page_list
> to link pages of static memory. Let page_to_list return reserved_page_list,
> when flag is PGC_reserved.
> 
> Later, when domain get destroyed or restarted, those new values will help
> relinquish memory to proper place, not been given back to heap.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/domain.c     | 1 +
>   xen/common/page_alloc.c | 7 +++++--
>   xen/include/xen/sched.h | 5 +++++
>   3 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 6b71c6d6a9..c38afd2969 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -578,6 +578,7 @@ struct domain *domain_create(domid_t domid,
>       INIT_PAGE_LIST_HEAD(&d->page_list);
>       INIT_PAGE_LIST_HEAD(&d->extra_page_list);
>       INIT_PAGE_LIST_HEAD(&d->xenpage_list);
> +    INIT_PAGE_LIST_HEAD(&d->reserved_page_list);
>   
>       spin_lock_init(&d->node_affinity_lock);
>       d->node_affinity = NODE_MASK_ALL;
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index f1f1296a61..e3f07ec6c5 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2410,7 +2410,7 @@ int assign_pages(
>   
>           for ( i = 0; i < nr_pfns; i++ )
>           {
> -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> +            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
I think this change belongs to the previous patch.

>               if ( pg[i].count_info & PGC_extra )
>                   extra_pages++;
>           }
> @@ -2439,6 +2439,9 @@ int assign_pages(
>           }
>       }
>   
> +    if ( pg[0].count_info & PGC_reserved )
> +        d->reserved_pages += nr_pfns;

reserved_pages doesn't seem to be ever read or decremented. So why do 
you need it?

> +
>       if ( !(memflags & MEMF_no_refcount) &&
>            unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
>           get_knownalive_domain(d);
> @@ -2452,7 +2455,7 @@ int assign_pages(
>               page_set_reserved_owner(&pg[i], d);
>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>           pg[i].count_info =
> -            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> +            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;

Same here.

>           page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
>       }
>   
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 3982167144..b6333ed8bb 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -368,6 +368,7 @@ struct domain
>       struct page_list_head page_list;  /* linked list */
>       struct page_list_head extra_page_list; /* linked list (size extra_pages) */
>       struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
> +    struct page_list_head reserved_page_list; /* linked list (size reserved pages) */
>   
>       /*
>        * This field should only be directly accessed by domain_adjust_tot_pages()
> @@ -379,6 +380,7 @@ struct domain
>       unsigned int     outstanding_pages; /* pages claimed but not possessed */
>       unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
>       unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
> +    unsigned int     reserved_pages;    /* pages of static memory */
>       atomic_t         shr_pages;         /* shared pages */
>       atomic_t         paged_pages;       /* paged-out pages */
>   
> @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
>       if ( pg->count_info & PGC_extra )
>           return &d->extra_page_list;
>   
> +    if ( pg->count_info & PGC_reserved )
> +        return &d->reserved_page_list;
> +
>       return &d->page_list;
>   }
>   
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18  8:57     ` Penny Zheng
@ 2021-05-18 11:23       ` Jan Beulich
  2021-05-21  6:41         ` Penny Zheng
  2021-05-18 12:13       ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18 11:23 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 10:57, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:35 PM
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>      {
>>>          ASSERT(page_get_owner(&pg[i]) == NULL);
>>>          page_set_owner(&pg[i], d);
>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
>> */
>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>> +            page_set_reserved_owner(&pg[i], d);
>>
>> Now this is puzzling: What's the point of setting two owner fields to the same
>> value? I also don't recall you having introduced
>> page_set_reserved_owner() for x86, so how is this going to build there?
>>
> 
> Thanks for pointing out that it will fail on x86.
> As for the same value, sure, I shall change it to domid_t domid to record its reserved owner.
> Only domid is enough for differentiate. 
> And even when domain get rebooted, struct domain may be destroyed, but domid will stays
> The same.

Will it? Are you intending to put in place restrictions that make it
impossible for the ID to get re-used by another domain?

> Major user cases for domain on static allocation are referring to the whole system are static,
> No runtime creation.

Right, but that's not currently enforced afaics. If you would
enforce it, it may simplify a number of things.

>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>>>      return pg;
>>>  }
>>>
>>> +/*
>>> + * Allocate nr_pfns contiguous pages, starting at #start, of static
>>> +memory,
>>> + * then assign them to one specific domain #d.
>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>> + */
>>> +struct page_info *alloc_domstatic_pages(
>>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
>>> +        unsigned int memflags)
>>> +{
>>> +    struct page_info *pg = NULL;
>>> +    unsigned long dma_size;
>>> +
>>> +    ASSERT(!in_irq());
>>> +
>>> +    if ( memflags & MEMF_no_owner )
>>> +        memflags |= MEMF_no_refcount;
>>> +
>>> +    if ( !dma_bitsize )
>>> +        memflags &= ~MEMF_no_dma;
>>> +    else
>>> +    {
>>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
>>> +        /* Starting address shall meet the DMA limitation. */
>>> +        if ( dma_size && start < dma_size )
>>> +            return NULL;
>>
>> It is the entire range (i.e. in particular the last byte) which needs to meet such
>> a restriction. I'm not convinced though that DMA width restrictions and static
>> allocation are sensible to coexist.
>>
> 
> FWIT, if starting address meets the limitation, the last byte, greater than starting
> address shall meet it too.

I'm afraid I don't know what you're meaning to tell me here.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18  8:38     ` Penny Zheng
@ 2021-05-18 11:24       ` Jan Beulich
  2021-05-19  6:46         ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-18 11:24 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

On 18.05.2021 10:38, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:39 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> Since page_list under struct domain refers to linked pages as gueast
>>> RAM allocated from heap, it should not include reserved pages of static
>> memory.
>>>
>>> The number of PGC_reserved pages assigned to a domain is tracked in a
>>> new 'reserved_pages' counter. Also introduce a new reserved_page_list
>>> to link pages of static memory. Let page_to_list return
>>> reserved_page_list, when flag is PGC_reserved.
>>>
>>> Later, when domain get destroyed or restarted, those new values will
>>> help relinquish memory to proper place, not been given back to heap.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>  xen/common/domain.c     | 1 +
>>>  xen/common/page_alloc.c | 7 +++++--
>>>  xen/include/xen/sched.h | 5 +++++
>>>  3 files changed, 11 insertions(+), 2 deletions(-)
>>
>> This contradicts the title's prefix: There's no Arm-specific change here at all.
>> But imo the title is correct, and the changes should be Arm-specific. There's
>> no point having struct domain fields on e.g. x86 which aren't used there at all.
>>
> 
> Yep, you're right.
> I'll add ifdefs in the following changes.

As necessary, please. Moving stuff to Arm-specific files would be
preferable.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
  2021-05-18  5:21 ` [PATCH 10/10] xen/arm: introduce allocate_static_memory Penny Zheng
@ 2021-05-18 12:05   ` Julien Grall
  2021-05-19  7:27     ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 12:05 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This commit introduces allocate_static_memory to allocate static memory as
> guest RAM for domain on Static Allocation.
> 
> It uses alloc_domstatic_pages to allocate pre-defined static memory banks
> for this domain, and uses guest_physmap_add_page to set up P2M table,
> guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c | 157 +++++++++++++++++++++++++++++++++++-
>   1 file changed, 155 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 30b55588b7..9f662313ad 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct domain *d,
>       return true;
>   }
>   
> +/*
> + * #ram_index and #ram_index refer to the index and starting address of guest
> + * memory kank stored in kinfo->mem.
> + * Static memory at #smfn of #tot_size shall be mapped #sgfn, and
> + * #sgfn will be next guest address to map when returning.
> + */
> +static bool __init allocate_static_bank_memory(struct domain *d,
> +                                               struct kernel_info *kinfo,
> +                                               int ram_index,

Please use unsigned.

> +                                               paddr_t ram_addr,
> +                                               gfn_t* sgfn,

I am confused, what is the difference between ram_addr and sgfn?

> +                                               mfn_t smfn,
> +                                               paddr_t tot_size)
> +{
> +    int res;
> +    struct membank *bank;
> +    paddr_t _size = tot_size;
> +
> +    bank = &kinfo->mem.bank[ram_index];
> +    bank->start = ram_addr;
> +    bank->size = bank->size + tot_size;
> +
> +    while ( tot_size > 0 )
> +    {
> +        unsigned int order = get_allocation_size(tot_size);
> +
> +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
> +        if ( res )
> +        {
> +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
> +            return false;
> +        }
> +
> +        *sgfn = gfn_add(*sgfn, 1UL << order);
> +        smfn = mfn_add(smfn, 1UL << order);
> +        tot_size -= (1ULL << (PAGE_SHIFT + order));
> +    }
> +
> +    kinfo->mem.nr_banks = ram_index + 1;
> +    kinfo->unassigned_mem -= _size;
> +
> +    return true;
> +}
> +
>   static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
>   {
>       unsigned int i;
> @@ -480,6 +524,116 @@ fail:
>             (unsigned long)kinfo->unassigned_mem >> 10);
>   }
>   
> +/* Allocate memory from static memory as RAM for one specific domain d. */
> +static void __init allocate_static_memory(struct domain *d,
> +                                            struct kernel_info *kinfo)
> +{
> +    int nr_banks, _banks = 0;

AFAICT, _banks is the index in the array. I think it would be clearer if 
it is caller 'bank' or 'idx'.

> +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
> +    paddr_t bank_start, bank_size;
> +    gfn_t sgfn;
> +    mfn_t smfn;
> +
> +    kinfo->mem.nr_banks = 0;
> +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
> +    nr_banks = d->arch.static_mem.nr_banks;
> +    ASSERT(nr_banks >= 0);
> +
> +    if ( kinfo->unassigned_mem <= 0 )
> +        goto fail;
> +
> +    while ( _banks < nr_banks )
> +    {
> +        bank_start = d->arch.static_mem.bank[_banks].start;
> +        smfn = maddr_to_mfn(bank_start);
> +        bank_size = d->arch.static_mem.bank[_banks].size;

The variable name are slightly confusing because it doesn't tell whether 
this is physical are guest RAM. You might want to consider to prefix 
them with p (resp. g) for physical (resp. guest) RAM.

> +
> +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start, 0) )
> +        {
> +            printk(XENLOG_ERR
> +                    "%pd: cannot allocate static memory"
> +                    "(0x%"PRIx64" - 0x%"PRIx64")",

bank_start and bank_size are both paddr_t. So this should be PRIpaddr.

> +                    d, bank_start, bank_start + bank_size);
> +            goto fail;
> +        }
> +
> +        /*
> +         * By default, it shall be mapped to the fixed guest RAM address
> +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> +         * Starting from RAM0(GUEST_RAM0_BASE).
> +         */

Ok. So you are first trying to exhaust the guest bank 0 and then moved 
to bank 1. This wasn't entirely clear from the design document.

I am fine with that, but in this case, the developper should not need to 
know that (in fact this is not part of the ABI).

Regarding this code, I am a bit concerned about the scalability if we 
introduce a second bank.

Can we have an array of the possible guest banks and increment the index 
when exhausting the current bank?

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction
  2021-05-18  5:21 ` [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction Penny Zheng
@ 2021-05-18 12:09   ` Julien Grall
  2021-05-19  7:58     ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 12:09 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand.Marquis, Wei.Chen, nd

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This commit parses `xen,static-mem` device tree property, to acquire
> static memory info reserved for this domain, when constructing domain
> during boot-up.
> 
> Related info shall be stored in new static_mem value under per domain
> struct arch_domain.

So far, this seems to only be used during boot. So can't this be kept in 
the kinfo structure?

> 
> Right now, the implementation of allocate_static_memory is missing, and
> will be introduced later. It just BUG() out at the moment.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c  | 58 ++++++++++++++++++++++++++++++++----
>   xen/include/asm-arm/domain.h |  3 ++
>   2 files changed, 56 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 282416e74d..30b55588b7 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2424,17 +2424,61 @@ static int __init construct_domU(struct domain *d,
>   {
>       struct kernel_info kinfo = {};
>       int rc;
> -    u64 mem;
> +    u64 mem, static_mem_size = 0;
> +    const struct dt_property *prop;
> +    u32 static_mem_len;
> +    bool static_mem = false;
> +
> +    /*
> +     * Guest RAM could be of static memory from static allocation,
> +     * which will be specified through "xen,static-mem" property.
> +     */
> +    prop = dt_find_property(node, "xen,static-mem", &static_mem_len);
> +    if ( prop )
> +    {
> +        const __be32 *cell;
> +        u32 addr_cells = 2, size_cells = 2, reg_cells;
> +        u64 start, size;
> +        int i, banks;
> +        static_mem = true;
> +
> +        dt_property_read_u32(node, "#address-cells", &addr_cells);
> +        dt_property_read_u32(node, "#size-cells", &size_cells);
> +        BUG_ON(size_cells > 2 || addr_cells > 2);
> +        reg_cells = addr_cells + size_cells;
> +
> +        cell = (const __be32 *)prop->value;
> +        banks = static_mem_len / (reg_cells * sizeof (u32));
> +        BUG_ON(banks > NR_MEM_BANKS);
> +
> +        for ( i = 0; i < banks; i++ )
> +        {
> +            device_tree_get_reg(&cell, addr_cells, size_cells, &start, &size);
> +            d->arch.static_mem.bank[i].start = start;
> +            d->arch.static_mem.bank[i].size = size;
> +            static_mem_size += size;
> +
> +            printk(XENLOG_INFO
> +                    "Static Memory Bank[%d] for Domain %pd:"
> +                    "0x%"PRIx64"-0x%"PRIx64"\n",
> +                    i, d,
> +                    d->arch.static_mem.bank[i].start,
> +                    d->arch.static_mem.bank[i].start +
> +                    d->arch.static_mem.bank[i].size);
> +        }
> +        d->arch.static_mem.nr_banks = banks;
> +    }

Could we allocate the memory as we parse?

>   
>       rc = dt_property_read_u64(node, "memory", &mem);
> -    if ( !rc )
> +    if ( !static_mem && !rc )
>       {
>           printk("Error building DomU: cannot read \"memory\" property\n");
>           return -EINVAL;
>       }
> -    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
> +    kinfo.unassigned_mem = static_mem ? static_mem_size : (paddr_t)mem * SZ_1K;
>   
> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
> +            d->max_vcpus, (kinfo.unassigned_mem) >> 10);
>   
>       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>   
> @@ -2452,7 +2496,11 @@ static int __init construct_domU(struct domain *d,
>       /* type must be set before allocate memory */
>       d->arch.type = kinfo.type;
>   #endif
> -    allocate_memory(d, &kinfo);
> +    if ( static_mem )
> +        /* allocate_static_memory(d, &kinfo); */
> +        BUG();
> +    else
> +        allocate_memory(d, &kinfo);
>   
>       rc = prepare_dtb_domU(d, &kinfo);
>       if ( rc < 0 )
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index c9277b5c6d..81b8eb453c 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -10,6 +10,7 @@
>   #include <asm/gic.h>
>   #include <asm/vgic.h>
>   #include <asm/vpl011.h>
> +#include <asm/setup.h>
>   #include <public/hvm/params.h>
>   
>   struct hvm_domain
> @@ -89,6 +90,8 @@ struct arch_domain
>   #ifdef CONFIG_TEE
>       void *tee;
>   #endif
> +
> +    struct meminfo static_mem;
>   }  __cacheline_aligned;
>   
>   struct arch_vcpu
> 

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18  8:57     ` Penny Zheng
  2021-05-18 11:23       ` Jan Beulich
@ 2021-05-18 12:13       ` Julien Grall
  2021-05-19  7:52         ` Penny Zheng
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-18 12:13 UTC (permalink / raw)
  To: Penny Zheng, Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini

Hi Penny,

On 18/05/2021 09:57, Penny Zheng wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:35 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>       {
>>>           ASSERT(page_get_owner(&pg[i]) == NULL);
>>>           page_set_owner(&pg[i], d);
>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
>> */
>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>> +            page_set_reserved_owner(&pg[i], d);
>>
>> Now this is puzzling: What's the point of setting two owner fields to the same
>> value? I also don't recall you having introduced
>> page_set_reserved_owner() for x86, so how is this going to build there?
>>
> 
> Thanks for pointing out that it will fail on x86.
> As for the same value, sure, I shall change it to domid_t domid to record its reserved owner.
> Only domid is enough for differentiate.
> And even when domain get rebooted, struct domain may be destroyed, but domid will stays
> The same.
> Major user cases for domain on static allocation are referring to the whole system are static,
> No runtime creation.

One may want to have static memory yet doesn't care about the domid. So 
I am not in favor to restrict about the domid unless there is no other way.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-18  8:58   ` Julien Grall
@ 2021-05-19  2:22     ` Penny Zheng
  2021-05-19 18:27       ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  2:22 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 4:58 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > Static Allocation refers to system or sub-system(domains) for which
> > memory areas are pre-defined by configuration using physical address
> ranges.
> > Those pre-defined memory, -- Static Momery, as parts of RAM reserved
> > in the
> 
> s/Momery/Memory/

Oh, thx!

> 
> > beginning, shall never go to heap allocator or boot allocator for any use.
> >
> > Domains on Static Allocation is supported through device tree property
> > `xen,static-mem` specifying reserved RAM banks as this domain's guest
> RAM.
> > By default, they shall be mapped to the fixed guest RAM address
> > `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> >
> > This patch introduces this new `xen,static-mem` property to define
> > static memory nodes in device tree file.
> > This patch also documents and parses this new attribute at boot time
> > and stores related info in static_mem for later initialization.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   docs/misc/arm/device-tree/booting.txt | 33 +++++++++++++++++
> >   xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++++++++
> >   xen/include/asm-arm/setup.h           |  2 ++
> >   3 files changed, 87 insertions(+)
> >
> > diff --git a/docs/misc/arm/device-tree/booting.txt
> > b/docs/misc/arm/device-tree/booting.txt
> > index 5243bc7fd3..d209149d71 100644
> > --- a/docs/misc/arm/device-tree/booting.txt
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -268,3 +268,36 @@ The DTB fragment is loaded at 0xc000000 in the
> example above. It should
> >   follow the convention explained in docs/misc/arm/passthrough.txt. The
> >   DTB fragment will be added to the guest device tree, so that the guest
> >   kernel will be able to discover the device.
> > +
> > +
> > +Static Allocation
> > +=============
> > +
> > +Static Allocation refers to system or sub-system(domains) for which
> > +memory areas are pre-defined by configuration using physical address
> ranges.
> > +Those pre-defined memory, -- Static Momery, as parts of RAM reserved
> > +in the
> 
> s/Momery/Memory/
> 

Oh, thx

> > +beginning, shall never go to heap allocator or boot allocator for any use.
> > +
> > +Domains on Static Allocation is supported through device tree
> > +property `xen,static-mem` specifying reserved RAM banks as this domain's
> guest RAM.
> 
> I would suggest to use "physical RAM" when you refer to the host memory.
> 
> > +By default, they shall be mapped to the fixed guest RAM address
> > +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> 
> There are a few bits that needs to clarified or part of the description:
>    1) "By default" suggests there is an alternative possibility.
> However, I don't see any.
>    2) Will the first region of xen,static-mem be mapped to GUEST_RAM0_BASE
> and the second to GUEST_RAM1_BASE? What if a third region is specificed?
>    3) We don't guarantee the base address and the size of the banks.
> Wouldn't it be better to let the admin select the region he/she wants?
>    4) How do you determine the number of cells for the address and the size?
> 

The specific implementation on this part could be traced to the last commit
https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-11-penny.zheng@arm.com/

It will exhaust the GUEST_RAM0_SIZE, then seek to the GUEST_RAM1_BASE.
GUEST_RAM0 may take up several regions.

Yes, I may add the 1:1 direct-map scenario here to explain the alternative possibility

For the third point, are you suggesting that we could provide an option that let user
also define guest memory base address/size?

I'm confused on the fourth point, you mean the address cell and size cell for xen,static-mem = <...>?
It will be consistent with the ones defined in the parent node, domUx.

> > +Static Allocation is only supported on AArch64 for now.
> 
> The code doesn't seem to be AArch64 specific. So why can't this be used for
> 32-bit Arm?
> 

True, we have plans to make it also workable in AArch32 in the future.
Because we considered XEN on cortex-R52.

> > +
> > +The dtb property should look like as follows:
> > +
> > +        chosen {
> > +            domU1 {
> > +                compatible = "xen,domain";
> > +                #address-cells = <0x2>;
> > +                #size-cells = <0x2>;
> > +                cpus = <2>;
> > +                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> > +
> > +                ...
> > +            };
> > +        };
> > +
> > +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of
> > +512MB size
> 
> Do you mean "DomU1 will have a static memory of 512MB reserved from the
> physical address..."?
>

Yes, yes. You phrase it more clearly, thx
 
> > +as guest RAM.
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c index
> > dcff512648..e9f14e6a44 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -327,6 +327,55 @@ static void __init process_chosen_node(const void
> *fdt, int node,
> >       add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
> >   }
> >
> > +static int __init process_static_memory(const void *fdt, int node,
> > +                                        const char *name,
> > +                                        u32 address_cells, u32 size_cells,
> > +                                        void *data) {
> > +    int i;
> > +    int banks;
> > +    const __be32 *cell;
> > +    paddr_t start, size;
> > +    u32 reg_cells = address_cells + size_cells;
> > +    struct meminfo *mem = data;
> > +    const struct fdt_property *prop;
> > +
> > +    if ( address_cells < 1 || size_cells < 1 )
> > +    {
> > +        printk("fdt: invalid #address-cells or #size-cells for static memory");
> > +        return -EINVAL;
> > +    }
> > +
> > +    /*
> > +     * Check if static memory property belongs to a specific domain, that is,
> > +     * its node `domUx` has compatible string "xen,domain".
> > +     */
> > +    if ( fdt_node_check_compatible(fdt, node, "xen,domain") != 0 )
> > +        printk("xen,static-mem property can only locate under /domUx
> > + node.\n");
> > +
> > +    prop = fdt_get_property(fdt, node, name, NULL);
> > +    if ( !prop )
> > +        return -ENOENT;
> > +
> > +    cell = (const __be32 *)prop->data;
> > +    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
> > +
> > +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
> > +    {
> > +        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> > +        /* Some DT may describe empty bank, ignore them */
> > +        if ( !size )
> > +            continue;
> > +        mem->bank[mem->nr_banks].start = start;
> > +        mem->bank[mem->nr_banks].size = size;
> > +        mem->nr_banks++;
> > +    }
> > +
> > +    if ( i < banks )
> > +        return -ENOSPC;
> > +    return 0;
> > +}
> > +
> >   static int __init early_scan_node(const void *fdt,
> >                                     int node, const char *name, int depth,
> >                                     u32 address_cells, u32 size_cells,
> > @@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
> >           process_multiboot_node(fdt, node, name, address_cells, size_cells);
> >       else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
> >           process_chosen_node(fdt, node, name, address_cells,
> > size_cells);
> > +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem",
> NULL) )
> > +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> > +                              size_cells, &bootinfo.static_mem);
> 
> I am a bit concerned to add yet another method to parse the DT and all the
> extra code it will add like in patch #2.
> 
>  From the host PoV, they are memory reserved for a specific purpose.
> Would it be possible to consider the reserve-memory binding for that
> purpose? This will happen outside of chosen, but we could use a phandle to
> refer the region.
> 

Correct me if I understand wrongly, do you mean what this device tree snippet looks like:

reserved-memory {
   #address-cells = <2>;
   #size-cells = <2>;
   ranges;
 
   static-mem-domU1: static-mem@0x30000000{
      reg = <0x0 0x30000000 0x0 0x20000000>;
   };
};

Chosen {
 ...
domU1 {
   xen,static-mem = <&static-mem-domU1>;
};
...
};

> >
> >       if ( rc < 0 )
> >           printk("fdt: node `%s': parsing failed\n", name); diff --git
> > a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index
> > 5283244015..5e9f296760 100644
> > --- a/xen/include/asm-arm/setup.h
> > +++ b/xen/include/asm-arm/setup.h
> > @@ -74,6 +74,8 @@ struct bootinfo {
> >   #ifdef CONFIG_ACPI
> >       struct meminfo acpi;
> >   #endif
> > +    /* Static Memory */
> > +    struct meminfo static_mem;
> >   };
> >
> >   extern struct bootinfo bootinfo;
> >
> 
> Cheers,
> 
> --
> Julien Grall

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-18  9:45   ` Julien Grall
@ 2021-05-19  3:16     ` Penny Zheng
  2021-05-19  9:49       ` Jan Beulich
  2021-05-19 19:46       ` Julien Grall
  0 siblings, 2 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  3:16 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 5:46 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> 
> 
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > In order to differentiate pages of static memory, from those allocated
> > from heap, this patch introduces a new page flag PGC_reserved to tell.
> >
> > New struct reserved in struct page_info is to describe reserved page
> > info, that is, which specific domain this page is reserved to. >
> > Helper page_get_reserved_owner and page_set_reserved_owner are
> > designated to get/set reserved page's owner.
> >
> > Struct domain is enlarged to more than PAGE_SIZE, due to
> > newly-imported struct reserved in struct page_info.
> 
> struct domain may embed a pointer to a struct page_info but never directly
> embed the structure. So can you clarify what you mean?
> 
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/include/asm-arm/mm.h | 16 +++++++++++++++-
> >   1 file changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index
> > 0b7de3102e..d8922fd5db 100644
> > --- a/xen/include/asm-arm/mm.h
> > +++ b/xen/include/asm-arm/mm.h
> > @@ -88,7 +88,15 @@ struct page_info
> >            */
> >           u32 tlbflush_timestamp;
> >       };
> > -    u64 pad;
> > +
> > +    /* Page is reserved. */
> > +    struct {
> > +        /*
> > +         * Reserved Owner of this page,
> > +         * if this page is reserved to a specific domain.
> > +         */
> > +        struct domain *domain;
> > +    } reserved;
> 
> The space in page_info is quite tight, so I would like to avoid introducing new
> fields unless we can't get away from it.
> 
> In this case, it is not clear why we need to differentiate the "Owner"
> vs the "Reserved Owner". It might be clearer if this change is folded in the
> first user of the field.
> 
> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
> 

Yeah, I may delete this change. I imported this change as considering the functionality
of rebooting domain on static allocation. 

A little more discussion on rebooting domain on static allocation. 
Considering the major user cases for domain on static allocation
are system has a total pre-defined, static behavior all the time. No domain allocation
on runtime, while there still exists domain rebooting.

And when rebooting domain on static allocation, all these reserved pages could
not go back to heap when freeing them.  So I am considering to use one global
`struct page_info*[DOMID]` value to store.
 
As Jan suggested, when domain get rebooted, struct domain will not exist anymore.
But I think DOMID info could last.

> >   };
> >
> >   #define PG_shift(idx)   (BITS_PER_LONG - (idx))
> > @@ -108,6 +116,9 @@ struct page_info
> >     /* Page is Xen heap? */
> >   #define _PGC_xen_heap     PG_shift(2)
> >   #define PGC_xen_heap      PG_mask(1, 2)
> > +  /* Page is reserved, referring static memory */
> 
> I would drop the second part of the sentence because the flag could be used
> for other purpose. One example is reserved memory when Live Updating.
> 

Sure, I will drop it.

> > +#define _PGC_reserved     PG_shift(3)
> > +#define PGC_reserved      PG_mask(1, 3)
> >   /* ... */
> >   /* Page is broken? */
> >   #define _PGC_broken       PG_shift(7)
> > @@ -161,6 +172,9 @@ extern unsigned long xenheap_base_pdx;
> >   #define page_get_owner(_p)    (_p)->v.inuse.domain
> >   #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
> >
> > +#define page_get_reserved_owner(_p)    (_p)->reserved.domain
> > +#define page_set_reserved_owner(_p,_d) ((_p)->reserved.domain = (_d))
> > +
> >   #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
> >
> >   #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
> >
> 
> Cheers,
> 
> --
> Julien Grall


Cheers,

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18 10:00   ` Julien Grall
  2021-05-18 10:01     ` Julien Grall
@ 2021-05-19  5:02     ` Penny Zheng
  1 sibling, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  5:02 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:01 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > This patch introduces static memory initialization, during system RAM boot
> up.
> >
> > New func init_staticmem_pages is the equivalent of init_heap_pages,
> > responsible for static memory initialization.
> >
> > Helper func free_staticmem_pages is the equivalent of free_heap_pages,
> > to free nr_pfns pages of static memory.
> > For each page, it includes the following steps:
> > 1. change page state from in-use(also initialization state) to free
> > state and grant PGC_reserved.
> > 2. set its owner NULL and make sure this page is not a guest frame any
> > more 3. follow the same cache coherency policy in free_heap_pages 4.
> > scrub the page in need
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/setup.c    |  2 ++
> >   xen/common/page_alloc.c | 70
> +++++++++++++++++++++++++++++++++++++++++
> >   xen/include/xen/mm.h    |  3 ++
> >   3 files changed, 75 insertions(+)
> >
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index
> > 444dbbd676..f80162c478 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -818,6 +818,8 @@ static void __init setup_mm(void)
> >
> >       setup_frametable_mappings(ram_start, ram_end);
> >       max_page = PFN_DOWN(ram_end);
> > +
> > +    init_staticmem_pages();
> >   }
> >   #endif
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > ace6333c18..58b53c6ac2 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -150,6 +150,9 @@
> >   #define p2m_pod_offline_or_broken_hit(pg) 0
> >   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
> >   #endif
> > +#ifdef CONFIG_ARM_64
> > +#include <asm/setup.h>
> > +#endif
> >
> >   /*
> >    * Comma-separated list of hexadecimal page numbers containing bad
> bytes.
> > @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >       spin_unlock(&heap_lock);
> >   }
> >
> > +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> > +memory. */ static void free_staticmem_pages(struct page_info *pg,
> > +unsigned long nr_pfns,
> 
> This function is dealing with MFNs, so the second parameter should be called
> nr_mfns.
> 

Agreed, thx.

> > +                                 bool need_scrub) {
> > +    mfn_t mfn = page_to_mfn(pg);
> > +    int i;
> > +
> > +    for ( i = 0; i < nr_pfns; i++ )
> > +    {
> > +        switch ( pg[i].count_info & PGC_state )
> > +        {
> > +        case PGC_state_inuse:
> > +            BUG_ON(pg[i].count_info & PGC_broken);
> > +            /* Make it free and reserved. */
> > +            pg[i].count_info = PGC_state_free | PGC_reserved;
> > +            break;
> > +
> > +        default:
> > +            printk(XENLOG_ERR
> > +                   "Page state shall be only in PGC_state_inuse. "
> > +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx
> tlbflush_timestamp=%#x.\n",
> > +                   i, mfn_x(mfn) + i,
> > +                   pg[i].count_info,
> > +                   pg[i].tlbflush_timestamp);
> > +            BUG();
> > +        }
> > +
> > +        /*
> > +         * Follow the same cache coherence scheme in free_heap_pages.
> > +         * If a page has no owner it will need no safety TLB flush.
> > +         */
> > +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> > +        if ( pg[i].u.free.need_tlbflush )
> > +            page_set_tlbflush_timestamp(&pg[i]);
> > +
> > +        /* This page is not a guest frame any more. */
> > +        page_set_owner(&pg[i], NULL);
> > +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> 
> The code looks quite similar to free_heap_pages(). Could we possibly create
> an helper which can be called from both?
> 

Ok, I will extract the common code here and there.
 
> > +
> > +        if ( need_scrub )
> > +            scrub_one_page(&pg[i]);
> 
> So the scrubbing will be synchronous. Is it what we want?
> 
> You also seem to miss the flush the call to flush_page_to_ram().
> 

Yeah, right now, it is synchronous.
I guess that it is not an optimal choice, only a workable one right now.
I'm trying to borrow some asynchronous scrubbing ideas from heap in the future.

Yes! If we are doing synchronous scrubbing, we need to flush_page_to_ram(). Thx.

> > +    }
> > +}
> >
> >   /*
> >    * Following rules applied for page offline:
> > @@ -1828,6 +1874,30 @@ static void init_heap_pages(
> >       }
> >   }
> >
> > +/* Equivalent of init_heap_pages to do static memory initialization
> > +*/ void __init init_staticmem_pages(void) {
> > +    int bank;
> > +
> > +    /*
> > +     * TODO: Considering NUMA-support scenario.
> > +     */
> > +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
> 
> bootinfo is arm specific, so this code should live in arch/arm rather than
> common/.
> 

Yes, I'm considering to create a new CONFIG_STATIC_MEM to include all
static-memory-related functions.

> > +    {
> > +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> > +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> > +        paddr_t bank_end = bank_start + bank_size;
> > +
> > +        bank_start = round_pgup(bank_start);
> > +        bank_end = round_pgdown(bank_end);
> > +        if ( bank_end <= bank_start )
> > +            return;
> > +
> > +        free_staticmem_pages(maddr_to_page(bank_start),
> > +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> > +    }
> > +}
> > +
> >   static unsigned long avail_heap_pages(
> >       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
> >   {
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h index
> > 667f9dac83..8b1a2207b2 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
> >   } while ( false )
> >   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
> >
> > +/* Static Memory */
> > +void init_staticmem_pages(void);
> > +
> >   /* Map machine page range in Xen virtual address space. */
> >   int map_pages_to_xen(
> >       unsigned long virt,
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-18 10:15   ` Julien Grall
@ 2021-05-19  5:23     ` Penny Zheng
  2021-05-24 10:10       ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  5:23 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:15 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/common/page_alloc.c | 64
> +++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >       return pg;
> >   }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory  */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> 
> This wants to be nr_mfns.
> 
> > +                                                paddr_t start,
> 
> I would prefer if this helper takes an mfn_t in parameter.
> 

Sure, I will change both.

> > +                                                unsigned int
> > +memflags) {
> > +    bool need_tlbflush = false;
> > +    uint32_t tlbflush_timestamp = 0;
> > +    unsigned int i;
> > +    struct page_info *pg;
> > +    mfn_t s_mfn;
> > +
> > +    /* For now, it only supports allocating at specified address. */
> > +    s_mfn = maddr_to_mfn(start);
> > +    pg = mfn_to_page(s_mfn);
> 
> We should avoid to make the assumption the start address will be valid.
> So you want to call mfn_valid() first.
> 
> At the same time, there is no guarantee that if the first page is valid, then the
> next nr_pfns will be. So the check should be performed for all of them.
> 

Ok. I'll do validation check on both of them.

> > +    if ( !pg )
> > +        return NULL;
> > +
> > +    for ( i = 0; i < nr_pfns; i++)
> > +    {
> > +        /*
> > +         * Reference count must continuously be zero for free pages
> > +         * of static memory(PGC_reserved).
> > +         */
> > +        ASSERT(pg[i].count_info & PGC_reserved);
> > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +        {
> > +            printk(XENLOG_ERR
> > +                    "Reference count must continuously be zero for free pages"
> > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +                    i, mfn_x(page_to_mfn(pg + i)),
> > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> > +            BUG();
> 
> So we would crash Xen if the caller pass a wrong range. Is it what we want?
> 
> Also, who is going to prevent concurrent access?
> 

Sure, to fix concurrency issue, I may need to add one spinlock like
`static DEFINE_SPINLOCK(staticmem_lock);`

In current alloc_heap_pages, it will do similar check, that pages in free state MUST have
zero reference count. I guess, if condition not met, there is no need to proceed.

> > +        }
> > +
> > +        if ( !(memflags & MEMF_no_tlbflush) )
> > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +                                &tlbflush_timestamp);
> > +
> > +        /*
> > +         * Reserve flag PGC_reserved and change page state
> > +         * to PGC_state_inuse.
> > +         */
> > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> PGC_state_inuse;
> > +        /* Initialise fields which have other uses for free pages. */
> > +        pg[i].u.inuse.type_info = 0;
> > +        page_set_owner(&pg[i], NULL);
> > +
> > +        /*
> > +         * Ensure cache and RAM are consistent for platforms where the
> > +         * guest can control its own visibility of/through the cache.
> > +         */
> > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +                            !(memflags & MEMF_no_icache_flush));
> > +    }
> > +
> > +    if ( need_tlbflush )
> > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > +
> > +    return pg;
> > +}
> > +
> >   /* Remove any offlined page in the buddy pointed to by head. */
> >   static int reserve_offlined_page(struct page_info *head)
> >   {
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers,

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility
  2021-05-18 10:20   ` Julien Grall
@ 2021-05-19  5:35     ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  5:35 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:21 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages
> for better compatibility
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > Function parameter order in assign_pages is always used as 1ul <<
> > order, referring to 2@order pages.
> >
> > Now, for better compatibility with new static memory, order shall be
> > replaced with nr_pfns pointing to page count with no constraint, like
> > 250MB.
> 
> We have similar requirements for LiveUpdate because are preserving the
> memory with a number of pages (rather than a power-of-two). With the
> current interface would be need to split the range in a power of 2 which is a
> bit of pain.
> 
> However, I think I would prefer if we introduce a new interface (maybe
> assign_pages_nr()) rather than change the meaning of the field. This is for
> two reasons:
>    1) We limit the risk to make mistake when backporting a patch touch
> assign_pages().
>    2) Adding (1UL << order) for pretty much all the caller is not nice.
> 

Ok. I will create a new interface assign_pages_nr(), and let assign_pages to call it with
2@order.

> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18 10:30   ` Julien Grall
@ 2021-05-19  6:03     ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  6:03 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:30 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> Hi Penny,
> 
> Title: s/intruduce/introduce/
> 

Thx~

> On 18/05/2021 06:21, Penny Zheng wrote:
> > alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
> > static mmeory, and it is to allocate nr_pfns pages of static memory
> > and assign them to one specific domain.
> >
> > It uses alloc_staticmen_pages to get nr_pages pages of static memory,
> > then on success, it will use assign_pages to assign those pages to one
> > specific domain, including using page_set_reserved_owner to set its
> > reserved domain owner.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/common/page_alloc.c | 53
> +++++++++++++++++++++++++++++++++++++++++
> >   xen/include/xen/mm.h    |  4 ++++
> >   2 files changed, 57 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 0eb9f22a00..f1f1296a61 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2447,6 +2447,9 @@ int assign_pages(
> >       {
> >           ASSERT(page_get_owner(&pg[i]) == NULL);
> >           page_set_owner(&pg[i], d);
> > +        /* use page_set_reserved_owner to set its reserved domain owner.
> */
> > +        if ( (pg[i].count_info & PGC_reserved) )
> > +            page_set_reserved_owner(&pg[i], d);
> 
> I have skimmed through the rest of the series and couldn't find anyone
> calling page_get_reserved_owner(). The value is also going to be the exact
> same as page_set_owner().
> 
> So why do we need it?
> 

In my first intent, This two helper page_get_reserved_owner/ page_set_reserved_owner
and the new field `reserved` in page_info are all for rebooting domain on static allocation. 

I was considering that, when implementing rebooting domain on static allocation, memory
will be relinquished and right now, all freed back to heap, which is not suitable for static memory here.
` relinquish_memory(d, &d->page_list)  --> put_page -->  free_domheap_page`

For pages in PGC_reserved, now, I am considering that, other than giving it back to heap, maybe creating
a new global `struct page_info*[DOMID]` value to hold.

So it is better to have a new field in struct page_info, as follows, to hold such info.

/* Page is reserved. */
struct {
    /*
     * Reserved Owner of this page,
     * if this page is reserved to a specific domain.
     */
    domid_t reserved_owner;
} reserved;

But this patch Serie is not going to include this feature, and I will delete related helpers and values.

> >           smp_wmb(); /* Domain pointer must be visible before updating
> refcnt. */
> >           pg[i].count_info =
> >               (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> 
> This will clobber PGC_reserved.
> 

related changes have been included into the commit of "0008-xen-arm-introduce-reserved_page_list.patch".
 
> > @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >       return pg;
> >   }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static
> > +memory,
> 
> s/nr_pfns/nr_mfns/
> 

Sure.

> > + * then assign them to one specific domain #d.
> > + * It is the equivalent of alloc_domheap_pages for static memory.
> > + */
> > +struct page_info *alloc_domstatic_pages(
> > +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> 
> s/nr_pfns/nf_mfns/. Also, I would the third parameter to be an mfn_t.
> 

Sure.

> > +        unsigned int memflags)
> > +{
> > +    struct page_info *pg = NULL;
> > +    unsigned long dma_size;
> > +
> > +    ASSERT(!in_irq());
> > +
> > +    if ( memflags & MEMF_no_owner )
> > +        memflags |= MEMF_no_refcount;
> > +
> > +    if ( !dma_bitsize )
> > +        memflags &= ~MEMF_no_dma;
> > +    else
> > +    {
> > +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> > +        /* Starting address shall meet the DMA limitation. */
> > +        if ( dma_size && start < dma_size )
> > +            return NULL;
> > +    }
> > +
> > +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> > +    if ( !pg )
> > +        return NULL;
> > +
> > +    if ( d && !(memflags & MEMF_no_owner) )
> > +    {
> > +        if ( memflags & MEMF_no_refcount )
> > +        {
> > +            unsigned long i;
> > +
> > +            for ( i = 0; i < nr_pfns; i++ )
> > +                pg[i].count_info = PGC_extra;
> > +        }
> > +        if ( assign_pages(d, pg, nr_pfns, memflags) )
> > +        {
> > +            free_staticmem_pages(pg, nr_pfns, memflags & MEMF_no_scrub);
> > +            return NULL;
> > +        }
> > +    }
> > +
> > +    return pg;
> > +}
> > +
> >   void free_domheap_pages(struct page_info *pg, unsigned int order)
> >   {
> >       struct domain *d = page_get_owner(pg); diff --git
> > a/xen/include/xen/mm.h b/xen/include/xen/mm.h index
> > dcf9daaa46..e45987f0ed 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -111,6 +111,10 @@ unsigned long __must_check
> domain_adjust_tot_pages(struct domain *d,
> >   int domain_set_outstanding_pages(struct domain *d, unsigned long
> pages);
> >   void get_outstanding_claims(uint64_t *free_pages, uint64_t
> > *outstanding_pages);
> >
> > +/* Static Memory */
> > +struct page_info *alloc_domstatic_pages(struct domain *d,
> > +        unsigned long nr_pfns, paddr_t start, unsigned int memflags);
> > +
> >   /* Domain suballocator. These functions are *not* interrupt-safe.*/
> >   void init_domheap_pages(paddr_t ps, paddr_t pe);
> >   struct page_info *alloc_domheap_pages(
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers,

Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18 11:02   ` Julien Grall
@ 2021-05-19  6:43     ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  6:43 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 7:02 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > Since page_list under struct domain refers to linked pages as gueast
> > RAM
> 
> s/gueast/guest/
> 

Thx~

> > allocated from heap, it should not include reserved pages of static memory.
> 
> You already have PGC_reserved to indicate they are "static memory". So why
> do you need yet another list?
> 
> >
> > The number of PGC_reserved pages assigned to a domain is tracked in a
> > new 'reserved_pages' counter. Also introduce a new reserved_page_list
> > to link pages of static memory. Let page_to_list return
> > reserved_page_list, when flag is PGC_reserved.
> >
> > Later, when domain get destroyed or restarted, those new values will
> > help relinquish memory to proper place, not been given back to heap.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/common/domain.c     | 1 +
> >   xen/common/page_alloc.c | 7 +++++--
> >   xen/include/xen/sched.h | 5 +++++
> >   3 files changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/common/domain.c b/xen/common/domain.c index
> > 6b71c6d6a9..c38afd2969 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -578,6 +578,7 @@ struct domain *domain_create(domid_t domid,
> >       INIT_PAGE_LIST_HEAD(&d->page_list);
> >       INIT_PAGE_LIST_HEAD(&d->extra_page_list);
> >       INIT_PAGE_LIST_HEAD(&d->xenpage_list);
> > +    INIT_PAGE_LIST_HEAD(&d->reserved_page_list);
> >
> >       spin_lock_init(&d->node_affinity_lock);
> >       d->node_affinity = NODE_MASK_ALL; diff --git
> > a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > f1f1296a61..e3f07ec6c5 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2410,7 +2410,7 @@ int assign_pages(
> >
> >           for ( i = 0; i < nr_pfns; i++ )
> >           {
> > -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> > +            ASSERT(!(pg[i].count_info & ~(PGC_extra |
> > + PGC_reserved)));
> I think this change belongs to the previous patch.
> 

Ok. It will be re-organized into previous commit of
"xen/arm: intruduce alloc_domstatic_pages"

> >               if ( pg[i].count_info & PGC_extra )
> >                   extra_pages++;
> >           }
> > @@ -2439,6 +2439,9 @@ int assign_pages(
> >           }
> >       }
> >
> > +    if ( pg[0].count_info & PGC_reserved )
> > +        d->reserved_pages += nr_pfns;
> 
> reserved_pages doesn't seem to be ever read or decremented. So why do
> you need it?
>

Yeah, I may delete it from this Patch Serie.

Like I addressed in before commits:

"when implementing rebooting domain on static allocation, memory will be relinquished
and right now, all shall be freed back to heap, which is not suitable for static memory here.
` relinquish_memory(d, &d->page_list)  --> put_page -->  free_domheap_page`

For pages in PGC_reserved, now, I am considering that, other than giving it back to heap,
maybe creating a new global `struct page_info*[DOMID]` value to hold.

So it is better to have a new field in struct page_info, as follows, to hold such info.

/* Page is reserved. */
struct {
    /*
     * Reserved Owner of this page,
     * if this page is reserved to a specific domain.
     */
    domid_t reserved_owner;
} reserved;
" 

So I will delete here, and re-import when implementing rebooting domain on static allocation.

> > +
> >       if ( !(memflags & MEMF_no_refcount) &&
> >            unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
> >           get_knownalive_domain(d);
> > @@ -2452,7 +2455,7 @@ int assign_pages(
> >               page_set_reserved_owner(&pg[i], d);
> >           smp_wmb(); /* Domain pointer must be visible before updating
> refcnt. */
> >           pg[i].count_info =
> > -            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> > +            (pg[i].count_info & (PGC_extra | PGC_reserved)) |
> > + PGC_allocated | 1;
> 
> Same here.

I'll re-organize it to the previous commit.

> 
> >           page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
> >       }
> >
> > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index
> > 3982167144..b6333ed8bb 100644
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -368,6 +368,7 @@ struct domain
> >       struct page_list_head page_list;  /* linked list */
> >       struct page_list_head extra_page_list; /* linked list (size extra_pages) */
> >       struct page_list_head xenpage_list; /* linked list (size
> > xenheap_pages) */
> > +    struct page_list_head reserved_page_list; /* linked list (size
> > + reserved pages) */
> >
> >       /*
> >        * This field should only be directly accessed by
> > domain_adjust_tot_pages() @@ -379,6 +380,7 @@ struct domain
> >       unsigned int     outstanding_pages; /* pages claimed but not possessed
> */
> >       unsigned int     max_pages;         /* maximum value for
> domain_tot_pages() */
> >       unsigned int     extra_pages;       /* pages not included in
> domain_tot_pages() */
> > +    unsigned int     reserved_pages;    /* pages of static memory */
> >       atomic_t         shr_pages;         /* shared pages */
> >       atomic_t         paged_pages;       /* paged-out pages */
> >
> > @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
> >       if ( pg->count_info & PGC_extra )
> >           return &d->extra_page_list;
> >
> > +    if ( pg->count_info & PGC_reserved )
> > +        return &d->reserved_page_list;
> > +
> >       return &d->page_list;
> >   }
> >
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 08/10] xen/arm: introduce reserved_page_list
  2021-05-18 11:24       ` Jan Beulich
@ 2021-05-19  6:46         ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  6:46 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan 

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 7:25 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
> 
> On 18.05.2021 10:38, Penny Zheng wrote:
> > Hi Jan
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:39 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-
> devel@lists.xenproject.org;
> >> sstabellini@kernel.org; julien@xen.org
> >> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> Since page_list under struct domain refers to linked pages as gueast
> >>> RAM allocated from heap, it should not include reserved pages of
> >>> static
> >> memory.
> >>>
> >>> The number of PGC_reserved pages assigned to a domain is tracked in
> >>> a new 'reserved_pages' counter. Also introduce a new
> >>> reserved_page_list to link pages of static memory. Let page_to_list
> >>> return reserved_page_list, when flag is PGC_reserved.
> >>>
> >>> Later, when domain get destroyed or restarted, those new values will
> >>> help relinquish memory to proper place, not been given back to heap.
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>> ---
> >>>  xen/common/domain.c     | 1 +
> >>>  xen/common/page_alloc.c | 7 +++++--  xen/include/xen/sched.h | 5
> >>> +++++
> >>>  3 files changed, 11 insertions(+), 2 deletions(-)
> >>
> >> This contradicts the title's prefix: There's no Arm-specific change here at
> all.
> >> But imo the title is correct, and the changes should be Arm-specific.
> >> There's no point having struct domain fields on e.g. x86 which aren't used
> there at all.
> >>
> >
> > Yep, you're right.
> > I'll add ifdefs in the following changes.
> 
> As necessary, please. Moving stuff to Arm-specific files would be preferable.
> 

Sure, I'll add a new CONFIG_STATICMEM to include all related functions and variables. Thx

> Jan

Cheers

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 10/10] xen/arm: introduce allocate_static_memory
  2021-05-18 12:05   ` Julien Grall
@ 2021-05-19  7:27     ` Penny Zheng
  2021-05-19 20:10       ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  7:27 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 8:06 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > This commit introduces allocate_static_memory to allocate static
> > memory as guest RAM for domain on Static Allocation.
> >
> > It uses alloc_domstatic_pages to allocate pre-defined static memory
> > banks for this domain, and uses guest_physmap_add_page to set up P2M
> > table, guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/domain_build.c | 157
> +++++++++++++++++++++++++++++++++++-
> >   1 file changed, 155 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 30b55588b7..9f662313ad 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct
> domain *d,
> >       return true;
> >   }
> >
> > +/*
> > + * #ram_index and #ram_index refer to the index and starting address
> > +of guest
> > + * memory kank stored in kinfo->mem.
> > + * Static memory at #smfn of #tot_size shall be mapped #sgfn, and
> > + * #sgfn will be next guest address to map when returning.
> > + */
> > +static bool __init allocate_static_bank_memory(struct domain *d,
> > +                                               struct kernel_info *kinfo,
> > +                                               int ram_index,
> 
> Please use unsigned.
> 
> > +                                               paddr_t ram_addr,
> > +                                               gfn_t* sgfn,
> 
> I am confused, what is the difference between ram_addr and sgfn?
> 

We need to constructing kinfo->mem(guest RAM banks) here, and
we are indexing in static_mem(physical ram banks). Multiple physical ram
banks consist of one guest ram bank(like, GUEST_RAM0). 

ram_addr  here will either be GUEST_RAM0_BASE, or GUEST_RAM1_BASE,
for now. I kinds struggled in how to name it. And maybe it shall not be a
parameter here.

Maybe I should switch.. case.. on the ram_index, if its 0, its GUEST_RAM0_BASE,
And if its 1, its GUEST_RAM1_BASE.

> > +                                               mfn_t smfn,
> > +                                               paddr_t tot_size) {
> > +    int res;
> > +    struct membank *bank;
> > +    paddr_t _size = tot_size;
> > +
> > +    bank = &kinfo->mem.bank[ram_index];
> > +    bank->start = ram_addr;
> > +    bank->size = bank->size + tot_size;
> > +
> > +    while ( tot_size > 0 )
> > +    {
> > +        unsigned int order = get_allocation_size(tot_size);
> > +
> > +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
> > +        if ( res )
> > +        {
> > +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
> > +            return false;
> > +        }
> > +
> > +        *sgfn = gfn_add(*sgfn, 1UL << order);
> > +        smfn = mfn_add(smfn, 1UL << order);
> > +        tot_size -= (1ULL << (PAGE_SHIFT + order));
> > +    }
> > +
> > +    kinfo->mem.nr_banks = ram_index + 1;
> > +    kinfo->unassigned_mem -= _size;
> > +
> > +    return true;
> > +}
> > +
> >   static void __init allocate_memory(struct domain *d, struct kernel_info
> *kinfo)
> >   {
> >       unsigned int i;
> > @@ -480,6 +524,116 @@ fail:
> >             (unsigned long)kinfo->unassigned_mem >> 10);
> >   }
> >
> > +/* Allocate memory from static memory as RAM for one specific domain
> > +d. */ static void __init allocate_static_memory(struct domain *d,
> > +                                            struct kernel_info
> > +*kinfo) {
> > +    int nr_banks, _banks = 0;
> 
> AFAICT, _banks is the index in the array. I think it would be clearer if it is
> caller 'bank' or 'idx'.
> 

Sure, I’ll use the 'bank' here.

> > +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
> > +    paddr_t bank_start, bank_size;
> > +    gfn_t sgfn;
> > +    mfn_t smfn;
> > +
> > +    kinfo->mem.nr_banks = 0;
> > +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
> > +    nr_banks = d->arch.static_mem.nr_banks;
> > +    ASSERT(nr_banks >= 0);
> > +
> > +    if ( kinfo->unassigned_mem <= 0 )
> > +        goto fail;
> > +
> > +    while ( _banks < nr_banks )
> > +    {
> > +        bank_start = d->arch.static_mem.bank[_banks].start;
> > +        smfn = maddr_to_mfn(bank_start);
> > +        bank_size = d->arch.static_mem.bank[_banks].size;
> 
> The variable name are slightly confusing because it doesn't tell whether this
> is physical are guest RAM. You might want to consider to prefix them with p
> (resp. g) for physical (resp. guest) RAM.

Sure, I'll rename to make it more clearly.

> 
> > +
> > +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start,
> 0) )
> > +        {
> > +            printk(XENLOG_ERR
> > +                    "%pd: cannot allocate static memory"
> > +                    "(0x%"PRIx64" - 0x%"PRIx64")",
> 
> bank_start and bank_size are both paddr_t. So this should be PRIpaddr.

Sure, I'll change

> 
> > +                    d, bank_start, bank_start + bank_size);
> > +            goto fail;
> > +        }
> > +
> > +        /*
> > +         * By default, it shall be mapped to the fixed guest RAM address
> > +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> > +         * Starting from RAM0(GUEST_RAM0_BASE).
> > +         */
> 
> Ok. So you are first trying to exhaust the guest bank 0 and then moved to
> bank 1. This wasn't entirely clear from the design document.
> 
> I am fine with that, but in this case, the developper should not need to know
> that (in fact this is not part of the ABI).
> 
> Regarding this code, I am a bit concerned about the scalability if we introduce
> a second bank.
> 
> Can we have an array of the possible guest banks and increment the index
> when exhausting the current bank?
> 

Correct me if I understand wrongly, 

What you suggest here is that we make an array of guest banks, right now, including
GUEST_RAM0 and GUEST_RAM1. And if later, adding more guest banks, it will not
bring scalability problem here, right?


> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18 12:13       ` Julien Grall
@ 2021-05-19  7:52         ` Penny Zheng
  2021-05-19 20:01           ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  7:52 UTC (permalink / raw)
  To: Julien Grall, Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 8:13 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> Hi Penny,
> 
> On 18/05/2021 09:57, Penny Zheng wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:35 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-
> devel@lists.xenproject.org;
> >> sstabellini@kernel.org; julien@xen.org
> >> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -2447,6 +2447,9 @@ int assign_pages(
> >>>       {
> >>>           ASSERT(page_get_owner(&pg[i]) == NULL);
> >>>           page_set_owner(&pg[i], d);
> >>> +        /* use page_set_reserved_owner to set its reserved domain owner.
> >> */
> >>> +        if ( (pg[i].count_info & PGC_reserved) )
> >>> +            page_set_reserved_owner(&pg[i], d);
> >>
> >> Now this is puzzling: What's the point of setting two owner fields to
> >> the same value? I also don't recall you having introduced
> >> page_set_reserved_owner() for x86, so how is this going to build there?
> >>
> >
> > Thanks for pointing out that it will fail on x86.
> > As for the same value, sure, I shall change it to domid_t domid to record its
> reserved owner.
> > Only domid is enough for differentiate.
> > And even when domain get rebooted, struct domain may be destroyed, but
> > domid will stays The same.
> > Major user cases for domain on static allocation are referring to the
> > whole system are static, No runtime creation.
> 
> One may want to have static memory yet doesn't care about the domid. So I
> am not in favor to restrict about the domid unless there is no other way.
> 

The user case you bring up here is that static memory pool? 

Right now, the user cases are mostly restricted to static system.
If we bring runtime allocation, `xl` here, it will add a lot more complexity. 
But if the system has static behavior, the domid is also static. 

On rebooting domain from static memory pool, it brings up more discussion, like
do we intend to give the memory back to static memory pool when rebooting,
if so, ram could be allocated from different place compared with the previous one.
And it kinds destroys system static behavior. 

or not giving back, just store it in global variable `struct page_info *[DOMID]` like the
others. 

> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction
  2021-05-18 12:09   ` Julien Grall
@ 2021-05-19  7:58     ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-19  7:58 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 8:09 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during
> domain construction
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > This commit parses `xen,static-mem` device tree property, to acquire
> > static memory info reserved for this domain, when constructing domain
> > during boot-up.
> >
> > Related info shall be stored in new static_mem value under per domain
> > struct arch_domain.
> 
> So far, this seems to only be used during boot. So can't this be kept in the
> kinfo structure?
> 

Sure, I'll store it in kinfo

> >
> > Right now, the implementation of allocate_static_memory is missing,
> > and will be introduced later. It just BUG() out at the moment.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/domain_build.c  | 58
> ++++++++++++++++++++++++++++++++----
> >   xen/include/asm-arm/domain.h |  3 ++
> >   2 files changed, 56 insertions(+), 5 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 282416e74d..30b55588b7 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -2424,17 +2424,61 @@ static int __init construct_domU(struct domain
> *d,
> >   {
> >       struct kernel_info kinfo = {};
> >       int rc;
> > -    u64 mem;
> > +    u64 mem, static_mem_size = 0;
> > +    const struct dt_property *prop;
> > +    u32 static_mem_len;
> > +    bool static_mem = false;
> > +
> > +    /*
> > +     * Guest RAM could be of static memory from static allocation,
> > +     * which will be specified through "xen,static-mem" property.
> > +     */
> > +    prop = dt_find_property(node, "xen,static-mem", &static_mem_len);
> > +    if ( prop )
> > +    {
> > +        const __be32 *cell;
> > +        u32 addr_cells = 2, size_cells = 2, reg_cells;
> > +        u64 start, size;
> > +        int i, banks;
> > +        static_mem = true;
> > +
> > +        dt_property_read_u32(node, "#address-cells", &addr_cells);
> > +        dt_property_read_u32(node, "#size-cells", &size_cells);
> > +        BUG_ON(size_cells > 2 || addr_cells > 2);
> > +        reg_cells = addr_cells + size_cells;
> > +
> > +        cell = (const __be32 *)prop->value;
> > +        banks = static_mem_len / (reg_cells * sizeof (u32));
> > +        BUG_ON(banks > NR_MEM_BANKS);
> > +
> > +        for ( i = 0; i < banks; i++ )
> > +        {
> > +            device_tree_get_reg(&cell, addr_cells, size_cells, &start, &size);
> > +            d->arch.static_mem.bank[i].start = start;
> > +            d->arch.static_mem.bank[i].size = size;
> > +            static_mem_size += size;
> > +
> > +            printk(XENLOG_INFO
> > +                    "Static Memory Bank[%d] for Domain %pd:"
> > +                    "0x%"PRIx64"-0x%"PRIx64"\n",
> > +                    i, d,
> > +                    d->arch.static_mem.bank[i].start,
> > +                    d->arch.static_mem.bank[i].start +
> > +                    d->arch.static_mem.bank[i].size);
> > +        }
> > +        d->arch.static_mem.nr_banks = banks;
> > +    }
> 
> Could we allocate the memory as we parse?
> 

Ok. I'll try.

> >
> >       rc = dt_property_read_u64(node, "memory", &mem);
> > -    if ( !rc )
> > +    if ( !static_mem && !rc )
> >       {
> >           printk("Error building DomU: cannot read \"memory\" property\n");
> >           return -EINVAL;
> >       }
> > -    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
> > +    kinfo.unassigned_mem = static_mem ? static_mem_size :
> > + (paddr_t)mem * SZ_1K;
> >
> > -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d-
> >max_vcpus, mem);
> > +    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
> > +            d->max_vcpus, (kinfo.unassigned_mem) >> 10);
> >
> >       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
> >
> > @@ -2452,7 +2496,11 @@ static int __init construct_domU(struct domain
> *d,
> >       /* type must be set before allocate memory */
> >       d->arch.type = kinfo.type;
> >   #endif
> > -    allocate_memory(d, &kinfo);
> > +    if ( static_mem )
> > +        /* allocate_static_memory(d, &kinfo); */
> > +        BUG();
> > +    else
> > +        allocate_memory(d, &kinfo);
> >
> >       rc = prepare_dtb_domU(d, &kinfo);
> >       if ( rc < 0 )
> > diff --git a/xen/include/asm-arm/domain.h
> > b/xen/include/asm-arm/domain.h index c9277b5c6d..81b8eb453c 100644
> > --- a/xen/include/asm-arm/domain.h
> > +++ b/xen/include/asm-arm/domain.h
> > @@ -10,6 +10,7 @@
> >   #include <asm/gic.h>
> >   #include <asm/vgic.h>
> >   #include <asm/vpl011.h>
> > +#include <asm/setup.h>
> >   #include <public/hvm/params.h>
> >
> >   struct hvm_domain
> > @@ -89,6 +90,8 @@ struct arch_domain
> >   #ifdef CONFIG_TEE
> >       void *tee;
> >   #endif
> > +
> > +    struct meminfo static_mem;
> >   }  __cacheline_aligned;
> >
> >   struct arch_vcpu
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-19  3:16     ` Penny Zheng
@ 2021-05-19  9:49       ` Jan Beulich
  2021-05-19 19:49         ` Julien Grall
  2021-05-19 19:46       ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-19  9:49 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini
  Cc: Bertrand Marquis, Wei Chen, nd, Julien Grall

On 19.05.2021 05:16, Penny Zheng wrote:
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 5:46 PM
>>
>> On 18/05/2021 06:21, Penny Zheng wrote:
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -88,7 +88,15 @@ struct page_info
>>>            */
>>>           u32 tlbflush_timestamp;
>>>       };
>>> -    u64 pad;
>>> +
>>> +    /* Page is reserved. */
>>> +    struct {
>>> +        /*
>>> +         * Reserved Owner of this page,
>>> +         * if this page is reserved to a specific domain.
>>> +         */
>>> +        struct domain *domain;
>>> +    } reserved;
>>
>> The space in page_info is quite tight, so I would like to avoid introducing new
>> fields unless we can't get away from it.
>>
>> In this case, it is not clear why we need to differentiate the "Owner"
>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>> first user of the field.
>>
>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>
> 
> Yeah, I may delete this change. I imported this change as considering the functionality
> of rebooting domain on static allocation. 
> 
> A little more discussion on rebooting domain on static allocation. 
> Considering the major user cases for domain on static allocation
> are system has a total pre-defined, static behavior all the time. No domain allocation
> on runtime, while there still exists domain rebooting.
> 
> And when rebooting domain on static allocation, all these reserved pages could
> not go back to heap when freeing them.  So I am considering to use one global
> `struct page_info*[DOMID]` value to store.

Except such a separate array will consume quite a bit of space for
no real gain: v.free has 32 bits of padding space right now on
Arm64, so there's room for a domid_t there already. Even on Arm32
this could be arranged for, as I doubt "order" needs to be 32 bits
wide.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-19  2:22     ` Penny Zheng
@ 2021-05-19 18:27       ` Julien Grall
  2021-05-20  6:07         ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-19 18:27 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

On 19/05/2021 03:22, Penny Zheng wrote:
> Hi Julien

Hi Penny,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 4:58 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>>> +beginning, shall never go to heap allocator or boot allocator for any use.
>>> +
>>> +Domains on Static Allocation is supported through device tree
>>> +property `xen,static-mem` specifying reserved RAM banks as this domain's
>> guest RAM.
>>
>> I would suggest to use "physical RAM" when you refer to the host memory.
>>
>>> +By default, they shall be mapped to the fixed guest RAM address
>>> +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
>>
>> There are a few bits that needs to clarified or part of the description:
>>     1) "By default" suggests there is an alternative possibility.
>> However, I don't see any.
>>     2) Will the first region of xen,static-mem be mapped to GUEST_RAM0_BASE
>> and the second to GUEST_RAM1_BASE? What if a third region is specificed?
>>     3) We don't guarantee the base address and the size of the banks.
>> Wouldn't it be better to let the admin select the region he/she wants?
>>     4) How do you determine the number of cells for the address and the size?
>>
> 
> The specific implementation on this part could be traced to the last commit
> https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-11-penny.zheng@arm.com/

Right. My point is an admin should not have to read the code in order to 
figure out how the allocation works.

> 
> It will exhaust the GUEST_RAM0_SIZE, then seek to the GUEST_RAM1_BASE.
> GUEST_RAM0 may take up several regions.

Can this be clarified in the commit message.

> Yes, I may add the 1:1 direct-map scenario here to explain the alternative possibility

Ok. So I would suggest to remove "by default" until the implementation 
for direct-map is added.

> For the third point, are you suggesting that we could provide an option that let user
> also define guest memory base address/size?

When reading the document, I originally thought that the first region 
would be mapped to bank1, and then the second region mapped to bank2...

But from your reply, this is not the expected behavior. Therefore, 
please ignore my suggestion here.

> I'm confused on the fourth point, you mean the address cell and size cell for xen,static-mem = <...>?

Yes. This should be clarified in the document. See more below why?

> It will be consistent with the ones defined in the parent node, domUx.
Hmmm... To take the example you provided, the parent would be chosen. 
However, from the example, I would expect the property #{address, 
size}-cells in domU1 to be used. What did I miss?

>>> +Static Allocation is only supported on AArch64 for now.
>>
>> The code doesn't seem to be AArch64 specific. So why can't this be used for
>> 32-bit Arm?
>>
> 
> True, we have plans to make it also workable in AArch32 in the future.
> Because we considered XEN on cortex-R52.

All the code seems to be implemented in arm generic code. So isn't it 
already working?

>>>    static int __init early_scan_node(const void *fdt,
>>>                                      int node, const char *name, int depth,
>>>                                      u32 address_cells, u32 size_cells,
>>> @@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
>>>            process_multiboot_node(fdt, node, name, address_cells, size_cells);
>>>        else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
>>>            process_chosen_node(fdt, node, name, address_cells,
>>> size_cells);
>>> +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem",
>> NULL) )
>>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
>>> +                              size_cells, &bootinfo.static_mem);
>>
>> I am a bit concerned to add yet another method to parse the DT and all the
>> extra code it will add like in patch #2.
>>
>>   From the host PoV, they are memory reserved for a specific purpose.
>> Would it be possible to consider the reserve-memory binding for that
>> purpose? This will happen outside of chosen, but we could use a phandle to
>> refer the region.
>>
> 
> Correct me if I understand wrongly, do you mean what this device tree snippet looks like:

Yes, this is what I had in mind. Although I have one small remark below.


> reserved-memory {
>     #address-cells = <2>;
>     #size-cells = <2>;
>     ranges;
>   
>     static-mem-domU1: static-mem@0x30000000{

I think the node would need to contain a compatible (name to be defined).

>        reg = <0x0 0x30000000 0x0 0x20000000>;
>     };
> };
> 
> Chosen {
>   ...
> domU1 {
>     xen,static-mem = <&static-mem-domU1>;
> };
> ...
> };
> 
>>>
>>>        if ( rc < 0 )
>>>            printk("fdt: node `%s': parsing failed\n", name); diff --git
>>> a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index
>>> 5283244015..5e9f296760 100644
>>> --- a/xen/include/asm-arm/setup.h
>>> +++ b/xen/include/asm-arm/setup.h
>>> @@ -74,6 +74,8 @@ struct bootinfo {
>>>    #ifdef CONFIG_ACPI
>>>        struct meminfo acpi;
>>>    #endif
>>> +    /* Static Memory */
>>> +    struct meminfo static_mem;
>>>    };
>>>
>>>    extern struct bootinfo bootinfo;
>>>
>>
>> Cheers,
>>
>> --
>> Julien Grall

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-19  3:16     ` Penny Zheng
  2021-05-19  9:49       ` Jan Beulich
@ 2021-05-19 19:46       ` Julien Grall
  2021-05-20  6:19         ` Penny Zheng
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-19 19:46 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd



On 19/05/2021 04:16, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 5:46 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>
>>
>>
>> On 18/05/2021 06:21, Penny Zheng wrote:
>>> In order to differentiate pages of static memory, from those allocated
>>> from heap, this patch introduces a new page flag PGC_reserved to tell.
>>>
>>> New struct reserved in struct page_info is to describe reserved page
>>> info, that is, which specific domain this page is reserved to. >
>>> Helper page_get_reserved_owner and page_set_reserved_owner are
>>> designated to get/set reserved page's owner.
>>>
>>> Struct domain is enlarged to more than PAGE_SIZE, due to
>>> newly-imported struct reserved in struct page_info.
>>
>> struct domain may embed a pointer to a struct page_info but never directly
>> embed the structure. So can you clarify what you mean?
>>
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>    xen/include/asm-arm/mm.h | 16 +++++++++++++++-
>>>    1 file changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>> index
>>> 0b7de3102e..d8922fd5db 100644
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -88,7 +88,15 @@ struct page_info
>>>             */
>>>            u32 tlbflush_timestamp;
>>>        };
>>> -    u64 pad;
>>> +
>>> +    /* Page is reserved. */
>>> +    struct {
>>> +        /*
>>> +         * Reserved Owner of this page,
>>> +         * if this page is reserved to a specific domain.
>>> +         */
>>> +        struct domain *domain;
>>> +    } reserved;
>>
>> The space in page_info is quite tight, so I would like to avoid introducing new
>> fields unless we can't get away from it.
>>
>> In this case, it is not clear why we need to differentiate the "Owner"
>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>> first user of the field.
>>
>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>
> 
> Yeah, I may delete this change. I imported this change as considering the functionality
> of rebooting domain on static allocation.
> 
> A little more discussion on rebooting domain on static allocation.
> Considering the major user cases for domain on static allocation
> are system has a total pre-defined, static behavior all the time. No domain allocation
> on runtime, while there still exists domain rebooting.

Hmmm... With this series it is still possible to allocate memory at 
runtime outside of the static allocation (see my comment on the design 
document [1]). So is it meant to be complete?

> 
> And when rebooting domain on static allocation, all these reserved pages could
> not go back to heap when freeing them.  So I am considering to use one global
> `struct page_info*[DOMID]` value to store.
>   
> As Jan suggested, when domain get rebooted, struct domain will not exist anymore.
> But I think DOMID info could last.
You would need to make sure the domid to be reserved. But I am not 
entirely convinced this is necessary here.

When recreating the domain, you need a way to know its configuration. 
Mostly likely this will come from the Device-Tree. At which point, you 
can also find the static region from there.

Cheers,

[1] <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-19  9:49       ` Jan Beulich
@ 2021-05-19 19:49         ` Julien Grall
  2021-05-20  7:05           ` Jan Beulich
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-19 19:49 UTC (permalink / raw)
  To: Jan Beulich, Penny Zheng, xen-devel, sstabellini
  Cc: Bertrand Marquis, Wei Chen, nd

Hi Jan,

On 19/05/2021 10:49, Jan Beulich wrote:
> On 19.05.2021 05:16, Penny Zheng wrote:
>>> From: Julien Grall <julien@xen.org>
>>> Sent: Tuesday, May 18, 2021 5:46 PM
>>>
>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>> --- a/xen/include/asm-arm/mm.h
>>>> +++ b/xen/include/asm-arm/mm.h
>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>             */
>>>>            u32 tlbflush_timestamp;
>>>>        };
>>>> -    u64 pad;
>>>> +
>>>> +    /* Page is reserved. */
>>>> +    struct {
>>>> +        /*
>>>> +         * Reserved Owner of this page,
>>>> +         * if this page is reserved to a specific domain.
>>>> +         */
>>>> +        struct domain *domain;
>>>> +    } reserved;
>>>
>>> The space in page_info is quite tight, so I would like to avoid introducing new
>>> fields unless we can't get away from it.
>>>
>>> In this case, it is not clear why we need to differentiate the "Owner"
>>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>>> first user of the field.
>>>
>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>>
>>
>> Yeah, I may delete this change. I imported this change as considering the functionality
>> of rebooting domain on static allocation.
>>
>> A little more discussion on rebooting domain on static allocation.
>> Considering the major user cases for domain on static allocation
>> are system has a total pre-defined, static behavior all the time. No domain allocation
>> on runtime, while there still exists domain rebooting.
>>
>> And when rebooting domain on static allocation, all these reserved pages could
>> not go back to heap when freeing them.  So I am considering to use one global
>> `struct page_info*[DOMID]` value to store.
> 
> Except such a separate array will consume quite a bit of space for
> no real gain: v.free has 32 bits of padding space right now on
> Arm64, so there's room for a domid_t there already. Even on Arm32
> this could be arranged for, as I doubt "order" needs to be 32 bits
> wide.

I agree we shouldn't need 32-bit to cover the "order". Although, I would 
like to see any user reading the field before introducing it.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-19  7:52         ` Penny Zheng
@ 2021-05-19 20:01           ` Julien Grall
  0 siblings, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-19 20:01 UTC (permalink / raw)
  To: Penny Zheng, Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini



On 19/05/2021 08:52, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 8:13 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>
>> Hi Penny,
>>
>> On 18/05/2021 09:57, Penny Zheng wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:35 PM
>>>> To: Penny Zheng <Penny.Zheng@arm.com>
>>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-
>> devel@lists.xenproject.org;
>>>> sstabellini@kernel.org; julien@xen.org
>>>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> --- a/xen/common/page_alloc.c
>>>>> +++ b/xen/common/page_alloc.c
>>>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>>>        {
>>>>>            ASSERT(page_get_owner(&pg[i]) == NULL);
>>>>>            page_set_owner(&pg[i], d);
>>>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
>>>> */
>>>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>>>> +            page_set_reserved_owner(&pg[i], d);
>>>>
>>>> Now this is puzzling: What's the point of setting two owner fields to
>>>> the same value? I also don't recall you having introduced
>>>> page_set_reserved_owner() for x86, so how is this going to build there?
>>>>
>>>
>>> Thanks for pointing out that it will fail on x86.
>>> As for the same value, sure, I shall change it to domid_t domid to record its
>> reserved owner.
>>> Only domid is enough for differentiate.
>>> And even when domain get rebooted, struct domain may be destroyed, but
>>> domid will stays The same.
>>> Major user cases for domain on static allocation are referring to the
>>> whole system are static, No runtime creation.
>>
>> One may want to have static memory yet doesn't care about the domid. So I
>> am not in favor to restrict about the domid unless there is no other way.
>>
> 
> The user case you bring up here is that static memory pool?

No. The use case I am talking about is an user who wants to give a 
specific memory region to the guest but doesn't care about which domid 
was allocated to the guest.

> 
> Right now, the user cases are mostly restricted to static system.
> If we bring runtime allocation, `xl` here, it will add a lot more complexity.
> But if the system has static behavior, the domid is also static.
I read this as the admin would have to specify the domain ID in the 
Device-Tree. Is that what you meant?

If so, then I don't see why we should mandate that. I would mind less if 
by static you mean the domid will be allocated by Xen and then not 
changed accross reboot.

> 
> On rebooting domain from static memory pool, it brings up more discussion, like
> do we intend to give the memory back to static memory pool when rebooting,
> if so, ram could be allocated from different place compared with the previous one.

You should have all the information in the Device-Tree to re-assign the 
correct regions. So why would it be allocated from a different place?

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
  2021-05-19  7:27     ` Penny Zheng
@ 2021-05-19 20:10       ` Julien Grall
  2021-05-20  6:29         ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-19 20:10 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd



On 19/05/2021 08:27, Penny Zheng wrote:
> Hi Julien
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 8:06 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
>>
>> Hi Penny,
>>
>> On 18/05/2021 06:21, Penny Zheng wrote:
>>> This commit introduces allocate_static_memory to allocate static
>>> memory as guest RAM for domain on Static Allocation.
>>>
>>> It uses alloc_domstatic_pages to allocate pre-defined static memory
>>> banks for this domain, and uses guest_physmap_add_page to set up P2M
>>> table, guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>    xen/arch/arm/domain_build.c | 157
>> +++++++++++++++++++++++++++++++++++-
>>>    1 file changed, 155 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 30b55588b7..9f662313ad 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct
>> domain *d,
>>>        return true;
>>>    }
>>>
>>> +/*
>>> + * #ram_index and #ram_index refer to the index and starting address
>>> +of guest
>>> + * memory kank stored in kinfo->mem.
>>> + * Static memory at #smfn of #tot_size shall be mapped #sgfn, and
>>> + * #sgfn will be next guest address to map when returning.
>>> + */
>>> +static bool __init allocate_static_bank_memory(struct domain *d,
>>> +                                               struct kernel_info *kinfo,
>>> +                                               int ram_index,
>>
>> Please use unsigned.
>>
>>> +                                               paddr_t ram_addr,
>>> +                                               gfn_t* sgfn,
>>
>> I am confused, what is the difference between ram_addr and sgfn?
>>
> 
> We need to constructing kinfo->mem(guest RAM banks) here, and
> we are indexing in static_mem(physical ram banks). Multiple physical ram
> banks consist of one guest ram bank(like, GUEST_RAM0).
> 
> ram_addr  here will either be GUEST_RAM0_BASE, or GUEST_RAM1_BASE,
> for now. I kinds struggled in how to name it. And maybe it shall not be a
> parameter here.
> 
> Maybe I should switch.. case.. on the ram_index, if its 0, its GUEST_RAM0_BASE,
> And if its 1, its GUEST_RAM1_BASE.

You only need to set kinfo->mem.bank[ram_index].start once. This is when 
you know the bank is first used.

AFAICT, this function will map the memory for a range start at ``sgfn``. 
It doesn't feel this belongs to the function.

The same remark is valid for kinfo->mem.nr_banks.

>>> +                                               mfn_t smfn,
>>> +                                               paddr_t tot_size) {
>>> +    int res;
>>> +    struct membank *bank;
>>> +    paddr_t _size = tot_size;
>>> +
>>> +    bank = &kinfo->mem.bank[ram_index];
>>> +    bank->start = ram_addr;
>>> +    bank->size = bank->size + tot_size;
>>> +
>>> +    while ( tot_size > 0 )
>>> +    {
>>> +        unsigned int order = get_allocation_size(tot_size);
>>> +
>>> +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
>>> +        if ( res )
>>> +        {
>>> +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
>>> +            return false;
>>> +        }
>>> +
>>> +        *sgfn = gfn_add(*sgfn, 1UL << order);
>>> +        smfn = mfn_add(smfn, 1UL << order);
>>> +        tot_size -= (1ULL << (PAGE_SHIFT + order));
>>> +    }
>>> +
>>> +    kinfo->mem.nr_banks = ram_index + 1;
>>> +    kinfo->unassigned_mem -= _size;
>>> +
>>> +    return true;
>>> +}
>>> +
>>>    static void __init allocate_memory(struct domain *d, struct kernel_info
>> *kinfo)
>>>    {
>>>        unsigned int i;
>>> @@ -480,6 +524,116 @@ fail:
>>>              (unsigned long)kinfo->unassigned_mem >> 10);
>>>    }
>>>
>>> +/* Allocate memory from static memory as RAM for one specific domain
>>> +d. */ static void __init allocate_static_memory(struct domain *d,
>>> +                                            struct kernel_info
>>> +*kinfo) {
>>> +    int nr_banks, _banks = 0;
>>
>> AFAICT, _banks is the index in the array. I think it would be clearer if it is
>> caller 'bank' or 'idx'.
>>
> 
> Sure, I’ll use the 'bank' here.
> 
>>> +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
>>> +    paddr_t bank_start, bank_size;
>>> +    gfn_t sgfn;
>>> +    mfn_t smfn;
>>> +
>>> +    kinfo->mem.nr_banks = 0;
>>> +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
>>> +    nr_banks = d->arch.static_mem.nr_banks;
>>> +    ASSERT(nr_banks >= 0);
>>> +
>>> +    if ( kinfo->unassigned_mem <= 0 )
>>> +        goto fail;
>>> +
>>> +    while ( _banks < nr_banks )
>>> +    {
>>> +        bank_start = d->arch.static_mem.bank[_banks].start;
>>> +        smfn = maddr_to_mfn(bank_start);
>>> +        bank_size = d->arch.static_mem.bank[_banks].size;
>>
>> The variable name are slightly confusing because it doesn't tell whether this
>> is physical are guest RAM. You might want to consider to prefix them with p
>> (resp. g) for physical (resp. guest) RAM.
> 
> Sure, I'll rename to make it more clearly.
> 
>>
>>> +
>>> +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start,
>> 0) )
>>> +        {
>>> +            printk(XENLOG_ERR
>>> +                    "%pd: cannot allocate static memory"
>>> +                    "(0x%"PRIx64" - 0x%"PRIx64")",
>>
>> bank_start and bank_size are both paddr_t. So this should be PRIpaddr.
> 
> Sure, I'll change
> 
>>
>>> +                    d, bank_start, bank_start + bank_size);
>>> +            goto fail;
>>> +        }
>>> +
>>> +        /*
>>> +         * By default, it shall be mapped to the fixed guest RAM address
>>> +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
>>> +         * Starting from RAM0(GUEST_RAM0_BASE).
>>> +         */
>>
>> Ok. So you are first trying to exhaust the guest bank 0 and then moved to
>> bank 1. This wasn't entirely clear from the design document.
>>
>> I am fine with that, but in this case, the developper should not need to know
>> that (in fact this is not part of the ABI).
>>
>> Regarding this code, I am a bit concerned about the scalability if we introduce
>> a second bank.
>>
>> Can we have an array of the possible guest banks and increment the index
>> when exhausting the current bank?
>>
> 
> Correct me if I understand wrongly,
> 
> What you suggest here is that we make an array of guest banks, right now, including
> GUEST_RAM0 and GUEST_RAM1. And if later, adding more guest banks, it will not
> bring scalability problem here, right?

Yes. This should also reduce the current complexity of the code.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-19 18:27       ` Julien Grall
@ 2021-05-20  6:07         ` Penny Zheng
  2021-05-20  8:50           ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-20  6:07 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 2:27 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> On 19/05/2021 03:22, Penny Zheng wrote:
> > Hi Julien
> 
> Hi Penny,
> 
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, May 18, 2021 4:58 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> >> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static
> >> Allocation
> >>> +beginning, shall never go to heap allocator or boot allocator for any use.
> >>> +
> >>> +Domains on Static Allocation is supported through device tree
> >>> +property `xen,static-mem` specifying reserved RAM banks as this
> >>> +domain's
> >> guest RAM.
> >>
> >> I would suggest to use "physical RAM" when you refer to the host memory.
> >>
> >>> +By default, they shall be mapped to the fixed guest RAM address
> >>> +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> >>
> >> There are a few bits that needs to clarified or part of the description:
> >>     1) "By default" suggests there is an alternative possibility.
> >> However, I don't see any.
> >>     2) Will the first region of xen,static-mem be mapped to
> >> GUEST_RAM0_BASE and the second to GUEST_RAM1_BASE? What if a third
> region is specificed?
> >>     3) We don't guarantee the base address and the size of the banks.
> >> Wouldn't it be better to let the admin select the region he/she wants?
> >>     4) How do you determine the number of cells for the address and the size?
> >>
> >
> > The specific implementation on this part could be traced to the last
> > commit
> > https://patchew.org/Xen/20210518052113.725808-1-
> penny.zheng@arm.com/20
> > 210518052113.725808-11-penny.zheng@arm.com/
> 
> Right. My point is an admin should not have to read the code in order to figure
> out how the allocation works.
> 
> >
> > It will exhaust the GUEST_RAM0_SIZE, then seek to the GUEST_RAM1_BASE.
> > GUEST_RAM0 may take up several regions.
> 
> Can this be clarified in the commit message.
> 

Ok, I will expand commit to let it be clarified more clearly.

> > Yes, I may add the 1:1 direct-map scenario here to explain the
> > alternative possibility
> 
> Ok. So I would suggest to remove "by default" until the implementation for
> direct-map is added.
> 

Ok,  will do. Thx.

> > For the third point, are you suggesting that we could provide an
> > option that let user also define guest memory base address/size?
> 
> When reading the document, I originally thought that the first region would be
> mapped to bank1, and then the second region mapped to bank2...
> 
> But from your reply, this is not the expected behavior. Therefore, please ignore
> my suggestion here.
> 
> > I'm confused on the fourth point, you mean the address cell and size cell for
> xen,static-mem = <...>?
> 
> Yes. This should be clarified in the document. See more below why?
> 
> > It will be consistent with the ones defined in the parent node, domUx.
> Hmmm... To take the example you provided, the parent would be chosen.
> However, from the example, I would expect the property #{address, size}-cells
> in domU1 to be used. What did I miss?
> 

Yeah, the property #{address, size}-cells in domU1 will be used. And the parent
node will be domU1.

The dtb property should look like as follows:

        chosen {
            domU1 {
                compatible = "xen,domain";
                #address-cells = <0x2>;
                #size-cells = <0x2>;
                cpus = <2>;
                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;

                ...
            };
        };

> > +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size

> >>> +Static Allocation is only supported on AArch64 for now.
> >>
> >> The code doesn't seem to be AArch64 specific. So why can't this be
> >> used for 32-bit Arm?
> >>
> >
> > True, we have plans to make it also workable in AArch32 in the future.
> > Because we considered XEN on cortex-R52.
> 
> All the code seems to be implemented in arm generic code. So isn't it already
> working?
> 
> >>>    static int __init early_scan_node(const void *fdt,
> >>>                                      int node, const char *name, int depth,
> >>>                                      u32 address_cells, u32
> >>> size_cells, @@ -345,6 +394,9 @@ static int __init early_scan_node(const
> void *fdt,
> >>>            process_multiboot_node(fdt, node, name, address_cells, size_cells);
> >>>        else if ( depth == 1 && device_tree_node_matches(fdt, node,
> "chosen") )
> >>>            process_chosen_node(fdt, node, name, address_cells,
> >>> size_cells);
> >>> +    else if ( depth == 2 && fdt_get_property(fdt, node,
> >>> + "xen,static-mem",
> >> NULL) )
> >>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> >>> +                              size_cells, &bootinfo.static_mem);
> >>
> >> I am a bit concerned to add yet another method to parse the DT and
> >> all the extra code it will add like in patch #2.
> >>
> >>   From the host PoV, they are memory reserved for a specific purpose.
> >> Would it be possible to consider the reserve-memory binding for that
> >> purpose? This will happen outside of chosen, but we could use a
> >> phandle to refer the region.
> >>
> >
> > Correct me if I understand wrongly, do you mean what this device tree
> snippet looks like:
> 
> Yes, this is what I had in mind. Although I have one small remark below.
> 
> 
> > reserved-memory {
> >     #address-cells = <2>;
> >     #size-cells = <2>;
> >     ranges;
> >
> >     static-mem-domU1: static-mem@0x30000000{
> 
> I think the node would need to contain a compatible (name to be defined).
> 

Ok, maybe, hmmm, how about "xen,static-memory"?

> >        reg = <0x0 0x30000000 0x0 0x20000000>;
> >     };
> > };
> >
> > Chosen {
> >   ...
> > domU1 {
> >     xen,static-mem = <&static-mem-domU1>; }; ...
> > };
> >
> >>>
> >>>        if ( rc < 0 )
> >>>            printk("fdt: node `%s': parsing failed\n", name); diff --git
> >>> a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index
> >>> 5283244015..5e9f296760 100644
> >>> --- a/xen/include/asm-arm/setup.h
> >>> +++ b/xen/include/asm-arm/setup.h
> >>> @@ -74,6 +74,8 @@ struct bootinfo {
> >>>    #ifdef CONFIG_ACPI
> >>>        struct meminfo acpi;
> >>>    #endif
> >>> +    /* Static Memory */
> >>> +    struct meminfo static_mem;
> >>>    };
> >>>
> >>>    extern struct bootinfo bootinfo;
> >>>
> >>
> >> Cheers,
> >>
> >> --
> >> Julien Grall
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-19 19:46       ` Julien Grall
@ 2021-05-20  6:19         ` Penny Zheng
  2021-05-20  8:40           ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-20  6:19 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 3:46 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> 
> 
> 
> On 19/05/2021 04:16, Penny Zheng wrote:
> > Hi Julien
> 
> Hi Penny,
> 
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, May 18, 2021 5:46 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> >> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> >>
> >>
> >>
> >> On 18/05/2021 06:21, Penny Zheng wrote:
> >>> In order to differentiate pages of static memory, from those
> >>> allocated from heap, this patch introduces a new page flag PGC_reserved
> to tell.
> >>>
> >>> New struct reserved in struct page_info is to describe reserved page
> >>> info, that is, which specific domain this page is reserved to. >
> >>> Helper page_get_reserved_owner and page_set_reserved_owner are
> >>> designated to get/set reserved page's owner.
> >>>
> >>> Struct domain is enlarged to more than PAGE_SIZE, due to
> >>> newly-imported struct reserved in struct page_info.
> >>
> >> struct domain may embed a pointer to a struct page_info but never
> >> directly embed the structure. So can you clarify what you mean?
> >>
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>> ---
> >>>    xen/include/asm-arm/mm.h | 16 +++++++++++++++-
> >>>    1 file changed, 15 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> >> index
> >>> 0b7de3102e..d8922fd5db 100644
> >>> --- a/xen/include/asm-arm/mm.h
> >>> +++ b/xen/include/asm-arm/mm.h
> >>> @@ -88,7 +88,15 @@ struct page_info
> >>>             */
> >>>            u32 tlbflush_timestamp;
> >>>        };
> >>> -    u64 pad;
> >>> +
> >>> +    /* Page is reserved. */
> >>> +    struct {
> >>> +        /*
> >>> +         * Reserved Owner of this page,
> >>> +         * if this page is reserved to a specific domain.
> >>> +         */
> >>> +        struct domain *domain;
> >>> +    } reserved;
> >>
> >> The space in page_info is quite tight, so I would like to avoid
> >> introducing new fields unless we can't get away from it.
> >>
> >> In this case, it is not clear why we need to differentiate the "Owner"
> >> vs the "Reserved Owner". It might be clearer if this change is folded
> >> in the first user of the field.
> >>
> >> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
> >>
> >
> > Yeah, I may delete this change. I imported this change as considering
> > the functionality of rebooting domain on static allocation.
> >
> > A little more discussion on rebooting domain on static allocation.
> > Considering the major user cases for domain on static allocation are
> > system has a total pre-defined, static behavior all the time. No
> > domain allocation on runtime, while there still exists domain rebooting.
> 
> Hmmm... With this series it is still possible to allocate memory at runtime
> outside of the static allocation (see my comment on the design document [1]).
> So is it meant to be complete?
> 

I'm guessing that the "allocate memory at runtime outside of the static allocation" is
referring to XEN heap on static allocation, that is, users pre-defining heap in device tree
configuration to let the whole system more static and predictable.

And I've replied you in the design there, sorry for the late reply. Save your time, and
I’ll paste here:

"Right now, on AArch64, all RAM, except reserved memory, will be finally given to
buddy allocator as heap,  like you said, guest RAM for normal domain will be allocated
from there, xmalloc eventually is get memory from there, etc. So we want to refine the heap
here, not iterating through `bootinfo.mem` to set up XEN heap, but like iterating
`bootinfo. reserved_heap` to set up XEN heap.

On ARM32, xen heap and domain heap are separately mapped, which is more complicated
here. That's why I only talking about implementing these features on AArch64 as first step."

 Above implementation will be delivered through another patch Serie. This patch Serie
Is only focusing on Domain on Static Allocation. 

> >
> > And when rebooting domain on static allocation, all these reserved
> > pages could not go back to heap when freeing them.  So I am
> > considering to use one global `struct page_info*[DOMID]` value to store.
> >
> > As Jan suggested, when domain get rebooted, struct domain will not exist
> anymore.
> > But I think DOMID info could last.
> You would need to make sure the domid to be reserved. But I am not entirely
> convinced this is necessary here.
> 
> When recreating the domain, you need a way to know its configuration.
> Mostly likely this will come from the Device-Tree. At which point, you can also
> find the static region from there.
> 

True, true. I'll dig more in your suggestion, thx. 😉

> Cheers,
> 
> [1] <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>
> 
> --
> Julien Grall

Cheers

Penny


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 10/10] xen/arm: introduce allocate_static_memory
  2021-05-19 20:10       ` Julien Grall
@ 2021-05-20  6:29         ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-05-20  6:29 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien 

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 4:10 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
> 
> 
> 
> On 19/05/2021 08:27, Penny Zheng wrote:
> > Hi Julien
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, May 18, 2021 8:06 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> >> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
> >>
> >> Hi Penny,
> >>
> >> On 18/05/2021 06:21, Penny Zheng wrote:
> >>> This commit introduces allocate_static_memory to allocate static
> >>> memory as guest RAM for domain on Static Allocation.
> >>>
> >>> It uses alloc_domstatic_pages to allocate pre-defined static memory
> >>> banks for this domain, and uses guest_physmap_add_page to set up P2M
> >>> table, guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>> ---
> >>>    xen/arch/arm/domain_build.c | 157
> >> +++++++++++++++++++++++++++++++++++-
> >>>    1 file changed, 155 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/domain_build.c
> >>> b/xen/arch/arm/domain_build.c index 30b55588b7..9f662313ad 100644
> >>> --- a/xen/arch/arm/domain_build.c
> >>> +++ b/xen/arch/arm/domain_build.c
> >>> @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct
> >> domain *d,
> >>>        return true;
> >>>    }
> >>>
> >>> +/*
> >>> + * #ram_index and #ram_index refer to the index and starting
> >>> +address of guest
> >>> + * memory kank stored in kinfo->mem.
> >>> + * Static memory at #smfn of #tot_size shall be mapped #sgfn, and
> >>> + * #sgfn will be next guest address to map when returning.
> >>> + */
> >>> +static bool __init allocate_static_bank_memory(struct domain *d,
> >>> +                                               struct kernel_info *kinfo,
> >>> +                                               int ram_index,
> >>
> >> Please use unsigned.
> >>
> >>> +                                               paddr_t ram_addr,
> >>> +                                               gfn_t* sgfn,
> >>
> >> I am confused, what is the difference between ram_addr and sgfn?
> >>
> >
> > We need to constructing kinfo->mem(guest RAM banks) here, and we are
> > indexing in static_mem(physical ram banks). Multiple physical ram
> > banks consist of one guest ram bank(like, GUEST_RAM0).
> >
> > ram_addr  here will either be GUEST_RAM0_BASE, or GUEST_RAM1_BASE,
> for
> > now. I kinds struggled in how to name it. And maybe it shall not be a
> > parameter here.
> >
> > Maybe I should switch.. case.. on the ram_index, if its 0, its
> > GUEST_RAM0_BASE, And if its 1, its GUEST_RAM1_BASE.
> 
> You only need to set kinfo->mem.bank[ram_index].start once. This is when
> you know the bank is first used.
> 
> AFAICT, this function will map the memory for a range start at ``sgfn``.
> It doesn't feel this belongs to the function.
> 
> The same remark is valid for kinfo->mem.nr_banks.
> 

Ok. I finally totally understand what you suggest here.
I'll try to let the action related to setting kinfo->mem.bank[ram_index].start/
kinfo->mem.bank[ram_index].size/ kinfo->mem. nr_banks out of this function,
and only keep the simple functionality of mapping the memory for a range start
at ``sgfn``.

> >>> +                                               mfn_t smfn,
> >>> +                                               paddr_t tot_size) {
> >>> +    int res;
> >>> +    struct membank *bank;
> >>> +    paddr_t _size = tot_size;
> >>> +
> >>> +    bank = &kinfo->mem.bank[ram_index];
> >>> +    bank->start = ram_addr;
> >>> +    bank->size = bank->size + tot_size;
> >>> +
> >>> +    while ( tot_size > 0 )
> >>> +    {
> >>> +        unsigned int order = get_allocation_size(tot_size);
> >>> +
> >>> +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
> >>> +        if ( res )
> >>> +        {
> >>> +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
> >>> +            return false;
> >>> +        }
> >>> +
> >>> +        *sgfn = gfn_add(*sgfn, 1UL << order);
> >>> +        smfn = mfn_add(smfn, 1UL << order);
> >>> +        tot_size -= (1ULL << (PAGE_SHIFT + order));
> >>> +    }
> >>> +
> >>> +    kinfo->mem.nr_banks = ram_index + 1;
> >>> +    kinfo->unassigned_mem -= _size;
> >>> +
> >>> +    return true;
> >>> +}
> >>> +
> >>>    static void __init allocate_memory(struct domain *d, struct
> >>> kernel_info
> >> *kinfo)
> >>>    {
> >>>        unsigned int i;
> >>> @@ -480,6 +524,116 @@ fail:
> >>>              (unsigned long)kinfo->unassigned_mem >> 10);
> >>>    }
> >>>
> >>> +/* Allocate memory from static memory as RAM for one specific
> >>> +domain d. */ static void __init allocate_static_memory(struct domain *d,
> >>> +                                            struct kernel_info
> >>> +*kinfo) {
> >>> +    int nr_banks, _banks = 0;
> >>
> >> AFAICT, _banks is the index in the array. I think it would be clearer
> >> if it is caller 'bank' or 'idx'.
> >>
> >
> > Sure, I’ll use the 'bank' here.
> >
> >>> +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
> >>> +    paddr_t bank_start, bank_size;
> >>> +    gfn_t sgfn;
> >>> +    mfn_t smfn;
> >>> +
> >>> +    kinfo->mem.nr_banks = 0;
> >>> +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
> >>> +    nr_banks = d->arch.static_mem.nr_banks;
> >>> +    ASSERT(nr_banks >= 0);
> >>> +
> >>> +    if ( kinfo->unassigned_mem <= 0 )
> >>> +        goto fail;
> >>> +
> >>> +    while ( _banks < nr_banks )
> >>> +    {
> >>> +        bank_start = d->arch.static_mem.bank[_banks].start;
> >>> +        smfn = maddr_to_mfn(bank_start);
> >>> +        bank_size = d->arch.static_mem.bank[_banks].size;
> >>
> >> The variable name are slightly confusing because it doesn't tell
> >> whether this is physical are guest RAM. You might want to consider to
> >> prefix them with p (resp. g) for physical (resp. guest) RAM.
> >
> > Sure, I'll rename to make it more clearly.
> >
> >>
> >>> +
> >>> +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT,
> >>> + bank_start,
> >> 0) )
> >>> +        {
> >>> +            printk(XENLOG_ERR
> >>> +                    "%pd: cannot allocate static memory"
> >>> +                    "(0x%"PRIx64" - 0x%"PRIx64")",
> >>
> >> bank_start and bank_size are both paddr_t. So this should be PRIpaddr.
> >
> > Sure, I'll change
> >
> >>
> >>> +                    d, bank_start, bank_start + bank_size);
> >>> +            goto fail;
> >>> +        }
> >>> +
> >>> +        /*
> >>> +         * By default, it shall be mapped to the fixed guest RAM address
> >>> +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> >>> +         * Starting from RAM0(GUEST_RAM0_BASE).
> >>> +         */
> >>
> >> Ok. So you are first trying to exhaust the guest bank 0 and then
> >> moved to bank 1. This wasn't entirely clear from the design document.
> >>
> >> I am fine with that, but in this case, the developper should not need
> >> to know that (in fact this is not part of the ABI).
> >>
> >> Regarding this code, I am a bit concerned about the scalability if we
> >> introduce a second bank.
> >>
> >> Can we have an array of the possible guest banks and increment the
> >> index when exhausting the current bank?
> >>
> >
> > Correct me if I understand wrongly,
> >
> > What you suggest here is that we make an array of guest banks, right
> > now, including
> > GUEST_RAM0 and GUEST_RAM1. And if later, adding more guest banks, it
> > will not bring scalability problem here, right?
> 
> Yes. This should also reduce the current complexity of the code.
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-19 19:49         ` Julien Grall
@ 2021-05-20  7:05           ` Jan Beulich
  0 siblings, 0 replies; 82+ messages in thread
From: Jan Beulich @ 2021-05-20  7:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, Penny Zheng, sstabellini

On 19.05.2021 21:49, Julien Grall wrote:
> On 19/05/2021 10:49, Jan Beulich wrote:
>> On 19.05.2021 05:16, Penny Zheng wrote:
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Tuesday, May 18, 2021 5:46 PM
>>>>
>>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>>> --- a/xen/include/asm-arm/mm.h
>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>>             */
>>>>>            u32 tlbflush_timestamp;
>>>>>        };
>>>>> -    u64 pad;
>>>>> +
>>>>> +    /* Page is reserved. */
>>>>> +    struct {
>>>>> +        /*
>>>>> +         * Reserved Owner of this page,
>>>>> +         * if this page is reserved to a specific domain.
>>>>> +         */
>>>>> +        struct domain *domain;
>>>>> +    } reserved;
>>>>
>>>> The space in page_info is quite tight, so I would like to avoid introducing new
>>>> fields unless we can't get away from it.
>>>>
>>>> In this case, it is not clear why we need to differentiate the "Owner"
>>>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>>>> first user of the field.
>>>>
>>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>>>
>>>
>>> Yeah, I may delete this change. I imported this change as considering the functionality
>>> of rebooting domain on static allocation.
>>>
>>> A little more discussion on rebooting domain on static allocation.
>>> Considering the major user cases for domain on static allocation
>>> are system has a total pre-defined, static behavior all the time. No domain allocation
>>> on runtime, while there still exists domain rebooting.
>>>
>>> And when rebooting domain on static allocation, all these reserved pages could
>>> not go back to heap when freeing them.  So I am considering to use one global
>>> `struct page_info*[DOMID]` value to store.
>>
>> Except such a separate array will consume quite a bit of space for
>> no real gain: v.free has 32 bits of padding space right now on
>> Arm64, so there's room for a domid_t there already. Even on Arm32
>> this could be arranged for, as I doubt "order" needs to be 32 bits
>> wide.
> 
> I agree we shouldn't need 32-bit to cover the "order". Although, I would 
> like to see any user reading the field before introducing it.

Of course, but I thought the plan was to mark static pages with their
designated domid, which would happen prior to domain creation. The
reader of the field would then naturally appear during domain creation.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-20  6:19         ` Penny Zheng
@ 2021-05-20  8:40           ` Penny Zheng
  2021-05-20  8:59             ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-20  8:40 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi julien 

> -----Original Message-----
> From: Penny Zheng
> Sent: Thursday, May 20, 2021 2:20 PM
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
> 
> Hi Julien
> 
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: Thursday, May 20, 2021 3:46 AM
> > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > sstabellini@kernel.org
> > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> >
> >
> >
> > On 19/05/2021 04:16, Penny Zheng wrote:
> > > Hi Julien
> >
> > Hi Penny,
> >
> > >
> > >> -----Original Message-----
> > >> From: Julien Grall <julien@xen.org>
> > >> Sent: Tuesday, May 18, 2021 5:46 PM
> > >> To: Penny Zheng <Penny.Zheng@arm.com>;
> > >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> > >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> > >> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> > >>
> > >>
> > >>
> > >> On 18/05/2021 06:21, Penny Zheng wrote:
> > >>> In order to differentiate pages of static memory, from those
> > >>> allocated from heap, this patch introduces a new page flag
> > >>> PGC_reserved
> > to tell.
> > >>>
> > >>> New struct reserved in struct page_info is to describe reserved
> > >>> page info, that is, which specific domain this page is reserved
> > >>> to. > Helper page_get_reserved_owner and page_set_reserved_owner
> > >>> are designated to get/set reserved page's owner.
> > >>>
> > >>> Struct domain is enlarged to more than PAGE_SIZE, due to
> > >>> newly-imported struct reserved in struct page_info.
> > >>
> > >> struct domain may embed a pointer to a struct page_info but never
> > >> directly embed the structure. So can you clarify what you mean?
> > >>
> > >>>
> > >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > >>> ---
> > >>>    xen/include/asm-arm/mm.h | 16 +++++++++++++++-
> > >>>    1 file changed, 15 insertions(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> > >> index
> > >>> 0b7de3102e..d8922fd5db 100644
> > >>> --- a/xen/include/asm-arm/mm.h
> > >>> +++ b/xen/include/asm-arm/mm.h
> > >>> @@ -88,7 +88,15 @@ struct page_info
> > >>>             */
> > >>>            u32 tlbflush_timestamp;
> > >>>        };
> > >>> -    u64 pad;
> > >>> +
> > >>> +    /* Page is reserved. */
> > >>> +    struct {
> > >>> +        /*
> > >>> +         * Reserved Owner of this page,
> > >>> +         * if this page is reserved to a specific domain.
> > >>> +         */
> > >>> +        struct domain *domain;
> > >>> +    } reserved;
> > >>
> > >> The space in page_info is quite tight, so I would like to avoid
> > >> introducing new fields unless we can't get away from it.
> > >>
> > >> In this case, it is not clear why we need to differentiate the "Owner"
> > >> vs the "Reserved Owner". It might be clearer if this change is
> > >> folded in the first user of the field.
> > >>
> > >> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
> > >>
> > >
> > > Yeah, I may delete this change. I imported this change as
> > > considering the functionality of rebooting domain on static allocation.
> > >
> > > A little more discussion on rebooting domain on static allocation.
> > > Considering the major user cases for domain on static allocation are
> > > system has a total pre-defined, static behavior all the time. No
> > > domain allocation on runtime, while there still exists domain rebooting.
> >
> > Hmmm... With this series it is still possible to allocate memory at
> > runtime outside of the static allocation (see my comment on the design
> document [1]).
> > So is it meant to be complete?
> >
> 
> I'm guessing that the "allocate memory at runtime outside of the static
> allocation" is referring to XEN heap on static allocation, that is, users pre-
> defining heap in device tree configuration to let the whole system more static
> and predictable.
> 
> And I've replied you in the design there, sorry for the late reply. Save your time,
> and I’ll paste here:
> 
> "Right now, on AArch64, all RAM, except reserved memory, will be finally
> given to buddy allocator as heap,  like you said, guest RAM for normal domain
> will be allocated from there, xmalloc eventually is get memory from there, etc.
> So we want to refine the heap here, not iterating through `bootinfo.mem` to
> set up XEN heap, but like iterating `bootinfo. reserved_heap` to set up XEN
> heap.
> 
> On ARM32, xen heap and domain heap are separately mapped, which is more
> complicated here. That's why I only talking about implementing these features
> on AArch64 as first step."
> 
>  Above implementation will be delivered through another patch Serie. This
> patch Serie Is only focusing on Domain on Static Allocation.
> 

Oh, Second thought on this. 
And I think you are referring to balloon in/out here, hmm, also, like
I replied there:
"For issues on ballooning out or in, it is not supported here.
Domain on Static Allocation and 1:1 direct-map are all based on
dom0-less right now, so no PV, grant table, event channel, etc, considered.

Right now, it only supports device got passthrough into the guest."

> > >
> > > And when rebooting domain on static allocation, all these reserved
> > > pages could not go back to heap when freeing them.  So I am
> > > considering to use one global `struct page_info*[DOMID]` value to store.
> > >
> > > As Jan suggested, when domain get rebooted, struct domain will not
> > > exist
> > anymore.
> > > But I think DOMID info could last.
> > You would need to make sure the domid to be reserved. But I am not
> > entirely convinced this is necessary here.
> >
> > When recreating the domain, you need a way to know its configuration.
> > Mostly likely this will come from the Device-Tree. At which point, you
> > can also find the static region from there.
> >
> 
> True, true. I'll dig more in your suggestion, thx. 😉
> 
> > Cheers,
> >
> > [1] <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>
> >
> > --
> > Julien Grall
> 
> Cheers
> 
> Penny

Cheers

Penny


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-20  6:07         ` Penny Zheng
@ 2021-05-20  8:50           ` Julien Grall
  2021-06-02 10:09             ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-20  8:50 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi,

On 20/05/2021 07:07, Penny Zheng wrote:
>>> It will be consistent with the ones defined in the parent node, domUx.
>> Hmmm... To take the example you provided, the parent would be chosen.
>> However, from the example, I would expect the property #{address, size}-cells
>> in domU1 to be used. What did I miss?
>>
> 
> Yeah, the property #{address, size}-cells in domU1 will be used. And the parent
> node will be domU1.

You may have misunderstood what I meant. "domU1" is the node that 
contains the property "xen,static-mem". The parent node would be the one 
above (in our case "chosen").

> 
> The dtb property should look like as follows:
> 
>          chosen {
>              domU1 {
>                  compatible = "xen,domain";
>                  #address-cells = <0x2>;
>                  #size-cells = <0x2>;
>                  cpus = <2>;
>                  xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> 
>                  ...
>              };
>          };
> 
>>> +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size
> 
>>>>> +Static Allocation is only supported on AArch64 for now.
>>>>
>>>> The code doesn't seem to be AArch64 specific. So why can't this be
>>>> used for 32-bit Arm?
>>>>
>>>
>>> True, we have plans to make it also workable in AArch32 in the future.
>>> Because we considered XEN on cortex-R52.
>>
>> All the code seems to be implemented in arm generic code. So isn't it already
>> working?
>>
>>>>>     static int __init early_scan_node(const void *fdt,
>>>>>                                       int node, const char *name, int depth,
>>>>>                                       u32 address_cells, u32
>>>>> size_cells, @@ -345,6 +394,9 @@ static int __init early_scan_node(const
>> void *fdt,
>>>>>             process_multiboot_node(fdt, node, name, address_cells, size_cells);
>>>>>         else if ( depth == 1 && device_tree_node_matches(fdt, node,
>> "chosen") )
>>>>>             process_chosen_node(fdt, node, name, address_cells,
>>>>> size_cells);
>>>>> +    else if ( depth == 2 && fdt_get_property(fdt, node,
>>>>> + "xen,static-mem",
>>>> NULL) )
>>>>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
>>>>> +                              size_cells, &bootinfo.static_mem);
>>>>
>>>> I am a bit concerned to add yet another method to parse the DT and
>>>> all the extra code it will add like in patch #2.
>>>>
>>>>    From the host PoV, they are memory reserved for a specific purpose.
>>>> Would it be possible to consider the reserve-memory binding for that
>>>> purpose? This will happen outside of chosen, but we could use a
>>>> phandle to refer the region.
>>>>
>>>
>>> Correct me if I understand wrongly, do you mean what this device tree
>> snippet looks like:
>>
>> Yes, this is what I had in mind. Although I have one small remark below.
>>
>>
>>> reserved-memory {
>>>      #address-cells = <2>;
>>>      #size-cells = <2>;
>>>      ranges;
>>>
>>>      static-mem-domU1: static-mem@0x30000000{
>>
>> I think the node would need to contain a compatible (name to be defined).
>>
> 
> Ok, maybe, hmmm, how about "xen,static-memory"?

I would possibly add "domain" in the name to make clear this is domain 
memory. Stefano, what do you think?

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-20  8:40           ` Penny Zheng
@ 2021-05-20  8:59             ` Julien Grall
  2021-05-20  9:27               ` Jan Beulich
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-05-20  8:59 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd



On 20/05/2021 09:40, Penny Zheng wrote:
> Hi julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Penny Zheng
>> Sent: Thursday, May 20, 2021 2:20 PM
>> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>
>> Hi Julien
>>
>>> -----Original Message-----
>>> From: Julien Grall <julien@xen.org>
>>> Sent: Thursday, May 20, 2021 3:46 AM
>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>>> sstabellini@kernel.org
>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>>> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>>
>>>
>>>
>>> On 19/05/2021 04:16, Penny Zheng wrote:
>>>> Hi Julien
>>>
>>> Hi Penny,
>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Julien Grall <julien@xen.org>
>>>>> Sent: Tuesday, May 18, 2021 5:46 PM
>>>>> To: Penny Zheng <Penny.Zheng@arm.com>;
>>>>> xen-devel@lists.xenproject.org; sstabellini@kernel.org
>>>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>>>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>>>>
>>>>>
>>>>>
>>>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>>>> In order to differentiate pages of static memory, from those
>>>>>> allocated from heap, this patch introduces a new page flag
>>>>>> PGC_reserved
>>> to tell.
>>>>>>
>>>>>> New struct reserved in struct page_info is to describe reserved
>>>>>> page info, that is, which specific domain this page is reserved
>>>>>> to. > Helper page_get_reserved_owner and page_set_reserved_owner
>>>>>> are designated to get/set reserved page's owner.
>>>>>>
>>>>>> Struct domain is enlarged to more than PAGE_SIZE, due to
>>>>>> newly-imported struct reserved in struct page_info.
>>>>>
>>>>> struct domain may embed a pointer to a struct page_info but never
>>>>> directly embed the structure. So can you clarify what you mean?
>>>>>
>>>>>>
>>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>>>>> ---
>>>>>>     xen/include/asm-arm/mm.h | 16 +++++++++++++++-
>>>>>>     1 file changed, 15 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>>>> index
>>>>>> 0b7de3102e..d8922fd5db 100644
>>>>>> --- a/xen/include/asm-arm/mm.h
>>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>>>              */
>>>>>>             u32 tlbflush_timestamp;
>>>>>>         };
>>>>>> -    u64 pad;
>>>>>> +
>>>>>> +    /* Page is reserved. */
>>>>>> +    struct {
>>>>>> +        /*
>>>>>> +         * Reserved Owner of this page,
>>>>>> +         * if this page is reserved to a specific domain.
>>>>>> +         */
>>>>>> +        struct domain *domain;
>>>>>> +    } reserved;
>>>>>
>>>>> The space in page_info is quite tight, so I would like to avoid
>>>>> introducing new fields unless we can't get away from it.
>>>>>
>>>>> In this case, it is not clear why we need to differentiate the "Owner"
>>>>> vs the "Reserved Owner". It might be clearer if this change is
>>>>> folded in the first user of the field.
>>>>>
>>>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>>>>
>>>>
>>>> Yeah, I may delete this change. I imported this change as
>>>> considering the functionality of rebooting domain on static allocation.
>>>>
>>>> A little more discussion on rebooting domain on static allocation.
>>>> Considering the major user cases for domain on static allocation are
>>>> system has a total pre-defined, static behavior all the time. No
>>>> domain allocation on runtime, while there still exists domain rebooting.
>>>
>>> Hmmm... With this series it is still possible to allocate memory at
>>> runtime outside of the static allocation (see my comment on the design
>> document [1]).
>>> So is it meant to be complete?
>>>
>>
>> I'm guessing that the "allocate memory at runtime outside of the static
>> allocation" is referring to XEN heap on static allocation, that is, users pre-
>> defining heap in device tree configuration to let the whole system more static
>> and predictable.
>>
>> And I've replied you in the design there, sorry for the late reply. Save your time,
>> and I’ll paste here:
>>
>> "Right now, on AArch64, all RAM, except reserved memory, will be finally
>> given to buddy allocator as heap,  like you said, guest RAM for normal domain
>> will be allocated from there, xmalloc eventually is get memory from there, etc.
>> So we want to refine the heap here, not iterating through `bootinfo.mem` to
>> set up XEN heap, but like iterating `bootinfo. reserved_heap` to set up XEN
>> heap.
>>
>> On ARM32, xen heap and domain heap are separately mapped, which is more
>> complicated here. That's why I only talking about implementing these features
>> on AArch64 as first step."
>>
>>   Above implementation will be delivered through another patch Serie. This
>> patch Serie Is only focusing on Domain on Static Allocation.
>>
> 
> Oh, Second thought on this.
> And I think you are referring to balloon in/out here, hmm, also, like

Yes I am referring to balloon in/out.

> I replied there:
> "For issues on ballooning out or in, it is not supported here.

So long you are not using the solution in prod then you are fine (see 
below)... But then we should make clear this feature is a tech preview.

> Domain on Static Allocation and 1:1 direct-map are all based on
> dom0-less right now, so no PV, grant table, event channel, etc, considered.
> 
> Right now, it only supports device got passthrough into the guest."

So we are not creating the hypervisor node in the DT for dom0less domU. 
However, the hypercalls are still accessible by a domU if it really
wants to use them.

Therefore, a guest can easily mess up with your static configuration and 
predictability.

IMHO, this is a must to solve before "static memory" can be used in 
production.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-18 10:43       ` Jan Beulich
@ 2021-05-20  9:04         ` Penny Zheng
  2021-05-20  9:32           ` Jan Beulich
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-20  9:04 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 6:43 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> On 18.05.2021 11:51, Penny Zheng wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:16 PM
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> This patch introduces static memory initialization, during system
> >>> RAM boot
> >> up.
> >>>
> >>> New func init_staticmem_pages is the equivalent of init_heap_pages,
> >>> responsible for static memory initialization.
> >>>
> >>> Helper func free_staticmem_pages is the equivalent of
> >>> free_heap_pages, to free nr_pfns pages of static memory.
> >>> For each page, it includes the following steps:
> >>> 1. change page state from in-use(also initialization state) to free
> >>> state and grant PGC_reserved.
> >>> 2. set its owner NULL and make sure this page is not a guest frame
> >>> any more
> >>
> >> But isn't the goal (as per the previous patch) to associate such
> >> pages with a _specific_ domain?
> >>
> >
> > Free_staticmem_pages are alike free_heap_pages, are not used only for
> initialization.
> > Freeing used pages to unused are also included.
> > Here, setting its owner NULL is to set owner in used field NULL.
> 
> I'm afraid I still don't understand.
> 

When initializing heap, xen is using free_heap_pages to do the initialization.
And also when normal domain get destroyed/rebooted, xen is using free_domheap_pages,
which calls free_heap_pages to free the pages.

So here, since free_staticmem_pages is the equivalent of the free_heap_pages for static
memory, I'm also considering both two scenarios here. And if it is domain get destroyed/rebooted,
Page state is in inuse state(PGC_inuse), and the page_info.v.inuse.domain is holding the
domain owner.
When freeing then, we need to switch the page state to free state(PGC_free) and set its inuse owner to NULL.

I'll clarify it more clearly in commit message.

> > Still, I need to add more explanation here.
> 
> Yes please.
> 
> >>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >>>      spin_unlock(&heap_lock);
> >>>  }
> >>>
> >>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> >>> +memory. */ static void free_staticmem_pages(struct page_info *pg,
> >> unsigned long nr_pfns,
> >>> +                                 bool need_scrub)
> >>
> >> Right now this function gets called only from an __init one. Unless
> >> it is intended to gain further callers, it should be marked __init itself then.
> >> Otherwise it should be made sure that other architectures don't
> >> include this (dead there) code.
> >>
> >
> > Sure, I'll add __init. Thx.
> 
> Didn't I see a 2nd call to the function later in the series? That one doesn't
> permit the function to be __init, iirc.
> 
> Jan

Cheers

Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-20  8:59             ` Julien Grall
@ 2021-05-20  9:27               ` Jan Beulich
  2021-05-20  9:45                 ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-20  9:27 UTC (permalink / raw)
  To: Julien Grall, Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini

On 20.05.2021 10:59, Julien Grall wrote:
> On 20/05/2021 09:40, Penny Zheng wrote:
>> Oh, Second thought on this.
>> And I think you are referring to balloon in/out here, hmm, also, like
> 
> Yes I am referring to balloon in/out.
> 
>> I replied there:
>> "For issues on ballooning out or in, it is not supported here.
> 
> So long you are not using the solution in prod then you are fine (see 
> below)... But then we should make clear this feature is a tech preview.
> 
>> Domain on Static Allocation and 1:1 direct-map are all based on
>> dom0-less right now, so no PV, grant table, event channel, etc, considered.
>>
>> Right now, it only supports device got passthrough into the guest."
> 
> So we are not creating the hypervisor node in the DT for dom0less domU. 
> However, the hypercalls are still accessible by a domU if it really
> wants to use them.
> 
> Therefore, a guest can easily mess up with your static configuration and 
> predictability.
> 
> IMHO, this is a must to solve before "static memory" can be used in 
> production.

I'm having trouble seeing why it can't be addressed right away: Such
guests can balloon in only after they've ballooned out some pages,
and such balloon-in requests would be satisfied from the same static
memory that is associated by the guest anyway.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 04/10] xen/arm: static memory initialization
  2021-05-20  9:04         ` Penny Zheng
@ 2021-05-20  9:32           ` Jan Beulich
  0 siblings, 0 replies; 82+ messages in thread
From: Jan Beulich @ 2021-05-20  9:32 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

On 20.05.2021 11:04, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 6:43 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
>>
>> On 18.05.2021 11:51, Penny Zheng wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:16 PM
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> This patch introduces static memory initialization, during system
>>>>> RAM boot
>>>> up.
>>>>>
>>>>> New func init_staticmem_pages is the equivalent of init_heap_pages,
>>>>> responsible for static memory initialization.
>>>>>
>>>>> Helper func free_staticmem_pages is the equivalent of
>>>>> free_heap_pages, to free nr_pfns pages of static memory.
>>>>> For each page, it includes the following steps:
>>>>> 1. change page state from in-use(also initialization state) to free
>>>>> state and grant PGC_reserved.
>>>>> 2. set its owner NULL and make sure this page is not a guest frame
>>>>> any more
>>>>
>>>> But isn't the goal (as per the previous patch) to associate such
>>>> pages with a _specific_ domain?
>>>>
>>>
>>> Free_staticmem_pages are alike free_heap_pages, are not used only for
>> initialization.
>>> Freeing used pages to unused are also included.
>>> Here, setting its owner NULL is to set owner in used field NULL.
>>
>> I'm afraid I still don't understand.
>>
> 
> When initializing heap, xen is using free_heap_pages to do the initialization.
> And also when normal domain get destroyed/rebooted, xen is using free_domheap_pages,
> which calls free_heap_pages to free the pages.
> 
> So here, since free_staticmem_pages is the equivalent of the free_heap_pages for static
> memory, I'm also considering both two scenarios here. And if it is domain get destroyed/rebooted,
> Page state is in inuse state(PGC_inuse), and the page_info.v.inuse.domain is holding the
> domain owner.
> When freeing then, we need to switch the page state to free state(PGC_free) and set its inuse owner to NULL.

Perhaps my confusion comes from your earlier outline missing

3. re-associate the page with the domain (as represented in free
   pages)

The property of "designated for Dom<N>" should never go away, if I
understand the overall proposal correctly.

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
  2021-05-20  9:27               ` Jan Beulich
@ 2021-05-20  9:45                 ` Julien Grall
  0 siblings, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-20  9:45 UTC (permalink / raw)
  To: Jan Beulich, Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini

Hi Jan,

On 20/05/2021 10:27, Jan Beulich wrote:
> On 20.05.2021 10:59, Julien Grall wrote:
>> On 20/05/2021 09:40, Penny Zheng wrote:
>>> Oh, Second thought on this.
>>> And I think you are referring to balloon in/out here, hmm, also, like
>>
>> Yes I am referring to balloon in/out.
>>
>>> I replied there:
>>> "For issues on ballooning out or in, it is not supported here.
>>
>> So long you are not using the solution in prod then you are fine (see
>> below)... But then we should make clear this feature is a tech preview.
>>
>>> Domain on Static Allocation and 1:1 direct-map are all based on
>>> dom0-less right now, so no PV, grant table, event channel, etc, considered.
>>>
>>> Right now, it only supports device got passthrough into the guest."
>>
>> So we are not creating the hypervisor node in the DT for dom0less domU.
>> However, the hypercalls are still accessible by a domU if it really
>> wants to use them.
>>
>> Therefore, a guest can easily mess up with your static configuration and
>> predictability.
>>
>> IMHO, this is a must to solve before "static memory" can be used in
>> production.
> 
> I'm having trouble seeing why it can't be addressed right away: 

It can be solved right away. Dom0less domUs don't officially know they 
are running on Xen (they could bruteforce it though), so I think it 
would be fine to merge without it for a tech preview.

> Such
> guests can balloon in only after they've ballooned out some pages,
> and such balloon-in requests would be satisfied from the same static
> memory that is associated by the guest anyway.

This would require some bookeeping to know the page was used previously. 
But nothing very challenging though.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-18 11:23       ` Jan Beulich
@ 2021-05-21  6:41         ` Penny Zheng
  2021-05-21  7:09           ` Jan Beulich
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-21  6:41 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 7:23 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> On 18.05.2021 10:57, Penny Zheng wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:35 PM
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -2447,6 +2447,9 @@ int assign_pages(
> >>>      {
> >>>          ASSERT(page_get_owner(&pg[i]) == NULL);
> >>>          page_set_owner(&pg[i], d);
> >>> +        /* use page_set_reserved_owner to set its reserved domain owner.
> >> */
> >>> +        if ( (pg[i].count_info & PGC_reserved) )
> >>> +            page_set_reserved_owner(&pg[i], d);
> >>
> >> Now this is puzzling: What's the point of setting two owner fields to
> >> the same value? I also don't recall you having introduced
> >> page_set_reserved_owner() for x86, so how is this going to build there?
> >>
> >
> > Thanks for pointing out that it will fail on x86.
> > As for the same value, sure, I shall change it to domid_t domid to record its
> reserved owner.
> > Only domid is enough for differentiate.
> > And even when domain get rebooted, struct domain may be destroyed, but
> > domid will stays The same.
> 
> Will it? Are you intending to put in place restrictions that make it impossible
> for the ID to get re-used by another domain?
> 
> > Major user cases for domain on static allocation are referring to the
> > whole system are static, No runtime creation.
> 
> Right, but that's not currently enforced afaics. If you would enforce it, it may
> simplify a number of things.
> 
> >>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >>>      return pg;
> >>>  }
> >>>
> >>> +/*
> >>> + * Allocate nr_pfns contiguous pages, starting at #start, of static
> >>> +memory,
> >>> + * then assign them to one specific domain #d.
> >>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>> + */
> >>> +struct page_info *alloc_domstatic_pages(
> >>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> >>> +        unsigned int memflags)
> >>> +{
> >>> +    struct page_info *pg = NULL;
> >>> +    unsigned long dma_size;
> >>> +
> >>> +    ASSERT(!in_irq());
> >>> +
> >>> +    if ( memflags & MEMF_no_owner )
> >>> +        memflags |= MEMF_no_refcount;
> >>> +
> >>> +    if ( !dma_bitsize )
> >>> +        memflags &= ~MEMF_no_dma;
> >>> +    else
> >>> +    {
> >>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> >>> +        /* Starting address shall meet the DMA limitation. */
> >>> +        if ( dma_size && start < dma_size )
> >>> +            return NULL;
> >>
> >> It is the entire range (i.e. in particular the last byte) which needs
> >> to meet such a restriction. I'm not convinced though that DMA width
> >> restrictions and static allocation are sensible to coexist.
> >>
> >
> > FWIT, if starting address meets the limitation, the last byte, greater
> > than starting address shall meet it too.
> 
> I'm afraid I don't know what you're meaning to tell me here.
> 

Referring to alloc_domheap_pages, if `dma_bitsize` is none-zero value, 
it will use  alloc_heap_pages to allocate pages from [dma_zone + 1,
zone_hi], `dma_zone + 1` pointing to address larger than 2^(dma_zone + 1).
So I was setting address limitation for the starting address.   

> Jan

Cheers

Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-21  6:41         ` Penny Zheng
@ 2021-05-21  7:09           ` Jan Beulich
  2021-06-03  2:44             ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Jan Beulich @ 2021-05-21  7:09 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

On 21.05.2021 08:41, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 7:23 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>
>> On 18.05.2021 10:57, Penny Zheng wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:35 PM
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> --- a/xen/common/page_alloc.c
>>>>> +++ b/xen/common/page_alloc.c
>>>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>>>      {
>>>>>          ASSERT(page_get_owner(&pg[i]) == NULL);
>>>>>          page_set_owner(&pg[i], d);
>>>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
>>>> */
>>>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>>>> +            page_set_reserved_owner(&pg[i], d);
>>>>
>>>> Now this is puzzling: What's the point of setting two owner fields to
>>>> the same value? I also don't recall you having introduced
>>>> page_set_reserved_owner() for x86, so how is this going to build there?
>>>>
>>>
>>> Thanks for pointing out that it will fail on x86.
>>> As for the same value, sure, I shall change it to domid_t domid to record its
>> reserved owner.
>>> Only domid is enough for differentiate.
>>> And even when domain get rebooted, struct domain may be destroyed, but
>>> domid will stays The same.
>>
>> Will it? Are you intending to put in place restrictions that make it impossible
>> for the ID to get re-used by another domain?
>>
>>> Major user cases for domain on static allocation are referring to the
>>> whole system are static, No runtime creation.
>>
>> Right, but that's not currently enforced afaics. If you would enforce it, it may
>> simplify a number of things.
>>
>>>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>>>>>      return pg;
>>>>>  }
>>>>>
>>>>> +/*
>>>>> + * Allocate nr_pfns contiguous pages, starting at #start, of static
>>>>> +memory,
>>>>> + * then assign them to one specific domain #d.
>>>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>>>> + */
>>>>> +struct page_info *alloc_domstatic_pages(
>>>>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
>>>>> +        unsigned int memflags)
>>>>> +{
>>>>> +    struct page_info *pg = NULL;
>>>>> +    unsigned long dma_size;
>>>>> +
>>>>> +    ASSERT(!in_irq());
>>>>> +
>>>>> +    if ( memflags & MEMF_no_owner )
>>>>> +        memflags |= MEMF_no_refcount;
>>>>> +
>>>>> +    if ( !dma_bitsize )
>>>>> +        memflags &= ~MEMF_no_dma;
>>>>> +    else
>>>>> +    {
>>>>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
>>>>> +        /* Starting address shall meet the DMA limitation. */
>>>>> +        if ( dma_size && start < dma_size )
>>>>> +            return NULL;
>>>>
>>>> It is the entire range (i.e. in particular the last byte) which needs
>>>> to meet such a restriction. I'm not convinced though that DMA width
>>>> restrictions and static allocation are sensible to coexist.
>>>>
>>>
>>> FWIT, if starting address meets the limitation, the last byte, greater
>>> than starting address shall meet it too.
>>
>> I'm afraid I don't know what you're meaning to tell me here.
>>
> 
> Referring to alloc_domheap_pages, if `dma_bitsize` is none-zero value, 
> it will use  alloc_heap_pages to allocate pages from [dma_zone + 1,
> zone_hi], `dma_zone + 1` pointing to address larger than 2^(dma_zone + 1).
> So I was setting address limitation for the starting address.   

But does this zone concept apply to static pages at all?

Jan


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-19  5:23     ` Penny Zheng
@ 2021-05-24 10:10       ` Penny Zheng
  2021-05-24 10:24         ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-05-24 10:10 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien

> -----Original Message-----
> From: Penny Zheng
> Sent: Wednesday, May 19, 2021 1:24 PM
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Julien
> 
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: Tuesday, May 18, 2021 6:15 PM
> > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > sstabellini@kernel.org
> > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> >
> > Hi Penny,
> >
> > On 18/05/2021 06:21, Penny Zheng wrote:
> > > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > > pages of static memory. And it is the equivalent of alloc_heap_pages
> > > for static memory.
> > > This commit only covers allocating at specified starting address.
> > >
> > > For each page, it shall check if the page is reserved
> > > (PGC_reserved) and free. It shall also do a set of necessary
> > > initialization, which are mostly the same ones in alloc_heap_pages,
> > > like, following the same cache-coherency policy and turning page
> > > status into PGC_state_used, etc.
> > >
> > > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > > ---
> > >   xen/common/page_alloc.c | 64
> > +++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 64 insertions(+)
> > >
> > > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > > 58b53c6ac2..adf2889e76 100644
> > > --- a/xen/common/page_alloc.c
> > > +++ b/xen/common/page_alloc.c
> > > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> > >       return pg;
> > >   }
> > >
> > > +/*
> > > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > > + * It is the equivalent of alloc_heap_pages for static memory  */
> > > +static struct page_info *alloc_staticmem_pages(unsigned long
> > > +nr_pfns,
> >
> > This wants to be nr_mfns.
> >
> > > +                                                paddr_t start,
> >
> > I would prefer if this helper takes an mfn_t in parameter.
> >
> 
> Sure, I will change both.
> 
> > > +                                                unsigned int
> > > +memflags) {
> > > +    bool need_tlbflush = false;
> > > +    uint32_t tlbflush_timestamp = 0;
> > > +    unsigned int i;
> > > +    struct page_info *pg;
> > > +    mfn_t s_mfn;
> > > +
> > > +    /* For now, it only supports allocating at specified address. */
> > > +    s_mfn = maddr_to_mfn(start);
> > > +    pg = mfn_to_page(s_mfn);
> >
> > We should avoid to make the assumption the start address will be valid.
> > So you want to call mfn_valid() first.
> >
> > At the same time, there is no guarantee that if the first page is
> > valid, then the next nr_pfns will be. So the check should be performed for all
> of them.
> >
> 
> Ok. I'll do validation check on both of them.
> 
> > > +    if ( !pg )
> > > +        return NULL;
> > > +
> > > +    for ( i = 0; i < nr_pfns; i++)
> > > +    {
> > > +        /*
> > > +         * Reference count must continuously be zero for free pages
> > > +         * of static memory(PGC_reserved).
> > > +         */
> > > +        ASSERT(pg[i].count_info & PGC_reserved);
> > > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > > +        {
> > > +            printk(XENLOG_ERR
> > > +                    "Reference count must continuously be zero for free pages"
> > > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > > +                    i, mfn_x(page_to_mfn(pg + i)),
> > > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> > > +            BUG();
> >
> > So we would crash Xen if the caller pass a wrong range. Is it what we want?
> >
> > Also, who is going to prevent concurrent access?
> >
> 
> Sure, to fix concurrency issue, I may need to add one spinlock like `static
> DEFINE_SPINLOCK(staticmem_lock);`
> 
> In current alloc_heap_pages, it will do similar check, that pages in free state
> MUST have zero reference count. I guess, if condition not met, there is no need
> to proceed.
> 

Another thought on concurrency problem, when constructing patch v2, do we need to
consider concurrency here? 
heap_lock is to take care concurrent allocation on the one heap, but static memory is
always reserved for only one specific domain.

> > > +        }
> > > +
> > > +        if ( !(memflags & MEMF_no_tlbflush) )
> > > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > > +                                &tlbflush_timestamp);
> > > +
> > > +        /*
> > > +         * Reserve flag PGC_reserved and change page state
> > > +         * to PGC_state_inuse.
> > > +         */
> > > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> > PGC_state_inuse;
> > > +        /* Initialise fields which have other uses for free pages. */
> > > +        pg[i].u.inuse.type_info = 0;
> > > +        page_set_owner(&pg[i], NULL);
> > > +
> > > +        /*
> > > +         * Ensure cache and RAM are consistent for platforms where the
> > > +         * guest can control its own visibility of/through the cache.
> > > +         */
> > > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > > +                            !(memflags & MEMF_no_icache_flush));
> > > +    }
> > > +
> > > +    if ( need_tlbflush )
> > > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > > +
> > > +    return pg;
> > > +}
> > > +
> > >   /* Remove any offlined page in the buddy pointed to by head. */
> > >   static int reserve_offlined_page(struct page_info *head)
> > >   {
> > >
> >
> > Cheers,
> >
> > --
> > Julien Grall
> 
> Cheers,
> 
> Penny Zheng

Cheers

Penny

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
  2021-05-24 10:10       ` Penny Zheng
@ 2021-05-24 10:24         ` Julien Grall
  0 siblings, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-05-24 10:24 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd



On 24/05/2021 11:10, Penny Zheng wrote:
> Hi Julien

Hi Penny,

>>>> +    if ( !pg )
>>>> +        return NULL;
>>>> +
>>>> +    for ( i = 0; i < nr_pfns; i++)
>>>> +    {
>>>> +        /*
>>>> +         * Reference count must continuously be zero for free pages
>>>> +         * of static memory(PGC_reserved).
>>>> +         */
>>>> +        ASSERT(pg[i].count_info & PGC_reserved);
>>>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
>>>> +        {
>>>> +            printk(XENLOG_ERR
>>>> +                    "Reference count must continuously be zero for free pages"
>>>> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
>>>> +                    i, mfn_x(page_to_mfn(pg + i)),
>>>> +                    pg[i].count_info, pg[i].tlbflush_timestamp);
>>>> +            BUG();
>>>
>>> So we would crash Xen if the caller pass a wrong range. Is it what we want?
>>>
>>> Also, who is going to prevent concurrent access?
>>>
>>
>> Sure, to fix concurrency issue, I may need to add one spinlock like `static
>> DEFINE_SPINLOCK(staticmem_lock);`
>>
>> In current alloc_heap_pages, it will do similar check, that pages in free state
>> MUST have zero reference count. I guess, if condition not met, there is no need
>> to proceed.
>>
> 
> Another thought on concurrency problem, when constructing patch v2, do we need to
> consider concurrency here?
> heap_lock is to take care concurrent allocation on the one heap, but static memory is
> always reserved for only one specific domain.
In theory yes, but you are relying on the admin to correctly write the 
device-tree nodes.

You are probably not going to hit the problem today because the domains 
are created one by one. But, as you may want to allocate memory at 
runtime, this is quite important to get the code protected from 
concurrent access.

Here, you will likely want to use the heaplock rather than a new lock. 
So you are also protect against concurrent access to count_info from 
other part of Xen.


Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-05-20  8:50           ` Julien Grall
@ 2021-06-02 10:09             ` Penny Zheng
  2021-06-03  9:09               ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-06-02 10:09 UTC (permalink / raw)
  To: Julien Grall, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi Julien 

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 4:51 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> Hi,
> 
> On 20/05/2021 07:07, Penny Zheng wrote:
> >>> It will be consistent with the ones defined in the parent node, domUx.
> >> Hmmm... To take the example you provided, the parent would be chosen.
> >> However, from the example, I would expect the property #{address,
> >> size}-cells in domU1 to be used. What did I miss?
> >>
> >
> > Yeah, the property #{address, size}-cells in domU1 will be used. And
> > the parent node will be domU1.
> 
> You may have misunderstood what I meant. "domU1" is the node that
> contains the property "xen,static-mem". The parent node would be the one
> above (in our case "chosen").
> 

I re-re-reconsider what you meant here, hope this time I get what you mean, correct me if I'm wrong,
List an example here:

    / {
        reserved-memory {
            #address-cells = <2>;
            #size-cells = <2>;

            staticmemdomU1: static-memory@0x30000000 {
                compatible = "xen,static-memory-domain";
                reg = <0x0 0x30000000 0x0 0x20000000>;
            };
        };

        chosen {
            domU1 {
                compatible = "xen,domain";
                #address-cells = <0x1>;
                #size-cells = <0x1>;
                cpus = <2>;
                xen,static-mem = <&staticmemdomU1>;

               ...
            };
        };
    };

If user gives two different #address-cells and #size-cells in reserved-memory and domU1, Then when parsing it
through `xen,static-mem`, it may have unexpected answers.
I could not think a way to fix it properly in codes, do you have any suggestion? Or we just put a warning in doc/commits.

> >
> > The dtb property should look like as follows:
> >
> >          chosen {
> >              domU1 {
> >                  compatible = "xen,domain";
> >                  #address-cells = <0x2>;
> >                  #size-cells = <0x2>;
> >                  cpus = <2>;
> >                  xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> >
> >                  ...
> >              };
> >          };
> >
> >>> +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of
> >>> +512MB size
> >
> >>>>> +Static Allocation is only supported on AArch64 for now.
> >>>>
> >>>> The code doesn't seem to be AArch64 specific. So why can't this be
> >>>> used for 32-bit Arm?
> >>>>
> >>>
> >>> True, we have plans to make it also workable in AArch32 in the future.
> >>> Because we considered XEN on cortex-R52.
> >>
> >> All the code seems to be implemented in arm generic code. So isn't it
> >> already working?
> >>
> >>>>>     static int __init early_scan_node(const void *fdt,
> >>>>>                                       int node, const char *name, int depth,
> >>>>>                                       u32 address_cells, u32
> >>>>> size_cells, @@ -345,6 +394,9 @@ static int __init
> >>>>> early_scan_node(const
> >> void *fdt,
> >>>>>             process_multiboot_node(fdt, node, name, address_cells,
> size_cells);
> >>>>>         else if ( depth == 1 && device_tree_node_matches(fdt,
> >>>>> node,
> >> "chosen") )
> >>>>>             process_chosen_node(fdt, node, name, address_cells,
> >>>>> size_cells);
> >>>>> +    else if ( depth == 2 && fdt_get_property(fdt, node,
> >>>>> + "xen,static-mem",
> >>>> NULL) )
> >>>>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> >>>>> +                              size_cells, &bootinfo.static_mem);
> >>>>
> >>>> I am a bit concerned to add yet another method to parse the DT and
> >>>> all the extra code it will add like in patch #2.
> >>>>
> >>>>    From the host PoV, they are memory reserved for a specific purpose.
> >>>> Would it be possible to consider the reserve-memory binding for
> >>>> that purpose? This will happen outside of chosen, but we could use
> >>>> a phandle to refer the region.
> >>>>
> >>>
> >>> Correct me if I understand wrongly, do you mean what this device
> >>> tree
> >> snippet looks like:
> >>
> >> Yes, this is what I had in mind. Although I have one small remark below.
> >>
> >>
> >>> reserved-memory {
> >>>      #address-cells = <2>;
> >>>      #size-cells = <2>;
> >>>      ranges;
> >>>
> >>>      static-mem-domU1: static-mem@0x30000000{
> >>
> >> I think the node would need to contain a compatible (name to be defined).
> >>
> >
> > Ok, maybe, hmmm, how about "xen,static-memory"?
> 
> I would possibly add "domain" in the name to make clear this is domain
> memory. Stefano, what do you think?
> 
> Cheers,
> 
> 
> Julien Grall

Cheers,

Penny Zheng

^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
  2021-05-21  7:09           ` Jan Beulich
@ 2021-06-03  2:44             ` Penny Zheng
  0 siblings, 0 replies; 82+ messages in thread
From: Penny Zheng @ 2021-06-03  2:44 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Bertrand Marquis, Wei Chen, nd, xen-devel, sstabellini, julien

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Friday, May 21, 2021 3:09 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> On 21.05.2021 08:41, Penny Zheng wrote:
> > Hi Jan
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 7:23 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> >> sstabellini@kernel.org; julien@xen.org
> >> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> >>
> >> On 18.05.2021 10:57, Penny Zheng wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: Tuesday, May 18, 2021 3:35 PM
> >>>>
> >>>> On 18.05.2021 07:21, Penny Zheng wrote:
> >>>>> --- a/xen/common/page_alloc.c
> >>>>> +++ b/xen/common/page_alloc.c
> >>>>> @@ -2447,6 +2447,9 @@ int assign_pages(
> >>>>>      {
> >>>>>          ASSERT(page_get_owner(&pg[i]) == NULL);
> >>>>>          page_set_owner(&pg[i], d);
> >>>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
> >>>> */
> >>>>> +        if ( (pg[i].count_info & PGC_reserved) )
> >>>>> +            page_set_reserved_owner(&pg[i], d);
> >>>>
> >>>> Now this is puzzling: What's the point of setting two owner fields
> >>>> to the same value? I also don't recall you having introduced
> >>>> page_set_reserved_owner() for x86, so how is this going to build there?
> >>>>
> >>>
> >>> Thanks for pointing out that it will fail on x86.
> >>> As for the same value, sure, I shall change it to domid_t domid to
> >>> record its
> >> reserved owner.
> >>> Only domid is enough for differentiate.
> >>> And even when domain get rebooted, struct domain may be destroyed,
> >>> but domid will stays The same.
> >>
> >> Will it? Are you intending to put in place restrictions that make it
> >> impossible for the ID to get re-used by another domain?
> >>
> >>> Major user cases for domain on static allocation are referring to
> >>> the whole system are static, No runtime creation.
> >>
> >> Right, but that's not currently enforced afaics. If you would enforce
> >> it, it may simplify a number of things.
> >>
> >>>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >>>>>      return pg;
> >>>>>  }
> >>>>>
> >>>>> +/*
> >>>>> + * Allocate nr_pfns contiguous pages, starting at #start, of
> >>>>> +static memory,
> >>>>> + * then assign them to one specific domain #d.
> >>>>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>>>> + */
> >>>>> +struct page_info *alloc_domstatic_pages(
> >>>>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> >>>>> +        unsigned int memflags)
> >>>>> +{
> >>>>> +    struct page_info *pg = NULL;
> >>>>> +    unsigned long dma_size;
> >>>>> +
> >>>>> +    ASSERT(!in_irq());
> >>>>> +
> >>>>> +    if ( memflags & MEMF_no_owner )
> >>>>> +        memflags |= MEMF_no_refcount;
> >>>>> +
> >>>>> +    if ( !dma_bitsize )
> >>>>> +        memflags &= ~MEMF_no_dma;
> >>>>> +    else
> >>>>> +    {
> >>>>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> >>>>> +        /* Starting address shall meet the DMA limitation. */
> >>>>> +        if ( dma_size && start < dma_size )
> >>>>> +            return NULL;
> >>>>
> >>>> It is the entire range (i.e. in particular the last byte) which
> >>>> needs to meet such a restriction. I'm not convinced though that DMA
> >>>> width restrictions and static allocation are sensible to coexist.
> >>>>
> >>>
> >>> FWIT, if starting address meets the limitation, the last byte,
> >>> greater than starting address shall meet it too.
> >>
> >> I'm afraid I don't know what you're meaning to tell me here.
> >>
> >
> > Referring to alloc_domheap_pages, if `dma_bitsize` is none-zero value,
> > it will use  alloc_heap_pages to allocate pages from [dma_zone + 1,
> > zone_hi], `dma_zone + 1` pointing to address larger than 2^(dma_zone + 1).
> > So I was setting address limitation for the starting address.
> 
> But does this zone concept apply to static pages at all?
> 

Oh, so sorry. I finally got what you were asking here. Hmm, I was using the logic in
bits_to_zone to do the address bits translation. But I got, it will bring confusion. I'll
fix it. Thx.

> Jan

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-02 10:09             ` Penny Zheng
@ 2021-06-03  9:09               ` Julien Grall
  2021-06-03 21:32                 ` Stefano Stabellini
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-06-03  9:09 UTC (permalink / raw)
  To: Penny Zheng, xen-devel, sstabellini; +Cc: Bertrand Marquis, Wei Chen, nd

Hi,

On 02/06/2021 11:09, Penny Zheng wrote:
> Hi Julien
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Thursday, May 20, 2021 4:51 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>>
>> Hi,
>>
>> On 20/05/2021 07:07, Penny Zheng wrote:
>>>>> It will be consistent with the ones defined in the parent node, domUx.
>>>> Hmmm... To take the example you provided, the parent would be chosen.
>>>> However, from the example, I would expect the property #{address,
>>>> size}-cells in domU1 to be used. What did I miss?
>>>>
>>>
>>> Yeah, the property #{address, size}-cells in domU1 will be used. And
>>> the parent node will be domU1.
>>
>> You may have misunderstood what I meant. "domU1" is the node that
>> contains the property "xen,static-mem". The parent node would be the one
>> above (in our case "chosen").
>>
> 
> I re-re-reconsider what you meant here, hope this time I get what you mean, correct me if I'm wrong,
> List an example here:
> 
>      / {
>          reserved-memory {
>              #address-cells = <2>;
>              #size-cells = <2>;
> 
>              staticmemdomU1: static-memory@0x30000000 {
>                  compatible = "xen,static-memory-domain";
>                  reg = <0x0 0x30000000 0x0 0x20000000>;
>              };
>          };
> 
>          chosen {
>              domU1 {
>                  compatible = "xen,domain";
>                  #address-cells = <0x1>;
>                  #size-cells = <0x1>;
>                  cpus = <2>;
>                  xen,static-mem = <&staticmemdomU1>;
> 
>                 ...
>              };
>          };
>      };
> 
> If user gives two different #address-cells and #size-cells in reserved-memory and domU1, Then when parsing it
> through `xen,static-mem`, it may have unexpected answers.

Why are you using the #address-cells and #size-cells from the node domU1 
to parse staticmemdomU1?

> I could not think a way to fix it properly in codes, do you have any suggestion? Or we just put a warning in doc/commits.

The correct approach is to find the parent of staticmemdomU1 (i.e. 
reserved-memory) and use the #address-cells and #size-cells from there.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-03  9:09               ` Julien Grall
@ 2021-06-03 21:32                 ` Stefano Stabellini
  2021-06-03 22:07                   ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Stefano Stabellini @ 2021-06-03 21:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: Penny Zheng, xen-devel, sstabellini, Bertrand Marquis, Wei Chen, nd

I have not read most emails in this thread (sorry!) but I spotted this
discussion about device tree and I would like to reply to that as we
have discussed something very similar in the context of system device
tree.


On Thu, 3 Jun 2021, Julien Grall wrote:
> On 02/06/2021 11:09, Penny Zheng wrote:
> > Hi Julien
> > 
> > > -----Original Message-----
> > > From: Julien Grall <julien@xen.org>
> > > Sent: Thursday, May 20, 2021 4:51 PM
> > > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > > sstabellini@kernel.org
> > > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > > Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> > > 
> > > Hi,
> > > 
> > > On 20/05/2021 07:07, Penny Zheng wrote:
> > > > > > It will be consistent with the ones defined in the parent node,
> > > > > > domUx.
> > > > > Hmmm... To take the example you provided, the parent would be chosen.
> > > > > However, from the example, I would expect the property #{address,
> > > > > size}-cells in domU1 to be used. What did I miss?
> > > > > 
> > > > 
> > > > Yeah, the property #{address, size}-cells in domU1 will be used. And
> > > > the parent node will be domU1.
> > > 
> > > You may have misunderstood what I meant. "domU1" is the node that
> > > contains the property "xen,static-mem". The parent node would be the one
> > > above (in our case "chosen").
> > > 
> > 
> > I re-re-reconsider what you meant here, hope this time I get what you mean,
> > correct me if I'm wrong,
> > List an example here:
> > 
> >      / {
> >          reserved-memory {
> >              #address-cells = <2>;
> >              #size-cells = <2>;
> > 
> >              staticmemdomU1: static-memory@0x30000000 {
> >                  compatible = "xen,static-memory-domain";
> >                  reg = <0x0 0x30000000 0x0 0x20000000>;
> >              };
> >          };
> > 
> >          chosen {
> >              domU1 {
> >                  compatible = "xen,domain";
> >                  #address-cells = <0x1>;
> >                  #size-cells = <0x1>;
> >                  cpus = <2>;
> >                  xen,static-mem = <&staticmemdomU1>;
> > 
> >                 ...
> >              };
> >          };
> >      };
> > 
> > If user gives two different #address-cells and #size-cells in
> > reserved-memory and domU1, Then when parsing it
> > through `xen,static-mem`, it may have unexpected answers.
> 
> Why are you using the #address-cells and #size-cells from the node domU1 to
> parse staticmemdomU1?
> 
> > I could not think a way to fix it properly in codes, do you have any
> > suggestion? Or we just put a warning in doc/commits.
> 
> The correct approach is to find the parent of staticmemdomU1 (i.e.
> reserved-memory) and use the #address-cells and #size-cells from there.

Julien is right about how to parse the static-memory.

But I have a suggestion on the new binding. The /reserved-memory node is
a weird node: it is one of the very few node (the only node aside from
/chosen) which is about software configurations rather than hardware
description.

For this reason, in a device tree with multiple domains /reserved-memory
doesn't make a lot of sense: for which domain is the memory reserved?

This was one of the first points raised by Rob Herring in reviewing
system device tree.

So the solution we went for is the following: if there is a default
domain /reserved-memory applies to the default domain. Otherwise, each
domain is going to have its own reserved-memory. Example:

        domU1 {
            compatible = "xen,domain";
            #address-cells = <0x1>;
            #size-cells = <0x1>;
            cpus = <2>;

            reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
   
                static-memory@0x30000000 {
                    compatible = "xen,static-memory-domain";
                    reg = <0x0 0x30000000 0x0 0x20000000>;
                };
            };
        };


So I don't think we want to use reserved-memory for this, either
/reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
good to align it with system device tree and define it as a new property
under domU1.

In system device tree we would use a property called "memory" to specify
one or more ranges, e.g.:

    domU1 {
        memory = <0x0 0x500000 0x0 0x7fb00000>;

Unfortunately for xen,domains we have already defined "memory" to
specify an amount, rather than a range. That's too bad because the most
natural way to do this would be:

    domU1 {
        size = <amount>;
        memory = <ranges>;

When we'll introduce native system device tree support in Xen we'll be
able to do that. For now, we need to come up with a different property.
For instance: "static-memory" (other names are welcome if you have a
better suggestion).

We use a new property called "static-memory" together with
#static-memory-address-cells and #static-memory-size-cells to define how
many cells to use for address and size.

Example:

    domU1 {
        #static-memory-address-cells = <2>;
        #static-memory-size-cells = <2>;
        static-memory = <0x0 0x500000 0x0 0x7fb00000>;



Another alternative would be to extend the definition of the existing
"memory" property to potentially handle not just sizes but also address
and size pairs. There are a couple of ways to do that, including using
#memory-address-cells = <0>; to specify when memory only has a size, or
changing compatible string to "xen,domain-v2". But actually I would
avoid it. I would keep it simple and just introduce a new property like
"static-memory" (we can come up with a better name).



^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-03 21:32                 ` Stefano Stabellini
@ 2021-06-03 22:07                   ` Julien Grall
  2021-06-03 23:55                     ` Stefano Stabellini
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-06-03 22:07 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Penny Zheng, xen-devel, Bertrand Marquis, Wei Chen, nd

Hi,

On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
> On Thu, 3 Jun 2021, Julien Grall wrote:
> > On 02/06/2021 11:09, Penny Zheng wrote:
> > > I could not think a way to fix it properly in codes, do you have any
> > > suggestion? Or we just put a warning in doc/commits.
> >
> > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > reserved-memory) and use the #address-cells and #size-cells from there.
>
> Julien is right about how to parse the static-memory.
>
> But I have a suggestion on the new binding. The /reserved-memory node is
> a weird node: it is one of the very few node (the only node aside from
> /chosen) which is about software configurations rather than hardware
> description.
>
> For this reason, in a device tree with multiple domains /reserved-memory
> doesn't make a lot of sense: for which domain is the memory reserved?

IHMO, /reserved-memory refers to the memory that the hypervisor should
not touch. It is just a coincidence that most of the domains are then
passed through to dom0.

This also matches the fact that the GIC, /memory is consumed by the
hypervisor directly and not the domain..

>
> This was one of the first points raised by Rob Herring in reviewing
> system device tree.
>
> So the solution we went for is the following: if there is a default
> domain /reserved-memory applies to the default domain. Otherwise, each
> domain is going to have its own reserved-memory. Example:
>
>         domU1 {
>             compatible = "xen,domain";
>             #address-cells = <0x1>;
>             #size-cells = <0x1>;
>             cpus = <2>;
>
>             reserved-memory {
>                 #address-cells = <2>;
>                 #size-cells = <2>;
>
>                 static-memory@0x30000000 {
>                     compatible = "xen,static-memory-domain";
>                     reg = <0x0 0x30000000 0x0 0x20000000>;
>                 };
>             };
>         };

This sounds wrong to me because the memory is reserved from the
hypervisor PoV not from the domain. IOW, when I read this, I think the
memory will be reserved in the domain.

>
> So I don't think we want to use reserved-memory for this, either
> /reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
> good to align it with system device tree and define it as a new property
> under domU1.

Do you have any formal documentation of the system device-tree?

>
> In system device tree we would use a property called "memory" to specify
> one or more ranges, e.g.:
>
>     domU1 {
>         memory = <0x0 0x500000 0x0 0x7fb00000>;
>
> Unfortunately for xen,domains we have already defined "memory" to
> specify an amount, rather than a range. That's too bad because the most
> natural way to do this would be:
>
>     domU1 {
>         size = <amount>;
>         memory = <ranges>;
>
> When we'll introduce native system device tree support in Xen we'll be
> able to do that. For now, we need to come up with a different property.
> For instance: "static-memory" (other names are welcome if you have a
> better suggestion).
>
> We use a new property called "static-memory" together with
> #static-memory-address-cells and #static-memory-size-cells to define how
> many cells to use for address and size.
>
> Example:
>
>     domU1 {
>         #static-memory-address-cells = <2>;
>         #static-memory-size-cells = <2>;
>         static-memory = <0x0 0x500000 0x0 0x7fb00000>;

This is pretty similar to what Penny suggested. But I dislike it
because of the amount of code that needs to be duplicated with the
reserved memory.

Cheers,


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-03 22:07                   ` Julien Grall
@ 2021-06-03 23:55                     ` Stefano Stabellini
  2021-06-04  4:00                       ` Penny Zheng
  2021-06-07 18:09                       ` Julien Grall
  0 siblings, 2 replies; 82+ messages in thread
From: Stefano Stabellini @ 2021-06-03 23:55 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Penny Zheng, xen-devel, Bertrand Marquis,
	Wei Chen, nd

On Thu, 3 Jun 2021, Julien Grall wrote:
> On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > On Thu, 3 Jun 2021, Julien Grall wrote:
> > > On 02/06/2021 11:09, Penny Zheng wrote:
> > > > I could not think a way to fix it properly in codes, do you have any
> > > > suggestion? Or we just put a warning in doc/commits.
> > >
> > > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > > reserved-memory) and use the #address-cells and #size-cells from there.
> >
> > Julien is right about how to parse the static-memory.
> >
> > But I have a suggestion on the new binding. The /reserved-memory node is
> > a weird node: it is one of the very few node (the only node aside from
> > /chosen) which is about software configurations rather than hardware
> > description.
> >
> > For this reason, in a device tree with multiple domains /reserved-memory
> > doesn't make a lot of sense: for which domain is the memory reserved?
> 
> IHMO, /reserved-memory refers to the memory that the hypervisor should
> not touch. It is just a coincidence that most of the domains are then
> passed through to dom0.
>
> This also matches the fact that the GIC, /memory is consumed by the
> hypervisor directly and not the domain..

In system device tree one of the key principles is to distinguish between
hardware description and domains configuration. The domains
configuration is under /domains (originally it was under /chosen then
the DT maintainers requested to move it to its own top-level node), while
everything else is for hardware description.

/chosen and /reserved-memory are exceptions. They are top-level nodes
but they are for software configurations. In system device tree
configurations go under the domain node. This makes sense: Xen, dom0 and
domU can all have different reserved-memory and chosen configurations.

/domains/domU1/reserved-memory gives us a clear way to express
reserved-memory configurations for domU1.

Which leaves us with /reserved-memory. Who is that for? It is for the
default domain.

The default domain is the one receiving all devices by default. In a Xen
setting, it is probably Dom0. In this case, we don't want to add
reserved-memory regions for DomUs to Dom0's list. Dom0's reserved-memory
list is for its own drivers. We could also make an argument that the
default domain is Xen itself. From a spec perspective, that would be
fine too. In this case, /reserved-memory is a list of memory regions
reserved for Xen drivers.  Either way, I don't think is a great fit for
domains memory allocations.


> > This was one of the first points raised by Rob Herring in reviewing
> > system device tree.
> >
> > So the solution we went for is the following: if there is a default
> > domain /reserved-memory applies to the default domain. Otherwise, each
> > domain is going to have its own reserved-memory. Example:
> >
> >         domU1 {
> >             compatible = "xen,domain";
> >             #address-cells = <0x1>;
> >             #size-cells = <0x1>;
> >             cpus = <2>;
> >
> >             reserved-memory {
> >                 #address-cells = <2>;
> >                 #size-cells = <2>;
> >
> >                 static-memory@0x30000000 {
> >                     compatible = "xen,static-memory-domain";
> >                     reg = <0x0 0x30000000 0x0 0x20000000>;
> >                 };
> >             };
> >         };
> 
> This sounds wrong to me because the memory is reserved from the
> hypervisor PoV not from the domain. IOW, when I read this, I think the
> memory will be reserved in the domain.

It is definitely very wrong to place the static-memory allocation under
/chosen/domU1/reserved-memory. Sorry if I caused confusion. I only meant
it as an example of how reserved-memory (actual reserved-memory list
driver-specific memory ranges) is used.


> >
> > So I don't think we want to use reserved-memory for this, either
> > /reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
> > good to align it with system device tree and define it as a new property
> > under domU1.
> 
> Do you have any formal documentation of the system device-tree?

It lives here:
https://github.com/devicetree-org/lopper/tree/master/specification

Start from specification.md. It is the oldest part of the spec, so it is
not yet written with a formal specification language.

FYI there are a number of things in-flight in regards to domains that
we discussed in the last call but they are not yet settled, thus, they
are not yet committed (access flags definitions and hierarchical
domains). However, they don't affect domains memory allocations so from
that perspective nothing has changed.


> > In system device tree we would use a property called "memory" to specify
> > one or more ranges, e.g.:
> >
> >     domU1 {
> >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> >
> > Unfortunately for xen,domains we have already defined "memory" to
> > specify an amount, rather than a range. That's too bad because the most
> > natural way to do this would be:
> >
> >     domU1 {
> >         size = <amount>;
> >         memory = <ranges>;
> >
> > When we'll introduce native system device tree support in Xen we'll be
> > able to do that. For now, we need to come up with a different property.
> > For instance: "static-memory" (other names are welcome if you have a
> > better suggestion).
> >
> > We use a new property called "static-memory" together with
> > #static-memory-address-cells and #static-memory-size-cells to define how
> > many cells to use for address and size.
> >
> > Example:
> >
> >     domU1 {
> >         #static-memory-address-cells = <2>;
> >         #static-memory-size-cells = <2>;
> >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> 
> This is pretty similar to what Penny suggested. But I dislike it
> because of the amount of code that needs to be duplicated with the
> reserved memory.

Where is the code duplication? In the parsing itself?

If there is code duplication, can we find a way to share some of the
code to avoid the duplication?


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-03 23:55                     ` Stefano Stabellini
@ 2021-06-04  4:00                       ` Penny Zheng
  2021-06-05  2:00                         ` Stefano Stabellini
  2021-06-07 18:09                       ` Julien Grall
  1 sibling, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-06-04  4:00 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: xen-devel, Bertrand Marquis, Wei Chen, nd

Hi stefano and julien 

> -----Original Message-----
> From: Stefano Stabellini <sstabellini@kernel.org>
> Sent: Friday, June 4, 2021 7:56 AM
> To: Julien Grall <julien.grall.oss@gmail.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Penny Zheng
> <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Wei Chen <Wei.Chen@arm.com>; nd
> <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> On Thu, 3 Jun 2021, Julien Grall wrote:
> > On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org>
> wrote:
> > > On Thu, 3 Jun 2021, Julien Grall wrote:
> > > > On 02/06/2021 11:09, Penny Zheng wrote:
> > > > > I could not think a way to fix it properly in codes, do you have
> > > > > any suggestion? Or we just put a warning in doc/commits.
> > > >
> > > > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > > > reserved-memory) and use the #address-cells and #size-cells from there.
> > >
> > > Julien is right about how to parse the static-memory.
> > >
> > > But I have a suggestion on the new binding. The /reserved-memory
> > > node is a weird node: it is one of the very few node (the only node
> > > aside from
> > > /chosen) which is about software configurations rather than hardware
> > > description.
> > >
> > > For this reason, in a device tree with multiple domains
> > > /reserved-memory doesn't make a lot of sense: for which domain is the
> memory reserved?
> >
> > IHMO, /reserved-memory refers to the memory that the hypervisor should
> > not touch. It is just a coincidence that most of the domains are then
> > passed through to dom0.
> >
> > This also matches the fact that the GIC, /memory is consumed by the
> > hypervisor directly and not the domain..
> 
> In system device tree one of the key principles is to distinguish between
> hardware description and domains configuration. The domains configuration
> is under /domains (originally it was under /chosen then the DT maintainers
> requested to move it to its own top-level node), while everything else is for
> hardware description.
> 
> /chosen and /reserved-memory are exceptions. They are top-level nodes but
> they are for software configurations. In system device tree configurations go
> under the domain node. This makes sense: Xen, dom0 and domU can all have
> different reserved-memory and chosen configurations.
> 
> /domains/domU1/reserved-memory gives us a clear way to express reserved-
> memory configurations for domU1.
> 
> Which leaves us with /reserved-memory. Who is that for? It is for the default
> domain.
> 
> The default domain is the one receiving all devices by default. In a Xen setting,
> it is probably Dom0. In this case, we don't want to add reserved-memory
> regions for DomUs to Dom0's list. Dom0's reserved-memory list is for its own
> drivers. We could also make an argument that the default domain is Xen itself.
> From a spec perspective, that would be fine too. In this case, /reserved-
> memory is a list of memory regions reserved for Xen drivers.  Either way, I don't
> think is a great fit for domains memory allocations.
> 
> 
> > > This was one of the first points raised by Rob Herring in reviewing
> > > system device tree.
> > >
> > > So the solution we went for is the following: if there is a default
> > > domain /reserved-memory applies to the default domain. Otherwise,
> > > each domain is going to have its own reserved-memory. Example:
> > >
> > >         domU1 {
> > >             compatible = "xen,domain";
> > >             #address-cells = <0x1>;
> > >             #size-cells = <0x1>;
> > >             cpus = <2>;
> > >
> > >             reserved-memory {
> > >                 #address-cells = <2>;
> > >                 #size-cells = <2>;
> > >
> > >                 static-memory@0x30000000 {
> > >                     compatible = "xen,static-memory-domain";
> > >                     reg = <0x0 0x30000000 0x0 0x20000000>;
> > >                 };
> > >             };
> > >         };
> >
> > This sounds wrong to me because the memory is reserved from the
> > hypervisor PoV not from the domain. IOW, when I read this, I think the
> > memory will be reserved in the domain.
> 
> It is definitely very wrong to place the static-memory allocation under
> /chosen/domU1/reserved-memory. Sorry if I caused confusion. I only meant it
> as an example of how reserved-memory (actual reserved-memory list driver-
> specific memory ranges) is used.
> 
> 
> > >
> > > So I don't think we want to use reserved-memory for this, either
> > > /reserved-memory or /chosen/domU1/reserved-memory. Instead it would
> > > be good to align it with system device tree and define it as a new
> > > property under domU1.
> >
> > Do you have any formal documentation of the system device-tree?
> 
> It lives here:
> https://github.com/devicetree-org/lopper/tree/master/specification
> 
> Start from specification.md. It is the oldest part of the spec, so it is not yet
> written with a formal specification language.
> 
> FYI there are a number of things in-flight in regards to domains that we
> discussed in the last call but they are not yet settled, thus, they are not yet
> committed (access flags definitions and hierarchical domains). However, they
> don't affect domains memory allocations so from that perspective nothing has
> changed.
> 
> 
> > > In system device tree we would use a property called "memory" to
> > > specify one or more ranges, e.g.:
> > >
> > >     domU1 {
> > >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> > >
> > > Unfortunately for xen,domains we have already defined "memory" to
> > > specify an amount, rather than a range. That's too bad because the
> > > most natural way to do this would be:
> > >
> > >     domU1 {
> > >         size = <amount>;
> > >         memory = <ranges>;
> > >
> > > When we'll introduce native system device tree support in Xen we'll
> > > be able to do that. For now, we need to come up with a different property.
> > > For instance: "static-memory" (other names are welcome if you have a
> > > better suggestion).
> > >
> > > We use a new property called "static-memory" together with
> > > #static-memory-address-cells and #static-memory-size-cells to define
> > > how many cells to use for address and size.
> > >
> > > Example:
> > >
> > >     domU1 {
> > >         #static-memory-address-cells = <2>;
> > >         #static-memory-size-cells = <2>;
> > >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> >
> > This is pretty similar to what Penny suggested. But I dislike it
> > because of the amount of code that needs to be duplicated with the
> > reserved memory.
> 
> Where is the code duplication? In the parsing itself?
> 
> If there is code duplication, can we find a way to share some of the code to
> avoid the duplication?

Both your opinions are so convincing... :/

Correct me if I am wrong:
I think the duplication which Julien means are here, See commit 
https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-3-penny.zheng@arm.com/
I added a another similar loop in dt_unreserved_regions to unreserve static memory.
For this part, I could try to extract common codes.

But another part I think is just this commit, where I added another check for static memory
in early_scan_node:

+    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
+        process_static_memory(fdt, node, "xen,static-mem", address_cells,
+                              size_cells, &bootinfo.static_mem);

TBH, I don't know how to fix here....

I've already finished Patch v2, we could continue discussing on how to define it in
Device tree here, and it will be included in Patch v3~~~ 😉

Cheers
Penny Zheng


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-04  4:00                       ` Penny Zheng
@ 2021-06-05  2:00                         ` Stefano Stabellini
  0 siblings, 0 replies; 82+ messages in thread
From: Stefano Stabellini @ 2021-06-05  2:00 UTC (permalink / raw)
  To: Penny Zheng
  Cc: Stefano Stabellini, Julien Grall, xen-devel, Bertrand Marquis,
	Wei Chen, nd

On Fri, 4 Jun 2021, Penny Zheng wrote:
> > > > In system device tree we would use a property called "memory" to
> > > > specify one or more ranges, e.g.:
> > > >
> > > >     domU1 {
> > > >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> > > >
> > > > Unfortunately for xen,domains we have already defined "memory" to
> > > > specify an amount, rather than a range. That's too bad because the
> > > > most natural way to do this would be:
> > > >
> > > >     domU1 {
> > > >         size = <amount>;
> > > >         memory = <ranges>;
> > > >
> > > > When we'll introduce native system device tree support in Xen we'll
> > > > be able to do that. For now, we need to come up with a different property.
> > > > For instance: "static-memory" (other names are welcome if you have a
> > > > better suggestion).
> > > >
> > > > We use a new property called "static-memory" together with
> > > > #static-memory-address-cells and #static-memory-size-cells to define
> > > > how many cells to use for address and size.
> > > >
> > > > Example:
> > > >
> > > >     domU1 {
> > > >         #static-memory-address-cells = <2>;
> > > >         #static-memory-size-cells = <2>;
> > > >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> > >
> > > This is pretty similar to what Penny suggested. But I dislike it
> > > because of the amount of code that needs to be duplicated with the
> > > reserved memory.
> > 
> > Where is the code duplication? In the parsing itself?
> > 
> > If there is code duplication, can we find a way to share some of the code to
> > avoid the duplication?
> 
> Both your opinions are so convincing... :/
> 
> Correct me if I am wrong:
> I think the duplication which Julien means are here, See commit 
> https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-3-penny.zheng@arm.com/
> I added a another similar loop in dt_unreserved_regions to unreserve static memory.
> For this part, I could try to extract common codes.
> 
> But another part I think is just this commit, where I added another check for static memory
> in early_scan_node:
> 
> +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> +                              size_cells, &bootinfo.static_mem);
> 
> TBH, I don't know how to fix here....

Is it only one loop in dt_unreserved_regions and another call to
process_static_memory? If you can make the code common in
dt_unreserved_regions there wouldn't be much duplication left.


To explain my point of view a bit better, I think we have a lot more
freedom in the Xen implementation compared to the device tree
specification. For the sake of an example, let's say that we wanted Xen
to reuse bootinfo.reserved_mem for both reserved-memory and
static-memory. I don't know if it is even a good idea (I haven't looked
into it, it is just an example) but I think it would be OK, we could
decide to do that.

We have less room for flexibility in the device tree specification.
/reserved-memory is for special configurations of 1 domain only. I don't
think we could add domain static memory allocations to it.


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-03 23:55                     ` Stefano Stabellini
  2021-06-04  4:00                       ` Penny Zheng
@ 2021-06-07 18:09                       ` Julien Grall
  2021-06-09  9:56                         ` Bertrand Marquis
  1 sibling, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-06-07 18:09 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: Penny Zheng, xen-devel, Bertrand Marquis, Wei Chen, nd

Hi Stefano,

On 04/06/2021 00:55, Stefano Stabellini wrote:
> On Thu, 3 Jun 2021, Julien Grall wrote:
>> On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> On Thu, 3 Jun 2021, Julien Grall wrote:
>>>> On 02/06/2021 11:09, Penny Zheng wrote:
>>>>> I could not think a way to fix it properly in codes, do you have any
>>>>> suggestion? Or we just put a warning in doc/commits.
>>>>
>>>> The correct approach is to find the parent of staticmemdomU1 (i.e.
>>>> reserved-memory) and use the #address-cells and #size-cells from there.
>>>
>>> Julien is right about how to parse the static-memory.
>>>
>>> But I have a suggestion on the new binding. The /reserved-memory node is
>>> a weird node: it is one of the very few node (the only node aside from
>>> /chosen) which is about software configurations rather than hardware
>>> description.
>>>
>>> For this reason, in a device tree with multiple domains /reserved-memory
>>> doesn't make a lot of sense: for which domain is the memory reserved?
>>
>> IHMO, /reserved-memory refers to the memory that the hypervisor should
>> not touch. It is just a coincidence that most of the domains are then
>> passed through to dom0.
>>
>> This also matches the fact that the GIC, /memory is consumed by the
>> hypervisor directly and not the domain..
> 
> In system device tree one of the key principles is to distinguish between
> hardware description and domains configuration. The domains
> configuration is under /domains (originally it was under /chosen then
> the DT maintainers requested to move it to its own top-level node), while
> everything else is for hardware description.
> 
> /chosen and /reserved-memory are exceptions. They are top-level nodes
> but they are for software configurations. In system device tree
> configurations go under the domain node. This makes sense: Xen, dom0 and
> domU can all have different reserved-memory and chosen configurations.
> 
> /domains/domU1/reserved-memory gives us a clear way to express
> reserved-memory configurations for domU1.
> 
> Which leaves us with /reserved-memory. Who is that for? It is for the
> default domain.
> 
> The default domain is the one receiving all devices by default. In a Xen
> setting, it is probably Dom0. 

Let's take an example, let say in the future someone wants to allocate a 
specific region for the memory used by the GICv3 ITS.

 From what you said above, /reserved-memory would be used by dom0. So 
how would you be able to tell the hypervisor that the region is reserved 
for itself?

> In this case, we don't want to add
> reserved-memory regions for DomUs to Dom0's list. Dom0's reserved-memory
> list is for its own drivers. We could also make an argument that the
> default domain is Xen itself. From a spec perspective, that would be
> fine too. In this case, /reserved-memory is a list of memory regions
> reserved for Xen drivers. 

We seem to have a different way to read the binding description in [1]. 
For convenience, I will copy it here:

"Reserved memory is specified as a node under the /reserved-memory node.
The operating system shall exclude reserved memory from normal usage
one can create child nodes describing particular reserved (excluded from
normal use) memory regions. Such memory regions are usually designed for
the special usage by various device drivers.
"

I read it as this can be used to exclude any memory from the allocator 
for a specific purpose. They give the example of device drivers, but 
they don't exclude other purpose. So...

> Either way, I don't think is a great fit for
> domains memory allocations.

... I don't really understand why this is not a great fit. The regions 
have been *reserved* for a purpose.

>>>
>>> So I don't think we want to use reserved-memory for this, either
>>> /reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
>>> good to align it with system device tree and define it as a new property
>>> under domU1.
>>
>> Do you have any formal documentation of the system device-tree?
> 
> It lives here:
> https://github.com/devicetree-org/lopper/tree/master/specification
> 
> Start from specification.md. It is the oldest part of the spec, so it is
> not yet written with a formal specification language.
> 
> FYI there are a number of things in-flight in regards to domains that
> we discussed in the last call but they are not yet settled, thus, they
> are not yet committed (access flags definitions and hierarchical
> domains). However, they don't affect domains memory allocations so from
> that perspective nothing has changed.

Thanks!

> 
> 
>>> In system device tree we would use a property called "memory" to specify
>>> one or more ranges, e.g.:
>>>
>>>      domU1 {
>>>          memory = <0x0 0x500000 0x0 0x7fb00000>;
>>>
>>> Unfortunately for xen,domains we have already defined "memory" to
>>> specify an amount, rather than a range. That's too bad because the most
>>> natural way to do this would be:
>>>
>>>      domU1 {
>>>          size = <amount>;
>>>          memory = <ranges>;
>>>
>>> When we'll introduce native system device tree support in Xen we'll be
>>> able to do that. For now, we need to come up with a different property.
>>> For instance: "static-memory" (other names are welcome if you have a
>>> better suggestion).
>>>
>>> We use a new property called "static-memory" together with
>>> #static-memory-address-cells and #static-memory-size-cells to define how
>>> many cells to use for address and size.
>>>
>>> Example:
>>>
>>>      domU1 {
>>>          #static-memory-address-cells = <2>;
>>>          #static-memory-size-cells = <2>;
>>>          static-memory = <0x0 0x500000 0x0 0x7fb00000>;
>>
>> This is pretty similar to what Penny suggested. But I dislike it
>> because of the amount of code that needs to be duplicated with the
>> reserved memory.
> 
> Where is the code duplication? In the parsing itself?

So the problem is we need an entire new way to parse and walk yet 
another binding that will describe memory excluded from normal allocator 
hypervisor.

The code is pretty much the same as parsing /reserved-memory except it 
will be on a different address cells, size cells, property.

> 
> If there is code duplication, can we find a way to share some of the
> code to avoid the duplication?

Feel free to propose one. I suggested to use /reserved-memory because 
this is the approach that makes the most sense to me (see my reply above).

TBH, even after your explanation, I am still a bit puzzled into why 
/reserved-memory cannot be leveraged to exclude domain region from the 
hypervisor allocator.

Cheers,

[1] 
https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-07 18:09                       ` Julien Grall
@ 2021-06-09  9:56                         ` Bertrand Marquis
  2021-06-09 10:47                           ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Bertrand Marquis @ 2021-06-09  9:56 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Julien Grall, Penny Zheng, xen-devel, Wei Chen, nd

[-- Attachment #1: Type: text/plain, Size: 7640 bytes --]

Hi All,

On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org<mailto:julien@xen.org>> wrote:

Hi Stefano,

On 04/06/2021 00:55, Stefano Stabellini wrote:
On Thu, 3 Jun 2021, Julien Grall wrote:
On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@kernel.org<mailto:sstabellini@kernel.org>> wrote:
On Thu, 3 Jun 2021, Julien Grall wrote:
On 02/06/2021 11:09, Penny Zheng wrote:
I could not think a way to fix it properly in codes, do you have any
suggestion? Or we just put a warning in doc/commits.

The correct approach is to find the parent of staticmemdomU1 (i.e.
reserved-memory) and use the #address-cells and #size-cells from there.

Julien is right about how to parse the static-memory.

But I have a suggestion on the new binding. The /reserved-memory node is
a weird node: it is one of the very few node (the only node aside from
/chosen) which is about software configurations rather than hardware
description.

For this reason, in a device tree with multiple domains /reserved-memory
doesn't make a lot of sense: for which domain is the memory reserved?

IHMO, /reserved-memory refers to the memory that the hypervisor should
not touch. It is just a coincidence that most of the domains are then
passed through to dom0.

This also matches the fact that the GIC, /memory is consumed by the
hypervisor directly and not the domain..
In system device tree one of the key principles is to distinguish between
hardware description and domains configuration. The domains
configuration is under /domains (originally it was under /chosen then
the DT maintainers requested to move it to its own top-level node), while
everything else is for hardware description.
/chosen and /reserved-memory are exceptions. They are top-level nodes
but they are for software configurations. In system device tree
configurations go under the domain node. This makes sense: Xen, dom0 and
domU can all have different reserved-memory and chosen configurations.
/domains/domU1/reserved-memory gives us a clear way to express
reserved-memory configurations for domU1.
Which leaves us with /reserved-memory. Who is that for? It is for the
default domain.
The default domain is the one receiving all devices by default. In a Xen
setting, it is probably Dom0.

Let's take an example, let say in the future someone wants to allocate a specific region for the memory used by the GICv3 ITS.

From what you said above, /reserved-memory would be used by dom0. So how would you be able to tell the hypervisor that the region is reserved for itself?

In this case, we don't want to add
reserved-memory regions for DomUs to Dom0's list. Dom0's reserved-memory
list is for its own drivers. We could also make an argument that the
default domain is Xen itself. From a spec perspective, that would be
fine too. In this case, /reserved-memory is a list of memory regions
reserved for Xen drivers.

We seem to have a different way to read the binding description in [1]. For convenience, I will copy it here:

"Reserved memory is specified as a node under the /reserved-memory node.
The operating system shall exclude reserved memory from normal usage
one can create child nodes describing particular reserved (excluded from
normal use) memory regions. Such memory regions are usually designed for
the special usage by various device drivers.
"

I read it as this can be used to exclude any memory from the allocator for a specific purpose. They give the example of device drivers, but they don't exclude other purpose. So...

Either way, I don't think is a great fit for
domains memory allocations.

... I don't really understand why this is not a great fit. The regions have been *reserved* for a purpose.


So I don't think we want to use reserved-memory for this, either
/reserved-memory or /chosen/domU1/reserved-memory. Instead it would be
good to align it with system device tree and define it as a new property
under domU1.

Do you have any formal documentation of the system device-tree?
It lives here:
https://github.com/devicetree-org/lopper/tree/master/specification
Start from specification.md. It is the oldest part of the spec, so it is
not yet written with a formal specification language.
FYI there are a number of things in-flight in regards to domains that
we discussed in the last call but they are not yet settled, thus, they
are not yet committed (access flags definitions and hierarchical
domains). However, they don't affect domains memory allocations so from
that perspective nothing has changed.

Thanks!

In system device tree we would use a property called "memory" to specify
one or more ranges, e.g.:

    domU1 {
        memory = <0x0 0x500000 0x0 0x7fb00000>;

Unfortunately for xen,domains we have already defined "memory" to
specify an amount, rather than a range. That's too bad because the most
natural way to do this would be:

    domU1 {
        size = <amount>;
        memory = <ranges>;

When we'll introduce native system device tree support in Xen we'll be
able to do that. For now, we need to come up with a different property.
For instance: "static-memory" (other names are welcome if you have a
better suggestion).

We use a new property called "static-memory" together with
#static-memory-address-cells and #static-memory-size-cells to define how
many cells to use for address and size.

Example:

    domU1 {
        #static-memory-address-cells = <2>;
        #static-memory-size-cells = <2>;
        static-memory = <0x0 0x500000 0x0 0x7fb00000>;

This is pretty similar to what Penny suggested. But I dislike it
because of the amount of code that needs to be duplicated with the
reserved memory.
Where is the code duplication? In the parsing itself?

So the problem is we need an entire new way to parse and walk yet another binding that will describe memory excluded from normal allocator hypervisor.

The code is pretty much the same as parsing /reserved-memory except it will be on a different address cells, size cells, property.

If there is code duplication, can we find a way to share some of the
code to avoid the duplication?

Feel free to propose one. I suggested to use /reserved-memory because this is the approach that makes the most sense to me (see my reply above).

TBH, even after your explanation, I am still a bit puzzled into why /reserved-memory cannot be leveraged to exclude domain region from the hypervisor allocator.

I really tend to think that the original solution from Penny is for now the easiest and simplest to document.

In the long term, using directly memory and giving in it the address range directly is the most natural solution but that would clash with the current usage for it.

I would like to suggest the following approach:
- keep original solution from Penny
- start to discuss a domain v2 so that we could solve current issues we have including the passthrough devices which are not really easy to define.
As a user I would just expect to put a device tree or links in a domain definition to define its characteristic and devices and using the standard names (memory for example).
Also I must admit I need to read more the system device tree spec to check if we could just use it directly (and be compliant to a standard).

Would that approach be acceptable ?
I am more then happy to drive a working group on rethinking the device tree together with Penny.

Cheers
Bertrand


Cheers,

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt

--
Julien Grall


[-- Attachment #2: Type: text/html, Size: 35938 bytes --]

^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-09  9:56                         ` Bertrand Marquis
@ 2021-06-09 10:47                           ` Julien Grall
  2021-06-15  6:08                             ` Penny Zheng
  0 siblings, 1 reply; 82+ messages in thread
From: Julien Grall @ 2021-06-09 10:47 UTC (permalink / raw)
  To: Bertrand Marquis
  Cc: Stefano Stabellini, Julien Grall, Penny Zheng, xen-devel, Wei Chen, nd



On 09/06/2021 10:56, Bertrand Marquis wrote:
> Hi All,

Hi,

>> On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org 
>> <mailto:julien@xen.org>> wrote:
>> Feel free to propose one. I suggested to use /reserved-memory because 
>> this is the approach that makes the most sense to me (see my reply above).
>>
>> TBH, even after your explanation, I am still a bit puzzled into why 
>> /reserved-memory cannot be leveraged to exclude domain region from the 
>> hypervisor allocator.
> 
> I really tend to think that the original solution from Penny is for now 
> the easiest and simplest to document.

I can live with Penny's solution so long we don't duplicate the parsing 
and we don't create new datastructure in Xen for the new type of 
reserved memory. However...

> 
> In the long term, using directly memory and giving in it the address 
> range directly is the most natural solution but that would clash with 
> the current usage for it.

... we are already going to have quite some churn to support the system 
device-tree. So I don't want yet another binding to be invented in a few 
months time.

IOW, the new binding should be a long term solution rather than a 
temporary one to fill the gap until we agree on what you call "domain v2".

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-09 10:47                           ` Julien Grall
@ 2021-06-15  6:08                             ` Penny Zheng
  2021-06-17 11:22                               ` Julien Grall
  0 siblings, 1 reply; 82+ messages in thread
From: Penny Zheng @ 2021-06-15  6:08 UTC (permalink / raw)
  To: Julien Grall, Bertrand Marquis
  Cc: Stefano Stabellini, Julien Grall, xen-devel, Wei Chen, nd

Hi julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Wednesday, June 9, 2021 6:47 PM
> To: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien.grall.oss@gmail.com>; Penny Zheng <Penny.Zheng@arm.com>; xen-
> devel@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; nd
> <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> 
> 
> On 09/06/2021 10:56, Bertrand Marquis wrote:
> > Hi All,
> 
> Hi,
> 
> >> On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org
> >> <mailto:julien@xen.org>> wrote:
> >> Feel free to propose one. I suggested to use /reserved-memory because
> >> this is the approach that makes the most sense to me (see my reply above).
> >>
> >> TBH, even after your explanation, I am still a bit puzzled into why
> >> /reserved-memory cannot be leveraged to exclude domain region from
> >> the hypervisor allocator.
> >
> > I really tend to think that the original solution from Penny is for
> > now the easiest and simplest to document.
> 
> I can live with Penny's solution so long we don't duplicate the parsing and we
> don't create new datastructure in Xen for the new type of reserved memory.
> However...
> 

Just to confirm what I understand here, you are not only worrying about the duplication code imported in
dt_unreserved_regions, but also must introducing another path in func early_scan_node to parse my first implementation
"xen,static-mem = <...>", right?

On duplication code part, I could think of a way to extract common codes to fix, but for introducing another new path to parse,
FWIT, it is inevitable if not re-using reserved-memory. ;/

> > In the long term, using directly memory and giving in it the address
> > range directly is the most natural solution but that would clash with
> > the current usage for it.
> 
> ... we are already going to have quite some churn to support the system
> device-tree. So I don't want yet another binding to be invented in a few
> months time.
> 
> IOW, the new binding should be a long term solution rather than a temporary
> one to fill the gap until we agree on what you call "domain v2".
> 
> Cheers,
> 
> --

Cheers

Penny
> Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

* Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
  2021-06-15  6:08                             ` Penny Zheng
@ 2021-06-17 11:22                               ` Julien Grall
  0 siblings, 0 replies; 82+ messages in thread
From: Julien Grall @ 2021-06-17 11:22 UTC (permalink / raw)
  To: Penny Zheng, Bertrand Marquis
  Cc: Stefano Stabellini, Julien Grall, xen-devel, Wei Chen, nd

On 15/06/2021 08:08, Penny Zheng wrote:
> Hi julien

Hi Penny,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Wednesday, June 9, 2021 6:47 PM
>> To: Bertrand Marquis <Bertrand.Marquis@arm.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
>> <julien.grall.oss@gmail.com>; Penny Zheng <Penny.Zheng@arm.com>; xen-
>> devel@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; nd
>> <nd@arm.com>
>> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>>
>>
>>
>> On 09/06/2021 10:56, Bertrand Marquis wrote:
>>> Hi All,
>>
>> Hi,
>>
>>>> On 7 Jun 2021, at 19:09, Julien Grall <julien@xen.org
>>>> <mailto:julien@xen.org>> wrote:
>>>> Feel free to propose one. I suggested to use /reserved-memory because
>>>> this is the approach that makes the most sense to me (see my reply above).
>>>>
>>>> TBH, even after your explanation, I am still a bit puzzled into why
>>>> /reserved-memory cannot be leveraged to exclude domain region from
>>>> the hypervisor allocator.
>>>
>>> I really tend to think that the original solution from Penny is for
>>> now the easiest and simplest to document.
>>
>> I can live with Penny's solution so long we don't duplicate the parsing and we
>> don't create new datastructure in Xen for the new type of reserved memory.
>> However...
>>
> 
> Just to confirm what I understand here, you are not only worrying about the duplication code imported in
> dt_unreserved_regions, but also must introducing another path in func early_scan_node to parse my first implementation
> "xen,static-mem = <...>", right?

That's correct.

> 
> On duplication code part, I could think of a way to extract common codes to fix, but for introducing another new path to parse,
> FWIT, it is inevitable if not re-using reserved-memory. ;/

I don't think this is inevitable. If you look at the code, we already 
share the parsing between reserved-memory and memory.

AFAICT, the main difference now is the property to parse. Other than 
that, the content is exactly the same. So we could pass the name of the 
property to parse.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 82+ messages in thread

end of thread, other threads:[~2021-06-17 11:22 UTC | newest]

Thread overview: 82+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-18  5:21 [PATCH 00/10] Domain on Static Allocation Penny Zheng
2021-05-18  5:21 ` [PATCH 01/10] xen/arm: introduce domain " Penny Zheng
2021-05-18  8:58   ` Julien Grall
2021-05-19  2:22     ` Penny Zheng
2021-05-19 18:27       ` Julien Grall
2021-05-20  6:07         ` Penny Zheng
2021-05-20  8:50           ` Julien Grall
2021-06-02 10:09             ` Penny Zheng
2021-06-03  9:09               ` Julien Grall
2021-06-03 21:32                 ` Stefano Stabellini
2021-06-03 22:07                   ` Julien Grall
2021-06-03 23:55                     ` Stefano Stabellini
2021-06-04  4:00                       ` Penny Zheng
2021-06-05  2:00                         ` Stefano Stabellini
2021-06-07 18:09                       ` Julien Grall
2021-06-09  9:56                         ` Bertrand Marquis
2021-06-09 10:47                           ` Julien Grall
2021-06-15  6:08                             ` Penny Zheng
2021-06-17 11:22                               ` Julien Grall
2021-05-18  5:21 ` [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions Penny Zheng
2021-05-18  9:04   ` Julien Grall
2021-05-18  5:21 ` [PATCH 03/10] xen/arm: introduce PGC_reserved Penny Zheng
2021-05-18  9:45   ` Julien Grall
2021-05-19  3:16     ` Penny Zheng
2021-05-19  9:49       ` Jan Beulich
2021-05-19 19:49         ` Julien Grall
2021-05-20  7:05           ` Jan Beulich
2021-05-19 19:46       ` Julien Grall
2021-05-20  6:19         ` Penny Zheng
2021-05-20  8:40           ` Penny Zheng
2021-05-20  8:59             ` Julien Grall
2021-05-20  9:27               ` Jan Beulich
2021-05-20  9:45                 ` Julien Grall
2021-05-18  5:21 ` [PATCH 04/10] xen/arm: static memory initialization Penny Zheng
2021-05-18  7:15   ` Jan Beulich
2021-05-18  9:51     ` Penny Zheng
2021-05-18 10:43       ` Jan Beulich
2021-05-20  9:04         ` Penny Zheng
2021-05-20  9:32           ` Jan Beulich
2021-05-18 10:00   ` Julien Grall
2021-05-18 10:01     ` Julien Grall
2021-05-19  5:02     ` Penny Zheng
2021-05-18  5:21 ` [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages Penny Zheng
2021-05-18  7:24   ` Jan Beulich
2021-05-18  9:30     ` Penny Zheng
2021-05-18 10:09     ` Julien Grall
2021-05-18 10:15   ` Julien Grall
2021-05-19  5:23     ` Penny Zheng
2021-05-24 10:10       ` Penny Zheng
2021-05-24 10:24         ` Julien Grall
2021-05-18  5:21 ` [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility Penny Zheng
2021-05-18  7:27   ` Jan Beulich
2021-05-18  9:11     ` Penny Zheng
2021-05-18 10:20   ` Julien Grall
2021-05-19  5:35     ` Penny Zheng
2021-05-18  5:21 ` [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages Penny Zheng
2021-05-18  7:34   ` Jan Beulich
2021-05-18  8:57     ` Penny Zheng
2021-05-18 11:23       ` Jan Beulich
2021-05-21  6:41         ` Penny Zheng
2021-05-21  7:09           ` Jan Beulich
2021-06-03  2:44             ` Penny Zheng
2021-05-18 12:13       ` Julien Grall
2021-05-19  7:52         ` Penny Zheng
2021-05-19 20:01           ` Julien Grall
2021-05-18 10:30   ` Julien Grall
2021-05-19  6:03     ` Penny Zheng
2021-05-18  5:21 ` [PATCH 08/10] xen/arm: introduce reserved_page_list Penny Zheng
2021-05-18  7:39   ` Jan Beulich
2021-05-18  8:38     ` Penny Zheng
2021-05-18 11:24       ` Jan Beulich
2021-05-19  6:46         ` Penny Zheng
2021-05-18 11:02   ` Julien Grall
2021-05-19  6:43     ` Penny Zheng
2021-05-18  5:21 ` [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction Penny Zheng
2021-05-18 12:09   ` Julien Grall
2021-05-19  7:58     ` Penny Zheng
2021-05-18  5:21 ` [PATCH 10/10] xen/arm: introduce allocate_static_memory Penny Zheng
2021-05-18 12:05   ` Julien Grall
2021-05-19  7:27     ` Penny Zheng
2021-05-19 20:10       ` Julien Grall
2021-05-20  6:29         ` Penny Zheng

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.