* [PATCH v7 0/7] Domain on Static Allocation
@ 2021-09-10  2:52 Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 1/7] xen/arm: introduce new helper device_tree_get_meminfo Penny Zheng
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

Static Allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration, using physical address ranges.

This pre-defined memory, called Static Memory, is part of RAM reserved from
the beginning and shall never be handed to the heap allocator or the boot
allocator for any use.

Memory can be statically allocated to a domain using the property
"xen,static-mem" defined in the domain configuration. The number of cells for
the address and the size must be defined using the properties
"#xen,static-mem-address-cells" and "#xen,static-mem-size-cells" respectively.

The property 'memory' is still needed and should match the amount of memory
given to the guest. Currently, the memory either comes entirely from static
memory or is entirely allocated by Xen from the heap. *Mixing* the two is not
supported.

The static memory will be mapped in the guest at the usual guest memory
addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
xen/include/public/arch-arm.h.

This patch series only covers Domain on Static Allocation.

See the related design discussion for more details:
https://lists.xenproject.org/archives/html/xen-devel/2021-05/msg00882.html

The whole design covers both Static Allocation and 1:1 direct-map; this patch
series only covers one part of it, Domain on Static Allocation. The other
features will be delivered through separate patch series.
---
changes in v2:
 - consider reserved-memory binding and use phandle to refer (still in
discussion)
 - remove unused reserved field in struct page_info, unused helper
page_get_reserved_owner and page_set_reserved_owner
 - introduce CONFIG_STATIC_ALLOCATION to avoid bringing dead code into
other archs.
 - extract common code from free_heap_pages and free_staticmem_pages
 - introduce new interface assign_pages_nr
 - change pfn-named to mfn-named
 - remove meaningless MEMF_no_owner case
 - leave zone concept out of DMA limitation check
 - take care of concurrency issue on static memory allocation
 - ensure the consistency when `xen,static-mem` and `memory` are both defined
 - fix the scalability issue in allocate_static_memory
 - allocate static memory during parsing
---
changes in v3:
- use the "xen,static-mem" property to be compatible with System Device Tree
in the future
- introduce new helper device_tree_get_meminfo
- extract common codes for dealing with reserved memory stored in
bootinfo
- rename from "free_page" to "mark_page_free"
- remove non-trivial page_to_mfn conversion in "mark_page_free" due to
pdx compression, and let the MFN be passed in
- let all switch-cases be shared in "mark_page_free"
- change CONFIG_STATIC_ALLOCATION to CONFIG_STATIC_MEMORY
- put init_staticmem_pages in setup_mm
- rename assign_pages_nr to assign_pages
- alter the order of assign_pages parameters to help backporting
- change name from alloc_staticmem_pages/alloc_domstatic_pages to
acquire_staticmem_pages and acquire_domstatic_pages.
- remove hunks' #ifdef-ary by introducing PGC_reserved = 0
- remove DMA restriction
- "memory" property shall be mandatory
- rename allocate_static_bank_memory to append_static_memory_to_bank
- infer the next GFN from the bank information in append_static_memory_to_bank
- simplify the code of double loop in allocate_static_memory
---
changes in v4:
- move the option CONFIG_STATIC_MEMORY to common code, and with Arm
"select"ing it
- replace round_pg{down,up}() with PFN_DOWN()/PFN_UP()
- in all cases where order-0 pages get passed, prefer using new assign_pages
to pass literal 1
- reconstruct the order of assign_pages parameters
- moving tlb/cache flush outside of the locked region, considering XSA-364
and reducing the amount of work happening with the heap_lock held
- remove MEMF_no_refcount case
- make acquire_staticmem_pages/acquire_domstatic_pages being __init
---
changes in v5:
- check whether the node using the Xen domain binding contains the property
"xen,static-mem", rather than checking the property itself
- add "rc = ..." to get the error propagated
- introduce the new field "bool xen_domain"; static memory shall also be
stored as reserved memory (bootinfo.reserved_mem), but bound to one
specific Xen domain node.
- make CONFIG_STATIC_MEMORY user selectable and gated by UNSUPPORTED.
- wrap all static-allocation-related codes with CONFIG_STATIC_MEMORY
even in arm-specific file.
- make bank_start/bank_end type of mfn_t, and rename bank_size to
bank_pages.
- Having both functions assign_pages/assign_page with similar parameter
arrangement
- bundle all the functions for static allocation in a single place
- return an error and revert the changes, when the page is not free
and reserved.
- check the MFN is valid for every page and also add a comment to warn
that this function needs to be reworked if used outside of boot.
- use less of mfn_to_page/page_to_mfn
- use ASSERT_UNREACHABLE() to also check that the two flags are clear
- pass the start MFN first and then the number of pages in both
acquire_staticmem_pages and acquire_domstatic_pages
- make acquire_domstatic_pages() to return an errno
- don't split comments over multiple lines (even when they exceed 80 characters)
- simply use dt_find_property(node, "xen,static-mem", NULL) to tell
whether using allocate_static_memory, and add error comment when
"xen,static-mem" is used but CONFIG_STATIC_MEMORY is not enabled.
- exporting p2m_insert_mapping() and introduce guest_physmap_add_pages
to cope with adding guest RAM p2m mapping with nr_pages.
- check both pbase and psize are page aligned
- simplify the code in the loops by moving append_static_memory_to_bank()
outside of the if/else.
---
changes in v6:
- move address_cells and size_cells check into device_tree_get_meminfo
- doc and comment refinement
- indentation, and the kconfig message fix
- let assign_page have a different order for the arguments, not assign_pages
- fix variable declaration and some format
- use PFN_DOWN(...)
- provide a stub allocate_static_memory implementation for
!CONFIG_STATIC_MEMORY
- add a statement to clarify the binding between the "memory" property
and "xen,static-mem"
---
changes in v7:
- harmonize the argument order of assign_pages and assign_page
---
TODO:
- reboot domain on static allocation
- Implement all memory-ops (hypercalls) regarding domains on static allocation
to balloon memory in/out
- asynchronous scrubbing of PGC_reserved pages
Penny Zheng (7):
  xen/arm: introduce new helper device_tree_get_meminfo
  xen/arm: introduce domain on Static Allocation
  xen: introduce mark_page_free
  xen/arm: static memory initialization
  xen: re-define assign_pages and introduce a new function assign_page
  xen/arm: introduce acquire_staticmem_pages and acquire_domstatic_pages
  xen/arm: introduce allocate_static_memory

 docs/misc/arm/device-tree/booting.txt |  42 +++++
 xen/arch/arm/bootfdt.c                | 109 +++++++----
 xen/arch/arm/domain_build.c           | 161 ++++++++++++++++-
 xen/arch/arm/p2m.c                    |   7 +-
 xen/arch/arm/setup.c                  |  27 +++
 xen/arch/x86/pv/dom0_build.c          |   2 +-
 xen/common/Kconfig                    |  13 ++
 xen/common/grant_table.c              |   2 +-
 xen/common/memory.c                   |   6 +-
 xen/common/page_alloc.c               | 251 ++++++++++++++++++++------
 xen/include/asm-arm/mm.h              |   3 +
 xen/include/asm-arm/p2m.h             |  11 ++
 xen/include/asm-arm/setup.h           |   1 +
 xen/include/xen/mm.h                  |  14 ++
 14 files changed, 550 insertions(+), 99 deletions(-)

-- 
2.25.1




* [PATCH v7 1/7] xen/arm: introduce new helper device_tree_get_meminfo
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 2/7] xen/arm: introduce domain on Static Allocation Penny Zheng
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

This commit introduces a new helper, device_tree_get_meminfo, to iterate over
a device-tree property holding memory info, such as "reg".

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/bootfdt.c | 83 ++++++++++++++++++++++++------------------
 1 file changed, 47 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 476e32e0f5..b01badda3e 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -63,6 +63,52 @@ void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
     *size = dt_next_cell(size_cells, cell);
 }
 
+static int __init device_tree_get_meminfo(const void *fdt, int node,
+                                          const char *prop_name,
+                                          u32 address_cells, u32 size_cells,
+                                          void *data)
+{
+    const struct fdt_property *prop;
+    unsigned int i, banks;
+    const __be32 *cell;
+    u32 reg_cells = address_cells + size_cells;
+    paddr_t start, size;
+    struct meminfo *mem = data;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        printk("fdt: property `%s': invalid #address-cells or #size-cells",
+               prop_name);
+        return -EINVAL;
+    }
+
+    prop = fdt_get_property(fdt, node, prop_name, NULL);
+    if ( !prop )
+        return -ENOENT;
+
+    cell = (const __be32 *)prop->data;
+    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
+
+    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
+    {
+        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+        /* Some DT may describe empty bank, ignore them */
+        if ( !size )
+            continue;
+        mem->bank[mem->nr_banks].start = start;
+        mem->bank[mem->nr_banks].size = size;
+        mem->nr_banks++;
+    }
+
+    if ( i < banks )
+    {
+        printk("Warning: Max number of supported memory regions reached.\n");
+        return -ENOSPC;
+    }
+
+    return 0;
+}
+
 u32 __init device_tree_get_u32(const void *fdt, int node,
                                const char *prop_name, u32 dflt)
 {
@@ -139,42 +185,7 @@ static int __init process_memory_node(const void *fdt, int node,
                                       u32 address_cells, u32 size_cells,
                                       void *data)
 {
-    const struct fdt_property *prop;
-    int i;
-    int banks;
-    const __be32 *cell;
-    paddr_t start, size;
-    u32 reg_cells = address_cells + size_cells;
-    struct meminfo *mem = data;
-
-    if ( address_cells < 1 || size_cells < 1 )
-    {
-        printk("fdt: node `%s': invalid #address-cells or #size-cells",
-               name);
-        return -EINVAL;
-    }
-
-    prop = fdt_get_property(fdt, node, "reg", NULL);
-    if ( !prop )
-        return -ENOENT;
-
-    cell = (const __be32 *)prop->data;
-    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
-
-    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
-    {
-        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
-        /* Some DT may describe empty bank, ignore them */
-        if ( !size )
-            continue;
-        mem->bank[mem->nr_banks].start = start;
-        mem->bank[mem->nr_banks].size = size;
-        mem->nr_banks++;
-    }
-
-    if ( i < banks )
-        return -ENOSPC;
-    return 0;
+    return device_tree_get_meminfo(fdt, node, "reg", address_cells, size_cells, data);
 }
 
 static int __init process_reserved_memory_node(const void *fdt, int node,
-- 
2.25.1




* [PATCH v7 2/7] xen/arm: introduce domain on Static Allocation
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 1/7] xen/arm: introduce new helper device_tree_get_meminfo Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 3/7] xen: introduce mark_page_free Penny Zheng
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

Static Allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration, using physical address ranges.

This pre-defined memory, called Static Memory, is part of RAM reserved from
the beginning and shall never be handed to the heap allocator or the boot
allocator for any use.

Memory can be statically allocated to a domain using the property
"xen,static-mem" defined in the domain configuration. The number of cells for
the address and the size must be defined using the properties
"#xen,static-mem-address-cells" and "#xen,static-mem-size-cells" respectively.

The property 'memory' is still needed and should match the amount of memory
given to the guest. Currently, the memory either comes entirely from static
memory or is entirely allocated by Xen from the heap. *Mixing* the two is not
supported.

The static memory will be mapped in the guest at the usual guest memory
addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
xen/include/public/arch-arm.h.

This patch introduces the new `xen,static-mem` feature: it documents the new
property and parses it at boot time.

This patch also introduces a new field "bool xen_domain" in "struct membank"
to tell whether the memory bank is reserved as a whole hardware resource, or
bound to a Xen domain node through "xen,static-mem".

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/misc/arm/device-tree/booting.txt | 42 +++++++++++++++++++++++++++
 xen/arch/arm/bootfdt.c                | 30 +++++++++++++++++--
 xen/include/asm-arm/setup.h           |  1 +
 3 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..44cd9e1a9a 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -268,3 +268,45 @@ The DTB fragment is loaded at 0xc000000 in the example above. It should
 follow the convention explained in docs/misc/arm/passthrough.txt. The
 DTB fragment will be added to the guest device tree, so that the guest
 kernel will be able to discover the device.
+
+
+Static Allocation
+=============
+
+Static Allocation refers to system or sub-system(domains) for which memory
+areas are pre-defined by configuration using physical address ranges.
+
+Memory can be statically allocated to a domain using the property "xen,static-
+mem" defined in the domain configuration. The number of cells for the address
+and the size must be defined using respectively the properties
+"#xen,static-mem-address-cells" and "#xen,static-mem-size-cells".
+
+The property 'memory' is still needed and should match the amount of memory
+given to the guest. Currently, it either comes from static memory or lets Xen
+allocate from heap. *Mixing* is not supported.
+
+The static memory will be mapped in the guest at the usual guest memory
+addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
+xen/include/public/arch-arm.h.
+
+Below is an example on how to specify the static memory region in the
+device-tree:
+
+    / {
+        chosen {
+            domU1 {
+                compatible = "xen,domain";
+                #address-cells = <0x2>;
+                #size-cells = <0x2>;
+                cpus = <2>;
+                memory = <0x0 0x80000>;
+                #xen,static-mem-address-cells = <0x1>;
+                #xen,static-mem-size-cells = <0x1>;
+                xen,static-mem = <0x30000000 0x20000000>;
+                ...
+            };
+        };
+    };
+
+This will reserve a 512MB region starting at the host physical address
+0x30000000 to be exclusively used by DomU1.
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index b01badda3e..afaa0e249b 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -66,7 +66,7 @@ void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
 static int __init device_tree_get_meminfo(const void *fdt, int node,
                                           const char *prop_name,
                                           u32 address_cells, u32 size_cells,
-                                          void *data)
+                                          void *data, bool xen_domain)
 {
     const struct fdt_property *prop;
     unsigned int i, banks;
@@ -97,6 +97,7 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
             continue;
         mem->bank[mem->nr_banks].start = start;
         mem->bank[mem->nr_banks].size = size;
+        mem->bank[mem->nr_banks].xen_domain = xen_domain;
         mem->nr_banks++;
     }
 
@@ -185,7 +186,8 @@ static int __init process_memory_node(const void *fdt, int node,
                                       u32 address_cells, u32 size_cells,
                                       void *data)
 {
-    return device_tree_get_meminfo(fdt, node, "reg", address_cells, size_cells, data);
+    return device_tree_get_meminfo(fdt, node, "reg", address_cells, size_cells,
+                                   data, false);
 }
 
 static int __init process_reserved_memory_node(const void *fdt, int node,
@@ -339,6 +341,28 @@ static void __init process_chosen_node(const void *fdt, int node,
     add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
 }
 
+static int __init process_domain_node(const void *fdt, int node,
+                                      const char *name,
+                                      u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+
+    printk("Checking for \"xen,static-mem\" in domain node\n");
+
+    prop = fdt_get_property(fdt, node, "xen,static-mem", NULL);
+    if ( !prop )
+        /* No "xen,static-mem" present. */
+        return 0;
+
+    address_cells = device_tree_get_u32(fdt, node,
+                                        "#xen,static-mem-address-cells", 0);
+    size_cells = device_tree_get_u32(fdt, node,
+                                     "#xen,static-mem-size-cells", 0);
+
+    return device_tree_get_meminfo(fdt, node, "xen,static-mem", address_cells,
+                                   size_cells, &bootinfo.reserved_mem, true);
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -357,6 +381,8 @@ static int __init early_scan_node(const void *fdt,
         process_multiboot_node(fdt, node, name, address_cells, size_cells);
     else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
         process_chosen_node(fdt, node, name, address_cells, size_cells);
+    else if ( depth == 2 && device_tree_node_compatible(fdt, node, "xen,domain") )
+        rc = process_domain_node(fdt, node, name, address_cells, size_cells);
 
     if ( rc < 0 )
         printk("fdt: node `%s': parsing failed\n", name);
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index c4b6af6029..95da0b7ab9 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -24,6 +24,7 @@ typedef enum {
 struct membank {
     paddr_t start;
     paddr_t size;
+    bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
 };
 
 struct meminfo {
-- 
2.25.1




* [PATCH v7 3/7] xen: introduce mark_page_free
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 1/7] xen/arm: introduce new helper device_tree_get_meminfo Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 2/7] xen/arm: introduce domain on Static Allocation Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  2021-09-15 13:55   ` Jan Beulich
  2021-09-10  2:52 ` [PATCH v7 4/7] xen/arm: static memory initialization Penny Zheng
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

This commit defines a new helper, mark_page_free, to extract the code common
to free_heap_pages and the new function free_staticmem_pages (introduced
later in this series), such as following the same cache/TLB coherency policy.

With PDX compression, the conversion between the MFN and the page can be
non-trivial. As the function is internal, pass both the MFN and the page;
they are expected to match.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/page_alloc.c | 89 ++++++++++++++++++++++-------------------
 1 file changed, 48 insertions(+), 41 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 958ba0cd92..a3ee5eca9e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1376,6 +1376,53 @@ bool scrub_free_pages(void)
     return node_to_scrub(false) != NUMA_NO_NODE;
 }
 
+static void mark_page_free(struct page_info *pg, mfn_t mfn)
+{
+    ASSERT(mfn_x(mfn) == mfn_x(page_to_mfn(pg)));
+
+    /*
+     * Cannot assume that count_info == 0, as there are some corner cases
+     * where it isn't the case and yet it isn't a bug:
+     *  1. page_get_owner() is NULL
+     *  2. page_get_owner() is a domain that was never accessible by
+     *     its domid (e.g., failed to fully construct the domain).
+     *  3. page was never addressable by the guest (e.g., it's an
+     *     auto-translate-physmap guest and the page was never included
+     *     in its pseudophysical address space).
+     * In all the above cases there can be no guest mappings of this page.
+     */
+    switch ( pg->count_info & PGC_state )
+    {
+    case PGC_state_inuse:
+        BUG_ON(pg->count_info & PGC_broken);
+        pg->count_info = PGC_state_free;
+        break;
+
+    case PGC_state_offlining:
+        pg->count_info = (pg->count_info & PGC_broken) |
+                         PGC_state_offlined;
+        tainted = 1;
+        break;
+
+    default:
+        printk(XENLOG_ERR
+               "pg MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
+               mfn_x(mfn),
+               pg->count_info, pg->v.free.order,
+               pg->u.free.val, pg->tlbflush_timestamp);
+        BUG();
+    }
+
+    /* If a page has no owner it will need no safety TLB flush. */
+    pg->u.free.need_tlbflush = (page_get_owner(pg) != NULL);
+    if ( pg->u.free.need_tlbflush )
+        page_set_tlbflush_timestamp(pg);
+
+    /* This page is not a guest frame any more. */
+    page_set_owner(pg, NULL); /* set_gpfn_from_mfn snoops pg owner */
+    set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
+}
+
 /* Free 2^@order set of pages. */
 static void free_heap_pages(
     struct page_info *pg, unsigned int order, bool need_scrub)
@@ -1392,47 +1439,7 @@ static void free_heap_pages(
 
     for ( i = 0; i < (1 << order); i++ )
     {
-        /*
-         * Cannot assume that count_info == 0, as there are some corner cases
-         * where it isn't the case and yet it isn't a bug:
-         *  1. page_get_owner() is NULL
-         *  2. page_get_owner() is a domain that was never accessible by
-         *     its domid (e.g., failed to fully construct the domain).
-         *  3. page was never addressable by the guest (e.g., it's an
-         *     auto-translate-physmap guest and the page was never included
-         *     in its pseudophysical address space).
-         * In all the above cases there can be no guest mappings of this page.
-         */
-        switch ( pg[i].count_info & PGC_state )
-        {
-        case PGC_state_inuse:
-            BUG_ON(pg[i].count_info & PGC_broken);
-            pg[i].count_info = PGC_state_free;
-            break;
-
-        case PGC_state_offlining:
-            pg[i].count_info = (pg[i].count_info & PGC_broken) |
-                               PGC_state_offlined;
-            tainted = 1;
-            break;
-
-        default:
-            printk(XENLOG_ERR
-                   "pg[%u] MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
-                   i, mfn_x(mfn) + i,
-                   pg[i].count_info, pg[i].v.free.order,
-                   pg[i].u.free.val, pg[i].tlbflush_timestamp);
-            BUG();
-        }
-
-        /* If a page has no owner it will need no safety TLB flush. */
-        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
-        if ( pg[i].u.free.need_tlbflush )
-            page_set_tlbflush_timestamp(&pg[i]);
-
-        /* This page is not a guest frame any more. */
-        page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */
-        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
+        mark_page_free(&pg[i], mfn_add(mfn, i));
 
         if ( need_scrub )
         {
-- 
2.25.1




* [PATCH v7 4/7] xen/arm: static memory initialization
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
                   ` (2 preceding siblings ...)
  2021-09-10  2:52 ` [PATCH v7 3/7] xen: introduce mark_page_free Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page Penny Zheng
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

This patch introduces static memory initialization during system boot-up.

The new function init_staticmem_pages is responsible for static memory
initialization.

The helper free_staticmem_pages is the equivalent of free_heap_pages, freeing
nr_mfns pages of static memory.

This commit also introduces a new CONFIG_STATIC_MEMORY option to wrap all
static-allocation-related code.

Asynchronous scrubbing of static memory pages is left on the TODO list.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/setup.c    | 27 +++++++++++++++++++++++++++
 xen/common/Kconfig      | 13 +++++++++++++
 xen/common/page_alloc.c | 21 +++++++++++++++++++++
 xen/include/xen/mm.h    |  6 ++++++
 4 files changed, 67 insertions(+)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 63a908e325..5be7f2b0c2 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -609,6 +609,29 @@ static void __init init_pdx(void)
     }
 }
 
+/* Static memory initialization */
+static void __init init_staticmem_pages(void)
+{
+#ifdef CONFIG_STATIC_MEMORY
+    unsigned int bank;
+
+    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
+    {
+        if ( bootinfo.reserved_mem.bank[bank].xen_domain )
+        {
+            mfn_t bank_start = _mfn(PFN_UP(bootinfo.reserved_mem.bank[bank].start));
+            unsigned long bank_pages = PFN_DOWN(bootinfo.reserved_mem.bank[bank].size);
+            mfn_t bank_end = mfn_add(bank_start, bank_pages);
+
+            if ( mfn_x(bank_end) <= mfn_x(bank_start) )
+                return;
+
+            free_staticmem_pages(mfn_to_page(bank_start), bank_pages, false);
+        }
+    }
+#endif
+}
+
 #ifdef CONFIG_ARM_32
 static void __init setup_mm(void)
 {
@@ -736,6 +759,8 @@ static void __init setup_mm(void)
     /* Add xenheap memory that was not already added to the boot allocator. */
     init_xenheap_pages(mfn_to_maddr(xenheap_mfn_start),
                        mfn_to_maddr(xenheap_mfn_end));
+
+    init_staticmem_pages();
 }
 #else /* CONFIG_ARM_64 */
 static void __init setup_mm(void)
@@ -789,6 +814,8 @@ static void __init setup_mm(void)
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
+
+    init_staticmem_pages();
 }
 #endif
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 0ddd18e11a..3558be0dbc 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -67,6 +67,19 @@ config MEM_ACCESS
 config NEEDS_LIBELF
 	bool
 
+config STATIC_MEMORY
+	bool "Static Allocation Support (UNSUPPORTED)" if UNSUPPORTED
+	depends on ARM
+	help
+	  Static Allocation refers to system or sub-system(domains) for
+	  which memory areas are pre-defined by configuration using physical
+	  address ranges.
+
+	  When enabled, memory can be statically allocated to a domain using
+	  the property "xen,static-mem" defined in the domain configuration.
+
+	  If unsure, say N.
+
 menu "Speculative hardening"
 
 config SPECULATIVE_HARDEN_ARRAY
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index a3ee5eca9e..ba7adc80db 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2604,6 +2604,27 @@ struct domain *get_pg_owner(domid_t domid)
     return pg_owner;
 }
 
+#ifdef CONFIG_STATIC_MEMORY
+/* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
+void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                                 bool need_scrub)
+{
+    mfn_t mfn = page_to_mfn(pg);
+    unsigned long i;
+
+    for ( i = 0; i < nr_mfns; i++ )
+    {
+        mark_page_free(&pg[i], mfn_add(mfn, i));
+
+        if ( need_scrub )
+        {
+            /* TODO: asynchronous scrubbing for pages of static memory. */
+            scrub_one_page(pg);
+        }
+    }
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 667f9dac83..8e8fb5a615 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,6 +85,12 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
+#ifdef CONFIG_STATIC_MEMORY
+/* These functions are for static memory */
+void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                          bool need_scrub);
+#endif
+
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
     unsigned long virt,
-- 
2.25.1




* [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
                   ` (3 preceding siblings ...)
  2021-09-10  2:52 ` [PATCH v7 4/7] xen/arm: static memory initialization Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  2021-09-10 11:08   ` Jan Beulich
  2021-09-10  2:52 ` [PATCH v7 6/7] xen/arm: introduce acquire_staticmem_pages and acquire_domstatic_pages Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 7/7] xen/arm: introduce allocate_static_memory Penny Zheng
  6 siblings, 1 reply; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

To avoid the trouble of count-to-order conversion when the number of pages is
not a power of two, this commit redefines assign_pages to take a number of
pages, and introduces assign_page for the original order-based interface.

Altering the order of the assign_pages parameters helps with backporting,
since the compiler will point out the call sites that need adjustment.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/x86/pv/dom0_build.c |  2 +-
 xen/common/grant_table.c     |  2 +-
 xen/common/memory.c          |  6 +++---
 xen/common/page_alloc.c      | 23 ++++++++++++++---------
 xen/include/xen/mm.h         |  6 ++++++
 5 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index d7f9e04b28..77efd3918c 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -568,7 +568,7 @@ int __init dom0_construct_pv(struct domain *d,
         else
         {
             while ( count-- )
-                if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 0, 0) )
+                if ( assign_pages(mfn_to_page(_mfn(mfn++)), 1, d, 0) )
                     BUG();
         }
         initrd->mod_end = 0;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index e80f8d044d..fe1fc11b22 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2358,7 +2358,7 @@ gnttab_transfer(
          * is respected and speculative execution is blocked accordingly
          */
         if ( unlikely(!evaluate_nospec(okay)) ||
-            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
+            unlikely(assign_pages(page, 1, e, MEMF_no_refcount)) )
         {
             bool drop_dom_ref;
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 74babb0bd7..63642278fd 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -728,8 +728,8 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         /* Assign each output page to the domain. */
         for ( j = 0; (page = page_list_remove_head(&out_chunk_list)); ++j )
         {
-            if ( assign_pages(d, page, exch.out.extent_order,
-                              MEMF_no_refcount) )
+            if ( assign_page(page, exch.out.extent_order, d,
+                             MEMF_no_refcount) )
             {
                 unsigned long dec_count;
                 bool_t drop_dom_ref;
@@ -797,7 +797,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
      * cleared PGC_allocated.
      */
     while ( (page = page_list_remove_head(&in_chunk_list)) )
-        if ( assign_pages(d, page, 0, MEMF_no_refcount) )
+        if ( assign_pages(page, 1, d, MEMF_no_refcount) )
         {
             BUG_ON(!d->is_dying);
             free_domheap_page(page);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ba7adc80db..2aa8edac8c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
 
 
 int assign_pages(
-    struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned long nr,
+    struct domain *d,
     unsigned int memflags)
 {
     int rc = 0;
@@ -2281,7 +2281,7 @@ int assign_pages(
     {
         unsigned int extra_pages = 0;
 
-        for ( i = 0; i < (1ul << order); i++ )
+        for ( i = 0; i < nr; i++ )
         {
             ASSERT(!(pg[i].count_info & ~PGC_extra));
             if ( pg[i].count_info & PGC_extra )
@@ -2290,18 +2290,18 @@ int assign_pages(
 
         ASSERT(!extra_pages ||
                ((memflags & MEMF_no_refcount) &&
-                extra_pages == 1u << order));
+                extra_pages == nr));
     }
 #endif
 
     if ( pg[0].count_info & PGC_extra )
     {
-        d->extra_pages += 1u << order;
+        d->extra_pages += nr;
         memflags &= ~MEMF_no_refcount;
     }
     else if ( !(memflags & MEMF_no_refcount) )
     {
-        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+        unsigned int tot_pages = domain_tot_pages(d) + nr;
 
         if ( unlikely(tot_pages > d->max_pages) )
         {
@@ -2313,10 +2313,10 @@ int assign_pages(
     }
 
     if ( !(memflags & MEMF_no_refcount) &&
-         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
+         unlikely(domain_adjust_tot_pages(d, nr) == nr) )
         get_knownalive_domain(d);
 
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < nr; i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
@@ -2331,6 +2331,11 @@ int assign_pages(
     return rc;
 }
 
+int assign_page(struct page_info *pg, unsigned int order, struct domain *d,
+                unsigned int memflags)
+{
+    return assign_pages(pg, 1UL << order, d, memflags);
+}
 
 struct page_info *alloc_domheap_pages(
     struct domain *d, unsigned int order, unsigned int memflags)
@@ -2373,7 +2378,7 @@ struct page_info *alloc_domheap_pages(
                 pg[i].count_info = PGC_extra;
             }
         }
-        if ( assign_pages(d, pg, order, memflags) )
+        if ( assign_page(pg, order, d, memflags) )
         {
             free_heap_pages(pg, order, memflags & MEMF_no_scrub);
             return NULL;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 8e8fb5a615..9d6a45174e 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -132,9 +132,15 @@ int query_page_offline(mfn_t mfn, uint32_t *status);
 void heap_init_late(void);
 
 int assign_pages(
+    struct page_info *pg,
+    unsigned long nr,
     struct domain *d,
+    unsigned int memflags);
+
+int assign_page(
     struct page_info *pg,
     unsigned int order,
+    struct domain *d,
     unsigned int memflags);
 
 /* Dump info to serial console */
-- 
2.25.1




* [PATCH v7 6/7] xen/arm: introduce acquire_staticmem_pages and acquire_domstatic_pages
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
                   ` (4 preceding siblings ...)
  2021-09-10  2:52 ` [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  2021-09-10  2:52 ` [PATCH v7 7/7] xen/arm: introduce allocate_static_memory Penny Zheng
  6 siblings, 0 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

The new function acquire_staticmem_pages acquires nr_mfns contiguous pages of
static memory, starting at #smfn. It is the equivalent of alloc_heap_pages
for static memory.

For each page, it checks that the page is reserved (PGC_reserved) and free.
It also performs the necessary initialization, mostly the same as in
alloc_heap_pages, e.g. following the same cache-coherency policy and changing
the page state to PGC_state_inuse.

The new function acquire_domstatic_pages is the equivalent of
alloc_domheap_pages for static memory: it acquires nr_mfns contiguous pages
of static memory and assigns them to a specific domain.

It uses acquire_staticmem_pages to acquire the pages, and on success uses
assign_pages to assign them to the domain.

In order to differentiate pages of static memory from those allocated from
the heap, this patch introduces a new page flag, PGC_reserved, and marks
pages of static memory with it when initializing them.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/common/page_alloc.c  | 118 ++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/mm.h |   3 +
 xen/include/xen/mm.h     |   2 +
 3 files changed, 121 insertions(+), 2 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 2aa8edac8c..5eb87b51c8 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,6 +151,10 @@
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
 
+#ifndef PGC_reserved
+#define PGC_reserved 0
+#endif
+
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
  * e.g. 'badpage=0x3f45,0x8a321'.
@@ -2283,7 +2287,7 @@ int assign_pages(
 
         for ( i = 0; i < nr; i++ )
         {
-            ASSERT(!(pg[i].count_info & ~PGC_extra));
+            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2322,7 +2326,8 @@ int assign_pages(
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
-            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
+            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
+
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
 
@@ -2626,8 +2631,117 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
             /* TODO: asynchronous scrubbing for pages of static memory. */
             scrub_one_page(pg);
         }
+
+        /* In case initializing page of static memory, mark it PGC_reserved. */
+        pg[i].count_info |= PGC_reserved;
     }
 }
+
+/*
+ * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
+ * static memory.
+ * This function needs to be reworked if used outside of boot.
+ */
+static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
+                                                         unsigned long nr_mfns,
+                                                         unsigned int memflags)
+{
+    bool need_tlbflush = false;
+    uint32_t tlbflush_timestamp = 0;
+    unsigned long i;
+    struct page_info *pg;
+
+    ASSERT(nr_mfns);
+    for ( i = 0; i < nr_mfns; i++ )
+        if ( !mfn_valid(mfn_add(smfn, i)) )
+            return NULL;
+
+    pg = mfn_to_page(smfn);
+
+    spin_lock(&heap_lock);
+
+    for ( i = 0; i < nr_mfns; i++ )
+    {
+        /* The page should be reserved and not yet allocated. */
+        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
+        {
+            printk(XENLOG_ERR
+                   "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
+                   i, mfn_x(smfn) + i,
+                   pg[i].count_info, pg[i].tlbflush_timestamp);
+            goto out_err;
+        }
+
+        if ( !(memflags & MEMF_no_tlbflush) )
+            accumulate_tlbflush(&need_tlbflush, &pg[i],
+                                &tlbflush_timestamp);
+
+        /*
+         * Preserve flag PGC_reserved and change page state
+         * to PGC_state_inuse.
+         */
+        pg[i].count_info = PGC_reserved | PGC_state_inuse;
+        /* Initialise fields which have other uses for free pages. */
+        pg[i].u.inuse.type_info = 0;
+        page_set_owner(&pg[i], NULL);
+    }
+
+    spin_unlock(&heap_lock);
+
+    if ( need_tlbflush )
+        filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    /*
+     * Ensure cache and RAM are consistent for platforms where the guest
+     * can control its own visibility of/through the cache.
+     */
+    for ( i = 0; i < nr_mfns; i++ )
+        flush_page_to_ram(mfn_x(smfn) + i, !(memflags & MEMF_no_icache_flush));
+
+    return pg;
+
+ out_err:
+    while ( i-- )
+        pg[i].count_info = PGC_reserved | PGC_state_free;
+
+    spin_unlock(&heap_lock);
+
+    return NULL;
+}
+
+/*
+ * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
+ * then assign them to one specific domain #d.
+ */
+int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
+                                   unsigned long nr_mfns, unsigned int memflags)
+{
+    struct page_info *pg;
+
+    ASSERT(!in_irq());
+
+    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
+    if ( !pg )
+        return -ENOENT;
+
+    if ( !d || (memflags & (MEMF_no_owner | MEMF_no_refcount)) )
+    {
+        /*
+         * Respective handling omitted here because right now
+         * acquired static memory is only for guest RAM.
+         */
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    if ( assign_pages(pg, nr_mfns, d, memflags) )
+    {
+        free_staticmem_pages(pg, nr_mfns, memflags & MEMF_no_scrub);
+        return -EINVAL;
+    }
+
+    return 0;
+}
 #endif
 
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index ded74d29da..7b5e7b7f69 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -108,6 +108,9 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+  /* Page is reserved */
+#define _PGC_reserved     PG_shift(3)
+#define PGC_reserved      PG_mask(1, 3)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 9d6a45174e..f8500f04d4 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -89,6 +89,8 @@ bool scrub_free_pages(void);
 /* These functions are for static memory */
 void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
+int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned long nr_mfns,
+                            unsigned int memflags);
 #endif
 
 /* Map machine page range in Xen virtual address space. */
-- 
2.25.1




* [PATCH v7 7/7] xen/arm: introduce allocate_static_memory
  2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
                   ` (5 preceding siblings ...)
  2021-09-10  2:52 ` [PATCH v7 6/7] xen/arm: introduce acquire_staticmem_pages and acquire_domstatic_pages Penny Zheng
@ 2021-09-10  2:52 ` Penny Zheng
  6 siblings, 0 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-10  2:52 UTC (permalink / raw)
  To: xen-devel, sstabellini, julien; +Cc: Bertrand.Marquis, Wei.Chen, jbeulich

This commit introduces a new function allocate_static_memory to allocate
static memory as guest RAM for domains on Static Allocation.

It uses acquire_domstatic_pages to acquire pre-configured static memory
for the domain, and uses guest_physmap_add_pages to set up the P2M table.
These pre-defined static memory banks shall be mapped to the usual guest
memory addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
xen/include/public/arch-arm.h.

In order to deal with the trouble of count-to-order conversion when the page
count is not a power of two, this commit exports p2m_insert_mapping and
introduces a new function, guest_physmap_add_pages, to add a guest RAM p2m
mapping covering nr_pages pages.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/domain_build.c | 161 +++++++++++++++++++++++++++++++++++-
 xen/arch/arm/p2m.c          |   7 +-
 xen/include/asm-arm/p2m.h   |  11 +++
 3 files changed, 173 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 206038d1c0..62ab7d0ead 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -480,6 +480,162 @@ fail:
           (unsigned long)kinfo->unassigned_mem >> 10);
 }
 
+#ifdef CONFIG_STATIC_MEMORY
+static bool __init append_static_memory_to_bank(struct domain *d,
+                                                struct membank *bank,
+                                                mfn_t smfn,
+                                                paddr_t size)
+{
+    int res;
+    unsigned int nr_pages = PFN_DOWN(size);
+    /* Infer next GFN. */
+    gfn_t sgfn = gaddr_to_gfn(bank->start + bank->size);
+
+    res = guest_physmap_add_pages(d, sgfn, smfn, nr_pages);
+    if ( res )
+    {
+        dprintk(XENLOG_ERR, "Failed to map pages to DOMU: %d", res);
+        return false;
+    }
+
+    bank->size = bank->size + size;
+
+    return true;
+}
+
+/* Allocate memory from static memory as RAM for one specific domain d. */
+static void __init allocate_static_memory(struct domain *d,
+                                          struct kernel_info *kinfo,
+                                          const struct dt_device_node *node)
+{
+    const struct dt_property *prop;
+    u32 addr_cells, size_cells, reg_cells;
+    unsigned int nr_banks, gbank, bank = 0;
+    const uint64_t rambase[] = GUEST_RAM_BANK_BASES;
+    const uint64_t ramsize[] = GUEST_RAM_BANK_SIZES;
+    const __be32 *cell;
+    u64 tot_size = 0;
+    paddr_t pbase, psize, gsize;
+    mfn_t smfn;
+    int res;
+
+    prop = dt_find_property(node, "xen,static-mem", NULL);
+    if ( !dt_property_read_u32(node, "#xen,static-mem-address-cells",
+                               &addr_cells) )
+    {
+        printk(XENLOG_ERR
+               "%pd: failed to read \"#xen,static-mem-address-cells\".\n", d);
+        goto fail;
+    }
+
+    if ( !dt_property_read_u32(node, "#xen,static-mem-size-cells",
+                               &size_cells) )
+    {
+        printk(XENLOG_ERR
+               "%pd: failed to read \"#xen,static-mem-size-cells\".\n", d);
+        goto fail;
+    }
+    reg_cells = addr_cells + size_cells;
+
+    /*
+     * The static memory will be mapped in the guest at the usual guest memory
+     * addresses (GUEST_RAM0_BASE, GUEST_RAM1_BASE) defined by
+     * xen/include/public/arch-arm.h.
+     */
+    gbank = 0;
+    gsize = ramsize[gbank];
+    kinfo->mem.bank[gbank].start = rambase[gbank];
+
+    cell = (const __be32 *)prop->value;
+    nr_banks = (prop->length) / (reg_cells * sizeof (u32));
+
+    for ( ; bank < nr_banks; bank++ )
+    {
+        device_tree_get_reg(&cell, addr_cells, size_cells, &pbase, &psize);
+        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
+
+        smfn = maddr_to_mfn(pbase);
+        res = acquire_domstatic_pages(d, smfn, PFN_DOWN(psize), 0);
+        if ( res )
+        {
+            printk(XENLOG_ERR
+                   "%pd: failed to acquire static memory: %d.\n", d, res);
+            goto fail;
+        }
+
+        printk(XENLOG_INFO "%pd: STATIC BANK[%u] %#"PRIpaddr"-%#"PRIpaddr"\n",
+               d, bank, pbase, pbase + psize);
+
+        while ( 1 )
+        {
+            /* Map as much as possible the static range to the guest bank */
+            if ( !append_static_memory_to_bank(d, &kinfo->mem.bank[gbank], smfn,
+                                               min(psize, gsize)) )
+                goto fail;
+
+            /*
+             * The current physical bank is fully mapped.
+             * Handle the next physical bank.
+             */
+            if ( gsize >= psize )
+            {
+                gsize = gsize - psize;
+                break;
+            }
+            /*
+             * When current guest bank is not enough to map, exhaust
+             * the current one and seek to the next.
+             * Before seeking to the next, check if we still have available
+             * guest bank.
+             */
+            else if ( (gbank + 1) >= GUEST_RAM_BANKS )
+            {
+                printk(XENLOG_ERR "Exhausted all possible guest banks.\n");
+                goto fail;
+            }
+            else
+            {
+                psize = psize - gsize;
+                smfn = mfn_add(smfn, gsize >> PAGE_SHIFT);
+                /* Update to the next guest bank. */
+                gbank++;
+                gsize = ramsize[gbank];
+                kinfo->mem.bank[gbank].start = rambase[gbank];
+            }
+        }
+
+        tot_size += psize;
+    }
+
+    kinfo->mem.nr_banks = ++gbank;
+
+    kinfo->unassigned_mem -= tot_size;
+    /*
+     * The property 'memory' should match the amount of memory given to the
+     * guest.
+     * Currently, it is only possible to either acquire static memory or let
+     * Xen allocate. *Mixing* is not supported.
+     */
+    if ( kinfo->unassigned_mem )
+    {
+        printk(XENLOG_ERR
+               "Size of \"memory\" property doesn't match up with the sum-up of \"xen,static-mem\". Unsupported configuration.\n");
+        goto fail;
+    }
+
+    return;
+
+ fail:
+    panic("Failed to allocate requested static memory for domain %pd.", d);
+}
+#else
+static void __init allocate_static_memory(struct domain *d,
+                                          struct kernel_info *kinfo,
+                                          const struct dt_device_node *node)
+{
+}
+#endif
+
 static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                                    const struct dt_device_node *node)
 {
@@ -2453,7 +2609,10 @@ static int __init construct_domU(struct domain *d,
     /* type must be set before allocate memory */
     d->arch.type = kinfo.type;
 #endif
-    allocate_memory(d, &kinfo);
+    if ( !dt_find_property(node, "xen,static-mem", NULL) )
+        allocate_memory(d, &kinfo);
+    else
+        allocate_static_memory(d, &kinfo, node);
 
     rc = prepare_dtb_domU(d, &kinfo);
     if ( rc < 0 )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index eff9a105e7..6e01e83967 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1293,11 +1293,8 @@ out:
     return resolved;
 }
 
-static inline int p2m_insert_mapping(struct domain *d,
-                                     gfn_t start_gfn,
-                                     unsigned long nr,
-                                     mfn_t mfn,
-                                     p2m_type_t t)
+int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr,
+                       mfn_t mfn, p2m_type_t t)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 6a2108398f..f885cc522b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -300,6 +300,9 @@ int map_dev_mmio_region(struct domain *d,
                         unsigned long nr,
                         mfn_t mfn);
 
+int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr,
+                       mfn_t mfn, p2m_type_t t);
+
 int guest_physmap_add_entry(struct domain *d,
                             gfn_t gfn,
                             mfn_t mfn,
@@ -315,6 +318,14 @@ static inline int guest_physmap_add_page(struct domain *d,
     return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
 }
 
+static inline int guest_physmap_add_pages(struct domain *d,
+                                          gfn_t gfn,
+                                          mfn_t mfn,
+                                          unsigned int nr_pages)
+{
+    return p2m_insert_mapping(d, gfn, nr_pages, mfn, p2m_ram_rw);
+}
+
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 
 /* Look up a GFN and take a reference count on the backing page. */
-- 
2.25.1




* Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
  2021-09-10  2:52 ` [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page Penny Zheng
@ 2021-09-10 11:08   ` Jan Beulich
  2021-09-10 16:23     ` Stefano Stabellini
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2021-09-10 11:08 UTC (permalink / raw)
  To: Penny Zheng; +Cc: Bertrand.Marquis, Wei.Chen, xen-devel, sstabellini, julien

On 10.09.2021 04:52, Penny Zheng wrote:
> In order to deal with the trouble of count-to-order conversion when page number
> is not in a power-of-two, this commit re-define assign_pages for nr pages and
> assign_page for original page with a single order.
> 
> Backporting confusion could be helped by altering the order of assign_pages
> parameters, such that the compiler would point out that adjustments at call
> sites are needed.

Thanks, this now takes care of my primary concern. However, I (now?) have
another (I thought I would have mentioned this before):

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>  
>  
>  int assign_pages(
> -    struct domain *d,
>      struct page_info *pg,
> -    unsigned int order,
> +    unsigned long nr,

If this really is to be "unsigned long" (and not "unsigned int"), then...

> @@ -2281,7 +2281,7 @@ int assign_pages(
>      {
>          unsigned int extra_pages = 0;
>  
> -        for ( i = 0; i < (1ul << order); i++ )
> +        for ( i = 0; i < nr; i++ )

... you will want to (a) show that there is a need for this in the
remaining patches of this series and (b) prove that despite the
remaining patches potentially using this, albeit at boot/init time
only, there isn't any problem from ending up with 4 billion (or
more) iteration loops that would then result. On x86 at least I'm
sure this would be an issue.

Otherwise I would request that in the subsequent patches requests
be suitably broken up, with process_pending_softirqs() invoked
in between. Which would get me back to my feeling that the original
assign_pages() is quite good enough, as your new code would need to
split requests now anyway (and to avoid the need for that splitting
was the primary argument in favor of the change here).

> @@ -2290,18 +2290,18 @@ int assign_pages(
>  
>          ASSERT(!extra_pages ||
>                 ((memflags & MEMF_no_refcount) &&
> -                extra_pages == 1u << order));
> +                extra_pages == nr));
>      }
>  #endif
>  
>      if ( pg[0].count_info & PGC_extra )
>      {
> -        d->extra_pages += 1u << order;
> +        d->extra_pages += nr;
>          memflags &= ~MEMF_no_refcount;
>      }
>      else if ( !(memflags & MEMF_no_refcount) )
>      {
> -        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
> +        unsigned int tot_pages = domain_tot_pages(d) + nr;
>  
>          if ( unlikely(tot_pages > d->max_pages) )
>          {
> @@ -2313,10 +2313,10 @@ int assign_pages(
>      }
>  
>      if ( !(memflags & MEMF_no_refcount) &&
> -         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
> +         unlikely(domain_adjust_tot_pages(d, nr) == nr) )
>          get_knownalive_domain(d);
>  
> -    for ( i = 0; i < (1 << order); i++ )
> +    for ( i = 0; i < nr; i++ )
>      {
>          ASSERT(page_get_owner(&pg[i]) == NULL);
>          page_set_owner(&pg[i], d);
> @@ -2331,6 +2331,11 @@ int assign_pages(
>      return rc;
>  }
>  
> +int assign_page(struct page_info *pg, unsigned int order, struct domain *d,
> +                unsigned int memflags)
> +{
> +    return assign_pages(pg, 1UL << order, d, memflags);
> +}
>  
>  struct page_info *alloc_domheap_pages(
>      struct domain *d, unsigned int order, unsigned int memflags)
> @@ -2373,7 +2378,7 @@ struct page_info *alloc_domheap_pages(
>                  pg[i].count_info = PGC_extra;
>              }
>          }
> -        if ( assign_pages(d, pg, order, memflags) )
> +        if ( assign_page(pg, order, d, memflags) )

Note that here, for example, order is necessarily no larger than
MAX_ORDER.

Jan




* Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
  2021-09-10 11:08   ` Jan Beulich
@ 2021-09-10 16:23     ` Stefano Stabellini
  2021-09-15  8:04       ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2021-09-10 16:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Penny Zheng, Bertrand.Marquis, Wei.Chen, xen-devel, sstabellini, julien

On Fri, 10 Sep 2021, Jan Beulich wrote:
> On 10.09.2021 04:52, Penny Zheng wrote:
> > In order to deal with the trouble of count-to-order conversion when page number
> > is not in a power-of-two, this commit re-define assign_pages for nr pages and
> > assign_page for original page with a single order.
> > 
> > Backporting confusion could be helped by altering the order of assign_pages
> > parameters, such that the compiler would point out that adjustments at call
> > sites are needed.
> 
> Thanks, this now takes care of my primary concern. However, I (now?) have
> another (I thought I would have mentioned this before):
> 
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
> >  
> >  
> >  int assign_pages(
> > -    struct domain *d,
> >      struct page_info *pg,
> > -    unsigned int order,
> > +    unsigned long nr,
> 
> If this really is to be "unsigned long" (and not "unsigned int"), then...

Hi Jan,

Thanks for spotting this. I think it makes sense to use "unsigned int
nr" here.



* Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
  2021-09-10 16:23     ` Stefano Stabellini
@ 2021-09-15  8:04       ` Jan Beulich
  2021-09-15 18:12         ` Stefano Stabellini
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2021-09-15  8:04 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Penny Zheng, Bertrand.Marquis, Wei.Chen, xen-devel, julien

On 10.09.2021 18:23, Stefano Stabellini wrote:
> On Fri, 10 Sep 2021, Jan Beulich wrote:
>> On 10.09.2021 04:52, Penny Zheng wrote:
>>> In order to deal with the trouble of count-to-order conversion when page number
>>> is not in a power-of-two, this commit re-define assign_pages for nr pages and
>>> assign_page for original page with a single order.
>>>
>>> Backporting confusion could be helped by altering the order of assign_pages
>>> parameters, such that the compiler would point out that adjustments at call
>>> sites are needed.
>>
>> Thanks, this now takes care of my primary concern. However, I (now?) have
>> another (I thought I would have mentioned this before):
>>
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>>>  
>>>  
>>>  int assign_pages(
>>> -    struct domain *d,
>>>      struct page_info *pg,
>>> -    unsigned int order,
>>> +    unsigned long nr,
>>
>> If this really is to be "unsigned long" (and not "unsigned int"), then...
> 
> Thanks for spotting this. I think it makes sense to use "unsigned int
> nr" here.

I see you've made the change while committing, but the subsequent patch
then would have needed adjustment as well: It's now silently truncating
an "unsigned long" value to "unsigned int". It was the splitting that's
now needed there _anyway_ that made me wonder whether the patch here
really is worthwhile to have. But of course acquire_domstatic_pages()
could for now also simply reject too large values ...

Jan




* Re: [PATCH v7 3/7] xen: introduce mark_page_free
  2021-09-10  2:52 ` [PATCH v7 3/7] xen: introduce mark_page_free Penny Zheng
@ 2021-09-15 13:55   ` Jan Beulich
  2021-09-16  3:13     ` Penny Zheng
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2021-09-15 13:55 UTC (permalink / raw)
  To: Penny Zheng; +Cc: Bertrand.Marquis, Wei.Chen, xen-devel, sstabellini, julien

On 10.09.2021 04:52, Penny Zheng wrote:
> This commit defines a new helper mark_page_free to extract common code,
> like following the same cache/TLB coherency policy, between free_heap_pages
> and the new function free_staticmem_pages, which will be introduced later.
> 
> With PDX compression, the conversion between the MFN and the page can
> be potentially non-trivial. As the function is internal, pass the MFN and
> the page. They are both expected to match.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/common/page_alloc.c | 89 ++++++++++++++++++++++-------------------
>  1 file changed, 48 insertions(+), 41 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 958ba0cd92..a3ee5eca9e 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1376,6 +1376,53 @@ bool scrub_free_pages(void)
>      return node_to_scrub(false) != NUMA_NO_NODE;
>  }
>  
> +static void mark_page_free(struct page_info *pg, mfn_t mfn)
> +{
> +    ASSERT(mfn_x(mfn) == mfn_x(page_to_mfn(pg)));
> +
> +    /*
> +     * Cannot assume that count_info == 0, as there are some corner cases
> +     * where it isn't the case and yet it isn't a bug:
> +     *  1. page_get_owner() is NULL
> +     *  2. page_get_owner() is a domain that was never accessible by
> +     *     its domid (e.g., failed to fully construct the domain).
> +     *  3. page was never addressable by the guest (e.g., it's an
> +     *     auto-translate-physmap guest and the page was never included
> +     *     in its pseudophysical address space).
> +     * In all the above cases there can be no guest mappings of this page.
> +     */
> +    switch ( pg->count_info & PGC_state )
> +    {
> +    case PGC_state_inuse:
> +        BUG_ON(pg->count_info & PGC_broken);
> +        pg->count_info = PGC_state_free;
> +        break;
> +
> +    case PGC_state_offlining:
> +        pg->count_info = (pg->count_info & PGC_broken) |
> +                         PGC_state_offlined;
> +        tainted = 1;

You've broken two things at the same time here: You write to the global
variable of this name now, while ...

> @@ -1392,47 +1439,7 @@ static void free_heap_pages(
>  
>      for ( i = 0; i < (1 << order); i++ )
>      {
> -        /*
> -         * Cannot assume that count_info == 0, as there are some corner cases
> -         * where it isn't the case and yet it isn't a bug:
> -         *  1. page_get_owner() is NULL
> -         *  2. page_get_owner() is a domain that was never accessible by
> -         *     its domid (e.g., failed to fully construct the domain).
> -         *  3. page was never addressable by the guest (e.g., it's an
> -         *     auto-translate-physmap guest and the page was never included
> -         *     in its pseudophysical address space).
> -         * In all the above cases there can be no guest mappings of this page.
> -         */
> -        switch ( pg[i].count_info & PGC_state )
> -        {
> -        case PGC_state_inuse:
> -            BUG_ON(pg[i].count_info & PGC_broken);
> -            pg[i].count_info = PGC_state_free;
> -            break;
> -
> -        case PGC_state_offlining:
> -            pg[i].count_info = (pg[i].count_info & PGC_broken) |
> -                               PGC_state_offlined;
> -            tainted = 1;

... the local variable here doesn't get written anymore. Which Coverity
was kind enough to point out - please reference Coverity ID 1491872 in
the fix that I hope you will be able to provide quickly. (The easiest
change would seem to be to make mark_page_free() return bool, and set
"tainted" to 1 here accordingly.)

I understand that the two variables having the same name isn't very
helpful. I certainly wouldn't mind if you renamed the local one
suitably.

Jan




* Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
  2021-09-15  8:04       ` Jan Beulich
@ 2021-09-15 18:12         ` Stefano Stabellini
  2021-09-16  6:32           ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2021-09-15 18:12 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Penny Zheng, Bertrand.Marquis, Wei.Chen,
	xen-devel, julien

On Wed, 15 Sep 2021, Jan Beulich wrote:
> On 10.09.2021 18:23, Stefano Stabellini wrote:
> > On Fri, 10 Sep 2021, Jan Beulich wrote:
> >> On 10.09.2021 04:52, Penny Zheng wrote:
> >>> In order to deal with the trouble of count-to-order conversion when page number
> >>> is not in a power-of-two, this commit re-define assign_pages for nr pages and
> >>> assign_page for original page with a single order.
> >>>
> >>> Backporting confusion could be helped by altering the order of assign_pages
> >>> parameters, such that the compiler would point out that adjustments at call
> >>> sites are needed.
> >>
> >> Thanks, this now takes care of my primary concern. However, I (now?) have
> >> another (I thought I would have mentioned this before):
> >>
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
> >>>  
> >>>  
> >>>  int assign_pages(
> >>> -    struct domain *d,
> >>>      struct page_info *pg,
> >>> -    unsigned int order,
> >>> +    unsigned long nr,
> >>
> >> If this really is to be "unsigned long" (and not "unsigned int"), then...
> > 
> > Thanks for spotting this. I think it makes sense to use "unsigned int
> > nr" here.
> 
> I see you've made the change while committing, but the subsequent patch
> then would have needed adjustment as well: It's now silently truncating
> an "unsigned long" value to "unsigned int". It was the splitting that's
> now needed there _anyway_ that made me wonder whether the patch here
> really is worthwhile to have. But of course acquire_domstatic_pages()
> could for now also simply reject too large values ...

Yes. Are you thinking of a check like the following at the beginning of
acquire_domstatic_pages?

    if ( nr_mfns > UINT_MAX )
        return -EINVAL;

An alternative would be to change acquire_domstatic_pages to take an
unsigned int as nr_mfns parameter, but then it would just push the
problem up one level to allocate_static_memory, which is reading the
value from device tree so it is out of our control. So I think it is a
good idea to have an explicit check and acquire_domstatic_pages would be
a good place for it.



* RE: [PATCH v7 3/7] xen: introduce mark_page_free
  2021-09-15 13:55   ` Jan Beulich
@ 2021-09-16  3:13     ` Penny Zheng
  0 siblings, 0 replies; 15+ messages in thread
From: Penny Zheng @ 2021-09-16  3:13 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Bertrand Marquis, Wei Chen, xen-devel, sstabellini, julien

Hi Jan,

Sorry for the breakage.

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, September 15, 2021 9:56 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH v7 3/7] xen: introduce mark_page_free
> 
> On 10.09.2021 04:52, Penny Zheng wrote:
> > This commit defines a new helper mark_page_free to extract common
> > code, like following the same cache/TLB coherency policy, between
> > free_heap_pages and the new function free_staticmem_pages, which will be
> introduced later.
> >
> > The PDX compression makes that conversion between the MFN and the page
> > can be potentially non-trivial. As the function is internal, pass the
> > MFN and the page. They are both expected to match.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Acked-by: Jan Beulich <jbeulich@suse.com>
> > Reviewed-by: Julien Grall <jgrall@amazon.com>
> > ---
> >  xen/common/page_alloc.c | 89
> > ++++++++++++++++++++++-------------------
> >  1 file changed, 48 insertions(+), 41 deletions(-)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 958ba0cd92..a3ee5eca9e 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1376,6 +1376,53 @@ bool scrub_free_pages(void)
> >      return node_to_scrub(false) != NUMA_NO_NODE;  }
> >
> > +static void mark_page_free(struct page_info *pg, mfn_t mfn) {
> > +    ASSERT(mfn_x(mfn) == mfn_x(page_to_mfn(pg)));
> > +
> > +    /*
> > +     * Cannot assume that count_info == 0, as there are some corner cases
> > +     * where it isn't the case and yet it isn't a bug:
> > +     *  1. page_get_owner() is NULL
> > +     *  2. page_get_owner() is a domain that was never accessible by
> > +     *     its domid (e.g., failed to fully construct the domain).
> > +     *  3. page was never addressable by the guest (e.g., it's an
> > +     *     auto-translate-physmap guest and the page was never included
> > +     *     in its pseudophysical address space).
> > +     * In all the above cases there can be no guest mappings of this page.
> > +     */
> > +    switch ( pg->count_info & PGC_state )
> > +    {
> > +    case PGC_state_inuse:
> > +        BUG_ON(pg->count_info & PGC_broken);
> > +        pg->count_info = PGC_state_free;
> > +        break;
> > +
> > +    case PGC_state_offlining:
> > +        pg->count_info = (pg->count_info & PGC_broken) |
> > +                         PGC_state_offlined;
> > +        tainted = 1;
> 
> You've broken two things at the same time here: You write to the global
> variable of this name now, while ...
> 
> > @@ -1392,47 +1439,7 @@ static void free_heap_pages(
> >
> >      for ( i = 0; i < (1 << order); i++ )
> >      {
> > -        /*
> > -         * Cannot assume that count_info == 0, as there are some corner cases
> > -         * where it isn't the case and yet it isn't a bug:
> > -         *  1. page_get_owner() is NULL
> > -         *  2. page_get_owner() is a domain that was never accessible by
> > -         *     its domid (e.g., failed to fully construct the domain).
> > -         *  3. page was never addressable by the guest (e.g., it's an
> > -         *     auto-translate-physmap guest and the page was never included
> > -         *     in its pseudophysical address space).
> > -         * In all the above cases there can be no guest mappings of this page.
> > -         */
> > -        switch ( pg[i].count_info & PGC_state )
> > -        {
> > -        case PGC_state_inuse:
> > -            BUG_ON(pg[i].count_info & PGC_broken);
> > -            pg[i].count_info = PGC_state_free;
> > -            break;
> > -
> > -        case PGC_state_offlining:
> > -            pg[i].count_info = (pg[i].count_info & PGC_broken) |
> > -                               PGC_state_offlined;
> > -            tainted = 1;
> 
> ... the local variable here doesn't get written anymore. Which Coverity was
> kind enough to point out - please reference Coverity ID 1491872 in the fix that
> I hope you will be able to provide quickly. (The easiest change would seem to
> be to make mark_page_free() return bool, and set "tainted" to 1 here
> accordingly.)
> 

Sure. I'll fix it today and have mark_page_free() return the "tainted" status.

> I understand that the two variables having the same name isn't very helpful. I
> certainly wouldn't mind if you renamed the local one suitably.
> 

I'll rename the local one to "pg_tainted" to tell the two apart.

> Jan

Penny


* Re: [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page
  2021-09-15 18:12         ` Stefano Stabellini
@ 2021-09-16  6:32           ` Jan Beulich
  0 siblings, 0 replies; 15+ messages in thread
From: Jan Beulich @ 2021-09-16  6:32 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Penny Zheng, Bertrand.Marquis, Wei.Chen, xen-devel, julien

On 15.09.2021 20:12, Stefano Stabellini wrote:
> On Wed, 15 Sep 2021, Jan Beulich wrote:
>> On 10.09.2021 18:23, Stefano Stabellini wrote:
>>> On Fri, 10 Sep 2021, Jan Beulich wrote:
>>>> On 10.09.2021 04:52, Penny Zheng wrote:
>>>>> In order to deal with the trouble of count-to-order conversion when page number
>>>>> is not in a power-of-two, this commit re-define assign_pages for nr pages and
>>>>> assign_page for original page with a single order.
>>>>>
>>>>> Backporting confusion could be helped by altering the order of assign_pages
>>>>> parameters, such that the compiler would point out that adjustments at call
>>>>> sites are needed.
>>>>
>>>> Thanks, this now takes care of my primary concern. However, I (now?) have
>>>> another (I thought I would have mentioned this before):
>>>>
>>>>> --- a/xen/common/page_alloc.c
>>>>> +++ b/xen/common/page_alloc.c
>>>>> @@ -2259,9 +2259,9 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
>>>>>  
>>>>>  
>>>>>  int assign_pages(
>>>>> -    struct domain *d,
>>>>>      struct page_info *pg,
>>>>> -    unsigned int order,
>>>>> +    unsigned long nr,
>>>>
>>>> If this really is to be "unsigned long" (and not "unsigned int"), then...
>>>
>>> Thanks for spotting this. I think it makes sense to use "unsigned int
>>> nr" here.
>>
>> I see you've made the change while committing, but the subsequent patch
>> then would have needed adjustment as well: It's now silently truncating
>> an "unsigned long" value to "unsigned int". It was the splitting that's
>> now needed there _anyway_ that made me wonder whether the patch here
>> really is worthwhile to have. But of course acquire_domstatic_pages()
>> could for now also simply reject too large values ...
> 
> Yes. Are you thinking of a check like the following at the beginning of
> acquire_domstatic_pages?
> 
>     if ( nr_mfns > UINT_MAX )
>         return -EINVAL;

Something like this might be the way to go.

> An alternative would be to change acquire_domstatic_pages to take an
> unsigned int as nr_mfns parameter, but then it would just push the
> problem up one level to allocate_static_memory, which is reading the
> value from device tree so it is out of our control. So I think it is a
> good idea to have an explicit check and acquire_domstatic_pages would be
> a good place for it.

If this was put closer to the DT parsing, maybe a (more) sensible error
message could be given?

Jan




end of thread, other threads:[~2021-09-16  6:33 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-10  2:52 [PATCH v7 0/7] Domain on Static Allocation Penny Zheng
2021-09-10  2:52 ` [PATCH v7 1/7] xen/arm: introduce new helper device_tree_get_meminfo Penny Zheng
2021-09-10  2:52 ` [PATCH v7 2/7] xen/arm: introduce domain on Static Allocation Penny Zheng
2021-09-10  2:52 ` [PATCH v7 3/7] xen: introduce mark_page_free Penny Zheng
2021-09-15 13:55   ` Jan Beulich
2021-09-16  3:13     ` Penny Zheng
2021-09-10  2:52 ` [PATCH v7 4/7] xen/arm: static memory initialization Penny Zheng
2021-09-10  2:52 ` [PATCH v7 5/7] xen: re-define assign_pages and introduce a new function assign_page Penny Zheng
2021-09-10 11:08   ` Jan Beulich
2021-09-10 16:23     ` Stefano Stabellini
2021-09-15  8:04       ` Jan Beulich
2021-09-15 18:12         ` Stefano Stabellini
2021-09-16  6:32           ` Jan Beulich
2021-09-10  2:52 ` [PATCH v7 6/7] xen/arm: introduce acquire_staticmem_pages and acquire_domstatic_pages Penny Zheng
2021-09-10  2:52 ` [PATCH v7 7/7] xen/arm: introduce allocate_static_memory Penny Zheng
