* [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups
@ 2022-07-20 18:44 Julien Grall
  2022-07-20 18:44 ` [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines Julien Grall
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-20 18:44 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Konrad Rzeszutek Wilk, Ross Lagerwall,
	Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu,
	Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of patches to clean up the MM subsystem,
done in preparation for the next revision of "xen/arm: Don't
switch TTBR while the MMU is on" [1].

Cheers,

[1] https://lore.kernel.org/all/20220309112048.17377-1-julien@xen.org/

Julien Grall (5):
  xen/arm: Remove most of the *_VIRT_END defines
  xen/arm32: mm: Consolidate the domheap mappings initialization
  xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
  xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
  xen/arm: mm: Reduce the area that xen_second covers

 xen/arch/arm/Kconfig                |   1 +
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/domain_page.c          | 193 ++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   8 +
 xen/arch/arm/include/asm/config.h   |  19 +--
 xen/arch/arm/include/asm/lpae.h     |  17 +++
 xen/arch/arm/livepatch.c            |   2 +-
 xen/arch/arm/mm.c                   | 221 ++++------------------------
 xen/arch/arm/setup.c                |  21 ++-
 xen/arch/x86/Kconfig                |   1 +
 xen/arch/x86/include/asm/config.h   |   1 -
 xen/common/Kconfig                  |   6 +
 xen/include/xen/domain_page.h       |   6 +-
 13 files changed, 283 insertions(+), 214 deletions(-)
 create mode 100644 xen/arch/arm/domain_page.c

-- 
2.32.0




* [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
@ 2022-07-20 18:44 ` Julien Grall
  2022-07-21  8:09   ` Bertrand Marquis
  2022-07-21  8:35   ` Luca Fancellu
  2022-07-20 18:44 ` [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization Julien Grall
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-20 18:44 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Konrad Rzeszutek Wilk, Ross Lagerwall

From: Julien Grall <jgrall@amazon.com>

At the moment, *_VIRT_END may either point to the address after the end
or the last address of the region.

The lack of consistency makes it quite difficult to reason about them.

Furthermore, there is a risk of overflow in the case where the address
points past the end. I am not aware of any such case, so this is only a
latent bug.

Start to solve the problem by removing all the *_VIRT_END defines
exclusively used by the Arm code and adding *_VIRT_SIZE where it is not
already present.

Take the opportunity to rename BOOT_FDT_SLOT_SIZE to BOOT_FDT_VIRT_SIZE
for better consistency and use _AT(vaddr_t, ).

Also take the opportunity to fix the coding style of the comment touched
in mm.c.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

I noticed that a few functions in Xen expect [start, end[. This is risky
as we may end up with end < start if the region is defined right at the
top of the address space.

I haven't yet tackled this issue, but I would at least like to get rid
of *_VIRT_END.

This was originally sent separately (let's call it v0).

    Changes in v2:
        - Correct the check in domain_page_map_to_mfn()

    Changes in v1:
        - Mention the coding style change.
---
 xen/arch/arm/include/asm/config.h | 18 ++++++++----------
 xen/arch/arm/livepatch.c          |  2 +-
 xen/arch/arm/mm.c                 | 13 ++++++++-----
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 3e2a55a91058..66db618b34e7 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -111,12 +111,11 @@
 #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
 
 #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
-#define BOOT_FDT_SLOT_SIZE     MB(4)
-#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
+#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
 
 #ifdef CONFIG_LIVEPATCH
 #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
-#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
+#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
 #endif
 
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
@@ -132,18 +131,18 @@
 #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
+#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
 
 #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
-#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
-#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
-#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
+#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
 
-#define VMAP_VIRT_END    XENHEAP_VIRT_START
+#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
 
 /* Number of domheap pagetable pages required at the second level (2MB mappings) */
-#define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
+#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
 
 #else /* ARM_64 */
 
@@ -152,12 +151,11 @@
 #define SLOT0_ENTRY_SIZE  SLOT0(1)
 
 #define VMAP_VIRT_START  GB(1)
-#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1))
+#define VMAP_VIRT_SIZE   GB(1)
 
 #define FRAMETABLE_VIRT_START  GB(32)
 #define FRAMETABLE_SIZE        GB(32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define DIRECTMAP_VIRT_START   SLOT0(256)
 #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 75e8adcfd6a1..57abc746e60b 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -175,7 +175,7 @@ void __init arch_livepatch_init(void)
     void *start, *end;
 
     start = (void *)LIVEPATCH_VMAP_START;
-    end = (void *)LIVEPATCH_VMAP_END;
+    end = start + LIVEPATCH_VMAP_SIZE;
 
     vm_init_type(VMAP_XEN, start, end);
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 56fd0845861f..0177bc6b34d2 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
 static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/* xen_dommap == pages used by map_domain_page, these pages contain
+/*
+ * xen_dommap == pages used by map_domain_page, these pages contain
  * the second level pagetables which map the domheap region
- * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
+ * starting at DOMHEAP_VIRT_START in 2MB chunks.
+ */
 static DEFINE_PER_CPU(lpae_t *, xen_dommap);
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
@@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
     int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
     unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
 
-    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
+    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
         return virt_to_mfn(va);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
@@ -570,7 +572,8 @@ void __init remove_early_mappings(void)
     int rc;
 
     /* destroy the _PAGE_BLOCK mapping */
-    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
+    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
+                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
                              _PAGE_BLOCK);
     BUG_ON(rc);
 }
@@ -850,7 +853,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
 
 void *__init arch_vmap_virt_end(void)
 {
-    return (void *)VMAP_VIRT_END;
+    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
 }
 
 /*
-- 
2.32.0




* [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
  2022-07-20 18:44 ` [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines Julien Grall
@ 2022-07-20 18:44 ` Julien Grall
  2022-07-21  8:36   ` Bertrand Marquis
  2022-07-21  9:42   ` Luca Fancellu
  2022-07-20 18:44 ` [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and Julien Grall
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-20 18:44 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

At the moment, the domheap mappings initialization is done separately for
the boot CPU and secondary CPUs. The main difference is that for the
former the pages are part of the Xen binary, whilst for the latter they
are dynamically allocated.

It would be good to have a single helper so it is easier to rework
how the domheap is initialized.

For CPU0, we still need to use pre-allocated pages because the
allocators may use map_domain_page(), so we need to have the domheap
area ready first. But we can still delay the initialization to setup_mm().

Introduce a new helper init_domheap_mappings() that will be called
from setup_mm() for the boot CPU and from init_secondary_pagetables()
for secondary CPUs.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v2:
        - Fix function name in the commit message
        - Remove duplicated 'been' in the comment
---
 xen/arch/arm/include/asm/arm32/mm.h |  2 +
 xen/arch/arm/mm.c                   | 92 +++++++++++++++++++----------
 xen/arch/arm/setup.c                |  8 +++
 3 files changed, 71 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 6b039d9ceaa2..575373aeb985 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -10,6 +10,8 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return false;
 }
 
+bool init_domheap_mappings(unsigned int cpu);
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0177bc6b34d2..9311f3530066 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -372,6 +372,58 @@ void clear_fixmap(unsigned int map)
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
+/*
+ * Prepare the area that will be used to map domheap pages. They are
+ * mapped in 2MB chunks, so we need to allocate the page-tables up to
+ * the 2nd level.
+ *
+ * The caller should make sure the root page-table for @cpu has been
+ * allocated.
+ */
+bool init_domheap_mappings(unsigned int cpu)
+{
+    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
+    lpae_t *root = per_cpu(xen_pgtable, cpu);
+    unsigned int i, first_idx;
+    lpae_t *domheap;
+    mfn_t mfn;
+
+    ASSERT(root);
+    ASSERT(!per_cpu(xen_dommap, cpu));
+
+    /*
+     * The domheap for cpu0 is before the heap is initialized. So we
+     * need to use pre-allocated pages.
+     */
+    if ( !cpu )
+        domheap = cpu0_dommap;
+    else
+        domheap = alloc_xenheap_pages(order, 0);
+
+    if ( !domheap )
+        return false;
+
+    /* Ensure the domheap has no stray mappings */
+    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
+
+    /*
+     * Update the first level mapping to reference the local CPUs
+     * domheap mapping pages.
+     */
+    mfn = virt_to_mfn(domheap);
+    first_idx = first_table_offset(DOMHEAP_VIRT_START);
+    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    {
+        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
+        pte.pt.table = 1;
+        write_pte(&root[first_idx + i], pte);
+    }
+
+    per_cpu(xen_dommap, cpu) = domheap;
+
+    return true;
+}
+
 void *map_domain_page_global(mfn_t mfn)
 {
     return vmap(&mfn, 1);
@@ -633,16 +685,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
         p[i].pt.xn = 0;
     }
 
-#ifdef CONFIG_ARM_32
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
-    {
-        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)]
-            = pte_of_xenaddr((uintptr_t)(cpu0_dommap +
-                                         i * XEN_PT_LPAE_ENTRIES));
-        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)].pt.table = 1;
-    }
-#endif
-
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
     {
@@ -686,7 +728,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
 #ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
-    per_cpu(xen_dommap, 0) = cpu0_dommap;
 #endif
 }
 
@@ -719,39 +760,28 @@ int init_secondary_pagetables(int cpu)
 #else
 int init_secondary_pagetables(int cpu)
 {
-    lpae_t *first, *domheap, pte;
-    int i;
+    lpae_t *first;
 
     first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
-    domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
 
-    if ( domheap == NULL || first == NULL )
+    if ( !first )
     {
-        printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
-        free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
-        free_xenheap_page(first);
+        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
         return -ENOMEM;
     }
 
     /* Initialise root pagetable from root of boot tables */
     memcpy(first, cpu0_pgtable, PAGE_SIZE);
+    per_cpu(xen_pgtable, cpu) = first;
 
-    /* Ensure the domheap has no stray mappings */
-    memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-
-    /* Update the first level mapping to reference the local CPUs
-     * domheap mapping pages. */
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    if ( !init_domheap_mappings(cpu) )
     {
-        pte = mfn_to_xen_entry(virt_to_mfn(domheap + i * XEN_PT_LPAE_ENTRIES),
-                               MT_NORMAL);
-        pte.pt.table = 1;
-        write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
+        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
+        per_cpu(xen_pgtable, cpu) = NULL;
+        free_xenheap_page(first);
+        return -ENOMEM;
     }
 
-    per_cpu(xen_pgtable, cpu) = first;
-    per_cpu(xen_dommap, cpu) = domheap;
-
     clear_boot_pagetables();
 
     /* Set init_ttbr for this CPU coming up */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 85ff956ec2e3..068e84b10335 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -783,6 +783,14 @@ static void __init setup_mm(void)
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
 
+    /*
+     * The allocators may need to use map_domain_page() (such as for
+     * scrubbing pages). So we need to prepare the domheap area first.
+     */
+    if ( !init_domheap_mappings(smp_processor_id()) )
+        panic("CPU%u: Unable to prepare the domheap page-tables\n",
+              smp_processor_id());
+
     /* Add xenheap memory that was not already added to the boot allocator. */
     init_xenheap_pages(mfn_to_maddr(xenheap_mfn_start),
                        mfn_to_maddr(xenheap_mfn_end));
-- 
2.32.0




* [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
  2022-07-20 18:44 ` [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines Julien Grall
  2022-07-20 18:44 ` [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization Julien Grall
@ 2022-07-20 18:44 ` Julien Grall
  2022-07-21  8:40   ` Bertrand Marquis
  2022-07-25 15:51   ` Jan Beulich
  2022-07-20 18:44 ` [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file Julien Grall
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-20 18:44 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu, Roger Pau Monné

From: Julien Grall <jgrall@amazon.com>

move it to Kconfig.

The define CONFIG_DOMAIN_PAGE indicates whether the architecture provides
helpers to map/unmap a domain page. Rename the define to
CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that this will not remove
support for domain pages (this is not a concept that Xen can do
without).

Take the opportunity to move CONFIG_ARCH_MAP_DOMAIN_PAGE to Kconfig, as
it will soon be necessary to use it in the Makefile.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v2:
        - New patch
---
 xen/arch/arm/Kconfig              | 1 +
 xen/arch/arm/include/asm/config.h | 1 -
 xen/arch/arm/mm.c                 | 2 +-
 xen/arch/x86/Kconfig              | 1 +
 xen/arch/x86/include/asm/config.h | 1 -
 xen/common/Kconfig                | 3 +++
 xen/include/xen/domain_page.h     | 6 +++---
 7 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index be9eff014120..33e004d702bf 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -1,6 +1,7 @@
 config ARM_32
 	def_bool y
 	depends on "$(ARCH)" = "arm32"
+	select ARCH_MAP_DOMAIN_PAGE
 
 config ARM_64
 	def_bool y
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 66db618b34e7..2fafb9f2283c 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -122,7 +122,6 @@
 
 #ifdef CONFIG_ARM_32
 
-#define CONFIG_DOMAIN_PAGE 1
 #define CONFIG_SEPARATE_XENHEAP 1
 
 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9311f3530066..7a722d6c86c6 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -371,7 +371,7 @@ void clear_fixmap(unsigned int map)
     BUG_ON(res != 0);
 }
 
-#ifdef CONFIG_DOMAIN_PAGE
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
 /*
  * Prepare the area that will be used to map domheap pages. They are
  * mapped in 2MB chunks, so we need to allocate the page-tables up to
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6bed72b79141..6a7825f4ba3c 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -8,6 +8,7 @@ config X86
 	select ACPI_LEGACY_TABLES_LOOKUP
 	select ACPI_NUMA
 	select ALTERNATIVE_CALL
+	select ARCH_MAP_DOMAIN_PAGE
 	select ARCH_SUPPORTS_INT128
 	select CORE_PARKING
 	select HAS_ALTERNATIVE
diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index 07bcd158314b..fbc4bb3416bd 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -22,7 +22,6 @@
 #define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
 #define CONFIG_DISCONTIGMEM 1
 #define CONFIG_NUMA_EMU 1
-#define CONFIG_DOMAIN_PAGE 1
 
 #define CONFIG_PAGEALLOC_MAX_ORDER (2 * PAGETABLE_ORDER)
 #define CONFIG_DOMU_MAX_ORDER      PAGETABLE_ORDER
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 41a67891bcc8..f1ea3199c8eb 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -25,6 +25,9 @@ config GRANT_TABLE
 config ALTERNATIVE_CALL
 	bool
 
+config ARCH_MAP_DOMAIN_PAGE
+	bool
+
 config HAS_ALTERNATIVE
 	bool
 
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index a182d33b6701..149b217b9619 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -17,7 +17,7 @@
 void clear_domain_page(mfn_t mfn);
 void copy_domain_page(mfn_t dst, const mfn_t src);
 
-#ifdef CONFIG_DOMAIN_PAGE
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
 
 /*
  * Map a given page frame, returning the mapped virtual address. The page is
@@ -51,7 +51,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
     return map_domain_page_global(page_to_mfn(pg));
 }
 
-#else /* !CONFIG_DOMAIN_PAGE */
+#else /* !CONFIG_ARCH_MAP_DOMAIN_PAGE */
 
 #define map_domain_page(mfn)                __mfn_to_virt(mfn_x(mfn))
 #define __map_domain_page(pg)               page_to_virt(pg)
@@ -70,7 +70,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
 
 static inline void unmap_domain_page_global(const void *va) {};
 
-#endif /* !CONFIG_DOMAIN_PAGE */
+#endif /* !CONFIG_ARCH_MAP_DOMAIN_PAGE */
 
 #define UNMAP_DOMAIN_PAGE(p) do {   \
     unmap_domain_page(p);           \
-- 
2.32.0




* [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
                   ` (2 preceding siblings ...)
  2022-07-20 18:44 ` [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and Julien Grall
@ 2022-07-20 18:44 ` Julien Grall
  2022-07-21 10:07   ` Jan Beulich
  2022-07-21 10:41   ` Bertrand Marquis
  2022-07-20 18:44 ` [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers Julien Grall
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-20 18:44 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
	Wei Liu

From: Julien Grall <jgrall@amazon.com>

The file xen/arch/arm/mm.c has been growing quite a lot. It now contains
various independent parts of the MM subsystem.

One of them is the set of helpers to map/unmap a domain page, which are
only used by arm32 and protected by CONFIG_ARCH_MAP_DOMAIN_PAGE. Move
them to a new file, xen/arch/arm/domain_page.c.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v2:
        - Move CONFIG_* to Kconfig is now in a separate patch
---
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/domain_page.c          | 193 +++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   6 +
 xen/arch/arm/include/asm/lpae.h     |  17 +++
 xen/arch/arm/mm.c                   | 198 +---------------------------
 xen/common/Kconfig                  |   3 +
 6 files changed, 222 insertions(+), 196 deletions(-)
 create mode 100644 xen/arch/arm/domain_page.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index bb7a6151c13c..4d076b278b10 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -17,6 +17,7 @@ obj-y += device.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
+obj-$(CONFIG_ARCH_MAP_DOMAIN_PAGE) += domain_page.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += efi/
diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
new file mode 100644
index 000000000000..63e97730cf57
--- /dev/null
+++ b/xen/arch/arm/domain_page.c
@@ -0,0 +1,193 @@
+#include <xen/mm.h>
+#include <xen/pmap.h>
+#include <xen/vmap.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+/* cpu0's domheap page tables */
+static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
+
+/*
+ * xen_dommap == pages used by map_domain_page, these pages contain
+ * the second level pagetables which map the domheap region
+ * starting at DOMHEAP_VIRT_START in 2MB chunks.
+ */
+static DEFINE_PER_CPU(lpae_t *, xen_dommap);
+
+/*
+ * Prepare the area that will be used to map domheap pages. They are
+ * mapped in 2MB chunks, so we need to allocate the page-tables up to
+ * the 2nd level.
+ *
+ * The caller should make sure the root page-table for @cpu has been
+ * allocated.
+ */
+bool init_domheap_mappings(unsigned int cpu)
+{
+    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
+    lpae_t *root = per_cpu(xen_pgtable, cpu);
+    unsigned int i, first_idx;
+    lpae_t *domheap;
+    mfn_t mfn;
+
+    ASSERT(root);
+    ASSERT(!per_cpu(xen_dommap, cpu));
+
+    /*
+     * The domheap for cpu0 is before the heap is initialized. So we
+     * need to use pre-allocated pages.
+     */
+    if ( !cpu )
+        domheap = cpu0_dommap;
+    else
+        domheap = alloc_xenheap_pages(order, 0);
+
+    if ( !domheap )
+        return false;
+
+    /* Ensure the domheap has no stray mappings */
+    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
+
+    /*
+     * Update the first level mapping to reference the local CPUs
+     * domheap mapping pages.
+     */
+    mfn = virt_to_mfn(domheap);
+    first_idx = first_table_offset(DOMHEAP_VIRT_START);
+    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    {
+        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
+        pte.pt.table = 1;
+        write_pte(&root[first_idx + i], pte);
+    }
+
+    per_cpu(xen_dommap, cpu) = domheap;
+
+    return true;
+}
+
+void *map_domain_page_global(mfn_t mfn)
+{
+    return vmap(&mfn, 1);
+}
+
+void unmap_domain_page_global(const void *va)
+{
+    vunmap(va);
+}
+
+/* Map a page of domheap memory */
+void *map_domain_page(mfn_t mfn)
+{
+    unsigned long flags;
+    lpae_t *map = this_cpu(xen_dommap);
+    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
+    vaddr_t va;
+    lpae_t pte;
+    int i, slot;
+
+    local_irq_save(flags);
+
+    /* The map is laid out as an open-addressed hash table where each
+     * entry is a 2MB superpage pte.  We use the available bits of each
+     * PTE as a reference count; when the refcount is zero the slot can
+     * be reused. */
+    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
+          i < DOMHEAP_ENTRIES;
+          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
+    {
+        if ( map[slot].pt.avail < 0xf &&
+             map[slot].pt.base == slot_mfn &&
+             map[slot].pt.valid )
+        {
+            /* This slot already points to the right place; reuse it */
+            map[slot].pt.avail++;
+            break;
+        }
+        else if ( map[slot].pt.avail == 0 )
+        {
+            /* Commandeer this 2MB slot */
+            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
+            pte.pt.avail = 1;
+            write_pte(map + slot, pte);
+            break;
+        }
+
+    }
+    /* If the map fills up, the callers have misbehaved. */
+    BUG_ON(i == DOMHEAP_ENTRIES);
+
+#ifndef NDEBUG
+    /* Searching the hash could get slow if the map starts filling up.
+     * Cross that bridge when we come to it */
+    {
+        static int max_tries = 32;
+        if ( i >= max_tries )
+        {
+            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
+            max_tries *= 2;
+        }
+    }
+#endif
+
+    local_irq_restore(flags);
+
+    va = (DOMHEAP_VIRT_START
+          + (slot << SECOND_SHIFT)
+          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
+
+    /*
+     * We may not have flushed this specific subpage at map time,
+     * since we only flush the 4k page not the superpage
+     */
+    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
+
+    return (void *)va;
+}
+
+/* Release a mapping taken with map_domain_page() */
+void unmap_domain_page(const void *va)
+{
+    unsigned long flags;
+    lpae_t *map = this_cpu(xen_dommap);
+    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+
+    if ( !va )
+        return;
+
+    local_irq_save(flags);
+
+    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
+    ASSERT(map[slot].pt.avail != 0);
+
+    map[slot].pt.avail--;
+
+    local_irq_restore(flags);
+}
+
+mfn_t domain_page_map_to_mfn(const void *ptr)
+{
+    unsigned long va = (unsigned long)ptr;
+    lpae_t *map = this_cpu(xen_dommap);
+    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
+
+    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
+        return virt_to_mfn(va);
+
+    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
+    ASSERT(map[slot].pt.avail != 0);
+
+    return mfn_add(lpae_get_mfn(map[slot]), offset);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 575373aeb985..8bfc906e7178 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -1,6 +1,12 @@
 #ifndef __ARM_ARM32_MM_H__
 #define __ARM_ARM32_MM_H__
 
+#include <xen/percpu.h>
+
+#include <asm/lpae.h>
+
+DECLARE_PER_CPU(lpae_t *, xen_pgtable);
+
 /*
  * Only a limited amount of RAM, called xenheap, is always mapped on ARM32.
  * For convenience always return false.
diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index fc19cbd84772..3fdd5d0de28e 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -261,6 +261,23 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
 #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
 
+/*
+ * Macros to define page-tables:
+ *  - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
+ *  in assembly code before BSS is zeroed.
+ *  - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
+ *  page-tables to be used after BSS is zeroed (typically they are only used
+ *  in C).
+ */
+#define DEFINE_BOOT_PAGE_TABLE(name)                                          \
+lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
+    name[XEN_PT_LPAE_ENTRIES]
+
+#define DEFINE_PAGE_TABLES(name, nr)                    \
+lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
+
+#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
+
 #endif /* __ARM_LPAE_H__ */
 
 /*
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7a722d6c86c6..ad26ad740308 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -57,23 +57,6 @@ mm_printk(const char *fmt, ...) {}
     } while (0)
 #endif
 
-/*
- * Macros to define page-tables:
- *  - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
- *  in assembly code before BSS is zeroed.
- *  - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
- *  page-tables to be used after BSS is zeroed (typically they are only used
- *  in C).
- */
-#define DEFINE_BOOT_PAGE_TABLE(name)                                          \
-lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
-    name[XEN_PT_LPAE_ENTRIES]
-
-#define DEFINE_PAGE_TABLES(name, nr)                    \
-lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
-
-#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
-
 /* Static start-of-day pagetables that we use before the allocators
  * are up. These are used by all CPUs during bringup before switching
  * to the CPUs own pagetables.
@@ -110,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 /* Main runtime page tables */
 
 /*
- * For arm32 xen_pgtable and xen_dommap are per-PCPU and are allocated before
+ * For arm32 xen_pgtable are per-PCPU and are allocated before
  * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
  *
  * xen_second, xen_fixmap and xen_xenmap are always shared between all
@@ -126,18 +109,10 @@ static DEFINE_PAGE_TABLE(xen_first);
 #define HYP_PT_ROOT_LEVEL 1
 /* Per-CPU pagetable pages */
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
-static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
+DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/*
- * xen_dommap == pages used by map_domain_page, these pages contain
- * the second level pagetables which map the domheap region
- * starting at DOMHEAP_VIRT_START in 2MB chunks.
- */
-static DEFINE_PER_CPU(lpae_t *, xen_dommap);
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
-/* cpu0's domheap page tables */
-static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
 #endif
 
 /* Common pagetable leaves */
@@ -371,175 +346,6 @@ void clear_fixmap(unsigned int map)
     BUG_ON(res != 0);
 }
 
-#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
-/*
- * Prepare the area that will be used to map domheap pages. They are
- * mapped in 2MB chunks, so we need to allocate the page-tables up to
- * the 2nd level.
- *
- * The caller should make sure the root page-table for @cpu has been
- * allocated.
- */
-bool init_domheap_mappings(unsigned int cpu)
-{
-    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
-    lpae_t *root = per_cpu(xen_pgtable, cpu);
-    unsigned int i, first_idx;
-    lpae_t *domheap;
-    mfn_t mfn;
-
-    ASSERT(root);
-    ASSERT(!per_cpu(xen_dommap, cpu));
-
-    /*
-     * The domheap for cpu0 is before the heap is initialized. So we
-     * need to use pre-allocated pages.
-     */
-    if ( !cpu )
-        domheap = cpu0_dommap;
-    else
-        domheap = alloc_xenheap_pages(order, 0);
-
-    if ( !domheap )
-        return false;
-
-    /* Ensure the domheap has no stray mappings */
-    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
-
-    /*
-     * Update the first level mapping to reference the local CPUs
-     * domheap mapping pages.
-     */
-    mfn = virt_to_mfn(domheap);
-    first_idx = first_table_offset(DOMHEAP_VIRT_START);
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
-    {
-        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
-        pte.pt.table = 1;
-        write_pte(&root[first_idx + i], pte);
-    }
-
-    per_cpu(xen_dommap, cpu) = domheap;
-
-    return true;
-}
-
-void *map_domain_page_global(mfn_t mfn)
-{
-    return vmap(&mfn, 1);
-}
-
-void unmap_domain_page_global(const void *va)
-{
-    vunmap(va);
-}
-
-/* Map a page of domheap memory */
-void *map_domain_page(mfn_t mfn)
-{
-    unsigned long flags;
-    lpae_t *map = this_cpu(xen_dommap);
-    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
-    vaddr_t va;
-    lpae_t pte;
-    int i, slot;
-
-    local_irq_save(flags);
-
-    /* The map is laid out as an open-addressed hash table where each
-     * entry is a 2MB superpage pte.  We use the available bits of each
-     * PTE as a reference count; when the refcount is zero the slot can
-     * be reused. */
-    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
-          i < DOMHEAP_ENTRIES;
-          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
-    {
-        if ( map[slot].pt.avail < 0xf &&
-             map[slot].pt.base == slot_mfn &&
-             map[slot].pt.valid )
-        {
-            /* This slot already points to the right place; reuse it */
-            map[slot].pt.avail++;
-            break;
-        }
-        else if ( map[slot].pt.avail == 0 )
-        {
-            /* Commandeer this 2MB slot */
-            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
-            pte.pt.avail = 1;
-            write_pte(map + slot, pte);
-            break;
-        }
-
-    }
-    /* If the map fills up, the callers have misbehaved. */
-    BUG_ON(i == DOMHEAP_ENTRIES);
-
-#ifndef NDEBUG
-    /* Searching the hash could get slow if the map starts filling up.
-     * Cross that bridge when we come to it */
-    {
-        static int max_tries = 32;
-        if ( i >= max_tries )
-        {
-            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
-            max_tries *= 2;
-        }
-    }
-#endif
-
-    local_irq_restore(flags);
-
-    va = (DOMHEAP_VIRT_START
-          + (slot << SECOND_SHIFT)
-          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
-
-    /*
-     * We may not have flushed this specific subpage at map time,
-     * since we only flush the 4k page not the superpage
-     */
-    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
-
-    return (void *)va;
-}
-
-/* Release a mapping taken with map_domain_page() */
-void unmap_domain_page(const void *va)
-{
-    unsigned long flags;
-    lpae_t *map = this_cpu(xen_dommap);
-    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-
-    if ( !va )
-        return;
-
-    local_irq_save(flags);
-
-    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
-    ASSERT(map[slot].pt.avail != 0);
-
-    map[slot].pt.avail--;
-
-    local_irq_restore(flags);
-}
-
-mfn_t domain_page_map_to_mfn(const void *ptr)
-{
-    unsigned long va = (unsigned long)ptr;
-    lpae_t *map = this_cpu(xen_dommap);
-    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
-
-    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
-        return virt_to_mfn(va);
-
-    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
-    ASSERT(map[slot].pt.avail != 0);
-
-    return mfn_add(lpae_get_mfn(map[slot]), offset);
-}
-#endif
-
 void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 {
     void *v = map_domain_page(_mfn(mfn));
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index f1ea3199c8eb..f0aee2cfd9f8 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -11,6 +11,9 @@ config COMPAT
 config CORE_PARKING
 	bool
 
+config DOMAIN_PAGE
+	bool
+
 config GRANT_TABLE
 	bool "Grant table support" if EXPERT
 	default y
-- 
2.32.0
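The map_domain_page() helper removed above treats the per-CPU xen_dommap as an open-addressed hash of 2MB slots, using the PTE's available bits as a reference count. A minimal user-space sketch of that slot-search logic (the slot count, field widths, and names here are illustrative, not Xen's actual layout):

```c
#include <assert.h>
#include <stddef.h>

#define DOMHEAP_ENTRIES 32          /* illustrative slot count */
#define AVAIL_MAX       0xf         /* 4 "available" PTE bits as a refcount */

struct slot {
    unsigned long base;             /* 2MB-aligned frame the slot maps */
    unsigned int avail;             /* refcount; 0 => slot is reusable */
    int valid;
};

static struct slot map[DOMHEAP_ENTRIES];

/* Find (or claim) a slot for a 2MB-aligned frame; return slot or -1. */
static int map_slot(unsigned long slot_mfn)
{
    int slot = (int)(slot_mfn % DOMHEAP_ENTRIES);   /* hash on low bits */

    for ( int i = 0; i < DOMHEAP_ENTRIES;
          i++, slot = (slot + 1) % DOMHEAP_ENTRIES )
    {
        if ( map[slot].valid && map[slot].base == slot_mfn &&
             map[slot].avail < AVAIL_MAX )
        {
            map[slot].avail++;      /* already points here: take a ref */
            return slot;
        }
        if ( map[slot].avail == 0 )
        {
            map[slot].base = slot_mfn;  /* refcount 0: commandeer slot */
            map[slot].valid = 1;
            map[slot].avail = 1;
            return slot;
        }
    }
    return -1;                      /* map full: callers misbehaved */
}

static void unmap_slot(int slot)
{
    assert(map[slot].avail > 0);
    map[slot].avail--;              /* slot reusable once refcount is 0 */
}
```

As in the real code, a second mapping of the same 2MB frame reuses the existing slot and only bumps the refcount, so no PTE write or TLB maintenance is needed on that path.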



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
                   ` (3 preceding siblings ...)
  2022-07-20 18:44 ` [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file Julien Grall
@ 2022-07-20 18:44 ` Julien Grall
  2022-07-21 10:47   ` Bertrand Marquis
  2022-07-21  8:59 ` [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Bertrand Marquis
  2022-07-29 22:04 ` Julien Grall
  6 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2022-07-20 18:44 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Michal Orzel

From: Julien Grall <jgrall@amazon.com>

At the moment, xen_second is used to cover the first 2GB of the
virtual address space. With the recent rework of the page-tables,
only the first 1GB region (where Xen resides) is effectively used.

In addition to that, I would like to reshuffle the memory layout,
so Xen mappings may no longer be in the first 2GB of the virtual
address space.

Therefore, rework xen_second so it only covers the 1GB region where
Xen will reside.

With this change, xen_second no longer covers the xenheap area
on arm32. So we first need to add memory to the boot allocator before
setting up the xenheap mappings.

Take the opportunity to update the comments on top of xen_fixmap and
xen_xenmap.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@arm.com>

----
    Changes in v2:
        - Add Michal's reviewed-by
---
 xen/arch/arm/mm.c    | 32 +++++++++++---------------------
 xen/arch/arm/setup.c | 13 +++++++++++--
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index ad26ad740308..3d2c046bbb5c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -116,17 +116,14 @@ static DEFINE_PAGE_TABLE(cpu0_pgtable);
 #endif
 
 /* Common pagetable leaves */
-/* Second level page tables.
- *
- * The second-level table is 2 contiguous pages long, and covers all
- * addresses from 0 to 0x7fffffff. Offsets into it are calculated
- * with second_linear_offset(), not second_table_offset().
- */
-static DEFINE_PAGE_TABLES(xen_second, 2);
-/* First level page table used for fixmap */
+/* Second level page table used to cover Xen virtual address space */
+static DEFINE_PAGE_TABLE(xen_second);
+/* Third level page table used for fixmap */
 DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
-/* First level page table used to map Xen itself with the XN bit set
- * as appropriate. */
+/*
+ * Third level page table used to map Xen itself with the XN bit set
+ * as appropriate.
+ */
 static DEFINE_PAGE_TABLE(xen_xenmap);
 
 /* Non-boot CPUs use this to find the correct pagetables. */
@@ -168,7 +165,6 @@ static void __init __maybe_unused build_assertions(void)
     BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
-    BUILD_BUG_ON(second_linear_offset(XEN_VIRT_START) >= XEN_PT_LPAE_ENTRIES);
 #ifdef CONFIG_DOMAIN_PAGE
     BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
 #endif
@@ -482,14 +478,10 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     p = (void *) cpu0_pgtable;
 #endif
 
-    /* Initialise first level entries, to point to second level entries */
-    for ( i = 0; i < 2; i++)
-    {
-        p[i] = pte_of_xenaddr((uintptr_t)(xen_second +
-                                          i * XEN_PT_LPAE_ENTRIES));
-        p[i].pt.table = 1;
-        p[i].pt.xn = 0;
-    }
+    /* Map xen second level page-table */
+    p[0] = pte_of_xenaddr((uintptr_t)(xen_second));
+    p[0].pt.table = 1;
+    p[0].pt.xn = 0;
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
@@ -618,8 +610,6 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
 
     /* Record where the xenheap is, for translation routines. */
     xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
-    xenheap_mfn_start = _mfn(base_mfn);
-    xenheap_mfn_end = _mfn(base_mfn + nr_mfns);
 }
 #else /* CONFIG_ARM_64 */
 void __init setup_xenheap_mappings(unsigned long base_mfn,
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 068e84b10335..500307edc08d 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -774,11 +774,20 @@ static void __init setup_mm(void)
            opt_xenheap_megabytes ? ", from command-line" : "");
     printk("Dom heap: %lu pages\n", domheap_pages);
 
-    setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
+    /*
+     * We need some memory to allocate the page-tables used for the
+     * xenheap mappings. So populate the boot allocator first.
+     *
+     * This requires us to set xenheap_mfn_{start, end} first so the Xenheap
+     * region can be avoided.
+     */
+    xenheap_mfn_start = _mfn((e >> PAGE_SHIFT) - xenheap_pages);
+    xenheap_mfn_end = mfn_add(xenheap_mfn_start, xenheap_pages);
 
-    /* Add non-xenheap memory */
     populate_boot_allocator();
 
+    setup_xenheap_mappings(mfn_x(xenheap_mfn_start), xenheap_pages);
+
     /* Frame table covers all of RAM region, including holes */
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
-- 
2.32.0
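The patch above shrinks xen_second from two table pages to one. With the usual 4KB-granule LPAE layout each second-level entry maps 2MB, so a single 512-entry table covers exactly the 1GB region where Xen resides, and the old two-page table covered addresses 0 to 0x7fffffff. A hedged sketch of that arithmetic (constants assumed from the standard LPAE format, not copied from Xen's headers):

```c
#include <assert.h>
#include <stdint.h>

/* Standard LPAE shifts for a 4KB granule (assumed to match Xen's). */
#define PAGE_SHIFT           12
#define XEN_PT_LPAE_SHIFT    9                          /* 512 entries/table */
#define XEN_PT_LPAE_ENTRIES  (1u << XEN_PT_LPAE_SHIFT)
#define SECOND_SHIFT         (PAGE_SHIFT + XEN_PT_LPAE_SHIFT)   /* 2MB/entry */
#define FIRST_SHIFT          (SECOND_SHIFT + XEN_PT_LPAE_SHIFT) /* 1GB/entry */

/* VA bytes covered by @nr_pages contiguous second-level table pages. */
static uint64_t second_coverage(unsigned int nr_pages)
{
    return ((uint64_t)nr_pages * XEN_PT_LPAE_ENTRIES) << SECOND_SHIFT;
}
```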




* Re: [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines
  2022-07-20 18:44 ` [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines Julien Grall
@ 2022-07-21  8:09   ` Bertrand Marquis
  2022-07-21  8:35   ` Luca Fancellu
  1 sibling, 0 replies; 20+ messages in thread
From: Bertrand Marquis @ 2022-07-21  8:09 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
	Konrad Rzeszutek Wilk, Ross Lagerwall

Hi Julien,

> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, *_VIRT_END may either point to the address after the end
> or the last address of the region.
> 
> The lack of consistency make quite difficult to reason with them.
> 
> Furthermore, there is a risk of overflow in the case where the address
> points past the end. I am not aware of any cases, so this is only a
> latent bug.
> 
> Start to solve the problem by removing all the *_VIRT_END exclusively used
> by the Arm code and add *_VIRT_SIZE when it is not present.
> 
> Take the opportunity to rename BOOT_FDT_SLOT_SIZE to BOOT_FDT_VIRT_SIZE
> for better consistency and use _AT(vaddr_t, ).
> 
> Also take the opportunity to fix the coding style of the comment touched
> in mm.c.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand




* Re: [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines
  2022-07-20 18:44 ` [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines Julien Grall
  2022-07-21  8:09   ` Bertrand Marquis
@ 2022-07-21  8:35   ` Luca Fancellu
  1 sibling, 0 replies; 20+ messages in thread
From: Luca Fancellu @ 2022-07-21  8:35 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen development discussion, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Konrad Rzeszutek Wilk,
	Ross Lagerwall



> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, *_VIRT_END may either point to the address after the end
> or the last address of the region.
> 
> The lack of consistency makes it quite difficult to reason about them.
> 
> Furthermore, there is a risk of overflow in the case where the address
> points past to the end. I am not aware of any cases, so this is only a
> latent bug.
> 
> Start to solve the problem by removing all the *_VIRT_END exclusively used
> by the Arm code and add *_VIRT_SIZE when it is not present.
> 
> Take the opportunity to rename BOOT_FDT_SLOT_SIZE to BOOT_FDT_VIRT_SIZE
> for better consistency and use _AT(vaddr_t, ).
> 
> Also take the opportunity to fix the coding style of the comment touched
> in mm.c.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 

Hi Julien,

It looks good to me,

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also tested on fvp, starting Dom0 and a few guests

Tested-By: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca
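The overflow the quoted commit message warns about is concrete on arm32: for a region that ends at the very top of the 32-bit address space, an end-exclusive *_VIRT_END wraps to 0, whereas a size-based containment check never computes an end address at all. A small sketch with a hypothetical region (names and values illustrative, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t vaddr_t;                 /* 32-bit VA space, as on arm32 */

/* Hypothetical region occupying the top 1MB of the address space. */
#define REGION_VIRT_START ((vaddr_t)0xfff00000u)
#define REGION_VIRT_SIZE  ((vaddr_t)0x00100000u)

/* Size-based containment check: no end address is computed, so the
 * region may butt against the top of the address space without the
 * comparison wrapping around. */
static int va_in_region(vaddr_t va)
{
    return (vaddr_t)(va - REGION_VIRT_START) < REGION_VIRT_SIZE;
}
```

An end-exclusive REGION_VIRT_END here would be START + SIZE = 0x100000000, which truncates to 0 in a vaddr_t, so a `va < REGION_VIRT_END` test would reject every address.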




* Re: [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization
  2022-07-20 18:44 ` [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization Julien Grall
@ 2022-07-21  8:36   ` Bertrand Marquis
  2022-07-21  9:42   ` Luca Fancellu
  1 sibling, 0 replies; 20+ messages in thread
From: Bertrand Marquis @ 2022-07-21  8:36 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk

Hi Julien,

> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the domheap mappings initialization is done separately for
> the boot CPU and secondary CPUs. The main difference is for the former
> the pages are part of Xen binary whilst for the latter they are
> dynamically allocated.
> 
> It would be good to have a single helper so it is easier to rework
> how the domheap is initialized.
> 
> For CPU0, we still need to use pre-allocated pages because the
> allocators may use domain_map_page(), so we need to have the domheap
> area ready first. But we can still delay the initialization to setup_mm().
> 
> Introduce a new helper init_domheap_mappings() that will be called
> from setup_mm() for the boot CPU and from init_secondary_pagetables()
> for secondary CPUs.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

With a small typo mentioned after which can be fixed on commit:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

> 
> ----
>    Changes in v2:
>        - Fix function name in the commit message
>        - Remove duplicated 'been' in the comment
> ---
> xen/arch/arm/include/asm/arm32/mm.h |  2 +
> xen/arch/arm/mm.c                   | 92 +++++++++++++++++++----------
> xen/arch/arm/setup.c                |  8 +++
> 3 files changed, 71 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
> index 6b039d9ceaa2..575373aeb985 100644
> --- a/xen/arch/arm/include/asm/arm32/mm.h
> +++ b/xen/arch/arm/include/asm/arm32/mm.h
> @@ -10,6 +10,8 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>     return false;
> }
> 
> +bool init_domheap_mappings(unsigned int cpu);
> +
> #endif /* __ARM_ARM32_MM_H__ */
> 
> /*
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0177bc6b34d2..9311f3530066 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -372,6 +372,58 @@ void clear_fixmap(unsigned int map)
> }
> 
> #ifdef CONFIG_DOMAIN_PAGE
> +/*
> + * Prepare the area that will be used to map domheap pages. They are
> + * mapped in 2MB chunks, so we need to allocate the page-tables up to
> + * the 2nd level.
> + *
> + * The caller should make sure the root page-table for @cpu has been
> + * allocated.
> + */
> +bool init_domheap_mappings(unsigned int cpu)
> +{
> +    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
> +    lpae_t *root = per_cpu(xen_pgtable, cpu);
> +    unsigned int i, first_idx;
> +    lpae_t *domheap;
> +    mfn_t mfn;
> +
> +    ASSERT(root);
> +    ASSERT(!per_cpu(xen_dommap, cpu));
> +
> +    /*
> +     * The domheap for cpu0 is before the heap is initialized. So we

There is an “initialized” missing.

Cheers
Bertrand

> +     * need to use pre-allocated pages.
> +     */
> +    if ( !cpu )
> +        domheap = cpu0_dommap;
> +    else
> +        domheap = alloc_xenheap_pages(order, 0);
> +
> +    if ( !domheap )
> +        return false;
> +
> +    /* Ensure the domheap has no stray mappings */
> +    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
> +
> +    /*
> +     * Update the first level mapping to reference the local CPUs
> +     * domheap mapping pages.
> +     */
> +    mfn = virt_to_mfn(domheap);
> +    first_idx = first_table_offset(DOMHEAP_VIRT_START);
> +    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
> +    {
> +        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
> +        pte.pt.table = 1;
> +        write_pte(&root[first_idx + i], pte);
> +    }
> +
> +    per_cpu(xen_dommap, cpu) = domheap;
> +
> +    return true;
> +}
> +
> void *map_domain_page_global(mfn_t mfn)
> {
>     return vmap(&mfn, 1);
> @@ -633,16 +685,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>         p[i].pt.xn = 0;
>     }
> 
> -#ifdef CONFIG_ARM_32
> -    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
> -    {
> -        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)]
> -            = pte_of_xenaddr((uintptr_t)(cpu0_dommap +
> -                                         i * XEN_PT_LPAE_ENTRIES));
> -        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)].pt.table = 1;
> -    }
> -#endif
> -
>     /* Break up the Xen mapping into 4k pages and protect them separately. */
>     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
>     {
> @@ -686,7 +728,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
> 
> #ifdef CONFIG_ARM_32
>     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
> -    per_cpu(xen_dommap, 0) = cpu0_dommap;
> #endif
> }
> 
> @@ -719,39 +760,28 @@ int init_secondary_pagetables(int cpu)
> #else
> int init_secondary_pagetables(int cpu)
> {
> -    lpae_t *first, *domheap, pte;
> -    int i;
> +    lpae_t *first;
> 
>     first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
> -    domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
> 
> -    if ( domheap == NULL || first == NULL )
> +    if ( !first )
>     {
> -        printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
> -        free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
> -        free_xenheap_page(first);
> +        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
>         return -ENOMEM;
>     }
> 
>     /* Initialise root pagetable from root of boot tables */
>     memcpy(first, cpu0_pgtable, PAGE_SIZE);
> +    per_cpu(xen_pgtable, cpu) = first;
> 
> -    /* Ensure the domheap has no stray mappings */
> -    memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> -
> -    /* Update the first level mapping to reference the local CPUs
> -     * domheap mapping pages. */
> -    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
> +    if ( !init_domheap_mappings(cpu) )
>     {
> -        pte = mfn_to_xen_entry(virt_to_mfn(domheap + i * XEN_PT_LPAE_ENTRIES),
> -                               MT_NORMAL);
> -        pte.pt.table = 1;
> -        write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
> +        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
> +        per_cpu(xen_pgtable, cpu) = NULL;
> +        free_xenheap_page(first);
> +        return -ENOMEM;
>     }
> 
> -    per_cpu(xen_pgtable, cpu) = first;
> -    per_cpu(xen_dommap, cpu) = domheap;
> -
>     clear_boot_pagetables();
> 
>     /* Set init_ttbr for this CPU coming up */
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 85ff956ec2e3..068e84b10335 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -783,6 +783,14 @@ static void __init setup_mm(void)
>     setup_frametable_mappings(ram_start, ram_end);
>     max_page = PFN_DOWN(ram_end);
> 
> +    /*
> +     * The allocators may need to use map_domain_page() (such as for
> +     * scrubbing pages). So we need to prepare the domheap area first.
> +     */
> +    if ( !init_domheap_mappings(smp_processor_id()) )
> +        panic("CPU%u: Unable to prepare the domheap page-tables\n",
> +              smp_processor_id());
> +
>     /* Add xenheap memory that was not already added to the boot allocator. */
>     init_xenheap_pages(mfn_to_maddr(xenheap_mfn_start),
>                        mfn_to_maddr(xenheap_mfn_end));
> -- 
> 2.32.0
> 
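init_domheap_mappings() in the patch above installs DOMHEAP_SECOND_PAGES consecutive first-level entries starting at first_table_offset(DOMHEAP_VIRT_START). That index is just the VA's 1GB-slot number masked to the table width — a sketch of the arithmetic (shifts assumed for a 4KB-granule LPAE layout, not copied from Xen's headers):

```c
#include <assert.h>
#include <stdint.h>

#define XEN_PT_LPAE_SHIFT    9
#define XEN_PT_LPAE_ENTRIES  (1u << XEN_PT_LPAE_SHIFT)
#define FIRST_SHIFT          30        /* each first-level entry maps 1GB */
#define FIRST_SIZE           (1ULL << FIRST_SHIFT)

/* Index of the first-level entry covering @va within its table. */
static unsigned int first_table_offset(uint64_t va)
{
    return (va >> FIRST_SHIFT) & (XEN_PT_LPAE_ENTRIES - 1);
}
```

This is why the loop in the patch can simply write `root[first_idx + i]`: each successive domheap page-table page backs the next 1GB first-level slot.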



* Re: [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
  2022-07-20 18:44 ` [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and Julien Grall
@ 2022-07-21  8:40   ` Bertrand Marquis
  2022-07-29 21:54     ` Julien Grall
  2022-07-25 15:51   ` Jan Beulich
  1 sibling, 1 reply; 20+ messages in thread
From: Bertrand Marquis @ 2022-07-21  8:40 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
	Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu,
	Roger Pau Monné

Hi Julien,


> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> move it to Kconfig.
> 
> The define CONFIG_DOMAIN_PAGE indicates whether the architecture provide
> helpers to map/unmap a domain page. Rename it to the define to

Maybe “the define to” can be removed in this sentence or it needs some rephrasing.

> CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that this will not remove
> support for domain page (this is not a concept that Xen can't get
> away with).
> 
> Take the opportunity to move CONFIG_MAP_DOMAIN_PAGE to Kconfig as this
> will soon be necessary to use it in the Makefile.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

With this fixed:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com> #arm part

Cheers
Bertrand

> 
> ----
>    Changes in v2:
>        - New patch
> ---
> xen/arch/arm/Kconfig              | 1 +
> xen/arch/arm/include/asm/config.h | 1 -
> xen/arch/arm/mm.c                 | 2 +-
> xen/arch/x86/Kconfig              | 1 +
> xen/arch/x86/include/asm/config.h | 1 -
> xen/common/Kconfig                | 3 +++
> xen/include/xen/domain_page.h     | 6 +++---
> 7 files changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index be9eff014120..33e004d702bf 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -1,6 +1,7 @@
> config ARM_32
> 	def_bool y
> 	depends on "$(ARCH)" = "arm32"
> +	select ARCH_MAP_DOMAIN_PAGE
> 
> config ARM_64
> 	def_bool y
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 66db618b34e7..2fafb9f2283c 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -122,7 +122,6 @@
> 
> #ifdef CONFIG_ARM_32
> 
> -#define CONFIG_DOMAIN_PAGE 1
> #define CONFIG_SEPARATE_XENHEAP 1
> 
> #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 9311f3530066..7a722d6c86c6 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -371,7 +371,7 @@ void clear_fixmap(unsigned int map)
>     BUG_ON(res != 0);
> }
> 
> -#ifdef CONFIG_DOMAIN_PAGE
> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
> /*
>  * Prepare the area that will be used to map domheap pages. They are
>  * mapped in 2MB chunks, so we need to allocate the page-tables up to
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index 6bed72b79141..6a7825f4ba3c 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -8,6 +8,7 @@ config X86
> 	select ACPI_LEGACY_TABLES_LOOKUP
> 	select ACPI_NUMA
> 	select ALTERNATIVE_CALL
> +	select ARCH_MAP_DOMAIN_PAGE
> 	select ARCH_SUPPORTS_INT128
> 	select CORE_PARKING
> 	select HAS_ALTERNATIVE
> diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
> index 07bcd158314b..fbc4bb3416bd 100644
> --- a/xen/arch/x86/include/asm/config.h
> +++ b/xen/arch/x86/include/asm/config.h
> @@ -22,7 +22,6 @@
> #define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
> #define CONFIG_DISCONTIGMEM 1
> #define CONFIG_NUMA_EMU 1
> -#define CONFIG_DOMAIN_PAGE 1
> 
> #define CONFIG_PAGEALLOC_MAX_ORDER (2 * PAGETABLE_ORDER)
> #define CONFIG_DOMU_MAX_ORDER      PAGETABLE_ORDER
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 41a67891bcc8..f1ea3199c8eb 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -25,6 +25,9 @@ config GRANT_TABLE
> config ALTERNATIVE_CALL
> 	bool
> 
> +config ARCH_MAP_DOMAIN_PAGE
> +	bool
> +
> config HAS_ALTERNATIVE
> 	bool
> 
> diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
> index a182d33b6701..149b217b9619 100644
> --- a/xen/include/xen/domain_page.h
> +++ b/xen/include/xen/domain_page.h
> @@ -17,7 +17,7 @@
> void clear_domain_page(mfn_t mfn);
> void copy_domain_page(mfn_t dst, const mfn_t src);
> 
> -#ifdef CONFIG_DOMAIN_PAGE
> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
> 
> /*
>  * Map a given page frame, returning the mapped virtual address. The page is
> @@ -51,7 +51,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
>     return map_domain_page_global(page_to_mfn(pg));
> }
> 
> -#else /* !CONFIG_DOMAIN_PAGE */
> +#else /* !CONFIG_ARCH_MAP_DOMAIN_PAGE */
> 
> #define map_domain_page(mfn)                __mfn_to_virt(mfn_x(mfn))
> #define __map_domain_page(pg)               page_to_virt(pg)
> @@ -70,7 +70,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
> 
> static inline void unmap_domain_page_global(const void *va) {};
> 
> -#endif /* !CONFIG_DOMAIN_PAGE */
> +#endif /* !CONFIG_ARCH_MAP_DOMAIN_PAGE */
> 
> #define UNMAP_DOMAIN_PAGE(p) do {   \
>     unmap_domain_page(p);           \
> -- 
> 2.32.0
> 
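The renamed option only selects which half of domain_page.h gets compiled: with CONFIG_ARCH_MAP_DOMAIN_PAGE the architecture supplies real map/unmap helpers, and without it they collapse to direct-map address arithmetic. A user-space sketch of that compile-time split (fake_directmap and the constants are stand-ins, not Xen's actual direct map):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12

/* Stand-in for the direct map; a real port would use its own base. */
static char fake_directmap[1 << 16];

#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
/* The architecture supplies real mapping helpers (not shown here). */
void *map_domain_page(unsigned long mfn);
void unmap_domain_page(const void *va);
#else
/* No arch helper: every frame is assumed reachable through the direct
 * map, so "mapping" is pure address arithmetic and unmapping a no-op,
 * mirroring the __mfn_to_virt() fallback in domain_page.h. */
static void *map_domain_page(unsigned long mfn)
{
    return fake_directmap + ((size_t)mfn << PAGE_SHIFT);
}

static void unmap_domain_page(const void *va)
{
    (void)va;                       /* nothing to tear down */
}
#endif
```

Moving the option to Kconfig lets both the preprocessor and, in the next patch, the Makefile key off the same symbol.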



* Re: [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
                   ` (4 preceding siblings ...)
  2022-07-20 18:44 ` [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers Julien Grall
@ 2022-07-21  8:59 ` Bertrand Marquis
  2022-07-29 22:04 ` Julien Grall
  6 siblings, 0 replies; 20+ messages in thread
From: Bertrand Marquis @ 2022-07-21  8:59 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
	Konrad Rzeszutek Wilk, Ross Lagerwall, Andrew Cooper,
	George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné

Hi Julien,

> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> This series is a collection of patches to clean-up the MM subsystem
> I have done in preparation for the next revision of "xen/arm: Don't
> switch TTBR while the MMU is on" [1].
> 
> Cheers,
> 
> [1] https://lore.kernel.org/all/20220309112048.17377-1-julien@xen.org/
> 


I tested the whole series (including starting a guest) on qemu x86, qemu arm32, qemu arm64 and fvp base.

So for the whole series:
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> Julien Grall (5):
>  xen/arm: Remove most of the *_VIRT_END defines
>  xen/arm32: mm: Consolidate the domheap mappings initialization
>  xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
>  xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
>  xen/arm: mm: Reduce the area that xen_second covers
> 
> xen/arch/arm/Kconfig                |   1 +
> xen/arch/arm/Makefile               |   1 +
> xen/arch/arm/domain_page.c          | 193 ++++++++++++++++++++++++
> xen/arch/arm/include/asm/arm32/mm.h |   8 +
> xen/arch/arm/include/asm/config.h   |  19 +--
> xen/arch/arm/include/asm/lpae.h     |  17 +++
> xen/arch/arm/livepatch.c            |   2 +-
> xen/arch/arm/mm.c                   | 221 ++++------------------------
> xen/arch/arm/setup.c                |  21 ++-
> xen/arch/x86/Kconfig                |   1 +
> xen/arch/x86/include/asm/config.h   |   1 -
> xen/common/Kconfig                  |   6 +
> xen/include/xen/domain_page.h       |   6 +-
> 13 files changed, 283 insertions(+), 214 deletions(-)
> create mode 100644 xen/arch/arm/domain_page.c
> 
> -- 
> 2.32.0
> 
> 




* Re: [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization
  2022-07-20 18:44 ` [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization Julien Grall
  2022-07-21  8:36   ` Bertrand Marquis
@ 2022-07-21  9:42   ` Luca Fancellu
  1 sibling, 0 replies; 20+ messages in thread
From: Luca Fancellu @ 2022-07-21  9:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen development discussion, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the domheap mappings initialization is done separately for
> the boot CPU and secondary CPUs. The main difference is for the former
> the pages are part of Xen binary whilst for the latter they are
> dynamically allocated.
> 
> It would be good to have a single helper so it is easier to rework
> how the domheap is initialized.
> 
> For CPU0, we still need to use pre-allocated pages because the
> allocators may use domain_map_page(), so we need to have the domheap
> area ready first. But we can still delay the initialization to setup_mm().
> 
> Introduce a new helper init_domheap_mappings() that will be called
> from setup_mm() for the boot CPU and from init_secondary_pagetables()
> for secondary CPUs.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 

Hi Julien,

With Bertrand's comment addressed:

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also tested with FVP using an aarch32 configuration, starting Dom0 and
a few guests, and everything works.

Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca



* Re: [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
  2022-07-20 18:44 ` [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file Julien Grall
@ 2022-07-21 10:07   ` Jan Beulich
  2022-07-21 10:09     ` Julien Grall
  2022-07-21 10:41   ` Bertrand Marquis
  1 sibling, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2022-07-21 10:07 UTC (permalink / raw)
  To: Julien Grall
  Cc: Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	xen-devel

On 20.07.2022 20:44, Julien Grall wrote:
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -11,6 +11,9 @@ config COMPAT
>  config CORE_PARKING
>  	bool
>  
> +config DOMAIN_PAGE
> +	bool
> +
>  config GRANT_TABLE
>  	bool "Grant table support" if EXPERT
>  	default y

Is this an unintended leftover? I can't spot any use in this patch.

Jan



* Re: [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
  2022-07-21 10:07   ` Jan Beulich
@ 2022-07-21 10:09     ` Julien Grall
  0 siblings, 0 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-21 10:09 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	xen-devel

Hi,

On 21/07/2022 11:07, Jan Beulich wrote:
> On 20.07.2022 20:44, Julien Grall wrote:
>> --- a/xen/common/Kconfig
>> +++ b/xen/common/Kconfig
>> @@ -11,6 +11,9 @@ config COMPAT
>>   config CORE_PARKING
>>   	bool
>>   
>> +config DOMAIN_PAGE
>> +	bool
>> +
>>   config GRANT_TABLE
>>   	bool "Grant table support" if EXPERT
>>   	default y
> 
> Is this an unintended leftover? I can't spot any use in this patch.

This was a mistake when splitting/renaming the config. I will drop it.

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
  2022-07-20 18:44 ` [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file Julien Grall
  2022-07-21 10:07   ` Jan Beulich
@ 2022-07-21 10:41   ` Bertrand Marquis
  1 sibling, 0 replies; 20+ messages in thread
From: Bertrand Marquis @ 2022-07-21 10:41 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
	Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu

Hi Julien,

> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The file xen/arch/arm/mm.c has been growing quite a lot. It now contains
> various independent parts of the MM subsystem.
> 
> One of them is the helpers to map/unmap a page, which are only used
> by arm32 and protected by CONFIG_ARCH_MAP_DOMAIN_PAGE. Move them to a
> new file, xen/arch/arm/domain_page.c.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

With the kconfig part removed:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> 
> ----
>    Changes in v2:
>        - Move CONFIG_* to Kconfig is now in a separate patch
> ---
> xen/arch/arm/Makefile               |   1 +
> xen/arch/arm/domain_page.c          | 193 +++++++++++++++++++++++++++
> xen/arch/arm/include/asm/arm32/mm.h |   6 +
> xen/arch/arm/include/asm/lpae.h     |  17 +++
> xen/arch/arm/mm.c                   | 198 +---------------------------
> xen/common/Kconfig                  |   3 +
> 6 files changed, 222 insertions(+), 196 deletions(-)
> create mode 100644 xen/arch/arm/domain_page.c
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index bb7a6151c13c..4d076b278b10 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -17,6 +17,7 @@ obj-y += device.o
> obj-$(CONFIG_IOREQ_SERVER) += dm.o
> obj-y += domain.o
> obj-y += domain_build.init.o
> +obj-$(CONFIG_ARCH_MAP_DOMAIN_PAGE) += domain_page.o
> obj-y += domctl.o
> obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> obj-y += efi/
> diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
> new file mode 100644
> index 000000000000..63e97730cf57
> --- /dev/null
> +++ b/xen/arch/arm/domain_page.c
> @@ -0,0 +1,193 @@
> +#include <xen/mm.h>
> +#include <xen/pmap.h>
> +#include <xen/vmap.h>
> +
> +/* Override macros from asm/page.h to make them work with mfn_t */
> +#undef virt_to_mfn
> +#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> +
> +/* cpu0's domheap page tables */
> +static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
> +
> +/*
> + * xen_dommap == pages used by map_domain_page, these pages contain
> + * the second level pagetables which map the domheap region
> + * starting at DOMHEAP_VIRT_START in 2MB chunks.
> + */
> +static DEFINE_PER_CPU(lpae_t *, xen_dommap);
> +
> +/*
> + * Prepare the area that will be used to map domheap pages. They are
> + * mapped in 2MB chunks, so we need to allocate the page-tables up to
> + * the 2nd level.
> + *
> + * The caller should make sure the root page-table for @cpu has been
> + * allocated.
> + */
> +bool init_domheap_mappings(unsigned int cpu)
> +{
> +    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
> +    lpae_t *root = per_cpu(xen_pgtable, cpu);
> +    unsigned int i, first_idx;
> +    lpae_t *domheap;
> +    mfn_t mfn;
> +
> +    ASSERT(root);
> +    ASSERT(!per_cpu(xen_dommap, cpu));
> +
> +    /*
> +     * The domheap for cpu0 is before the heap is initialized. So we
> +     * need to use pre-allocated pages.
> +     */
> +    if ( !cpu )
> +        domheap = cpu0_dommap;
> +    else
> +        domheap = alloc_xenheap_pages(order, 0);
> +
> +    if ( !domheap )
> +        return false;
> +
> +    /* Ensure the domheap has no stray mappings */
> +    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
> +
> +    /*
> +     * Update the first level mapping to reference the local CPUs
> +     * domheap mapping pages.
> +     */
> +    mfn = virt_to_mfn(domheap);
> +    first_idx = first_table_offset(DOMHEAP_VIRT_START);
> +    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
> +    {
> +        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
> +        pte.pt.table = 1;
> +        write_pte(&root[first_idx + i], pte);
> +    }
> +
> +    per_cpu(xen_dommap, cpu) = domheap;
> +
> +    return true;
> +}
> +
> +void *map_domain_page_global(mfn_t mfn)
> +{
> +    return vmap(&mfn, 1);
> +}
> +
> +void unmap_domain_page_global(const void *va)
> +{
> +    vunmap(va);
> +}
> +
> +/* Map a page of domheap memory */
> +void *map_domain_page(mfn_t mfn)
> +{
> +    unsigned long flags;
> +    lpae_t *map = this_cpu(xen_dommap);
> +    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
> +    vaddr_t va;
> +    lpae_t pte;
> +    int i, slot;
> +
> +    local_irq_save(flags);
> +
> +    /* The map is laid out as an open-addressed hash table where each
> +     * entry is a 2MB superpage pte.  We use the available bits of each
> +     * PTE as a reference count; when the refcount is zero the slot can
> +     * be reused. */
> +    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
> +          i < DOMHEAP_ENTRIES;
> +          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
> +    {
> +        if ( map[slot].pt.avail < 0xf &&
> +             map[slot].pt.base == slot_mfn &&
> +             map[slot].pt.valid )
> +        {
> +            /* This slot already points to the right place; reuse it */
> +            map[slot].pt.avail++;
> +            break;
> +        }
> +        else if ( map[slot].pt.avail == 0 )
> +        {
> +            /* Commandeer this 2MB slot */
> +            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
> +            pte.pt.avail = 1;
> +            write_pte(map + slot, pte);
> +            break;
> +        }
> +
> +    }
> +    /* If the map fills up, the callers have misbehaved. */
> +    BUG_ON(i == DOMHEAP_ENTRIES);
> +
> +#ifndef NDEBUG
> +    /* Searching the hash could get slow if the map starts filling up.
> +     * Cross that bridge when we come to it */
> +    {
> +        static int max_tries = 32;
> +        if ( i >= max_tries )
> +        {
> +            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
> +            max_tries *= 2;
> +        }
> +    }
> +#endif
> +
> +    local_irq_restore(flags);
> +
> +    va = (DOMHEAP_VIRT_START
> +          + (slot << SECOND_SHIFT)
> +          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
> +
> +    /*
> +     * We may not have flushed this specific subpage at map time,
> +     * since we only flush the 4k page not the superpage
> +     */
> +    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
> +
> +    return (void *)va;
> +}
> +
> +/* Release a mapping taken with map_domain_page() */
> +void unmap_domain_page(const void *va)
> +{
> +    unsigned long flags;
> +    lpae_t *map = this_cpu(xen_dommap);
> +    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +
> +    if ( !va )
> +        return;
> +
> +    local_irq_save(flags);
> +
> +    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> +    ASSERT(map[slot].pt.avail != 0);
> +
> +    map[slot].pt.avail--;
> +
> +    local_irq_restore(flags);
> +}
> +
> +mfn_t domain_page_map_to_mfn(const void *ptr)
> +{
> +    unsigned long va = (unsigned long)ptr;
> +    lpae_t *map = this_cpu(xen_dommap);
> +    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
> +
> +    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
> +        return virt_to_mfn(va);
> +
> +    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> +    ASSERT(map[slot].pt.avail != 0);
> +
> +    return mfn_add(lpae_get_mfn(map[slot]), offset);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
> index 575373aeb985..8bfc906e7178 100644
> --- a/xen/arch/arm/include/asm/arm32/mm.h
> +++ b/xen/arch/arm/include/asm/arm32/mm.h
> @@ -1,6 +1,12 @@
> #ifndef __ARM_ARM32_MM_H__
> #define __ARM_ARM32_MM_H__
> 
> +#include <xen/percpu.h>
> +
> +#include <asm/lpae.h>
> +
> +DECLARE_PER_CPU(lpae_t *, xen_pgtable);
> +
> /*
>  * Only a limited amount of RAM, called xenheap, is always mapped on ARM32.
>  * For convenience always return false.
> diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
> index fc19cbd84772..3fdd5d0de28e 100644
> --- a/xen/arch/arm/include/asm/lpae.h
> +++ b/xen/arch/arm/include/asm/lpae.h
> @@ -261,6 +261,23 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
> #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
> #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
> 
> +/*
> + * Macros to define page-tables:
> + *  - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
> + *  in assembly code before BSS is zeroed.
> + *  - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
> + *  page-tables to be used after BSS is zeroed (typically they are only used
> + *  in C).
> + */
> +#define DEFINE_BOOT_PAGE_TABLE(name)                                          \
> +lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
> +    name[XEN_PT_LPAE_ENTRIES]
> +
> +#define DEFINE_PAGE_TABLES(name, nr)                    \
> +lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
> +
> +#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
> +
> #endif /* __ARM_LPAE_H__ */
> 
> /*
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 7a722d6c86c6..ad26ad740308 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -57,23 +57,6 @@ mm_printk(const char *fmt, ...) {}
>     } while (0)
> #endif
> 
> -/*
> - * Macros to define page-tables:
> - *  - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
> - *  in assembly code before BSS is zeroed.
> - *  - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
> - *  page-tables to be used after BSS is zeroed (typically they are only used
> - *  in C).
> - */
> -#define DEFINE_BOOT_PAGE_TABLE(name)                                          \
> -lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
> -    name[XEN_PT_LPAE_ENTRIES]
> -
> -#define DEFINE_PAGE_TABLES(name, nr)                    \
> -lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
> -
> -#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
> -
> /* Static start-of-day pagetables that we use before the allocators
>  * are up. These are used by all CPUs during bringup before switching
>  * to the CPUs own pagetables.
> @@ -110,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
> /* Main runtime page tables */
> 
> /*
> - * For arm32 xen_pgtable and xen_dommap are per-PCPU and are allocated before
> + * For arm32 xen_pgtable are per-PCPU and are allocated before
>  * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
>  *
>  * xen_second, xen_fixmap and xen_xenmap are always shared between all
> @@ -126,18 +109,10 @@ static DEFINE_PAGE_TABLE(xen_first);
> #define HYP_PT_ROOT_LEVEL 1
> /* Per-CPU pagetable pages */
> /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
> -static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> +DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
> -/*
> - * xen_dommap == pages used by map_domain_page, these pages contain
> - * the second level pagetables which map the domheap region
> - * starting at DOMHEAP_VIRT_START in 2MB chunks.
> - */
> -static DEFINE_PER_CPU(lpae_t *, xen_dommap);
> /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
> static DEFINE_PAGE_TABLE(cpu0_pgtable);
> -/* cpu0's domheap page tables */
> -static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
> #endif
> 
> /* Common pagetable leaves */
> @@ -371,175 +346,6 @@ void clear_fixmap(unsigned int map)
>     BUG_ON(res != 0);
> }
> 
> -#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
> -/*
> - * Prepare the area that will be used to map domheap pages. They are
> - * mapped in 2MB chunks, so we need to allocate the page-tables up to
> - * the 2nd level.
> - *
> - * The caller should make sure the root page-table for @cpu has been
> - * allocated.
> - */
> -bool init_domheap_mappings(unsigned int cpu)
> -{
> -    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
> -    lpae_t *root = per_cpu(xen_pgtable, cpu);
> -    unsigned int i, first_idx;
> -    lpae_t *domheap;
> -    mfn_t mfn;
> -
> -    ASSERT(root);
> -    ASSERT(!per_cpu(xen_dommap, cpu));
> -
> -    /*
> -     * The domheap for cpu0 is before the heap is initialized. So we
> -     * need to use pre-allocated pages.
> -     */
> -    if ( !cpu )
> -        domheap = cpu0_dommap;
> -    else
> -        domheap = alloc_xenheap_pages(order, 0);
> -
> -    if ( !domheap )
> -        return false;
> -
> -    /* Ensure the domheap has no stray mappings */
> -    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
> -
> -    /*
> -     * Update the first level mapping to reference the local CPUs
> -     * domheap mapping pages.
> -     */
> -    mfn = virt_to_mfn(domheap);
> -    first_idx = first_table_offset(DOMHEAP_VIRT_START);
> -    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
> -    {
> -        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
> -        pte.pt.table = 1;
> -        write_pte(&root[first_idx + i], pte);
> -    }
> -
> -    per_cpu(xen_dommap, cpu) = domheap;
> -
> -    return true;
> -}
> -
> -void *map_domain_page_global(mfn_t mfn)
> -{
> -    return vmap(&mfn, 1);
> -}
> -
> -void unmap_domain_page_global(const void *va)
> -{
> -    vunmap(va);
> -}
> -
> -/* Map a page of domheap memory */
> -void *map_domain_page(mfn_t mfn)
> -{
> -    unsigned long flags;
> -    lpae_t *map = this_cpu(xen_dommap);
> -    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
> -    vaddr_t va;
> -    lpae_t pte;
> -    int i, slot;
> -
> -    local_irq_save(flags);
> -
> -    /* The map is laid out as an open-addressed hash table where each
> -     * entry is a 2MB superpage pte.  We use the available bits of each
> -     * PTE as a reference count; when the refcount is zero the slot can
> -     * be reused. */
> -    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
> -          i < DOMHEAP_ENTRIES;
> -          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
> -    {
> -        if ( map[slot].pt.avail < 0xf &&
> -             map[slot].pt.base == slot_mfn &&
> -             map[slot].pt.valid )
> -        {
> -            /* This slot already points to the right place; reuse it */
> -            map[slot].pt.avail++;
> -            break;
> -        }
> -        else if ( map[slot].pt.avail == 0 )
> -        {
> -            /* Commandeer this 2MB slot */
> -            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
> -            pte.pt.avail = 1;
> -            write_pte(map + slot, pte);
> -            break;
> -        }
> -
> -    }
> -    /* If the map fills up, the callers have misbehaved. */
> -    BUG_ON(i == DOMHEAP_ENTRIES);
> -
> -#ifndef NDEBUG
> -    /* Searching the hash could get slow if the map starts filling up.
> -     * Cross that bridge when we come to it */
> -    {
> -        static int max_tries = 32;
> -        if ( i >= max_tries )
> -        {
> -            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
> -            max_tries *= 2;
> -        }
> -    }
> -#endif
> -
> -    local_irq_restore(flags);
> -
> -    va = (DOMHEAP_VIRT_START
> -          + (slot << SECOND_SHIFT)
> -          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
> -
> -    /*
> -     * We may not have flushed this specific subpage at map time,
> -     * since we only flush the 4k page not the superpage
> -     */
> -    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
> -
> -    return (void *)va;
> -}
> -
> -/* Release a mapping taken with map_domain_page() */
> -void unmap_domain_page(const void *va)
> -{
> -    unsigned long flags;
> -    lpae_t *map = this_cpu(xen_dommap);
> -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> -
> -    if ( !va )
> -        return;
> -
> -    local_irq_save(flags);
> -
> -    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> -    ASSERT(map[slot].pt.avail != 0);
> -
> -    map[slot].pt.avail--;
> -
> -    local_irq_restore(flags);
> -}
> -
> -mfn_t domain_page_map_to_mfn(const void *ptr)
> -{
> -    unsigned long va = (unsigned long)ptr;
> -    lpae_t *map = this_cpu(xen_dommap);
> -    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> -    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
> -
> -    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
> -        return virt_to_mfn(va);
> -
> -    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> -    ASSERT(map[slot].pt.avail != 0);
> -
> -    return mfn_add(lpae_get_mfn(map[slot]), offset);
> -}
> -#endif
> -
> void flush_page_to_ram(unsigned long mfn, bool sync_icache)
> {
>     void *v = map_domain_page(_mfn(mfn));
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index f1ea3199c8eb..f0aee2cfd9f8 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -11,6 +11,9 @@ config COMPAT
> config CORE_PARKING
> 	bool
> 
> +config DOMAIN_PAGE
> +	bool
> +
> config GRANT_TABLE
> 	bool "Grant table support" if EXPERT
> 	default y
> -- 
> 2.32.0
> 
> 




* Re: [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers
  2022-07-20 18:44 ` [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers Julien Grall
@ 2022-07-21 10:47   ` Bertrand Marquis
  0 siblings, 0 replies; 20+ messages in thread
From: Bertrand Marquis @ 2022-07-21 10:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
	Michal Orzel

Hi Julien,

> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, xen_second is used to cover the first 2GB of the
> virtual address space. With the recent rework of the page-tables,
> only the first 1GB region (where Xen resides) is effectively used.
> 
> In addition to that, I would like to reshuffle the memory layout.
> So Xen mappings may no longer be in the first 2GB of the virtual
> address space.
> 
> Therefore, rework xen_second so it only covers the 1GB region where
> Xen will reside.
> 
> With this change, xen_second no longer covers the xenheap area
> on arm32. So, we first need to add memory to the boot allocator before
> setting up the xenheap mappings.
> 
> Take the opportunity to update the comments on top of xen_fixmap and
> xen_xenmap.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand




* Re: [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
  2022-07-20 18:44 ` [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and Julien Grall
  2022-07-21  8:40   ` Bertrand Marquis
@ 2022-07-25 15:51   ` Jan Beulich
  2022-07-29 21:53     ` Julien Grall
  1 sibling, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2022-07-25 15:51 UTC (permalink / raw)
  To: Julien Grall
  Cc: Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	xen-devel

On 20.07.2022 20:44, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> move it to Kconfig.
> 
> The define CONFIG_DOMAIN_PAGE indicates whether the architecture provide
> helpers to map/unmap a domain page. Rename it to the define to
> CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that this will not remove
> support for domain page (this is not a concept that Xen can't get
> away with).

Especially the part in parentheses reads odd, if not backwards.

> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -371,7 +371,7 @@ void clear_fixmap(unsigned int map)
>      BUG_ON(res != 0);
>  }
>  
> -#ifdef CONFIG_DOMAIN_PAGE
> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
>  /*
>   * Prepare the area that will be used to map domheap pages. They are
>   * mapped in 2MB chunks, so we need to allocate the page-tables up to

What about the other #ifdef in build_assertions()? With that also
converted (and preferably with the description adjusted)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
  2022-07-25 15:51   ` Jan Beulich
@ 2022-07-29 21:53     ` Julien Grall
  0 siblings, 0 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-29 21:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Andrew Cooper, George Dunlap, Wei Liu,
	Roger Pau Monné,
	xen-devel

Hi Jan,

On 25/07/2022 16:51, Jan Beulich wrote:
> On 20.07.2022 20:44, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> move it to Kconfig.
>>
>> The define CONFIG_DOMAIN_PAGE indicates whether the architecture provide
>> helpers to map/unmap a domain page. Rename it to the define to
>> CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that this will not remove
>> support for domain page (this is not a concept that Xen can't get
>> away with).
> 
> Especially the part in parentheses reads odd, if not backwards.

Indeed. I tweaked the sentence to:

Rename it to CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that support 
for domain page is not something that can be disabled in Xen.

> 
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -371,7 +371,7 @@ void clear_fixmap(unsigned int map)
>>       BUG_ON(res != 0);
>>   }
>>   
>> -#ifdef CONFIG_DOMAIN_PAGE
>> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
>>   /*
>>    * Prepare the area that will be used to map domheap pages. They are
>>    * mapped in 2MB chunks, so we need to allocate the page-tables up to
> 
> What about the other #ifdef in build_assertions()? With that also
> converted (and preferably with the description adjusted)

Good catch. I have updated the patch.

> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks for the review!

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
  2022-07-21  8:40   ` Bertrand Marquis
@ 2022-07-29 21:54     ` Julien Grall
  0 siblings, 0 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-29 21:54 UTC (permalink / raw)
  To: Bertrand Marquis
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
	Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu,
	Roger Pau Monné

On 21/07/2022 09:40, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

>> On 20 Jul 2022, at 19:44, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> move it to Kconfig.
>>
>> The define CONFIG_DOMAIN_PAGE indicates whether the architecture provide
>> helpers to map/unmap a domain page. Rename it to the define to
> 
> Maybe “the define to” can be removed in this sentence or it needs some rephrasing.

I have removed "the define to".

> 
>> CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that this will not remove
>> support for domain page (this is not a concept that Xen can't get
>> away with).
>>
>> Take the opportunity to move CONFIG_MAP_DOMAIN_PAGE to Kconfig as this
>> will soon be necessary to use it in the Makefile.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> With this fixed:
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com> #arm part

Thanks!

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups
  2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
                   ` (5 preceding siblings ...)
  2022-07-21  8:59 ` [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Bertrand Marquis
@ 2022-07-29 22:04 ` Julien Grall
  6 siblings, 0 replies; 20+ messages in thread
From: Julien Grall @ 2022-07-29 22:04 UTC (permalink / raw)
  To: xen-devel
  Cc: Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Konrad Rzeszutek Wilk, Ross Lagerwall,
	Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu,
	Roger Pau Monné

Hi,

On 20/07/2022 19:44, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> This series is a collection of patches to clean-up the MM subsystem
> I have done in preparation for the next revision of "xen/arm: Don't
> switch TTBR while the MMU is on" [1].
> 
> Cheers,
> 
> [1] https://lore.kernel.org/all/20220309112048.17377-1-julien@xen.org/
> 
> Julien Grall (5):
>    xen/arm: Remove most of the *_VIRT_END defines
>    xen/arm32: mm: Consolidate the domheap mappings initialization
>    xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
>    xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
>    xen/arm: mm: Reduce the area that xen_second covers

I have committed this series.

Cheers,

-- 
Julien Grall



end of thread, other threads:[~2022-07-29 22:04 UTC | newest]

Thread overview: 20+ messages
2022-07-20 18:44 [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Julien Grall
2022-07-20 18:44 ` [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines Julien Grall
2022-07-21  8:09   ` Bertrand Marquis
2022-07-21  8:35   ` Luca Fancellu
2022-07-20 18:44 ` [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization Julien Grall
2022-07-21  8:36   ` Bertrand Marquis
2022-07-21  9:42   ` Luca Fancellu
2022-07-20 18:44 ` [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and Julien Grall
2022-07-21  8:40   ` Bertrand Marquis
2022-07-29 21:54     ` Julien Grall
2022-07-25 15:51   ` Jan Beulich
2022-07-29 21:53     ` Julien Grall
2022-07-20 18:44 ` [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file Julien Grall
2022-07-21 10:07   ` Jan Beulich
2022-07-21 10:09     ` Julien Grall
2022-07-21 10:41   ` Bertrand Marquis
2022-07-20 18:44 ` [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers Julien Grall
2022-07-21 10:47   ` Bertrand Marquis
2022-07-21  8:59 ` [PATCH v2 0/5] xen/arm: mm: Bunch of clean-ups Bertrand Marquis
2022-07-29 22:04 ` Julien Grall
