* [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on
@ 2023-01-13 10:11 Julien Grall
  2023-01-13 10:11 ` [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
                   ` (14 more replies)
  0 siblings, 15 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

Hi all,

Currently, Xen on Arm will switch TTBR whilst the MMU is on. This is
similar to replacing existing mappings with new ones. So we need to
follow a break-before-make sequence.

When switching the TTBR, we need to temporarily disable the MMU
before updating the TTBR. This means the page-tables must contain an
identity mapping.
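
As a reminder, break-before-make for a live translation-table entry
looks roughly like the following (a minimal sketch using Xen-style
helpers; replace_entry_bbm() is made up for illustration):

    /* Break-before-make: replace a live mapping (illustrative sketch) */
    static void replace_entry_bbm(lpae_t *entry, lpae_t new, vaddr_t va)
    {
        lpae_t invalid = { .bits = 0 };

        write_pte(entry, invalid);              /* 1. Break: invalidate */
        flush_xen_tlb_range_va(va, PAGE_SIZE);  /* 2. Flush stale TLB entries */
        write_pte(entry, new);                  /* 3. Make: install new entry */
    }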

The current memory layout is not very flexible and has a higher chance
of clashing with the identity mapping.

On Arm64, we have plenty of unused virtual address space. Therefore, we
can simply reshuffle the layout to leave the first part of the virtual
address space empty.

On Arm32, the virtual address space is already quite full. Even if we
found space, it would be necessary to have a dynamic layout. So a
different approach is necessary: use a temporary mapping to jump from
the ID mapping to the runtime mapping (or vice versa). The temporary
mapping overlaps with the domheap area, as the domheap should not be
in use while the MMU is being switched on/off.

The Arm32 part is not yet addressed and will be handled in a follow-up
series.

After this series, most of Xen's page-table code should be compliant
with the Arm Arm. The last two issues I am aware of are:
 - domheap: Mappings are replaced without using the Break-Before-Make
   approach.
 - The cache is not cleaned/invalidated when updating the page-tables
   with the data cache off (as during early boot).

The long term plan is to get rid of the boot_* page tables and then
directly use the runtime pages. This means that, for coloring, we will
need to build the page tables in the relocated Xen rather than the
current Xen.

For convenience, I pushed a branch with everything applied:

https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
branch boot-pt-rework-v4

Cheers,

Julien Grall (14):
  xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
  xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB
    flush by VA
  xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
  xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
  xen/arm: Clean-up the memory layout
  xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
  xen/arm32: head: Jump to the runtime mapping in enable_mmu()
  xen/arm32: head: Introduce an helper to flush the TLBs
  xen/arm32: head: Remove restriction where to load Xen
  xen/arm32: head: Widen the use of the temporary mapping
  xen/arm64: Rework the memory layout
  xen/arm64: mm: Introduce helpers to prepare/enable/disable the
    identity mapping
  xen/arm64: mm: Rework switch_ttbr()
  xen/arm64: smpboot: Directly switch to the runtime page-tables

 xen/arch/arm/arm32/head.S                 | 283 ++++++++++++++--------
 xen/arch/arm/arm32/smpboot.c              |   4 +
 xen/arch/arm/arm64/Makefile               |   1 +
 xen/arch/arm/arm64/head.S                 |  82 ++++---
 xen/arch/arm/arm64/mm.c                   | 160 ++++++++++++
 xen/arch/arm/arm64/smpboot.c              |  15 +-
 xen/arch/arm/include/asm/arm32/flushtlb.h |  27 ++-
 xen/arch/arm/include/asm/arm32/mm.h       |   4 +
 xen/arch/arm/include/asm/arm64/flushtlb.h |  56 +++--
 xen/arch/arm/include/asm/arm64/mm.h       |  13 +
 xen/arch/arm/include/asm/config.h         |  72 ++++--
 xen/arch/arm/include/asm/flushtlb.h       |  10 +-
 xen/arch/arm/include/asm/mm.h             |   2 +
 xen/arch/arm/include/asm/setup.h          |  11 +
 xen/arch/arm/include/asm/smp.h            |   1 +
 xen/arch/arm/mm.c                         |  33 ++-
 xen/arch/arm/smpboot.c                    |   1 +
 17 files changed, 566 insertions(+), 209 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

-- 
2.38.1




* [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 11:36   ` Henry Wang
  2023-01-13 10:11 ` [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA Julien Grall
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

From: Julien Grall <jgrall@amazon.com>

Per D5-4929 in ARM DDI 0487H.a:
"A DSB NSH is sufficient to ensure completion of TLB maintenance
 instructions that apply to a single PE. A DSB ISH is sufficient to
 ensure completion of TLB maintenance instructions that apply to PEs
 in the same Inner Shareable domain.
"

This means the barrier after local TLB flushes can be reduced to
non-shareable.

Note that the scope of the barrier in the workaround has not been
changed because Linux v6.1-rc8 is also using 'ish' and I couldn't
find anything in the Neoverse N1 documentation suggesting that 'nsh'
would be sufficient.
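
For illustration, with this change TLB_HELPER(flush_xen_tlb_local,
alle2, nsh) expands to roughly the following (the REPEAT_TLBI
alternative is elided for brevity):

    static inline void flush_xen_tlb_local(void)
    {
        asm volatile(
            "dsb  nshst;"   /* Order prior page-table stores (local scope) */
            "tlbi alle2;"   /* Flush all EL2 TLB entries on this PE */
            "dsb  nsh;"     /* Wait for the local flush to complete */
            "isb;"
            : : : "memory");
    }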

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----

    I have used an older version of the Arm Arm because the explanation
    in the latest (ARM DDI 0487I.a) is less obvious. I reckon the paragraph
    about DSB in D8.13.8 is missing the shareability. But this is implied
    in B2.3.11:

    "If the required access types of the DSB is reads and writes, the
     following instructions issued by PEe before the DSB are complete for
     the required shareability domain:

     [...]

     — All TLB maintenance instructions.
    "

    Changes in v4:
        - Fix typos
        - Don't modify the shareability in the workaround
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/arm64/flushtlb.h | 25 ++++++++++++++---------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
index 7c5431518741..a40693b08dd3 100644
--- a/xen/arch/arm/include/asm/arm64/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
@@ -12,8 +12,9 @@
  * ARM64_WORKAROUND_REPEAT_TLBI:
  * Modification of the translation table for a virtual address might lead to
  * read-after-read ordering violation.
- * The workaround repeats TLBI+DSB operation for all the TLB flush operations.
- * While this is stricly not necessary, we don't want to take any risk.
+ * The workaround repeats TLBI+DSB ISH operation for all the TLB flush
+ * operations. While this is strictly not necessary, we don't want to
+ * take any risk.
  *
  * For Xen page-tables the ISB will discard any instructions fetched
  * from the old mappings.
@@ -21,12 +22,16 @@
  * For the Stage-2 page-tables the ISB ensures the completion of the DSB
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
+ *
+ * Note that for local TLB flush, using non-shareable (nsh) is sufficient
+ * (see D5-4929 in ARM DDI 0487H.a). However, the memory barrier
+ * for the workaround is left as inner-shareable to match Linux.
  */
-#define TLB_HELPER(name, tlbop)                  \
+#define TLB_HELPER(name, tlbop, sh)              \
 static inline void name(void)                    \
 {                                                \
     asm volatile(                                \
-        "dsb  ishst;"                            \
+        "dsb  "  # sh  "st;"                     \
         "tlbi "  # tlbop  ";"                    \
         ALTERNATIVE(                             \
             "nop; nop;",                         \
@@ -34,25 +39,25 @@ static inline void name(void)                    \
             "tlbi "  # tlbop  ";",               \
             ARM64_WORKAROUND_REPEAT_TLBI,        \
             CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
-        "dsb  ish;"                              \
+        "dsb  "  # sh  ";"                       \
         "isb;"                                   \
         : : : "memory");                         \
 }
 
 /* Flush local TLBs, current VMID only. */
-TLB_HELPER(flush_guest_tlb_local, vmalls12e1);
+TLB_HELPER(flush_guest_tlb_local, vmalls12e1, nsh);
 
 /* Flush innershareable TLBs, current VMID only */
-TLB_HELPER(flush_guest_tlb, vmalls12e1is);
+TLB_HELPER(flush_guest_tlb, vmalls12e1is, ish);
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb_local, alle1);
+TLB_HELPER(flush_all_guests_tlb_local, alle1, nsh);
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb, alle1is);
+TLB_HELPER(flush_all_guests_tlb, alle1is, ish);
 
 /* Flush all hypervisor mappings from the TLB of the local processor. */
-TLB_HELPER(flush_xen_tlb_local, alle2);
+TLB_HELPER(flush_xen_tlb_local, alle2, nsh);
 
 /* Flush TLB of local processor for address va. */
 static inline void  __flush_xen_tlb_one_local(vaddr_t va)
-- 
2.38.1




* [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
  2023-01-13 10:11 ` [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 13:22   ` Henry Wang
  2023-01-13 17:56   ` Luca Fancellu
  2023-01-13 10:11 ` [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
                   ` (12 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

From: Julien Grall <jgrall@amazon.com>

Looking at the Neoverse N1 errata document, it is not clear to me
why the TLBI repeat workaround is not applied for TLB flush by VA.

The TLB flush by VA helpers are used in flush_xen_tlb_range_va_local()
and flush_xen_tlb_range_va(). So if the range size is a fixed size smaller
than PAGE_SIZE, the compiler could remove the loop and thereby
replicate the sequence described in erratum 1286807.

So the TLBI repeat workaround should also be applied for the TLB flush
by VA helpers.
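
To illustrate, consider a hypothetical caller where the size is a
build-time constant:

    /* Hypothetical caller: size is known at compile time */
    static void flush_one_page(vaddr_t va)
    {
        /*
         * The loop inside flush_xen_tlb_range_va_local() runs exactly
         * once here, so the compiler can emit a single TLBI: the very
         * pattern erratum 1286807 describes.
         */
        flush_xen_tlb_range_va_local(va, PAGE_SIZE);
    }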

Fixes: 22e323d115d8 ("xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807")
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----
    This was spotted while looking at reducing the scope of the memory
    barriers. I don't have any HW affected.

    Changes in v4:
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/arm64/flushtlb.h | 31 +++++++++++++++++------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
index a40693b08dd3..7f81cbbb93f9 100644
--- a/xen/arch/arm/include/asm/arm64/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
@@ -44,6 +44,27 @@ static inline void name(void)                    \
         : : : "memory");                         \
 }
 
+/*
+ * Flush TLB by VA. This will likely be used in a loop, so the caller
+ * is responsible for using the appropriate memory barriers before/after
+ * the sequence.
+ *
+ * See above about the ARM64_WORKAROUND_REPEAT_TLBI sequence.
+ */
+#define TLB_HELPER_VA(name, tlbop)               \
+static inline void name(vaddr_t va)              \
+{                                                \
+    asm volatile(                                \
+        "tlbi "  # tlbop  ", %0;"                \
+        ALTERNATIVE(                             \
+            "nop; nop;",                         \
+            "dsb  ish;"                          \
+            "tlbi "  # tlbop  ", %0;",           \
+            ARM64_WORKAROUND_REPEAT_TLBI,        \
+            CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
+        : : "r" (va >> PAGE_SHIFT) : "memory");  \
+}
+
 /* Flush local TLBs, current VMID only. */
 TLB_HELPER(flush_guest_tlb_local, vmalls12e1, nsh);
 
@@ -60,16 +81,10 @@ TLB_HELPER(flush_all_guests_tlb, alle1is, ish);
 TLB_HELPER(flush_xen_tlb_local, alle2, nsh);
 
 /* Flush TLB of local processor for address va. */
-static inline void  __flush_xen_tlb_one_local(vaddr_t va)
-{
-    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
+TLB_HELPER_VA(__flush_xen_tlb_one_local, vae2);
 
 /* Flush TLB of all processors in the inner-shareable domain for address va. */
-static inline void __flush_xen_tlb_one(vaddr_t va)
-{
-    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
+TLB_HELPER_VA(__flush_xen_tlb_one, vae2is);
 
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
-- 
2.38.1




* [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
  2023-01-13 10:11 ` [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
  2023-01-13 10:11 ` [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 13:46   ` Henry Wang
  2023-01-13 10:11 ` [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush Julien Grall
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

From: Julien Grall <jgrall@amazon.com>

Per G5-9224 in ARM DDI 0487I.a:

"A DSB NSH is sufficient to ensure completion of TLB maintenance
 instructions that apply to a single PE. A DSB ISH is sufficient to
 ensure completion of TLB maintenance instructions that apply to PEs
 in the same Inner Shareable domain.
"

This is quoting the Armv8 specification because I couldn't find an
explicit statement in the Armv7 specification. Instead, I could find
bits in various places that confirm the same implementation.

Furthermore, Linux has been using 'nsh' since 2013 (62cbbc42e001
"ARM: tlb: reduce scope of barrier domains for TLB invalidation").

This means the barrier after local TLB flushes can be reduced to
non-shareable.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----

    Changes in v4:
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/arm32/flushtlb.h | 27 +++++++++++++----------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm32/flushtlb.h b/xen/arch/arm/include/asm/arm32/flushtlb.h
index 9085e6501153..7ae6a12f8155 100644
--- a/xen/arch/arm/include/asm/arm32/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm32/flushtlb.h
@@ -15,30 +15,33 @@
  * For the Stage-2 page-tables the ISB ensures the completion of the DSB
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
+ *
+ * Note that for local TLB flush, using non-shareable (nsh) is sufficient
+ * (see G5-9224 in ARM DDI 0487I.a).
  */
-#define TLB_HELPER(name, tlbop) \
-static inline void name(void)   \
-{                               \
-    dsb(ishst);                 \
-    WRITE_CP32(0, tlbop);       \
-    dsb(ish);                   \
-    isb();                      \
+#define TLB_HELPER(name, tlbop, sh) \
+static inline void name(void)       \
+{                                   \
+    dsb(sh ## st);                  \
+    WRITE_CP32(0, tlbop);           \
+    dsb(sh);                        \
+    isb();                          \
 }
 
 /* Flush local TLBs, current VMID only */
-TLB_HELPER(flush_guest_tlb_local, TLBIALL);
+TLB_HELPER(flush_guest_tlb_local, TLBIALL, nsh);
 
 /* Flush inner shareable TLBs, current VMID only */
-TLB_HELPER(flush_guest_tlb, TLBIALLIS);
+TLB_HELPER(flush_guest_tlb, TLBIALLIS, ish);
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH);
+TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH, nsh);
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS);
+TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS, ish);
 
 /* Flush all hypervisor mappings from the TLB of the local processor. */
-TLB_HELPER(flush_xen_tlb_local, TLBIALLH);
+TLB_HELPER(flush_xen_tlb_local, TLBIALLH, nsh);
 
 /* Flush TLB of local processor for address va. */
 static inline void __flush_xen_tlb_one_local(vaddr_t va)
-- 
2.38.1




* [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (2 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 13:53   ` Henry Wang
  2023-01-13 10:11 ` [PATCH v4 05/14] xen/arm: Clean-up the memory layout Julien Grall
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

From: Julien Grall <jgrall@amazon.com>

At the moment, flush_xen_tlb_range_va{,_local}() are using system-wide
memory barriers. This is quite expensive and unnecessary.

For the local version, a non-shareable barrier is sufficient.
For the SMP version, an inner-shareable barrier is sufficient.

Furthermore, the initial barrier only needs to be a store barrier.

For the full explanation of the sequence see asm/arm{32,64}/flushtlb.h.
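
To illustrate why a store barrier is enough, consider a sketch of a
typical caller (write_pte() is Xen's PTE write helper; this exact
call site is made up):

    static void update_boot_mapping(lpae_t *entry, lpae_t new, vaddr_t va)
    {
        /*
         * The only work that must be ordered before the TLBI is the
         * plain store below, so the "dsb nshst" at the start of the
         * flush helper suffices; a full "dsb sy" would also wait for
         * unrelated loads and device accesses.
         */
        write_pte(entry, new);
        flush_xen_tlb_range_va_local(va, PAGE_SIZE);
    }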

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----
    Changes in v4:
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/flushtlb.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/flushtlb.h b/xen/arch/arm/include/asm/flushtlb.h
index 125a141975e0..e45fb6d97b02 100644
--- a/xen/arch/arm/include/asm/flushtlb.h
+++ b/xen/arch/arm/include/asm/flushtlb.h
@@ -37,13 +37,14 @@ static inline void flush_xen_tlb_range_va_local(vaddr_t va,
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(nshst); /* Ensure prior page-table updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one_local(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(nsh); /* Ensure the TLB invalidation has completed */
     isb();
 }
 
@@ -56,13 +57,14 @@ static inline void flush_xen_tlb_range_va(vaddr_t va,
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(ishst); /* Ensure prior page-table updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(ish); /* Ensure the TLB invalidation has completed */
     isb();
 }
 
-- 
2.38.1




* [PATCH v4 05/14] xen/arm: Clean-up the memory layout
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (3 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 13:57   ` Henry Wang
  2023-01-24 19:30   ` Julien Grall
  2023-01-13 10:11 ` [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>" Julien Grall
                   ` (9 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

From: Julien Grall <jgrall@amazon.com>

In a follow-up patch, the base address for the common mappings will
vary between arm32 and arm64. To avoid any duplication, define
every mapping in the common region from the previous one.

Take the opportunity to:
    * add missing *_SIZE for FIXMAP_VIRT_* and XEN_VIRT_*
    * switch to MB()/GB() to avoid hexadecimal (easier to read)
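
With this change, the common region derives as follows (values as
defined by the patch below):

    XEN_VIRT_START       = 2MB                (size 2MB)
    FIXMAP_VIRT_START    = 2MB + 2MB = 4MB    (size 2MB)
    BOOT_FDT_VIRT_START  = 4MB + 2MB = 6MB    (size 4MB)
    LIVEPATCH_VMAP_START = 6MB + 4MB = 10MB   (size 2MB, if enabled)

which matches the previous hardcoded 0x00200000, 0x00400000,
0x00600000 and 0x00a00000 values.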

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----
    Changes in v4:
        - Add Michal's reviewed-by tag
        - Fix typo in the commit message

    Changes in v3:
        - Switch more macros to use MB()/GB()
        - Remove duplicated sentence in the commit message

    Changes in v2:
        - Use _AT(vaddr_t, ...) to build on 32-bit.
        - Drop COMMON_VIRT_START
---
 xen/arch/arm/include/asm/config.h | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 0fefed1b8aa9..87851e677701 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -107,14 +107,19 @@
  *  Unused
  */
 
-#define XEN_VIRT_START         _AT(vaddr_t,0x00200000)
-#define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
+#define XEN_VIRT_START          _AT(vaddr_t, MB(2))
+#define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
 
-#define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
-#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
+#define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
+#define FIXMAP_VIRT_SIZE        _AT(vaddr_t, MB(2))
+
+#define FIXMAP_ADDR(n)          (FIXMAP_VIRT_START + (n) * PAGE_SIZE)
+
+#define BOOT_FDT_VIRT_START     (FIXMAP_VIRT_START + FIXMAP_VIRT_SIZE)
+#define BOOT_FDT_VIRT_SIZE      _AT(vaddr_t, MB(4))
 
 #ifdef CONFIG_LIVEPATCH
-#define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
+#define LIVEPATCH_VMAP_START    (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
 #define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
 #endif
 
@@ -124,18 +129,18 @@
 
 #define CONFIG_SEPARATE_XENHEAP 1
 
-#define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t, MB(32))
 #define FRAMETABLE_SIZE        MB(128-32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
 #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
-#define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
+#define VMAP_VIRT_START        _AT(vaddr_t, MB(256))
 #define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
 
-#define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
+#define XENHEAP_VIRT_START     _AT(vaddr_t, GB(1))
 #define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
 
-#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_START     _AT(vaddr_t, GB(2))
 #define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
-- 
2.38.1




* [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (4 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 05/14] xen/arm: Clean-up the memory layout Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 10:45   ` Michal Orzel
  2023-01-14  0:51   ` Henry Wang
  2023-01-13 10:11 ` [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu() Julien Grall
                   ` (8 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

"ldr rX, =<label>" is used to load a value from the literal pool. This
implies a memory access.

This can be avoided by using the mov_w macro, which encodes the value
in the immediates of two instructions.

So replace all "ldr rX, =<label>" with "mov_w rX, <label>".

No functional changes intended.
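
For reference, the difference boils down to the following (an
illustrative GNU inline-asm sketch for arm32, assuming a symbol named
start as in head.S):

    unsigned long addr;

    /* "ldr rX, =<label>": one load from the literal pool */
    asm ("ldr %0, =start" : "=r" (addr));

    /* "mov_w rX, <label>": two immediate moves, no memory access */
    asm ("movw %0, #:lower16:start\n\t"
         "movt %0, #:upper16:start" : "=r" (addr));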

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

----
    Changes in v4:
        * Add Stefano's reviewed-by tag
        * Add missing space
        * Add Michal's reviewed-by tag

    Changes in v3:
        * Patch added
---
 xen/arch/arm/arm32/head.S | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 5c1044710386..b680a4553fb6 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -62,7 +62,7 @@
 .endm
 
 .macro load_paddr rb, sym
-        ldr   \rb, =\sym
+        mov_w \rb, \sym
         add   \rb, \rb, r10
 .endm
 
@@ -149,7 +149,7 @@ past_zImage:
         mov   r8, r2                 /* r8 := DTB base address */
 
         /* Find out where we are */
-        ldr   r0, =start
+        mov_w r0, start
         adr   r9, start              /* r9  := paddr (start) */
         sub   r10, r9, r0            /* r10 := phys-offset */
 
@@ -170,7 +170,7 @@ past_zImage:
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   r0, =primary_switched
+        mov_w r0, primary_switched
         mov   pc, r0
 primary_switched:
         /*
@@ -190,7 +190,7 @@ primary_switched:
         /* Setup the arguments for start_xen and jump to C world */
         mov   r0, r10                /* r0 := Physical offset */
         mov   r1, r8                 /* r1 := paddr(FDT) */
-        ldr   r2, =start_xen
+        mov_w r2, start_xen
         b     launch
 ENDPROC(start)
 
@@ -198,7 +198,7 @@ GLOBAL(init_secondary)
         cpsid aif                    /* Disable all interrupts */
 
         /* Find out where we are */
-        ldr   r0, =start
+        mov_w r0, start
         adr   r9, start              /* r9  := paddr (start) */
         sub   r10, r9, r0            /* r10 := phys-offset */
 
@@ -227,7 +227,7 @@ GLOBAL(init_secondary)
 
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   r0, =secondary_switched
+        mov_w r0, secondary_switched
         mov   pc, r0
 secondary_switched:
         /*
@@ -236,7 +236,7 @@ secondary_switched:
          *
          * XXX: This is not compliant with the Arm Arm.
          */
-        ldr   r4, =init_ttbr         /* VA of HTTBR value stashed by CPU 0 */
+        mov_w r4, init_ttbr          /* VA of HTTBR value stashed by CPU 0 */
         ldrd  r4, r5, [r4]           /* Actual value */
         dsb
         mcrr  CP64(r4, r5, HTTBR)
@@ -254,7 +254,7 @@ secondary_switched:
 #endif
         PRINT("- Ready -\r\n")
         /* Jump to C world */
-        ldr   r2, =start_secondary
+        mov_w r2, start_secondary
         b     launch
 ENDPROC(init_secondary)
 
@@ -297,8 +297,8 @@ ENDPROC(check_cpu_mode)
  */
 zero_bss:
         PRINT("- Zero BSS -\r\n")
-        ldr   r0, =__bss_start       /* r0 := vaddr(__bss_start) */
-        ldr   r1, =__bss_end         /* r1 := vaddr(__bss_start) */
+        mov_w r0, __bss_start        /* r0 := vaddr(__bss_start) */
+        mov_w r1, __bss_end          /* r1 := vaddr(__bss_end) */
 
         mov   r2, #0
 1:      str   r2, [r0], #4
@@ -330,8 +330,8 @@ cpu_init:
 
 cpu_init_done:
         /* Set up memory attribute type tables */
-        ldr   r0, =MAIR0VAL
-        ldr   r1, =MAIR1VAL
+        mov_w r0, MAIR0VAL
+        mov_w r1, MAIR1VAL
         mcr   CP32(r0, HMAIR0)
         mcr   CP32(r1, HMAIR1)
 
@@ -341,10 +341,10 @@ cpu_init_done:
          * PT walks are write-back, write-allocate in both cache levels,
          * Full 32-bit address space goes through this table.
          */
-        ldr   r0, =(TCR_RES1|TCR_SH0_IS|TCR_ORGN0_WBWA|TCR_IRGN0_WBWA|TCR_T0SZ(0))
+        mov_w r0, (TCR_RES1|TCR_SH0_IS|TCR_ORGN0_WBWA|TCR_IRGN0_WBWA|TCR_T0SZ(0))
         mcr   CP32(r0, HTCR)
 
-        ldr   r0, =HSCTLR_SET
+        mov_w r0, HSCTLR_SET
         mcr   CP32(r0, HSCTLR)
         isb
 
@@ -452,7 +452,7 @@ ENDPROC(cpu_init)
  */
 create_page_tables:
         /* Prepare the page-tables for mapping Xen */
-        ldr   r0, =XEN_VIRT_START
+        mov_w r0, XEN_VIRT_START
         create_table_entry boot_pgtable, boot_second, r0, 1
         create_table_entry boot_second, boot_third, r0, 2
 
@@ -576,7 +576,7 @@ remove_identity_mapping:
         cmp   r1, #XEN_FIRST_SLOT
         beq   1f
         /* It is not in slot 0, remove the entry */
-        ldr   r0, =boot_pgtable      /* r0 := root table */
+        mov_w r0, boot_pgtable       /* r0 := root table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
         b     identity_mapping_removed
@@ -590,7 +590,7 @@ remove_identity_mapping:
         cmp   r1, #XEN_SECOND_SLOT
         beq   identity_mapping_removed
         /* It is not in slot 1, remove the entry */
-        ldr   r0, =boot_second       /* r0 := second table */
+        mov_w r0, boot_second        /* r0 := second table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
 
@@ -620,7 +620,7 @@ ENDPROC(remove_identity_mapping)
 setup_fixmap:
 #if defined(CONFIG_EARLY_PRINTK)
         /* Add UART to the fixmap table */
-        ldr   r0, =EARLY_UART_VIRTUAL_ADDRESS
+        mov_w r0, EARLY_UART_VIRTUAL_ADDRESS
         create_mapping_entry xen_fixmap, r0, r11, type=PT_DEV_L3
 #endif
         /* Map fixmap into boot_second */
@@ -643,7 +643,7 @@ ENDPROC(setup_fixmap)
  * Clobbers r3
  */
 launch:
-        ldr   r3, =init_data
+        mov_w r3, init_data
         add   r3, #INITINFO_stack    /* Find the boot-time stack */
         ldr   sp, [r3]
         add   sp, #STACK_SIZE        /* (which grows down from the top). */
-- 
2.38.1




* [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu()
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (5 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>" Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-14  1:33   ` Henry Wang
  2023-01-13 10:11 ` [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs Julien Grall
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

At the moment, enable_mmu() will return to an address in the 1:1 mapping
and each path is responsible for switching to the runtime mapping.

In a follow-up patch, the logic to switch to the runtime mapping
will become more complex. So to avoid more code/comment duplication,
move the switch into enable_mmu().

Lastly, take the opportunity to replace load from literal pool with
mov_w.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

----
    Changes in v4:
        - Add Stefano's reviewed-by tag

    Changes in v3:
        - Fix typo in the commit message

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 50 +++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index b680a4553fb6..50ad6c948be2 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -167,19 +167,11 @@ past_zImage:
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
-        bl    enable_mmu
 
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        mov_w r0, primary_switched
-        mov   pc, r0
+        /* Address in the runtime mapping to jump to after the MMU is enabled */
+        mov_w lr, primary_switched
+        b     enable_mmu
 primary_switched:
-        /*
-         * The 1:1 map may clash with other parts of the Xen virtual memory
-         * layout. As it is not used anymore, remove it completely to
-         * avoid having to worry about replacing existing mapping
-         * afterwards.
-         */
-        bl    remove_identity_mapping
         bl    setup_fixmap
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -223,12 +215,10 @@ GLOBAL(init_secondary)
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
-        bl    enable_mmu
 
-
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        mov_w r0, secondary_switched
-        mov   pc, r0
+        /* Address in the runtime mapping to jump to after the MMU is enabled */
+        mov_w lr, secondary_switched
+        b     enable_mmu
 secondary_switched:
         /*
          * Non-boot CPUs need to move on to the proper pagetables, which were
@@ -523,9 +513,12 @@ virtphys_clash:
 ENDPROC(create_page_tables)
 
 /*
- * Turn on the Data Cache and the MMU. The function will return on the 1:1
- * mapping. In other word, the caller is responsible to switch to the runtime
- * mapping.
+ * Turn on the Data Cache and the MMU. The function will return
+ * to the virtual address provided in LR (e.g. the runtime mapping).
+ *
+ * Inputs:
+ *   r9 : paddr(start)
+ *   lr : Virtual address to return to
  *
  * Clobbers r0 - r3
  */
@@ -551,7 +544,24 @@ enable_mmu:
         dsb                          /* Flush PTE writes and finish reads */
         mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
         isb                          /* Now, flush the icache */
-        mov   pc, lr
+
+        /*
+         * The MMU is turned on and we are in the 1:1 mapping. Switch
+         * to the runtime mapping.
+         */
+        mov_w r0, 1f
+        mov   pc, r0
+1:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mapping
+         * afterwards.
+         *
+         * On return this will jump to the virtual address requested by
+         * the caller.
+         */
+        b     remove_identity_mapping
 ENDPROC(enable_mmu)
 
 /*
-- 
2.38.1




* [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (6 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu() Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 10:46   ` Michal Orzel
  2023-01-14  2:16   ` Henry Wang
  2023-01-13 10:11 ` [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen Julien Grall
                   ` (6 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

The sequence for flushing the TLBs is 4 instructions long and often
requires an explanation of how it works.

So create a helper and use it in the boot code (switch_ttbr() is left
alone until we decide on the semantics of the call).

Note that in secondary_switched, we were also flushing the instruction
cache and branch predictor. Neither was necessary because:
    * We only support IVIPT caches on arm32, so an instruction
      cache flush is only necessary when executable code is modified.
      None of the boot code does that.
    * The instruction cache is not invalidated, and misprediction is
      not a problem at boot.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v4:
        - Expand the commit message to explain why switch_ttbr() is
          not updated.
        - Remove extra spaces in the comment
        - Fix typo in the commit message

    Changes in v3:
        * Fix typo
        * Update the documentation
        * Rename the argument from tmp1 to tmp
---
 xen/arch/arm/arm32/head.S | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 50ad6c948be2..67b910808b74 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -66,6 +66,20 @@
         add   \rb, \rb, r10
 .endm
 
+/*
+ * Flush local TLBs
+ *
+ * @tmp: Scratch register
+ *
+ * See asm/arm32/flushtlb.h for the explanation of the sequence.
+ */
+.macro flush_xen_tlb_local tmp
+        dsb   nshst
+        mcr   CP32(\tmp, TLBIALLH)
+        dsb   nsh
+        isb
+.endm
+
 /*
  * Common register usage in this file:
  *   r0  -
@@ -232,11 +246,7 @@ secondary_switched:
         mcrr  CP64(r4, r5, HTTBR)
         dsb
         isb
-        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
-        mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
-        mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
-        dsb                          /* Ensure completion of TLB+BP flush */
-        isb
+        flush_xen_tlb_local r0
 
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -529,8 +539,7 @@ enable_mmu:
          * The state of the TLBs is unknown before turning on the MMU.
          * Flush them to avoid stale one.
          */
-        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLBs */
-        dsb   nsh
+        flush_xen_tlb_local r0
 
         /* Write Xen's PT's paddr into the HTTBR */
         load_paddr r0, boot_pgtable
@@ -605,12 +614,7 @@ remove_identity_mapping:
         strd  r2, r3, [r0, r1]
 
 identity_mapping_removed:
-        /* See asm/arm32/flushtlb.h for the explanation of the sequence. */
-        dsb   nshst
-        mcr   CP32(r0, TLBIALLH)
-        dsb   nsh
-        isb
-
+        flush_xen_tlb_local r0
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
 
-- 
2.38.1




* [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (7 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 14:58   ` Luca Fancellu
  2023-01-16  8:14   ` Michal Orzel
  2023-01-13 10:11 ` [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping Julien Grall
                   ` (5 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

At the moment, bootloaders can load Xen anywhere in memory except the
region 2MB - 4MB. While I am not aware of any issue, we have no way
to tell the bootloader to avoid that region.

In addition to that, in the future, Xen may grow beyond 2MB if we
enable features like UBSAN or GCOV. To avoid widening the restriction
on the load address, it would be better to get rid of it.

When the identity mapping clashes with Xen's runtime mapping, we need
an extra indirection to be able to replace the identity mapping with
the runtime mapping.

Reserve a new memory region that will be used to temporarily map Xen.
For convenience, the new area re-uses the same first slot as the
domheap, which is used for per-CPU temporary mappings after a CPU has
booted.

Furthermore, directly map boot_second (which covers Xen and more)
to the temporary area. This avoids allocating an extra page-table
for the second level and will be helpful for follow-up patches (we
will want to use the fixmap whilst in the temporary mapping).
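
To make the temporary-area arithmetic concrete (assuming the arm32
LPAE values XEN_PT_LEVEL_SHIFT(1) == 30, i.e. a first-level slot
covers 1GB, and DOMHEAP_VIRT_START == GB(2)):

    TEMPORARY_AREA_FIRST_SLOT = first_table_offset(GB(2)) = 2

    TEMPORARY_XEN_VIRT_START
        = (XEN_VIRT_START & ~XEN_PT_LEVEL_MASK(1))
          | (TEMPORARY_AREA_FIRST_SLOT << 30)
        = (0x00200000 & 0x3FFFFFFF) | 0x80000000
        = 0x80200000

i.e. Xen is temporarily aliased at 2GB + 2MB, in the first slot of
the (unused at this point) domheap region.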

Lastly, some parts of the code now need to know whether the temporary
mapping was created, so reserve r12 to store this information.

Signed-off-by: Julien Grall <jgrall@amazon.com>
----
    Changes in v4:
        - Remove spurious newline

    Changes in v3:
        - Remove the ASSERT() in init_domheap_mappings() because it was
          bogus (secondary CPU root tables are initialized to the CPU0
          root table so the entry will be valid). Also, it is not
          related to this patch as the CPU0 root table are rebuilt
          during boot. The ASSERT() will be re-introduced later.

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S         | 139 ++++++++++++++++++++++++++----
 xen/arch/arm/include/asm/config.h |  14 +++
 xen/arch/arm/mm.c                 |  14 +++
 3 files changed, 152 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 67b910808b74..3800efb44169 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -35,6 +35,9 @@
 #define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
 #define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
 
+/* Offset between the early boot xen mapping and the runtime xen mapping */
+#define XEN_TEMPORARY_OFFSET      (TEMPORARY_XEN_VIRT_START - XEN_VIRT_START)
+
 #if defined(CONFIG_EARLY_PRINTK) && defined(CONFIG_EARLY_PRINTK_INC)
 #include CONFIG_EARLY_PRINTK_INC
 #endif
@@ -94,7 +97,7 @@
  *   r9  - paddr(start)
  *   r10 - phys offset
  *   r11 - UART address
- *   r12 -
+ *   r12 - Temporary mapping created
  *   r13 - SP
  *   r14 - LR
  *   r15 - PC
@@ -445,6 +448,9 @@ ENDPROC(cpu_init)
  *   r9 : paddr(start)
  *   r10: phys offset
  *
+ * Output:
+ *   r12: Was a temporary mapping created?
+ *
  * Clobbers r0 - r4, r6
  *
  * Register usage within this function:
@@ -484,7 +490,11 @@ create_page_tables:
         /*
          * Setup the 1:1 mapping so we can turn the MMU on. Note that
          * only the first page of Xen will be part of the 1:1 mapping.
+         *
+         * In all the cases, we will link boot_third_id. So create the
+         * mapping in advance.
          */
+        create_mapping_entry boot_third_id, r9, r9
 
         /*
          * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
@@ -501,8 +511,7 @@ create_page_tables:
         /*
          * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
          * 1:1 mapping will use its own set of page-tables from the
-         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
-         * it.
+         * third level.
          */
         get_table_slot r1, r9, 2     /* r1 := second slot */
         cmp   r1, #XEN_SECOND_SLOT
@@ -513,13 +522,33 @@ create_page_tables:
 link_from_second_id:
         create_table_entry boot_second_id, boot_third_id, r9, 2
 link_from_third_id:
-        create_mapping_entry boot_third_id, r9, r9
+        /* Good news, we are not clashing with Xen virtual mapping */
+        mov   r12, #0                /* r12 := temporary mapping not created */
         mov   pc, lr
 
 virtphys_clash:
-        /* Identity map clashes with boot_third, which we cannot handle yet */
-        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
-        b     fail
+        /*
+         * The identity map clashes with boot_third. Link boot_first_id and
+         * map Xen to a temporary mapping. See switch_to_runtime_mapping
+         * for more details.
+         */
+        PRINT("- Virt and Phys addresses clash  -\r\n")
+        PRINT("- Create temporary mapping -\r\n")
+
+        /*
+         * This will override the link to boot_second in XEN_FIRST_SLOT.
+         * The page-tables are not live yet. So no need to use
+         * break-before-make.
+         */
+        create_table_entry boot_pgtable, boot_second_id, r9, 1
+        create_table_entry boot_second_id, boot_third_id, r9, 2
+
+        /* Map boot_second (cover Xen mappings) to the temporary 1st slot */
+        mov_w r0, TEMPORARY_XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
+
+        mov   r12, #1                /* r12 := temporary mapping created */
+        mov   pc, lr
 ENDPROC(create_page_tables)
 
 /*
@@ -528,9 +557,10 @@ ENDPROC(create_page_tables)
  *
  * Inputs:
  *   r9 : paddr(start)
+ *  r12 : Was the temporary mapping created?
  *   lr : Virtual address to return to
  *
- * Clobbers r0 - r3
+ * Clobbers r0 - r5
  */
 enable_mmu:
         PRINT("- Turning on paging -\r\n")
@@ -558,21 +588,79 @@ enable_mmu:
          * The MMU is turned on and we are in the 1:1 mapping. Switch
          * to the runtime mapping.
          */
-        mov_w r0, 1f
-        mov   pc, r0
+        mov   r5, lr                /* Save LR before overwriting it */
+        mov_w lr, 1f                /* Virtual address in the runtime mapping */
+        b     switch_to_runtime_mapping
 1:
+        mov   lr, r5                /* Restore LR */
         /*
-         * The 1:1 map may clash with other parts of the Xen virtual memory
-         * layout. As it is not used anymore, remove it completely to
-         * avoid having to worry about replacing existing mapping
-         * afterwards.
+         * At this point, either the 1:1 map or the temporary mapping
+         * will be present. The former may clash with other parts of the
+         * Xen virtual memory layout. As both of them are not used
+         * anymore, remove them completely to avoid having to worry
+         * about replacing existing mapping afterwards.
          *
          * On return this will jump to the virtual address requested by
          * the caller.
          */
-        b     remove_identity_mapping
+        teq   r12, #0
+        beq   remove_identity_mapping
+        b     remove_temporary_mapping
 ENDPROC(enable_mmu)
 
+/*
+ * Switch to the runtime mapping. The logic depends on whether the
+ * runtime virtual region is clashing with the physical address
+ *
+ *  - If it is not clashing, we can directly jump to the address in
+ *    the runtime mapping.
+ *  - If it is clashing, create_page_tables() would have mapped Xen to
+ *    a temporary virtual address. We need to switch to the temporary
+ *    mapping so we can remove the identity mapping and map Xen at the
+ *    correct position.
+ *
+ * Inputs
+ *    r9: paddr(start)
+ *   r12: Was a temporary mapping created?
+ *    lr: Address in the runtime mapping to jump to
+ *
+ * Clobbers r0 - r4
+ */
+switch_to_runtime_mapping:
+        /*
+         * Jump to the runtime mapping if the virt and phys are not
+         * clashing
+         */
+        teq   r12, #0
+        beq   ready_to_switch
+
+        /* We are still in the 1:1 mapping. Jump to the temporary Virtual address. */
+        mov_w r0, 1f
+        add   r0, r0, #XEN_TEMPORARY_OFFSET /* r0 := address in temporary mapping */
+        mov   pc, r0
+
+1:
+        /* Remove boot_second_id */
+        mov   r2, #0
+        mov   r3, #0
+        adr_l r0, boot_pgtable
+        get_table_slot r1, r9, 1            /* r1 := first slot */
+        lsl   r1, r1, #3                    /* r1 := first slot offset */
+        strd  r2, r3, [r0, r1]
+
+        flush_xen_tlb_local r0
+
+        /* Map boot_second into boot_pgtable */
+        mov_w r0, XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
+
+        /* Ensure any page table updates are visible before continuing */
+        dsb   nsh
+
+ready_to_switch:
+        mov   pc, lr
+ENDPROC(switch_to_runtime_mapping)
+
 /*
  * Remove the 1:1 map from the page-tables. It is not easy to keep track
  * where the 1:1 map was mapped, so we will look for the top-level entry
@@ -618,6 +706,27 @@ identity_mapping_removed:
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
 
+/*
+ * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
+ *
+ * Clobbers r0 - r1
+ */
+remove_temporary_mapping:
+        /* r2:r3 := invalid page-table entry */
+        mov   r2, #0
+        mov   r3, #0
+
+        adr_l r0, boot_pgtable
+        mov_w r1, TEMPORARY_XEN_VIRT_START
+        get_table_slot r1, r1, 1     /* r1 := first slot */
+        lsl   r1, r1, #3             /* r1 := first slot offset */
+        strd  r2, r3, [r0, r1]
+
+        flush_xen_tlb_local r0
+
+        mov  pc, lr
+ENDPROC(remove_temporary_mapping)
+
 /*
  * Map the UART in the fixmap (when earlyprintk is used) and hook the
  * fixmap table in the page tables.
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 87851e677701..6c1b762e976d 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -148,6 +148,20 @@
 /* Number of domheap pagetable pages required at the second level (2MB mappings) */
 #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
 
+/*
+ * The temporary area is overlapping with the domheap area. This may
+ * be used to create an alias of the first slot containing Xen mappings
+ * when turning on/off the MMU.
+ */
+#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
+
+/* Calculate the address in the temporary area */
+#define TEMPORARY_AREA_ADDR(addr)                           \
+     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
+      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
+
+#define TEMPORARY_XEN_VIRT_START    TEMPORARY_AREA_ADDR(XEN_VIRT_START)
+
 #else /* ARM_64 */
 
 #define SLOT0_ENTRY_BITS  39
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992dd1..9ebc2d0f609e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -167,6 +167,9 @@ static void __init __maybe_unused build_assertions(void)
 #define CHECK_SAME_SLOT(level, virt1, virt2) \
     BUILD_BUG_ON(level##_table_offset(virt1) != level##_table_offset(virt2))
 
+#define CHECK_DIFFERENT_SLOT(level, virt1, virt2) \
+    BUILD_BUG_ON(level##_table_offset(virt1) == level##_table_offset(virt2))
+
 #ifdef CONFIG_ARM_64
     CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0));
     CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START);
@@ -174,7 +177,18 @@ static void __init __maybe_unused build_assertions(void)
     CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0));
     CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START);
 
+    /*
+     * For arm32, the temporary mapping will re-use the domheap
+     * first slot and the second slots will match.
+     */
+#ifdef CONFIG_ARM_32
+    CHECK_SAME_SLOT(first, TEMPORARY_XEN_VIRT_START, DOMHEAP_VIRT_START);
+    CHECK_DIFFERENT_SLOT(first, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START);
+    CHECK_SAME_SLOT(second, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START);
+#endif
+
 #undef CHECK_SAME_SLOT
+#undef CHECK_DIFFERENT_SLOT
 }
 
 static lpae_t *xen_map_table(mfn_t mfn)
-- 
2.38.1




* [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (8 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 15:37   ` Luca Fancellu
  2023-01-16  8:20   ` Michal Orzel
  2023-01-13 10:11 ` [PATCH v4 11/14] xen/arm64: Rework the memory layout Julien Grall
                   ` (4 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

At the moment, the temporary mapping is only used when the virtual
runtime region of Xen is clashing with the physical region.

In follow-up patches, we will rework how secondary CPU bring-up works
and it will be convenient to use the fixmap area for accessing
the root page-table (it is per-cpu).

Rework the code to use the temporary mapping whenever the Xen physical
address does not overlap with the temporary area.

This also has the advantage of simplifying the logic to identity-map
Xen.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

Even though this patch rewrites part of the previous patch, I decided
to keep them separate to help the review.

The "folow-up patches" are still in draft at the moment. I still haven't
find a way to split them nicely and not require too much more work
in the coloring side.

I have provided some medium-term goal in the cover letter.

    Changes in v3:
        - Resolve conflicts after switching from "ldr rX, <label>" to
          "mov_w rX, <label>" in a previous patch

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 82 +++++++--------------------------------
 1 file changed, 15 insertions(+), 67 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 3800efb44169..ce858e9fc4da 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -459,7 +459,6 @@ ENDPROC(cpu_init)
 create_page_tables:
         /* Prepare the page-tables for mapping Xen */
         mov_w r0, XEN_VIRT_START
-        create_table_entry boot_pgtable, boot_second, r0, 1
         create_table_entry boot_second, boot_third, r0, 2
 
         /* Setup boot_third: */
@@ -479,67 +478,37 @@ create_page_tables:
         cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
         blo   1b
 
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   r9, #XEN_VIRT_START
-        moveq pc, lr
-
         /*
          * Setup the 1:1 mapping so we can turn the MMU on. Note that
          * only the first page of Xen will be part of the 1:1 mapping.
-         *
-         * In all the cases, we will link boot_third_id. So create the
-         * mapping in advance.
          */
+        create_table_entry boot_pgtable, boot_second_id, r9, 1
+        create_table_entry boot_second_id, boot_third_id, r9, 2
         create_mapping_entry boot_third_id, r9, r9
 
         /*
-         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
-         * then the 1:1 mapping will use its own set of page-tables from
-         * the second level.
+         * Find the first slot used. If the slot is not the same
+         * as TEMPORARY_AREA_FIRST_SLOT, then we will want to switch
+         * to the temporary mapping before jumping to the runtime
+         * virtual mapping.
          */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        create_table_entry boot_pgtable, boot_second_id, r9, 1
-        b     link_from_second_id
-
-1:
-        /*
-         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
-         * 1:1 mapping will use its own set of page-tables from the
-         * third level.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   virtphys_clash
-        create_table_entry boot_second, boot_third_id, r9, 2
-        b     link_from_third_id
+        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
+        bne   use_temporary_mapping
 
-link_from_second_id:
-        create_table_entry boot_second_id, boot_third_id, r9, 2
-link_from_third_id:
-        /* Good news, we are not clashing with Xen virtual mapping */
+        mov_w r0, XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
         mov   r12, #0                /* r12 := temporary mapping not created */
         mov   pc, lr
 
-virtphys_clash:
+use_temporary_mapping:
         /*
-         * The identity map clashes with boot_third. Link boot_first_id and
-         * map Xen to a temporary mapping. See switch_to_runtime_mapping
-         * for more details.
+         * The identity mapping is not using the slot reserved for
+         * the temporary area (TEMPORARY_AREA_FIRST_SLOT). Create a
+         * temporary mapping. See switch_to_runtime_mapping for more
+         * details.
          */
-        PRINT("- Virt and Phys addresses clash  -\r\n")
         PRINT("- Create temporary mapping -\r\n")
 
-        /*
-         * This will override the link to boot_second in XEN_FIRST_SLOT.
-         * The page-tables are not live yet. So no need to use
-         * break-before-make.
-         */
         create_table_entry boot_pgtable, boot_second_id, r9, 1
         create_table_entry boot_second_id, boot_third_id, r9, 2
 
@@ -675,33 +644,12 @@ remove_identity_mapping:
         /* r2:r3 := invalid page-table entry */
         mov   r2, #0x0
         mov   r3, #0x0
-        /*
-         * Find the first slot used. Remove the entry for the first
-         * table if the slot is not XEN_FIRST_SLOT.
-         */
+        /* Find the first slot used and remove its entry */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        /* It is not in slot 0, remove the entry */
         mov_w r0, boot_pgtable       /* r0 := root table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
-        b     identity_mapping_removed
-
-1:
-        /*
-         * Find the second slot used. Remove the entry for the first
-         * table if the slot is not XEN_SECOND_SLOT.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   identity_mapping_removed
-        /* It is not in slot 1, remove the entry */
-        mov_w r0, boot_second        /* r0 := second table */
-        lsl   r1, r1, #3             /* r1 := Slot offset */
-        strd  r2, r3, [r0, r1]
 
-identity_mapping_removed:
         flush_xen_tlb_local r0
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
-- 
2.38.1



^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 11/14] xen/arm64: Rework the memory layout
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (9 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 15:58   ` Luca Fancellu
  2023-01-16  8:46   ` Michal Orzel
  2023-01-13 10:11 ` [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping Julien Grall
                   ` (3 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

Xen is currently not fully compliant with the Arm Arm because it will
switch the TTBR with the MMU on.

In order to be compliant, we need to disable the MMU before
switching the TTBR. The implication is the page-tables should
contain an identity mapping of the code switching the TTBR.

In most cases we expect Xen to be loaded in low memory. I am aware
of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
To give us some slack, consider that Xen may be loaded in the first 2TB
of the physical address space.

The memory layout is reshuffled to keep the first four slots of the zeroeth
level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
tweak of the boot code because XEN_VIRT_START cannot be used as an
immediate.

This reshuffle will make it trivial to create a 1:1 mapping when Xen is
loaded below 2TB.
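
As an illustration of the arithmetic (a standalone sketch, not part of
the patch, re-using the SLOT0 definitions added below):

    #include <inttypes.h>
    #include <stdio.h>

    #define SLOT0_ENTRY_BITS 39
    #define SLOT0(slot)      ((uint64_t)(slot) << SLOT0_ENTRY_BITS)
    #define MB(x)            ((uint64_t)(x) << 20)

    int main(void)
    {
        /* Slot 4 of the zeroeth level starts at 4 * 512GB = 2TB */
        uint64_t xen_virt_start = SLOT0(4) + MB(2);

        printf("SLOT0(4)       = 0x%016" PRIx64 "\n", SLOT0(4));
        /*
         * 0x0000020000200000 sets bits in two separate 16-bit fields
         * (bit 41 and bit 21), so the value cannot be encoded in a
         * single AArch64 MOV immediate - hence the switch to
         * "ldr x0, =XEN_VIRT_START" in head.S.
         */
        printf("XEN_VIRT_START = 0x%016" PRIx64 "\n", xen_virt_start);

        return 0;
    }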

Signed-off-by: Julien Grall <jgrall@amazon.com>
----
    Changes in v4:
        - Correct the documentation
        - The start address is 2TB, so slot0 is 4 not 2.

    Changes in v2:
        - Reword the commit message
        - Load Xen at 2TB + 2MB
        - Update the documentation to reflect the new layout
---
 xen/arch/arm/arm64/head.S         |  3 ++-
 xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
 xen/arch/arm/mm.c                 | 11 +++++-----
 3 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 4a3f87117c83..663f5813b12e 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -607,7 +607,8 @@ create_page_tables:
          * need an additional 1:1 mapping, the virtual mapping will
          * suffice.
          */
-        cmp   x19, #XEN_VIRT_START
+        ldr   x0, =XEN_VIRT_START
+        cmp   x19, x0
         bne   1f
         ret
 1:
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 6c1b762e976d..c5d407a7495f 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -72,15 +72,12 @@
 #include <xen/page-size.h>
 
 /*
- * Common ARM32 and ARM64 layout:
+ * ARM32 layout:
  *   0  -   2M   Unmapped
  *   2M -   4M   Xen text, data, bss
  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
  *   6M -  10M   Early boot mapping of FDT
- *   10M - 12M   Livepatch vmap (if compiled in)
- *
- * ARM32 layout:
- *   0  -  12M   <COMMON>
+ *  10M -  12M   Livepatch vmap (if compiled in)
  *
  *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
@@ -90,14 +87,22 @@
  *   2G -   4G   Domheap: on-demand-mapped
  *
  * ARM64 layout:
- * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
- *   0  -  12M   <COMMON>
+ * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
+ *  Reserved to identity map Xen
+ *
+ * 0x0000020000000000 - 0x000002ffffffffff (512GB, L0 slot [4])
+ *  (Relative offsets)
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *  10M -  12M   Livepatch vmap (if compiled in)
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
  *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
  *
- * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
+ * 0x0000028000000000 - 0x00007fffffffffff (125.5TB, L0 slots [5..255])
  *  Unused
  *
  * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
@@ -107,7 +112,17 @@
  *  Unused
  */
 
+#ifdef CONFIG_ARM_32
 #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
+#else
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+
+#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
+#endif
+
 #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
 
 #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
@@ -164,10 +179,6 @@
 
 #else /* ARM_64 */
 
-#define SLOT0_ENTRY_BITS  39
-#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
-#define SLOT0_ENTRY_SIZE  SLOT0(1)
-
 #define VMAP_VIRT_START  GB(1)
 #define VMAP_VIRT_SIZE   GB(1)
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9ebc2d0f609e..0cf7ad4f0e8c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -153,7 +153,7 @@ static void __init __maybe_unused build_assertions(void)
 #endif
     /* Page table structure constraints */
 #ifdef CONFIG_ARM_64
-    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
+    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START) < 4);
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
 #ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
@@ -496,10 +496,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     phys_offset = boot_phys_offset;
 
 #ifdef CONFIG_ARM_64
-    p = (void *) xen_pgtable;
-    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
-    p[0].pt.table = 1;
-    p[0].pt.xn = 0;
+    pte = pte_of_xenaddr((uintptr_t)xen_first);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
+
     p = (void *) xen_first;
 #else
     p = (void *) cpu0_pgtable;
-- 
2.38.1



^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (10 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 11/14] xen/arm64: Rework the memory layout Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 16:26   ` Luca Fancellu
  2023-01-16  8:53   ` Michal Orzel
  2023-01-13 10:11 ` [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr() Julien Grall
                   ` (2 subsequent siblings)
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

In follow-up patches we will need to have part of Xen identity mapped in
order to safely switch the TTBR.

On some platforms, the identity mapping may have to start at 0. If we always
keep the identity region mapped, a NULL pointer dereference would lead to
an access to a valid mapping.

It would be possible to relocate Xen to avoid clashing with address 0.
However, the identity mapping is only meant to be used in very limited
places. Therefore it would be better to keep the identity region invalid
for most of the time.

Two new external helpers are introduced:
    - arch_setup_page_tables() will set up the page-tables so it is
      easy to create the mapping afterwards.
    - update_identity_mapping() will create/remove the identity mapping
      (a usage sketch follows below).
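
As a usage illustration, here is a standalone model (a stub, not the
Xen implementation) of the contract update_identity_mapping() is
expected to honour - calls must strictly alternate, as nesting is not
supported:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool id_mapped;

    /* Stub modelling the enable/disable contract of the real helper */
    static void update_identity_mapping(bool enable)
    {
        /* No nested enable and no double disable */
        assert(id_mapped != enable);
        id_mapped = enable;
        printf("identity mapping %s\n", enable ? "created" : "removed");
    }

    int main(void)
    {
        update_identity_mapping(true);  /* before running from the 1:1 map */
        /* ... code executed via the identity address would run here ... */
        update_identity_mapping(false); /* keep address 0 invalid again */

        return 0;
    }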

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v4:
        - Fix typo in a comment
        - Clarify which page-tables are updated

    Changes in v2:
        - Remove the arm32 part
        - Use a different logic for the boot page tables and the runtime
          ones because Xen may be running in a different place.
---
 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   4 +
 xen/arch/arm/include/asm/arm64/mm.h |  13 +++
 xen/arch/arm/include/asm/setup.h    |  11 +++
 xen/arch/arm/mm.c                   |   6 +-
 6 files changed, 163 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..28481393e98f 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -10,6 +10,7 @@ obj-y += entry.o
 obj-y += head.o
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
 obj-y += traps.o
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
new file mode 100644
index 000000000000..798ae93ad73c
--- /dev/null
+++ b/xen/arch/arm/arm64/mm.c
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <xen/init.h>
+#include <xen/mm.h>
+
+#include <asm/setup.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+static DEFINE_PAGE_TABLE(xen_first_id);
+static DEFINE_PAGE_TABLE(xen_second_id);
+static DEFINE_PAGE_TABLE(xen_third_id);
+
+/*
+ * The identity mapping may start at physical address 0. So we don't want
+ * to keep it mapped longer than necessary.
+ *
+ * When this is called, we are still using the boot_pgtable.
+ *
+ * We need to prepare the identity mapping for both the boot page tables
+ * and runtime page tables.
+ *
+ * The logic to create the entry is slightly different because Xen may
+ * be running at a different location at runtime.
+ */
+static void __init prepare_boot_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    /*
+     * We will be re-using the boot ID tables. They may not have been
+     * zeroed but they should be unlinked. So it is fine to use
+     * clear_page().
+     */
+    clear_page(boot_first_id);
+    clear_page(boot_second_id);
+    clear_page(boot_third_id);
+
+    if ( id_offsets[0] != 0 )
+        panic("Cannot handled ID mapping above 512GB\n");
+
+    /* Link first ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_first_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_second_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_third_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+static void __init prepare_runtime_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    if ( id_offsets[0] != 0 )
+        panic("Cannot handled ID mapping above 512GB\n");
+
+    /* Link first ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_first_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_second_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_third_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+void __init arch_setup_page_tables(void)
+{
+    prepare_boot_identity_mapping();
+    prepare_runtime_identity_mapping();
+}
+
+void update_identity_mapping(bool enable)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    int rc;
+
+    if ( enable )
+        rc = map_pages_to_xen(id_addr, maddr_to_mfn(id_addr), 1,
+                              PAGE_HYPERVISOR_RX);
+    else
+        rc = destroy_xen_mappings(id_addr, id_addr + PAGE_SIZE);
+
+    BUG_ON(rc);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 8bfc906e7178..856f2dbec4ad 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -18,6 +18,10 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 
 bool init_domheap_mappings(unsigned int cpu);
 
+static inline void arch_setup_page_tables(void)
+{
+}
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
index aa2adac63189..e7059a36bf17 100644
--- a/xen/arch/arm/include/asm/arm64/mm.h
+++ b/xen/arch/arm/include/asm/arm64/mm.h
@@ -1,6 +1,8 @@
 #ifndef __ARM_ARM64_MM_H__
 #define __ARM_ARM64_MM_H__
 
+extern DEFINE_PAGE_TABLE(xen_pgtable);
+
 /*
  * On ARM64, all the RAM is currently direct mapped in Xen.
  * Hence return always true.
@@ -10,6 +12,17 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return true;
 }
 
+void arch_setup_page_tables(void);
+
+/*
+ * Enable/disable the identity mapping in the live page-tables (i.e.
+ * the one pointed to by TTBR_EL2).
+ *
+ * Note that nested calls (e.g. enable=true, enable=true) are not
+ * supported.
+ */
+void update_identity_mapping(bool enable);
+
 #endif /* __ARM_ARM64_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2be4..66b27f2b57c1 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -166,6 +166,17 @@ u32 device_tree_get_u32(const void *fdt, int node,
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+
+#ifdef CONFIG_ARM_64
+extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
+extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+
+/* Find where Xen will be residing at runtime and return a PT entry */
+lpae_t pte_of_xenaddr(vaddr_t);
+
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
 struct init_info
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0cf7ad4f0e8c..39e0d9e03c9c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -93,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 
 #ifdef CONFIG_ARM_64
 #define HYP_PT_ROOT_LEVEL 0
-static DEFINE_PAGE_TABLE(xen_pgtable);
+DEFINE_PAGE_TABLE(xen_pgtable);
 static DEFINE_PAGE_TABLE(xen_first);
 #define THIS_CPU_PGTABLE xen_pgtable
 #else
@@ -388,7 +388,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
         invalidate_icache();
 }
 
-static inline lpae_t pte_of_xenaddr(vaddr_t va)
+lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
 
@@ -495,6 +495,8 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
     phys_offset = boot_phys_offset;
 
+    arch_setup_page_tables();
+
 #ifdef CONFIG_ARM_64
     pte = pte_of_xenaddr((uintptr_t)xen_first);
     pte.pt.table = 1;
-- 
2.38.1



^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (11 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 16:50   ` Luca Fancellu
  2023-01-16  9:23   ` Michal Orzel
  2023-01-13 10:11 ` [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables Julien Grall
  2023-01-24 19:35 ` [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
  14 siblings, 2 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
still on.

Switching TTBR is like replacing existing mappings with new ones. So
we need to follow the break-before-make sequence.

In this case, it means the MMU needs to be switched off while the
TTBR is updated. In order to disable the MMU, we need to first
jump to an identity mapping.

Rename switch_ttbr() to switch_ttbr_id() and create a helper on
top to temporarily map the identity mapping and call switch_ttbr_id()
via the identity address.

switch_ttbr_id() is now reworked to temporarily turn off the MMU
before updating the TTBR.

We also need to make sure the helper switch_ttbr_id() is part of the
identity mapping. So move _end_boot past it.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.
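
The C-level trick worth calling out is calling switch_ttbr_id() through
its identity address. Below is a standalone model of it (hypothetical
names; in Xen the alias is computed with virt_to_maddr(), while here the
function is aliased to itself so the sketch stays runnable, assuming a
platform where function pointers round-trip through uintptr_t):

    #include <inttypes.h>
    #include <stdio.h>

    typedef void (demo_fn)(uint64_t);

    static void demo_switch(uint64_t ttbr)
    {
        printf("switching to TTBR 0x%" PRIx64 "\n", ttbr);
    }

    int main(void)
    {
        /* In Xen: vaddr_t id_addr = virt_to_maddr(switch_ttbr_id); */
        uintptr_t id_addr = (uintptr_t)demo_switch;
        demo_fn *fn = (demo_fn *)id_addr;

        /* The call goes through the alias, not the original symbol */
        fn(0x40000000ULL);

        return 0;
    }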

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v4:
        - Don't modify setup_pagetables() as we don't handle arm32.
        - Move the clearing of the boot page tables in an earlier patch
        - Fix the numbering

    Changes in v2:
        - Remove the arm32 changes. This will be addressed differently
        - Re-instate the instruction cache flush. This is not strictly
          necessary but it is kept for safety.
        - Use "dsb ish" rather than "dsb sy".


    TODO:
        * Handle the case where the runtime Xen is loaded at a different
          position for cache coloring. This will be dealt separately.
---
 xen/arch/arm/arm64/head.S     | 50 +++++++++++++++++++++++------------
 xen/arch/arm/arm64/mm.c       | 30 +++++++++++++++++++++
 xen/arch/arm/include/asm/mm.h |  2 ++
 xen/arch/arm/mm.c             |  2 --
 4 files changed, 65 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 663f5813b12e..5efd442b24af 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -816,30 +816,46 @@ ENDPROC(fail)
  * Switch TTBR
  *
  * x0    ttbr
- *
- * TODO: This code does not comply with break-before-make.
  */
-ENTRY(switch_ttbr)
-        dsb   sy                     /* Ensure the flushes happen before
-                                      * continuing */
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+ENTRY(switch_ttbr_id)
+        /* 1) Ensure any previous reads/writes have completed */
+        dsb    ish
+        isb
+
+        /* 2) Turn off MMU */
+        mrs    x1, SCTLR_EL2
+        bic    x1, x1, #SCTLR_Axx_ELx_M
+        msr    SCTLR_EL2, x1
+        isb
+
+        /*
+         * 3) Flush the TLBs.
+         * See asm/arm64/flushtlb.h for the explanation of the sequence.
+         */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        /* 4) Update the TTBR */
+        msr   TTBR0_EL2, x0
         isb
 
-        msr    TTBR0_EL2, x0
+        /*
+         * 5) Flush I-cache
+         * This should not be necessary but it is kept for safety.
+         */
+        ic     iallu
+        isb
 
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+        /* 6) Turn on the MMU */
+        mrs   x1, SCTLR_EL2
+        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        msr   SCTLR_EL2, x1
         isb
 
         ret
-ENDPROC(switch_ttbr)
+ENDPROC(switch_ttbr_id)
 
 #ifdef CONFIG_EARLY_PRINTK
 /*
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
index 798ae93ad73c..2ede4e75ae33 100644
--- a/xen/arch/arm/arm64/mm.c
+++ b/xen/arch/arm/arm64/mm.c
@@ -120,6 +120,36 @@ void update_identity_mapping(bool enable)
     BUG_ON(rc);
 }
 
+extern void switch_ttbr_id(uint64_t ttbr);
+
+typedef void (switch_ttbr_fn)(uint64_t ttbr);
+
+void __init switch_ttbr(uint64_t ttbr)
+{
+    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
+    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
+    lpae_t pte;
+
+    /* Enable the identity mapping in the boot page tables */
+    update_identity_mapping(true);
+    /* Enable the identity mapping in the runtime page tables */
+    pte = pte_of_xenaddr((vaddr_t)switch_ttbr_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    pte.pt.ro = 1;
+    write_pte(&xen_third_id[third_table_offset(id_addr)], pte);
+
+    /* Switch TTBR */
+    fn(ttbr);
+
+    /*
+     * Disable the identity mapping in the runtime page tables.
+     * Note it is not necessary to disable it in the boot page tables
+     * because they are not going to be used by this CPU anymore.
+     */
+    update_identity_mapping(false);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 68adcac9fa8d..bff6923f3ea9 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -196,6 +196,8 @@ extern unsigned long total_pages;
 extern void setup_pagetables(unsigned long boot_phys_offset);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
+/* Switch to a new set of root page-tables */
+extern void switch_ttbr(uint64_t ttbr);
 /* Remove early mappings */
 extern void remove_early_mappings(void);
 /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr to the
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 39e0d9e03c9c..b9c698088b25 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -476,8 +476,6 @@ static void xen_pt_enforce_wnx(void)
     flush_xen_tlb_local();
 }
 
-extern void switch_ttbr(uint64_t ttbr);
-
 /* Clear a translation table and clean & invalidate the cache */
 static void clear_table(void *table)
 {
-- 
2.38.1



^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (12 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr() Julien Grall
@ 2023-01-13 10:11 ` Julien Grall
  2023-01-13 17:42   ` Luca Fancellu
  2023-01-24 19:35 ` [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
  14 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Volodymyr Babchuk

From: Julien Grall <jgrall@amazon.com>

Switching TTBR while the MMU is on is not safe. Now that the identity
mapping will not clash with the rest of the memory layout, we can avoid
creating temporary page-tables every time a CPU is brought up.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.
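
For reference, here is a standalone model (stubbed, with a hypothetical
prepare_cpu() standing in for smp_enable_ops) of the window during
which the identity mapping is now live on the boot CPU:

    #include <stdbool.h>
    #include <stdio.h>

    static void update_identity_mapping(bool enable) /* stub */
    {
        printf("identity mapping: %s\n", enable ? "on" : "off");
    }

    static int prepare_cpu(int cpu) /* stands in for smp_enable_ops */
    {
        (void)cpu;
        return 0; /* pretend bring-up always succeeds */
    }

    static int arch_cpu_up(int cpu)
    {
        int rc;

        update_identity_mapping(true);

        rc = prepare_cpu(cpu);
        if ( rc )
            update_identity_mapping(false); /* failure: close the window */

        return rc;
    }

    static void arch_cpu_up_finish(void)
    {
        /* The secondary CPU has switched to init_ttbr by now */
        update_identity_mapping(false);
    }

    int main(void)
    {
        if ( arch_cpu_up(1) == 0 )
            arch_cpu_up_finish();

        return 0;
    }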

Signed-off-by: Julien Grall <jgrall@amazon.com>
----
    Changes in v4:
        - Somehow I forgot to send it in v3. So re-include it.

    Changes in v2:
        - Remove arm32 code
---
 xen/arch/arm/arm32/smpboot.c   |  4 ++++
 xen/arch/arm/arm64/head.S      | 29 +++++++++--------------------
 xen/arch/arm/arm64/smpboot.c   | 15 ++++++++++++++-
 xen/arch/arm/include/asm/smp.h |  1 +
 xen/arch/arm/smpboot.c         |  1 +
 5 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
index e7368665d50d..518e9f9c7e70 100644
--- a/xen/arch/arm/arm32/smpboot.c
+++ b/xen/arch/arm/arm32/smpboot.c
@@ -21,6 +21,10 @@ int arch_cpu_up(int cpu)
     return platform_cpu_up(cpu);
 }
 
+void arch_cpu_up_finish(void)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 5efd442b24af..a61b4d3c2738 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -308,6 +308,7 @@ real_start_efi:
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
+        load_paddr x0, boot_pgtable
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
@@ -365,29 +366,14 @@ GLOBAL(init_secondary)
 #endif
         bl    check_cpu_mode
         bl    cpu_init
-        bl    create_page_tables
+        load_paddr x0, init_ttbr
+        ldr   x0, [x0]
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   x0, =secondary_switched
         br    x0
 secondary_switched:
-        /*
-         * Non-boot CPUs need to move on to the proper pagetables, which were
-         * setup in init_secondary_pagetables.
-         *
-         * XXX: This is not compliant with the Arm Arm.
-         */
-        ldr   x4, =init_ttbr         /* VA of TTBR0_EL2 stashed by CPU 0 */
-        ldr   x4, [x4]               /* Actual value */
-        dsb   sy
-        msr   TTBR0_EL2, x4
-        dsb   sy
-        isb
-        tlbi  alle2
-        dsb   sy                     /* Ensure completion of TLB flush */
-        isb
-
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
         ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
@@ -672,9 +658,13 @@ ENDPROC(create_page_tables)
  * mapping. In other word, the caller is responsible to switch to the runtime
  * mapping.
  *
- * Clobbers x0 - x3
+ * Inputs:
+ *   x0 : Physical address of the page tables.
+ *
+ * Clobbers x0 - x4
  */
 enable_mmu:
+        mov   x4, x0
         PRINT("- Turning on paging -\r\n")
 
         /*
@@ -685,8 +675,7 @@ enable_mmu:
         dsb   nsh
 
         /* Write Xen's PT's paddr into TTBR0_EL2 */
-        load_paddr x0, boot_pgtable
-        msr   TTBR0_EL2, x0
+        msr   TTBR0_EL2, x4
         isb
 
         mrs   x0, SCTLR_EL2
diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 694fbf67e62a..9637f424699e 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -106,10 +106,23 @@ int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
 
 int arch_cpu_up(int cpu)
 {
+    int rc;
+
     if ( !smp_enable_ops[cpu].prepare_cpu )
         return -ENODEV;
 
-    return smp_enable_ops[cpu].prepare_cpu(cpu);
+    update_identity_mapping(true);
+
+    rc = smp_enable_ops[cpu].prepare_cpu(cpu);
+    if ( rc )
+        update_identity_mapping(false);
+
+    return rc;
+}
+
+void arch_cpu_up_finish(void)
+{
+    update_identity_mapping(false);
 }
 
 /*
diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index 8133d5c29572..a37ca55bff2c 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -25,6 +25,7 @@ extern void noreturn stop_cpu(void);
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
 extern int arch_cpu_up(int cpu);
+extern void arch_cpu_up_finish(void);
 
 int cpu_up_send_sgi(int cpu);
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae2286906..4a89b3a8345b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -500,6 +500,7 @@ int __cpu_up(unsigned int cpu)
     init_data.cpuid = ~0;
     smp_up_cpu = MPIDR_INVALID;
     clean_dcache(smp_up_cpu);
+    arch_cpu_up_finish();
 
     if ( !cpu_online(cpu) )
     {
-- 
2.38.1



^ permalink raw reply related	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
  2023-01-13 10:11 ` [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>" Julien Grall
@ 2023-01-13 10:45   ` Michal Orzel
  2023-01-13 10:47     ` Julien Grall
  2023-01-14  0:51   ` Henry Wang
  1 sibling, 1 reply; 52+ messages in thread
From: Michal Orzel @ 2023-01-13 10:45 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 13/01/2023 11:11, Julien Grall wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> "ldr rX, =<label>" is used to load a value from the literal pool. This
> implies a memory access.
> 
> This can be avoided by using the macro mov_w which encode the value in
> the immediate of two instructions.
> 
> So replace all "ldr rX, =<label>" with "mov_w rX, <label>".
> 
> No functional changes intended.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> ----
>     Changes in v4:
>         * Add Stefano's reviewed-by tag
>         * Add missing space
>         * Add Michal's reviewed-by tag
It looks like you forgot to add it, so to make b4 happy:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs
  2023-01-13 10:11 ` [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs Julien Grall
@ 2023-01-13 10:46   ` Michal Orzel
  2023-01-14  2:16   ` Henry Wang
  1 sibling, 0 replies; 52+ messages in thread
From: Michal Orzel @ 2023-01-13 10:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The sequence for flushing the TLBs is 4 instruction long and often
> requires an explanation how it works.
> 
> So create a helper and use it in the boot code (switch_ttbr() is left
> alone until we decide the semantic of the call).
> 
> Note that in secondary_switched, we were also flushing the instruction
> cache and branch predictor. Neither of them was necessary because:
>     * We are only supporting IVIPT cache on arm32, so the instruction
>       cache flush is only necessary when executable code is modified.
>       None of the boot code is doing that.
>     * The instruction cache is not invalidated and misprediction is not
>       a problem at boot.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v4:
>         - Expand the commit message to explain why switch_ttbr() is
>           not updated.
>         - Remove extra spaces in the comment
>         - Fix typo in the commit message

Thanks,
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal




^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
  2023-01-13 10:45   ` Michal Orzel
@ 2023-01-13 10:47     ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-13 10:47 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi,

On 13/01/2023 10:45, Michal Orzel wrote:
> 
> 
> On 13/01/2023 11:11, Julien Grall wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> "ldr rX, =<label>" is used to load a value from the literal pool. This
>> implies a memory access.
>>
>> This can be avoided by using the macro mov_w which encode the value in
>> the immediate of two instructions.
>>
>> So replace all "ldr rX, =<label>" with "mov_w rX, <label>".
>>
>> No functional changes intended.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> ----
>>      Changes in v4:
>>          * Add Stefano's reviewed-by tag
>>          * Add missing space
>>          * Add Michal's reviewed-by tag
> It looks like you forgot to add it, so to make b4 happy:

Ah yes. Sorry.

> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thanks! I also forgot to replace the "----" with "---" before sending 
:/. I am not planning to respin the series just for that.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
  2023-01-13 10:11 ` [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
@ 2023-01-13 11:36   ` Henry Wang
  0 siblings, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-13 11:36 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for
> local TLB flush
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Per D5-4929 in ARM DDI 0487H.a:
> "A DSB NSH is sufficient to ensure completion of TLB maintenance
>  instructions that apply to a single PE. A DSB ISH is sufficient to
>  ensure completion of TLB maintenance instructions that apply to PEs
>  in the same Inner Shareable domain.
> "
> 
> This means barrier after local TLB flushes could be reduced to
> non-shareable.
> 
> Note that the scope of the barrier in the workaround has not been
> changed because Linux v6.1-rc8 is also using 'ish' and I couldn't
> find anything in the Neoverse N1 suggesting that a 'nsh' would
> be sufficient.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

I've tested this patch on FVP. The dom0 boots fine, linux & zephyr
guests can be started and destroyed without issue, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA
  2023-01-13 10:11 ` [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA Julien Grall
@ 2023-01-13 13:22   ` Henry Wang
  2023-01-13 17:56   ` Luca Fancellu
  1 sibling, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-13 13:22 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
> workaround for TLB flush by VA
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Looking at the Neoverse N1 errata document, it is not clear to me
> why the TLBI repeat workaround is not applied for TLB flush by VA.
> 
> The TBL flush by VA helpers are used in flush_xen_tlb_range_va_local()
> and flush_xen_tlb_range_va(). So if the range size if a fixed size smaller
> than a PAGE_SIZE, it would be possible that the compiler remove the loop
> and therefore replicate the sequence described in the erratum 1286807.
> 
> So the TLBI repeat workaround should also be applied for the TLB flush
> by VA helpers.
> 
> Fixes: 22e323d115d8 ("xen/arm: Add workaround for Cortex-A76/Neoverse-
> N1 erratum #1286807")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> 
> ----
>     This was spotted while looking at reducing the scope of the memory
>     barriers. I don't have any HW affected.

Seeing this scissors line comment, I tried to test this patch using basically
the same approach that I did for patch#1 on every board that I can find,
including some Neoverse N1 boards, and this patch looks good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry



^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
  2023-01-13 10:11 ` [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
@ 2023-01-13 13:46   ` Henry Wang
  0 siblings, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-13 13:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for
> local TLB flush
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Per G5-9224 in ARM DDI 0487I.a:
> 
> "A DSB NSH is sufficient to ensure completion of TLB maintenance
>  instructions that apply to a single PE. A DSB ISH is sufficient to
>  ensure completion of TLB maintenance instructions that apply to PEs
>  in the same Inner Shareable domain.
> "
> 
> This is quoting the Armv8 specification because I couldn't find an
> explicit statement in the Armv7 specification. Instead, I could find
> bits in various places that confirm the same implementation.
> 
> Furthermore, Linux has been using 'nsh' since 2013 (62cbbc42e001
> "ARM: tlb: reduce scope of barrier domains for TLB invalidation").
> 
> This means barrier after local TLB flushes could be reduced to
> non-shareable.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

I've tested this patch on FVP in arm32 execution mode using the same
testing approach for patch#1, and this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
  2023-01-13 10:11 ` [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush Julien Grall
@ 2023-01-13 13:53   ` Henry Wang
  0 siblings, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-13 13:53 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the
> TLB range flush
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, flush_xen_tlb_range_va{,_local}() are using system
> wide memory barrier. This is quite expensive and unnecessary.
> 
> For the local version, a non-shareable barrier is sufficient.
> For the SMP version, an inner-shareable barrier is sufficient.
> 
> Furthermore, the initial barrier only needs to a store barrier.
> 
> For the full explanation of the sequence see asm/arm{32,64}/flushtlb.h.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
  2023-01-13 10:11 ` [PATCH v4 05/14] xen/arm: Clean-up the memory layout Julien Grall
@ 2023-01-13 13:57   ` Henry Wang
  2023-01-24 19:30   ` Julien Grall
  1 sibling, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-13 13:57 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In a follow-up patch, the base address for the common mappings will
> vary between arm32 and arm64. To avoid any duplication, define
> every mapping in the common region from the previous one.
> 
> Take the opportunity to:
>     * add missing *_SIZE for FIXMAP_VIRT_* and XEN_VIRT_*
>     * switch to MB()/GB() to avoid hexadecimal (easier to read)
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
  2023-01-13 10:11 ` [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen Julien Grall
@ 2023-01-13 14:58   ` Luca Fancellu
  2023-01-16  8:43     ` Julien Grall
  2023-01-16  8:14   ` Michal Orzel
  1 sibling, 1 reply; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 14:58 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, bootloaders can load Xen anywhere in memory but the
> region 2MB - 4MB. While I am not aware of any issue, we have no way
> to tell the bootloader to avoid that region.
> 
> In addition to that, in the future, Xen may grow over 2MB if we
> enable feature like UBSAN or GCOV. To avoid widening the restriction
> on the load address, it would be better to get rid of it.
> 
> When the identity mapping is clashing with the Xen runtime mapping,
> we need an extra indirection to be able to replace the identity
> mapping with the Xen runtime mapping.
> 
> Reserve a new memory region that will be used to temporarily map Xen.
> For convenience, the new area is re-using the same first slot as the
> domheap which is used for per-cpu temporary mapping after a CPU has
> booted.
> 
> Furthermore, directly map boot_second (which cover Xen and more)
> to the temporary area. This will avoid to allocate an extra page-table
> for the second-level and will helpful for follow-up patches (we will
> want to use the fixmap whilst in the temporary mapping).
> 
> Lastly, some part of the code now needs to know whether the temporary
> mapping was created. So reserve r12 to store this information.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----
> 

Hi Julien,

> 
> +/*
> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
> + *
> + * Clobbers r0 - r1

NIT: r0 - r3?

> + */
> +remove_temporary_mapping:
> +        /* r2:r3 := invalid page-table entry */
> +        mov   r2, #0
> +        mov   r3, #0
> +
> +        adr_l r0, boot_pgtable
> +        mov_w r1, TEMPORARY_XEN_VIRT_START
> +        get_table_slot r1, r1, 1     /* r1 := first slot */
> +        lsl   r1, r1, #3             /* r1 := first slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +        flush_xen_tlb_local r0
> +
> +        mov  pc, lr
> +ENDPROC(remove_temporary_mapping)
> +

The rest looks good to me, I’ve also built for arm64/32 and test this patch on fvp aarch32 mode,
booting Dom0 and creating/running/destroying some guests.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping
  2023-01-13 10:11 ` [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping Julien Grall
@ 2023-01-13 15:37   ` Luca Fancellu
  2023-01-16  8:20   ` Michal Orzel
  1 sibling, 0 replies; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 15:37 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the temporary mapping is only used when the virtual
> runtime region of Xen is clashing with the physical region.
> 
> In follow-up patches, we will rework how secondary CPU bring-up works
> and it will be convenient to use the fixmap area for accessing
> the root page-table (it is per-cpu).
> 
> Rework the code to use temporary mapping when the Xen physical address
> is not overlapping with the temporary mapping.
> 
> This also has the advantage to simplify the logic to identity map
> Xen.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm32 and test this patch on fvp aarch32 mode,
booting Dom0 and creating/running/destroying some guests

Tested-by: Luca Fancellu <luca.fancellu@arm.com>



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
  2023-01-13 10:11 ` [PATCH v4 11/14] xen/arm64: Rework the memory layout Julien Grall
@ 2023-01-13 15:58   ` Luca Fancellu
  2023-01-16  8:46   ` Michal Orzel
  1 sibling, 0 replies; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 15:58 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
> 
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is the page-tables should
> contain an identity mapping of the code switching the TTBR.
> 
> In most of the case we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e AMD Seattle) where the memory start above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
> 
> The memory layout is reshuffled to keep the first two slots of the zeroeth
> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
> tweak of the boot code because XEN_VIRT_START cannot be used as an
> immediate.
> 
> This reshuffle will make trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm64 and test this patch on fvp, booting Dom0
and creating/running/destroying some guests

Tested-by: Luca Fancellu <luca.fancellu@arm.com>



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
  2023-01-13 10:11 ` [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping Julien Grall
@ 2023-01-13 16:26   ` Luca Fancellu
  2023-01-16  8:53   ` Michal Orzel
  1 sibling, 0 replies; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 16:26 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platform, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, NULL pointer dereference would lead to
> access to valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>    - arch_setup_page_tables() will setup the page-tables so it is
>      easy to create the mapping afterwards.
>    - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm32/64 and test this patch on fvp, booting Dom0
and creating/running/destroying some guests

Tested-by: Luca Fancellu <luca.fancellu@arm.com>




^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
  2023-01-13 10:11 ` [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr() Julien Grall
@ 2023-01-13 16:50   ` Luca Fancellu
  2023-01-16  9:23   ` Michal Orzel
  1 sibling, 0 replies; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 16:50 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
> still on.
> 
> Switching TTBR is like replacing existing mappings with new ones. So
> we need to follow the break-before-make sequence.
> 
> In this case, it means the MMU needs to be switched off while the
> TTBR is updated. In order to disable the MMU, we need to first
> jump to an identity mapping.
> 
> Rename switch_ttbr() to switch_ttbr_id() and create an helper on
> top to temporary map the identity mapping and call switch_ttbr()
> via the identity address.
> 
> switch_ttbr_id() is now reworked to temporarily turn off the MMU
> before updating the TTBR.
> 
> We also need to make sure the helper switch_ttbr() is part of the
> identity mapping. So move _end_boot past it.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

The sequence looks ok to me, also the reasoning about barriers and
register dependencies discussed in the previous version.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm32/64 and test this patch on fvp, booting Dom0
and creating/running/destroying some guests

Tested-by: Luca Fancellu <luca.fancellu@arm.com>




^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables
  2023-01-13 10:11 ` [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables Julien Grall
@ 2023-01-13 17:42   ` Luca Fancellu
  2023-01-16  9:06     ` Luca Fancellu
  0 siblings, 1 reply; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 17:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Switching TTBR while the MMU is on is not safe. Now that the identity
> mapping will not clash with the rest of the memory layout, we can avoid
> creating temporary page-tables every time a CPU is brought up.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----
>    Changes in v4:
>        - Somehow I forgot to send it in v3. So re-include it.
> 
>    Changes in v2:
>        - Remove arm32 code
> ---

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm32/64 and test this patch (so this serie) on fvp, n1sdp, raspberry pi4, Juno,
booting Dom0 and creating/running/destroying some guests, on a first shot everything works.

Tested-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve left the boards to test all night, so on Monday I will be 100% sure this serie
Is not introducing any issue.



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA
  2023-01-13 10:11 ` [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA Julien Grall
  2023-01-13 13:22   ` Henry Wang
@ 2023-01-13 17:56   ` Luca Fancellu
  2023-01-16  8:36     ` Julien Grall
  1 sibling, 1 reply; 52+ messages in thread
From: Luca Fancellu @ 2023-01-13 17:56 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Michal Orzel



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Looking at the Neoverse N1 errata document, it is not clear to me
> why the TLBI repeat workaround is not applied for TLB flush by VA.
> 
> The TBL flush by VA helpers are used in flush_xen_tlb_range_va_local()
> and flush_xen_tlb_range_va(). So if the range size if a fixed size smaller

NIT: is a fixed size




^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
  2023-01-13 10:11 ` [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>" Julien Grall
  2023-01-13 10:45   ` Michal Orzel
@ 2023-01-14  0:51   ` Henry Wang
  1 sibling, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-14  0:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with
> "mov_w rX, <label>"
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> "ldr rX, =<label>" is used to load a value from the literal pool. This
> implies a memory access.
> 
> This can be avoided by using the macro mov_w, which encodes the value in
> the immediates of two instructions.
> 
> So replace all "ldr rX, =<label>" with "mov_w rX, <label>".
> 
> No functional changes intended.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
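
For reference, mov_w expands to a movw/movt pair, building the 32-bit value
from two 16-bit immediates instead of loading it from memory. A sketch of the
idea (the real definition lives in the arm32 headers):

        .macro mov_w reg, word
                movw  \reg, #:lower16:\word   /* set bits [15:0] */
                movt  \reg, #:upper16:\word   /* set bits [31:16], keep [15:0] */
        .endm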

I've tested this patch on FVP in arm32 execution mode, and
this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu()
  2023-01-13 10:11 ` [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu() Julien Grall
@ 2023-01-14  1:33   ` Henry Wang
  0 siblings, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-14  1:33 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in
> enable_mmu()
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, enable_mmu() will return to an address in the 1:1 mapping
> and each path is responsible for switching to the runtime mapping.
> 
> In a follow-up patch, the behavior to switch to the runtime mapping
> will become more complex. So to avoid more code/comment duplication,
> move the switch into enable_mmu().
> 
> Lastly, take the opportunity to replace load from literal pool with
> mov_w.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

I've tested this patch on FVP in arm32 execution mode, and
this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs
  2023-01-13 10:11 ` [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs Julien Grall
  2023-01-13 10:46   ` Michal Orzel
@ 2023-01-14  2:16   ` Henry Wang
  1 sibling, 0 replies; 52+ messages in thread
From: Henry Wang @ 2023-01-14  2:16 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the
> TLBs
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The sequence for flushing the TLBs is 4 instructions long and often
> requires an explanation of how it works.
> 
> So create a helper and use it in the boot code (switch_ttbr() is left
> alone until we decide the semantic of the call).
> 
> Note that in secondary_switched, we were also flushing the instruction
> cache and branch predictor. Neither of them was necessary because:
>     * We are only supporting IVIPT cache on arm32, so the instruction
>       cache flush is only necessary when executable code is modified.
>       None of the boot code is doing that.
>     * The instruction cache is not invalidated and misprediction is not
>       a problem at boot.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
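
For reference, the 4-instruction sequence being factored into the helper is
roughly the following (using Xen's CP32 accessor; see asm/arm/flushtlb.h for
the rationale behind each barrier):

        dsb   nshst                /* ensure prior page-table writes are visible */
        mcr   CP32(r0, TLBIALLH)   /* invalidate all hypervisor TLB entries */
        dsb   nsh                  /* ensure the TLB invalidation has completed */
        isb                        /* synchronize the fetched instruction stream */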

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I've also tested this patch on FVP in arm32 execution mode, and
this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
  2023-01-13 10:11 ` [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen Julien Grall
  2023-01-13 14:58   ` Luca Fancellu
@ 2023-01-16  8:14   ` Michal Orzel
  2023-01-16  8:55     ` Julien Grall
  1 sibling, 1 reply; 52+ messages in thread
From: Michal Orzel @ 2023-01-16  8:14 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, bootloaders can load Xen anywhere in memory but the
> region 2MB - 4MB. While I am not aware of any issue, we have no way
> to tell the bootloader to avoid that region.
> 
> In addition to that, in the future, Xen may grow over 2MB if we
> enable features like UBSAN or GCOV. To avoid widening the restriction
> on the load address, it would be better to get rid of it.
> 
> When the identity mapping is clashing with the Xen runtime mapping,
> we need an extra indirection to be able to replace the identity
> mapping with the Xen runtime mapping.
> 
> Reserve a new memory region that will be used to temporarily map Xen.
> For convenience, the new area is re-using the same first slot as the
> domheap which is used for per-cpu temporary mapping after a CPU has
> booted.
> 
> Furthermore, directly map boot_second (which covers Xen and more)
> to the temporary area. This will avoid allocating an extra page-table
> for the second level and will be helpful for follow-up patches (we will
> want to use the fixmap whilst in the temporary mapping).
> 
> Lastly, some part of the code now needs to know whether the temporary
> mapping was created. So reserve r12 to store this information.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----
>     Changes in v4:
>         - Remove spurious newline
> 
>     Changes in v3:
>         - Remove the ASSERT() in init_domheap_mappings() because it was
>           bogus (secondary CPU root tables are initialized to the CPU0
>           root table so the entry will be valid). Also, it is not
>           related to this patch as the CPU0 root table are rebuilt
>           during boot. The ASSERT() will be re-introduced later.
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S         | 139 ++++++++++++++++++++++++++----
>  xen/arch/arm/include/asm/config.h |  14 +++
>  xen/arch/arm/mm.c                 |  14 +++
>  3 files changed, 152 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 67b910808b74..3800efb44169 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -35,6 +35,9 @@
>  #define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
>  #define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
> 
> +/* Offset between the early boot xen mapping and the runtime xen mapping */
> +#define XEN_TEMPORARY_OFFSET      (TEMPORARY_XEN_VIRT_START - XEN_VIRT_START)
> +
>  #if defined(CONFIG_EARLY_PRINTK) && defined(CONFIG_EARLY_PRINTK_INC)
>  #include CONFIG_EARLY_PRINTK_INC
>  #endif
> @@ -94,7 +97,7 @@
>   *   r9  - paddr(start)
>   *   r10 - phys offset
>   *   r11 - UART address
> - *   r12 -
> + *   r12 - Temporary mapping created
>   *   r13 - SP
>   *   r14 - LR
>   *   r15 - PC
> @@ -445,6 +448,9 @@ ENDPROC(cpu_init)
>   *   r9 : paddr(start)
>   *   r10: phys offset
>   *
> + * Output:
> + *   r12: Was a temporary mapping created?
> + *
>   * Clobbers r0 - r4, r6
>   *
>   * Register usage within this function:
> @@ -484,7 +490,11 @@ create_page_tables:
>          /*
>           * Setup the 1:1 mapping so we can turn the MMU on. Note that
>           * only the first page of Xen will be part of the 1:1 mapping.
> +         *
> +         * In all the cases, we will link boot_third_id. So create the
> +         * mapping in advance.
>           */
> +        create_mapping_entry boot_third_id, r9, r9
> 
>          /*
>           * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> @@ -501,8 +511,7 @@ create_page_tables:
>          /*
>           * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
>           * 1:1 mapping will use its own set of page-tables from the
> -         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
> -         * it.
> +         * third level.
>           */
>          get_table_slot r1, r9, 2     /* r1 := second slot */
>          cmp   r1, #XEN_SECOND_SLOT
> @@ -513,13 +522,33 @@ create_page_tables:
>  link_from_second_id:
>          create_table_entry boot_second_id, boot_third_id, r9, 2
>  link_from_third_id:
> -        create_mapping_entry boot_third_id, r9, r9
> +        /* Good news, we are not clashing with Xen virtual mapping */
> +        mov   r12, #0                /* r12 := temporary mapping not created */
>          mov   pc, lr
> 
>  virtphys_clash:
> -        /* Identity map clashes with boot_third, which we cannot handle yet */
> -        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
> -        b     fail
> +        /*
> +         * The identity map clashes with boot_third. Link boot_first_id and
> +         * map Xen to a temporary mapping. See switch_to_runtime_mapping
> +         * for more details.
> +         */
> +        PRINT("- Virt and Phys addresses clash  -\r\n")
> +        PRINT("- Create temporary mapping -\r\n")
> +
> +        /*
> +         * This will override the link to boot_second in XEN_FIRST_SLOT.
> +         * The page-tables are not live yet. So no need to use
> +         * break-before-make.
> +         */
> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
> +        create_table_entry boot_second_id, boot_third_id, r9, 2
> +
> +        /* Map boot_second (cover Xen mappings) to the temporary 1st slot */
> +        mov_w r0, TEMPORARY_XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
> +
> +        mov   r12, #1                /* r12 := temporary mapping created */
> +        mov   pc, lr
>  ENDPROC(create_page_tables)
> 
>  /*
> @@ -528,9 +557,10 @@ ENDPROC(create_page_tables)
>   *
>   * Inputs:
>   *   r9 : paddr(start)
> + *  r12 : Was the temporary mapping created?
>   *   lr : Virtual address to return to
>   *
> - * Clobbers r0 - r3
> + * Clobbers r0 - r5
>   */
>  enable_mmu:
>          PRINT("- Turning on paging -\r\n")
> @@ -558,21 +588,79 @@ enable_mmu:
>           * The MMU is turned on and we are in the 1:1 mapping. Switch
>           * to the runtime mapping.
>           */
> -        mov_w r0, 1f
> -        mov   pc, r0
> +        mov   r5, lr                /* Save LR before overwritting it */
> +        mov_w lr, 1f                /* Virtual address in the runtime mapping */
> +        b     switch_to_runtime_mapping
>  1:
> +        mov   lr, r5                /* Restore LR */
>          /*
> -         * The 1:1 map may clash with other parts of the Xen virtual memory
> -         * layout. As it is not used anymore, remove it completely to
> -         * avoid having to worry about replacing existing mapping
> -         * afterwards.
> +         * At this point, either the 1:1 map or the temporary mapping
> +         * will be present. The former may clash with other parts of the
> +         * Xen virtual memory layout. As both of them are not used
> +         * anymore, remove them completely to avoid having to worry
> +         * about replacing existing mapping afterwards.
>           *
>           * On return this will jump to the virtual address requested by
>           * the caller.
>           */
> -        b     remove_identity_mapping
> +        teq   r12, #0
> +        beq   remove_identity_mapping
> +        b     remove_temporary_mapping
>  ENDPROC(enable_mmu)
> 
> +/*
> + * Switch to the runtime mapping. The logic depends on whether the
> + * runtime virtual region is clashing with the physical address
> + *
> + *  - If it is not clashing, we can directly jump to the address in
> + *    the runtime mapping.
> + *  - If it is clashing, create_page_tables() would have mapped Xen to
> + *    a temporary virtual address. We need to switch to the temporary
> + *    mapping so we can remove the identity mapping and map Xen at the
> + *    correct position.
> + *
> + * Inputs
> + *    r9: paddr(start)
> + *   r12: Was a temporary mapping created?
> + *    lr: Address in the runtime mapping to jump to
> + *
> + * Clobbers r0 - r4
> + */
> +switch_to_runtime_mapping:
> +        /*
> +         * Jump to the runtime mapping if the virt and phys are not
> +         * clashing
> +         */
> +        teq   r12, #0
> +        beq   ready_to_switch
> +
> +        /* We are still in the 1:1 mapping. Jump to the temporary Virtual address. */
> +        mov_w r0, 1f
> +        add   r0, r0, #XEN_TEMPORARY_OFFSET /* r0 := address in temporary mapping */
> +        mov   pc, r0
> +
> +1:
> +        /* Remove boot_second_id */
> +        mov   r2, #0
> +        mov   r3, #0
> +        adr_l r0, boot_pgtable
> +        get_table_slot r1, r9, 1            /* r1 := first slot */
> +        lsl   r1, r1, #3                    /* r1 := first slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +        flush_xen_tlb_local r0
> +
> +        /* Map boot_second into boot_pgtable */
> +        mov_w r0, XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
> +
> +        /* Ensure any page table updates are visible before continuing */
> +        dsb   nsh
> +
> +ready_to_switch:
> +        mov   pc, lr
> +ENDPROC(switch_to_runtime_mapping)
> +
>  /*
>   * Remove the 1:1 map from the page-tables. It is not easy to keep track
>   * where the 1:1 map was mapped, so we will look for the top-level entry
> @@ -618,6 +706,27 @@ identity_mapping_removed:
>          mov   pc, lr
>  ENDPROC(remove_identity_mapping)
> 
> +/*
> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
> + *
> + * Clobbers r0 - r1
> + */
> +remove_temporary_mapping:
> +        /* r2:r3 := invalid page-table entry */
> +        mov   r2, #0
> +        mov   r3, #0
> +
> +        adr_l r0, boot_pgtable
> +        mov_w r1, TEMPORARY_XEN_VIRT_START
> +        get_table_slot r1, r1, 1     /* r1 := first slot */
Can't we just use TEMPORARY_AREA_FIRST_SLOT?

> +        lsl   r1, r1, #3             /* r1 := first slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +        flush_xen_tlb_local r0
> +
> +        mov  pc, lr
> +ENDPROC(remove_temporary_mapping)
> +
>  /*
>   * Map the UART in the fixmap (when earlyprintk is used) and hook the
>   * fixmap table in the page tables.
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 87851e677701..6c1b762e976d 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -148,6 +148,20 @@
>  /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>  #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
> 
> +/*
> + * The temporary area is overlapping with the domheap area. This may
> + * be used to create an alias of the first slot containing Xen mappings
> + * when turning on/off the MMU.
> + */
> +#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
> +
> +/* Calculate the address in the temporary area */
> +#define TEMPORARY_AREA_ADDR(addr)                           \
> +     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
> +      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
XEN_PT_LEVEL_{MASK/SHIFT} should be used when we do not know the level upfront.
Otherwise, no need for opencoding and you should use FIRST_MASK and FIRST_SHIFT.
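
A worked example of the macro, assuming the usual arm32 values
(DOMHEAP_VIRT_START = 2G, XEN_VIRT_START = 2M, 1G first-level entries,
hence TEMPORARY_AREA_FIRST_SLOT = 0x80000000 >> 30 = 2):

    TEMPORARY_AREA_ADDR(0x00200000)
        = (0x00200000 & ~XEN_PT_LEVEL_MASK(1)) | (2 << 30)
        = (0x00200000 & 0x3fffffff) | 0x80000000
        = 0x80200000   /* same offset within the slot, aliased into slot 2 */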

~Michal


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping
  2023-01-13 10:11 ` [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping Julien Grall
  2023-01-13 15:37   ` Luca Fancellu
@ 2023-01-16  8:20   ` Michal Orzel
  2023-01-24 19:43     ` Julien Grall
  1 sibling, 1 reply; 52+ messages in thread
From: Michal Orzel @ 2023-01-16  8:20 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the temporary mapping is only used when the virtual
> runtime region of Xen is clashing with the physical region.
> 
> In follow-up patches, we will rework how secondary CPU bring-up works
> and it will be convenient to use the fixmap area for accessing
> the root page-table (it is per-cpu).
> 
> Rework the code to use the temporary mapping when the Xen physical address
> is not overlapping with the temporary mapping.
> 
> This also has the advantage of simplifying the logic to identity-map
> Xen.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
> 
> Even though this patch rewrites part of the previous patch, I decided
> to keep them separate to help the review.
> 
> The "follow-up patches" are still in draft at the moment. I still haven't
> found a way to split them nicely without requiring too much more work
> on the coloring side.
> 
> I have provided some medium-term goals in the cover letter.
> 
>     Changes in v3:
>         - Resolve conflicts after switching from "ldr rX, <label>" to
>           "mov_w rX, <label>" in a previous patch
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S | 82 +++++++--------------------------------
>  1 file changed, 15 insertions(+), 67 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 3800efb44169..ce858e9fc4da 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -459,7 +459,6 @@ ENDPROC(cpu_init)
>  create_page_tables:
>          /* Prepare the page-tables for mapping Xen */
>          mov_w r0, XEN_VIRT_START
> -        create_table_entry boot_pgtable, boot_second, r0, 1
>          create_table_entry boot_second, boot_third, r0, 2
> 
>          /* Setup boot_third: */
> @@ -479,67 +478,37 @@ create_page_tables:
>          cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
>          blo   1b
> 
> -        /*
> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> -         * need an additional 1:1 mapping, the virtual mapping will
> -         * suffice.
> -         */
> -        cmp   r9, #XEN_VIRT_START
> -        moveq pc, lr
> -
>          /*
>           * Setup the 1:1 mapping so we can turn the MMU on. Note that
>           * only the first page of Xen will be part of the 1:1 mapping.
> -         *
> -         * In all the cases, we will link boot_third_id. So create the
> -         * mapping in advance.
>           */
> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
> +        create_table_entry boot_second_id, boot_third_id, r9, 2
>          create_mapping_entry boot_third_id, r9, r9
> 
>          /*
> -         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> -         * then the 1:1 mapping will use its own set of page-tables from
> -         * the second level.
> +         * Find the first slot used. If the slot is not the same
> +         * as XEN_TMP_FIRST_SLOT, then we will want to switch
Do you mean TEMPORARY_AREA_FIRST_SLOT?

> +         * to the temporary mapping before jumping to the runtime
> +         * virtual mapping.
>           */
>          get_table_slot r1, r9, 1     /* r1 := first slot */
> -        cmp   r1, #XEN_FIRST_SLOT
> -        beq   1f
> -        create_table_entry boot_pgtable, boot_second_id, r9, 1
> -        b     link_from_second_id
> -
> -1:
> -        /*
> -         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
> -         * 1:1 mapping will use its own set of page-tables from the
> -         * third level.
> -         */
> -        get_table_slot r1, r9, 2     /* r1 := second slot */
> -        cmp   r1, #XEN_SECOND_SLOT
> -        beq   virtphys_clash
> -        create_table_entry boot_second, boot_third_id, r9, 2
> -        b     link_from_third_id
> +        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
> +        bne   use_temporary_mapping
> 
> -link_from_second_id:
> -        create_table_entry boot_second_id, boot_third_id, r9, 2
> -link_from_third_id:
> -        /* Good news, we are not clashing with Xen virtual mapping */
> +        mov_w r0, XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
>          mov   r12, #0                /* r12 := temporary mapping not created */
>          mov   pc, lr
> 
> -virtphys_clash:
> +use_temporary_mapping:
>          /*
> -         * The identity map clashes with boot_third. Link boot_first_id and
> -         * map Xen to a temporary mapping. See switch_to_runtime_mapping
> -         * for more details.
> +         * The identity mapping is not using the first slot
> +         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
> +         * See switch_to_runtime_mapping for more details.
>           */
> -        PRINT("- Virt and Phys addresses clash  -\r\n")
>          PRINT("- Create temporary mapping -\r\n")
> 
> -        /*
> -         * This will override the link to boot_second in XEN_FIRST_SLOT.
> -         * The page-tables are not live yet. So no need to use
> -         * break-before-make.
> -         */
>          create_table_entry boot_pgtable, boot_second_id, r9, 1
>          create_table_entry boot_second_id, boot_third_id, r9, 2
Do we need to duplicate this if we just did the same in create_page_tables before branching to
use_temporary_mapping?

~Michal



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA
  2023-01-13 17:56   ` Luca Fancellu
@ 2023-01-16  8:36     ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-16  8:36 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk, Michal Orzel

Hi Luca,

On 13/01/2023 17:56, Luca Fancellu wrote:
> 
> 
>> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Looking at the Neoverse N1 errata document, it is not clear to me
>> why the TLBI repeat workaround is not applied for TLB flush by VA.
>>
>> The TBL flush by VA helpers are used in flush_xen_tlb_range_va_local()

s/TBL/TLB/

>> and flush_xen_tlb_range_va(). So if the range size if a fixed size smaller
> 
> NIT: is a fixed size

Thanks. I will fix it on commit unless there is another round needed.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
  2023-01-13 14:58   ` Luca Fancellu
@ 2023-01-16  8:43     ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-16  8:43 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk

Hi Luca,

On 13/01/2023 14:58, Luca Fancellu wrote:
>> +/*
>> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
>> + *
>> + * Clobbers r0 - r1
> 
> NIT: r0 - r3?

Yes. I have updated the version in my tree.

> 
>> + */
>> +remove_temporary_mapping:
>> +        /* r2:r3 := invalid page-table entry */
>> +        mov   r2, #0
>> +        mov   r3, #0
>> +
>> +        adr_l r0, boot_pgtable
>> +        mov_w r1, TEMPORARY_XEN_VIRT_START
>> +        get_table_slot r1, r1, 1     /* r1 := first slot */
>> +        lsl   r1, r1, #3             /* r1 := first slot offset */
>> +        strd  r2, r3, [r0, r1]
>> +
>> +        flush_xen_tlb_local r0
>> +
>> +        mov  pc, lr
>> +ENDPROC(remove_temporary_mapping)
>> +
> 
> The rest looks good to me, I’ve also built for arm64/32 and tested this patch on fvp in aarch32 mode,
> booting Dom0 and creating/running/destroying some guests.
> 
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Thanks!

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
  2023-01-13 10:11 ` [PATCH v4 11/14] xen/arm64: Rework the memory layout Julien Grall
  2023-01-13 15:58   ` Luca Fancellu
@ 2023-01-16  8:46   ` Michal Orzel
  2023-01-16  9:29     ` Julien Grall
  1 sibling, 1 reply; 52+ messages in thread
From: Michal Orzel @ 2023-01-16  8:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
> 
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is the page-tables should
> contain an identity mapping of the code switching the TTBR.
> 
> In most cases we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
> 
> The memory layout is reshuffled to keep the first two slots of the zeroeth
Should be "four slots" instead of "two".

> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
> tweak of the boot code because XEN_VIRT_START cannot be used as an
> immediate.
> 
> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----
>     Changes in v4:
>         - Correct the documentation
>         - The start address is 2TB, so slot0 is 4 not 2.
> 
>     Changes in v2:
>         - Reword the commit message
>         - Load Xen at 2TB + 2MB
>         - Update the documentation to reflect the new layout
> ---
>  xen/arch/arm/arm64/head.S         |  3 ++-
>  xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
>  xen/arch/arm/mm.c                 | 11 +++++-----
>  3 files changed, 31 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..663f5813b12e 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -607,7 +607,8 @@ create_page_tables:
>           * need an additional 1:1 mapping, the virtual mapping will
>           * suffice.
>           */
> -        cmp   x19, #XEN_VIRT_START
> +        ldr   x0, =XEN_VIRT_START
> +        cmp   x19, x0
>          bne   1f
>          ret
>  1:
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 6c1b762e976d..c5d407a7495f 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -72,15 +72,12 @@
>  #include <xen/page-size.h>
> 
>  /*
> - * Common ARM32 and ARM64 layout:
> + * ARM32 layout:
>   *   0  -   2M   Unmapped
>   *   2M -   4M   Xen text, data, bss
>   *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>   *   6M -  10M   Early boot mapping of FDT
> - *   10M - 12M   Livepatch vmap (if compiled in)
> - *
> - * ARM32 layout:
> - *   0  -  12M   <COMMON>
> + *  10M -  12M   Livepatch vmap (if compiled in)
>   *
>   *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>   * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
> @@ -90,14 +87,22 @@
>   *   2G -   4G   Domheap: on-demand-mapped
>   *
>   * ARM64 layout:
> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
> - *   0  -  12M   <COMMON>
> + * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
End address should be 0x1FFFFFFFFFF (one less f).

> + *  Reserved to identity map Xen
> + *
> + * 0x0000020000000000 - 0x000028fffffffff (512GB, L0 slot [4]
End address should be 0x27FFFFFFFFF.

> + *  (Relative offsets)
> + *   0  -   2M   Unmapped
> + *   2M -   4M   Xen text, data, bss
> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
> + *   6M -  10M   Early boot mapping of FDT
> + *  10M -  12M   Livepatch vmap (if compiled in)
>   *
>   *   1G -   2G   VMAP: ioremap and early_ioremap
>   *
>   *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>   *
> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
Start address should be 0x28000000000.
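
As a sanity check of the slot numbers (a zeroeth-level slot covers 2^39
bytes, so slot = va >> 39):

    0x0000000000000000 >> 39 = 0      /* start of the reserved ID range */
    0x000001ffffffffff >> 39 = 3      /* end of the 2TB range: slots 0..3 */
    0x0000020000000000 >> 39 = 4      /* XEN_VIRT_START region: slot 4 */
    0x00007fffffffffff >> 39 = 255    /* top of the lower half: slot 255 */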

Not related to this patch:
I took a look at config.h and spotted two things:
1) DIRECTMAP_SIZE calculation is incorrect. It is defined as (SLOT0_ENTRY_SIZE * (265-256))
but it actually should be (SLOT0_ENTRY_SIZE * (266-256)) i.e. 10 slots and not 9. Due to this
bug we actually support 4.5TB of direct-map and not 5TB.
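
A quick arithmetic check (SLOT0_ENTRY_SIZE is 2^39 bytes, i.e. 512GB):

    512GB * (265 - 256) = 512GB *  9 = 4.5TB   /* current definition */
    512GB * (266 - 256) = 512GB * 10 = 5TB     /* intended direct-map size */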

2) frametable information
struct page_info is no longer 24B but 56B for arm64 and 32B for arm32. It looks like SUPPORT.md
took this into account when stating that we support 12GB for arm32 and 2TB for arm64. However,
this is also wrong as it does not take into account physical address compression. With PDX that
is enabled by default we could fit tens of TB in 32GB frametable. I think we want to get rid of
comments like "Frametable: 24 bytes per page for 16GB of RAM" in favor of just "Frametable".
This is because the struct page_info size may change again and it is rather difficult to
calculate the max RAM size supported with PDX enabled.
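
A rough back-of-the-envelope for the sizes quoted above (4K pages, and
treating the SUPPORT.md figures as upper bounds without PDX compression):

    arm64: 32GB / 24B * 4KB ≈ 5.3TB   /* the old 24B page_info assumption */
    arm64: 32GB / 56B * 4KB ≈ 2.3TB   /* current 56B page_info; ~2TB in SUPPORT.md */
    arm32: 96MB / 32B * 4KB  = 12GB   /* 32M-128M frametable, 32B page_info */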

If you want, I can push the patches for these issues.

~Michal



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
  2023-01-13 10:11 ` [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping Julien Grall
  2023-01-13 16:26   ` Luca Fancellu
@ 2023-01-16  8:53   ` Michal Orzel
  2023-01-27 19:30     ` Julien Grall
  1 sibling, 1 reply; 52+ messages in thread
From: Michal Orzel @ 2023-01-16  8:53 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platforms, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, a NULL pointer dereference would lead to
> an access to a valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>     - arch_setup_page_tables() will setup the page-tables so it is
>       easy to create the mapping afterwards.
>     - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v4:
>         - Fix typo in a comment
>         - Clarify which page-tables are updated
> 
>     Changes in v2:
>         - Remove the arm32 part
>         - Use a different logic for the boot page tables and runtime
>           one because Xen may be running in a different place.
> ---
>  xen/arch/arm/arm64/Makefile         |   1 +
>  xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/arm32/mm.h |   4 +
>  xen/arch/arm/include/asm/arm64/mm.h |  13 +++
>  xen/arch/arm/include/asm/setup.h    |  11 +++
>  xen/arch/arm/mm.c                   |   6 +-
>  6 files changed, 163 insertions(+), 2 deletions(-)
>  create mode 100644 xen/arch/arm/arm64/mm.c
> 
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 6d507da0d44d..28481393e98f 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -10,6 +10,7 @@ obj-y += entry.o
>  obj-y += head.o
>  obj-y += insn.o
>  obj-$(CONFIG_LIVEPATCH) += livepatch.o
> +obj-y += mm.o
>  obj-y += smc.o
>  obj-y += smpboot.o
>  obj-y += traps.o
> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
> new file mode 100644
> index 000000000000..798ae93ad73c
> --- /dev/null
> +++ b/xen/arch/arm/arm64/mm.c
> @@ -0,0 +1,130 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +
> +#include <asm/setup.h>
> +
> +/* Override macros from asm/page.h to make them work with mfn_t */
> +#undef virt_to_mfn
> +#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> +
> +static DEFINE_PAGE_TABLE(xen_first_id);
> +static DEFINE_PAGE_TABLE(xen_second_id);
> +static DEFINE_PAGE_TABLE(xen_third_id);
> +
> +/*
> + * The identity mapping may start at physical address 0. So we don't want
> + * to keep it mapped longer than necessary.
> + *
> + * When this is called, we are still using the boot_pgtable.
> + *
> + * We need to prepare the identity mapping for both the boot page tables
> + * and runtime page tables.
> + *
> + * The logic to create the entry is slightly different because Xen may
> + * be running at a different location at runtime.
> + */
> +static void __init prepare_boot_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    /*
> +     * We will be re-using the boot ID tables. They may not have been
> +     * zeroed but they should be unlinked. So it is fine to use
> +     * clear_page().
> +     */
> +    clear_page(boot_first_id);
> +    clear_page(boot_second_id);
> +    clear_page(boot_third_id);
> +
> +    if ( id_offsets[0] != 0 )
> +        panic("Cannot handled ID mapping above 512GB\n");
I might be lost but didn't we say before that we can load Xen in the first 2TB?
Then, how does this check correspond to it?

~Michal


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
  2023-01-16  8:14   ` Michal Orzel
@ 2023-01-16  8:55     ` Julien Grall
  2023-01-16  9:32       ` Michal Orzel
  0 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-16  8:55 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 16/01/2023 08:14, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 13/01/2023 11:11, Julien Grall wrote:
>> +/*
>> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
>> + *
>> + * Clobbers r0 - r1
>> + */
>> +remove_temporary_mapping:
>> +        /* r2:r3 := invalid page-table entry */
>> +        mov   r2, #0
>> +        mov   r3, #0
>> +
>> +        adr_l r0, boot_pgtable
>> +        mov_w r1, TEMPORARY_XEN_VIRT_START
>> +        get_table_slot r1, r1, 1     /* r1 := first slot */
> Can't we just use TEMPORARY_AREA_FIRST_SLOT?

IMHO, it would make the code a bit more difficult to read because the 
connection between TEMPORARY_XEN_VIRT_START and 
TEMPORARY_AREA_FIRST_SLOT is not totally obvious.

So I would rather prefer if this stays like that.

> 
>> +        lsl   r1, r1, #3             /* r1 := first slot offset */
>> +        strd  r2, r3, [r0, r1]
>> +
>> +        flush_xen_tlb_local r0
>> +
>> +        mov  pc, lr
>> +ENDPROC(remove_temporary_mapping)
>> +
>>   /*
>>    * Map the UART in the fixmap (when earlyprintk is used) and hook the
>>    * fixmap table in the page tables.
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 87851e677701..6c1b762e976d 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -148,6 +148,20 @@
>>   /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>>   #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
>>
>> +/*
>> + * The temporary area is overlapping with the domheap area. This may
>> + * be used to create an alias of the first slot containing Xen mappings
>> + * when turning on/off the MMU.
>> + */
>> +#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
>> +
>> +/* Calculate the address in the temporary area */
>> +#define TEMPORARY_AREA_ADDR(addr)                           \
>> +     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
>> +      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
> XEN_PT_LEVEL_{MASK/SHIFT} should be used when we do not know the level upfront.
> Otherwise, no need for opencoding and you should use FIRST_MASK and FIRST_SHIFT.

We discussed in the past phasing out the use of FIRST_MASK and FIRST_SHIFT 
because the names are too generic. So for new code, we should use 
XEN_PT_LEVEL_{MASK/SHIFT}.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables
  2023-01-13 17:42   ` Luca Fancellu
@ 2023-01-16  9:06     ` Luca Fancellu
  2023-01-27 19:39       ` Julien Grall
  0 siblings, 1 reply; 52+ messages in thread
From: Luca Fancellu @ 2023-01-16  9:06 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk


Hi Julien,

> 
> I’ve left the boards testing overnight, so on Monday I will be 100% sure this series
> is not introducing any issue.

The series passed the overnight tests on neoverse board, raspberry pi 4 and Juno board.

Cheers,
Luca


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
  2023-01-13 10:11 ` [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr() Julien Grall
  2023-01-13 16:50   ` Luca Fancellu
@ 2023-01-16  9:23   ` Michal Orzel
  2023-01-16  9:32     ` Julien Grall
  1 sibling, 1 reply; 52+ messages in thread
From: Michal Orzel @ 2023-01-16  9:23 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
> still on.
> 
> Switching TTBR is like replacing existing mappings with new ones. So
> we need to follow the break-before-make sequence.
> 
> In this case, it means the MMU needs to be switched off while the
> TTBR is updated. In order to disable the MMU, we need to first
> jump to an identity mapping.
> 
> Rename switch_ttbr() to switch_ttbr_id() and create a helper on
> top to temporarily map the identity mapping and call switch_ttbr_id()
> via the identity address.
> 
> switch_ttbr_id() is now reworked to temporarily turn off the MMU
> before updating the TTBR.
> 
> We also need to make sure switch_ttbr_id() is part of the
> identity mapping. So move _end_boot past it.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v4:
>         - Don't modify setup_pagetables() as we don't handle arm32.
>         - Move the clearing of the boot page tables in an earlier patch
>         - Fix the numbering
> 
>     Changes in v2:
>         - Remove the arm32 changes. This will be addressed differently
>         - Re-instate the instruct cache flush. This is not strictly
>           necessary but kept it for safety.
>         - Use "dsb ish"  rather than "dsb sy".
> 
> 
>     TODO:
>         * Handle the case where the runtime Xen is loaded at a different
>           position for cache coloring. This will be dealt separately.
> ---
>  xen/arch/arm/arm64/head.S     | 50 +++++++++++++++++++++++------------
>  xen/arch/arm/arm64/mm.c       | 30 +++++++++++++++++++++
>  xen/arch/arm/include/asm/mm.h |  2 ++
>  xen/arch/arm/mm.c             |  2 --
>  4 files changed, 65 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 663f5813b12e..5efd442b24af 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -816,30 +816,46 @@ ENDPROC(fail)
>   * Switch TTBR
>   *
>   * x0    ttbr
> - *
> - * TODO: This code does not comply with break-before-make.
>   */
> -ENTRY(switch_ttbr)
> -        dsb   sy                     /* Ensure the flushes happen before
> -                                      * continuing */
> -        isb                          /* Ensure synchronization with previous
> -                                      * changes to text */
> -        tlbi   alle2                 /* Flush hypervisor TLB */
> -        ic     iallu                 /* Flush I-cache */
> -        dsb    sy                    /* Ensure completion of TLB flush */
> +ENTRY(switch_ttbr_id)
> +        /* 1) Ensure any previous read/write have completed */
> +        dsb    ish
> +        isb
> +
> +        /* 2) Turn off MMU */
> +        mrs    x1, SCTLR_EL2
> +        bic    x1, x1, #SCTLR_Axx_ELx_M
> +        msr    SCTLR_EL2, x1
> +        isb
> +
> +        /*
> +         * 3) Flush the TLBs.
> +         * See asm/arm64/flushtlb.h for the explanation of the sequence.
> +         */
> +        dsb   nshst
> +        tlbi  alle2
> +        dsb   nsh
> +        isb
> +
> +        /* 4) Update the TTBR */
> +        msr   TTBR0_EL2, x0
>          isb
> 
> -        msr    TTBR0_EL2, x0
> +        /*
> +         * 5) Flush I-cache
> +         * This should not be necessary but it is kept for safety.
> +         */
> +        ic     iallu
> +        isb
> 
> -        isb                          /* Ensure synchronization with previous
> -                                      * changes to text */
> -        tlbi   alle2                 /* Flush hypervisor TLB */
> -        ic     iallu                 /* Flush I-cache */
> -        dsb    sy                    /* Ensure completion of TLB flush */
> +        /* 6) Turn on the MMU */
> +        mrs   x1, SCTLR_EL2
> +        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
> +        msr   SCTLR_EL2, x1
>          isb
> 
>          ret
> -ENDPROC(switch_ttbr)
> +ENDPROC(switch_ttbr_id)
> 
>  #ifdef CONFIG_EARLY_PRINTK
>  /*
> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
> index 798ae93ad73c..2ede4e75ae33 100644
> --- a/xen/arch/arm/arm64/mm.c
> +++ b/xen/arch/arm/arm64/mm.c
> @@ -120,6 +120,36 @@ void update_identity_mapping(bool enable)
>      BUG_ON(rc);
>  }
> 
> +extern void switch_ttbr_id(uint64_t ttbr);
> +
> +typedef void (switch_ttbr_fn)(uint64_t ttbr);
> +
> +void __init switch_ttbr(uint64_t ttbr)
> +{
> +    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
Shouldn't id_addr be of type paddr_t?

> +    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
> +    lpae_t pte;
> +
> +    /* Enable the identity mapping in the boot page tables */
> +    update_identity_mapping(true);
Could you please add an empty line here?
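
For context, the quote above trims the rest of the helper. Going by the
commit message ("create a helper on top to temporarily map the identity
mapping and call switch_ttbr_id() via the identity address"), the calling
pattern is presumably along these lines. A simplified sketch, reusing the
names from the quoted hunk:

    void __init switch_ttbr(uint64_t ttbr)
    {
        vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
        switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;

        /* Enable the identity mapping in the boot page tables */
        update_identity_mapping(true);

        /* Call switch_ttbr_id() through its 1:1 alias */
        fn(ttbr);

        /* The identity mapping is no longer needed */
        update_identity_mapping(false);
    }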

~Michal


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
  2023-01-16  8:46   ` Michal Orzel
@ 2023-01-16  9:29     ` Julien Grall
  2023-01-16 10:59       ` Michal Orzel
  0 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-16  9:29 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 16/01/2023 08:46, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 13/01/2023 11:11, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Xen is currently not fully compliant with the Arm Arm because it will
>> switch the TTBR with the MMU on.
>>
>> In order to be compliant, we need to disable the MMU before
>> switching the TTBR. The implication is the page-tables should
>> contain an identity mapping of the code switching the TTBR.
>>
>> In most cases we expect Xen to be loaded in low memory. I am aware
>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>> To give us some slack, consider that Xen may be loaded in the first 2TB
>> of the physical address space.
>>
>> The memory layout is reshuffled to keep the first two slots of the zeroeth
> Should be "four slots" instead of "two".
> 
>> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
>> tweak of the boot code because XEN_VIRT_START cannot be used as an
>> immediate.
>>
>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>> loaded below 2TB.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ----
>>      Changes in v4:
>>          - Correct the documentation
>>          - The start address is 2TB, so slot0 is 4 not 2.
>>
>>      Changes in v2:
>>          - Reword the commit message
>>          - Load Xen at 2TB + 2MB
>>          - Update the documentation to reflect the new layout
>> ---
>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>   xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
>>   xen/arch/arm/mm.c                 | 11 +++++-----
>>   3 files changed, 31 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index 4a3f87117c83..663f5813b12e 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -607,7 +607,8 @@ create_page_tables:
>>            * need an additional 1:1 mapping, the virtual mapping will
>>            * suffice.
>>            */
>> -        cmp   x19, #XEN_VIRT_START
>> +        ldr   x0, =XEN_VIRT_START
>> +        cmp   x19, x0
>>           bne   1f
>>           ret
>>   1:
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 6c1b762e976d..c5d407a7495f 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -72,15 +72,12 @@
>>   #include <xen/page-size.h>
>>
>>   /*
>> - * Common ARM32 and ARM64 layout:
>> + * ARM32 layout:
>>    *   0  -   2M   Unmapped
>>    *   2M -   4M   Xen text, data, bss
>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>    *   6M -  10M   Early boot mapping of FDT
>> - *   10M - 12M   Livepatch vmap (if compiled in)
>> - *
>> - * ARM32 layout:
>> - *   0  -  12M   <COMMON>
>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>    *
>>    *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>> @@ -90,14 +87,22 @@
>>    *   2G -   4G   Domheap: on-demand-mapped
>>    *
>>    * ARM64 layout:
>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>> - *   0  -  12M   <COMMON>
>> + * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
> End address should be 0x1FFFFFFFFFF (one less f).
> 
>> + *  Reserved to identity map Xen
>> + *
>> + * 0x0000020000000000 - 0x000028fffffffff (512GB, L0 slot [4]
> End address should be 0x27FFFFFFFFF.
> 
>> + *  (Relative offsets)
>> + *   0  -   2M   Unmapped
>> + *   2M -   4M   Xen text, data, bss
>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>> + *   6M -  10M   Early boot mapping of FDT
>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>    *
>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>    *
>>    *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>>    *
>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
> Start address should be 0x28000000000.

I have updated all the addresses.

> 
> Not related to this patch:
> I took a look at config.h and spotted two things:
> 1) DIRECTMAP_SIZE calculation is incorrect. It is defined as (SLOT0_ENTRY_SIZE * (265-256))
> but it actually should be (SLOT0_ENTRY_SIZE * (266-256)) i.e. 10 slots and not 9. Due to this
> bug we actually support 4.5TB of direct-map and not 5TB.


> 
> 2) frametable information
> struct page_info is no longer 24B but 56B for arm64 and 32B for arm32.

The values were always wrong. I have an action in my todo list to look 
at it, but never got the time.

There are two problems with the current values:
   1) The size of the frametable is not big enough, as you pointed out below.
   2) The struct page_info could cross a cache line. We should decide 
whether we want to increase the size or attempt to reduce it.

> It looks like SUPPORT.md
> took this into account when stating that we support 12GB for arm32 and 2TB for arm64. However,
> this is also wrong as it does not take into account physical address compression. With PDX that
> is enabled by default we could fit tens of TB in 32GB frametable.
I don't understand your argument. Yes the PDX can compress, but it will 
compress non-RAM pages. So while I agree that this could cover tens of 
TB of physical address space, we will always be able to support a fixed 
amount of RAM.

> I think we want to get rid of
> comments like "Frametable: 24 bytes per page for 16GB of RAM" in favor of just "Frametable".

I would rather update the comments because we need a way to explain how 
we came up with the size.

> This is because the struct page_info size may change again
We could have a BUILD_BUG_ON() confirming the size of the page_info.

> and it is rather difficult to
> calculate the max RAM size supported with PDX enabled.
See above about the max RAM size.

> 
> If you want, I can push the patches for these issues.

Happy to review them.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
  2023-01-16  9:23   ` Michal Orzel
@ 2023-01-16  9:32     ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-16  9:32 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 16/01/2023 09:23, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 13/01/2023 11:11, Julien Grall wrote:
>> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
>> index 798ae93ad73c..2ede4e75ae33 100644
>> --- a/xen/arch/arm/arm64/mm.c
>> +++ b/xen/arch/arm/arm64/mm.c
>> @@ -120,6 +120,36 @@ void update_identity_mapping(bool enable)
>>       BUG_ON(rc);
>>   }
>>
>> +extern void switch_ttbr_id(uint64_t ttbr);
>> +
>> +typedef void (switch_ttbr_fn)(uint64_t ttbr);
>> +
>> +void __init switch_ttbr(uint64_t ttbr)
>> +{
>> +    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
> Shouldn't id_addr be of type paddr_t?

No because...

> 
>> +    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;

... here it will be used as a virtual address.

>> +    lpae_t pte;
>> +
>> +    /* Enable the identity mapping in the boot page tables */
>> +    update_identity_mapping(true);
> Could you please add an empty line here?

Sure.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
  2023-01-16  8:55     ` Julien Grall
@ 2023-01-16  9:32       ` Michal Orzel
  0 siblings, 0 replies; 52+ messages in thread
From: Michal Orzel @ 2023-01-16  9:32 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 16/01/2023 09:55, Julien Grall wrote:
> 
> 
> On 16/01/2023 08:14, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 13/01/2023 11:11, Julien Grall wrote:
>>> +/*
>>> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
>>> + *
>>> + * Clobbers r0 - r1
>>> + */
>>> +remove_temporary_mapping:
>>> +        /* r2:r3 := invalid page-table entry */
>>> +        mov   r2, #0
>>> +        mov   r3, #0
>>> +
>>> +        adr_l r0, boot_pgtable
>>> +        mov_w r1, TEMPORARY_XEN_VIRT_START
>>> +        get_table_slot r1, r1, 1     /* r1 := first slot */
>> Can't we just use TEMPORARY_AREA_FIRST_SLOT?
> 
> IMHO, it would make the code a bit more difficult to read because the
> connection between TEMPORARY_XEN_VIRT_START and
> TEMPORARY_AREA_FIRST_SLOT is not totally obvious.
> 
> So I would rather prefer if this stays like that.
> 
>>
>>> +        lsl   r1, r1, #3             /* r1 := first slot offset */
>>> +        strd  r2, r3, [r0, r1]
>>> +
>>> +        flush_xen_tlb_local r0
>>> +
>>> +        mov  pc, lr
>>> +ENDPROC(remove_temporary_mapping)
>>> +
>>>   /*
>>>    * Map the UART in the fixmap (when earlyprintk is used) and hook the
>>>    * fixmap table in the page tables.
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 87851e677701..6c1b762e976d 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -148,6 +148,20 @@
>>>   /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>>>   #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
>>>
>>> +/*
>>> + * The temporary area is overlapping with the domheap area. This may
>>> + * be used to create an alias of the first slot containing Xen mappings
>>> + * when turning on/off the MMU.
>>> + */
>>> +#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
>>> +
>>> +/* Calculate the address in the temporary area */
>>> +#define TEMPORARY_AREA_ADDR(addr)                           \
>>> +     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
>>> +      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
>> XEN_PT_LEVEL_{MASK/SHIFT} should be used when we do not know the level upfront.
>> Otherwise, no need for opencoding and you should use FIRST_MASK and FIRST_SHIFT.
> 
> We discussed in the past phasing out the use of FIRST_MASK and FIRST_SHIFT
> because the names are too generic. So for new code, we should use
> XEN_PT_LEVEL_{MASK/SHIFT}.
In that case:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
  2023-01-16  9:29     ` Julien Grall
@ 2023-01-16 10:59       ` Michal Orzel
  0 siblings, 0 replies; 52+ messages in thread
From: Michal Orzel @ 2023-01-16 10:59 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Julien,

On 16/01/2023 10:29, Julien Grall wrote:
> 
> 
> On 16/01/2023 08:46, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 13/01/2023 11:11, Julien Grall wrote:
>>>
>>>
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Xen is currently not fully compliant with the Arm Arm because it will
>>> switch the TTBR with the MMU on.
>>>
>>> In order to be compliant, we need to disable the MMU before
>>> switching the TTBR. The implication is the page-tables should
>>> contain an identity mapping of the code switching the TTBR.
>>>
>>> In most cases we expect Xen to be loaded in low memory. I am aware
>>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>>> To give us some slack, consider that Xen may be loaded in the first 2TB
>>> of the physical address space.
>>>
>>> The memory layout is reshuffled to keep the first two slots of the zeroeth
>> Should be "four slots" instead of "two".
>>
>>> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
>>> tweak of the boot code because XEN_VIRT_START cannot be used as an
>>> immediate.
>>>
>>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>>> loaded below 2TB.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> ----
>>>      Changes in v4:
>>>          - Correct the documentation
>>>          - The start address is 2TB, so slot0 is 4 not 2.
>>>
>>>      Changes in v2:
>>>          - Reword the commit message
>>>          - Load Xen at 2TB + 2MB
>>>          - Update the documentation to reflect the new layout
>>> ---
>>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>>   xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
>>>   xen/arch/arm/mm.c                 | 11 +++++-----
>>>   3 files changed, 31 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 4a3f87117c83..663f5813b12e 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -607,7 +607,8 @@ create_page_tables:
>>>            * need an additional 1:1 mapping, the virtual mapping will
>>>            * suffice.
>>>            */
>>> -        cmp   x19, #XEN_VIRT_START
>>> +        ldr   x0, =XEN_VIRT_START
>>> +        cmp   x19, x0
>>>           bne   1f
>>>           ret
>>>   1:
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 6c1b762e976d..c5d407a7495f 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -72,15 +72,12 @@
>>>   #include <xen/page-size.h>
>>>
>>>   /*
>>> - * Common ARM32 and ARM64 layout:
>>> + * ARM32 layout:
>>>    *   0  -   2M   Unmapped
>>>    *   2M -   4M   Xen text, data, bss
>>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>>    *   6M -  10M   Early boot mapping of FDT
>>> - *   10M - 12M   Livepatch vmap (if compiled in)
>>> - *
>>> - * ARM32 layout:
>>> - *   0  -  12M   <COMMON>
>>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>>    *
>>>    *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>> @@ -90,14 +87,22 @@
>>>    *   2G -   4G   Domheap: on-demand-mapped
>>>    *
>>>    * ARM64 layout:
>>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>>> - *   0  -  12M   <COMMON>
>>> + * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
>> End address should be 0x1FFFFFFFFFF (one less f).
>>
>>> + *  Reserved to identity map Xen
>>> + *
>>> + * 0x0000020000000000 - 0x000028fffffffff (512GB, L0 slot [4]
>> End address should be 0x27FFFFFFFFF.
>>
>>> + *  (Relative offsets)
>>> + *   0  -   2M   Unmapped
>>> + *   2M -   4M   Xen text, data, bss
>>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>> + *   6M -  10M   Early boot mapping of FDT
>>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>>    *
>>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>>    *
>>>    *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>>>    *
>>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>>> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
>> Start address should be 0x28000000000.
> 
> I have updated all the addresses.
Thanks, in that case you can add my:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

> 
>>
>> Not related to this patch:
>> I took a look at config.h and spotted two things:
>> 1) DIRECTMAP_SIZE calculation is incorrect. It is defined as (SLOT0_ENTRY_SIZE * (265-256))
>> but it actually should be (SLOT0_ENTRY_SIZE * (266-256)) i.e. 10 slots and not 9. Due to this
>> bug we actually support 4.5TB of direct-map and not 5TB.
> 
> 
>>
>> 2) frametable information
>> struct page_info is no longer 24B but 56B for arm64 and 32B for arm32.
> 
> The values were always wrong. I have an action in my todo list to look
> at it, but never got the time.
> 
> There are two problems with the current values:
>    1) The size of the frametable is not big enough as you pointed out below.
>    2) The struct page_info could cross a cache line. We should decide
> whether we want to increase the size or attempt to reduce it.
> 
>> It looks like SUPPORT.md
>> took this into account when stating that we support 12GB for arm32 and 2TB for arm64. However,
>> this is also wrong as it does not take into account physical address compression. With PDX, which
>> is enabled by default, we could fit tens of TB in a 32GB frametable.
> I don't understand your argument. Yes the PDX can compress, but it will
> only compress non-RAM pages. So while I agree that this could cover tens
> of TB of physical address space, the amount of RAM we can support will
> always stay fixed.
Right.
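
As a quick back-of-the-envelope check (assuming the 56B struct page_info
on arm64 and the 32GB frametable area), the fixed area indeed caps us at
roughly 2.3TB of RAM no matter how well PDX compresses:

#include <stdio.h>

int main(void)
{
    unsigned long long area  = 32ULL << 30;  /* 32GB frametable area */
    unsigned long long entry = 56;           /* sizeof(struct page_info) */
    unsigned long long pages = area / entry; /* max page_info entries */

    /* Each entry describes one 4KB page of RAM. */
    printf("max RAM: %llu GB\n", (pages * 4096ULL) >> 30); /* ~2340 GB */
    return 0;
}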

> 
>> I think we want to get rid of
>> comments like "Frametable: 24 bytes per page for 16GB of RAM" in favor of just "Frametable".
> 
> I would rather update the comments because we need a way to explain how
> we came up with the size.
> 
>> This is because the struct page_info size may change again.
> We could have a BUILD_BUG_ON() confirming the size of the page_info.
So, apart from fixing DIRECTMAP_SIZE, I would like to send a patch correcting
the frametable information in config.h. In this patch I'd take the opportunity
to add the following in setup_frametable_mappings:
- a BUILD_BUG_ON to check the size of struct page_info. For that, I could add
  a new macro, e.g. CONFIG_PAGE_INFO_SIZE, in config.h, set to 56 for arm64
  and 32 for arm32, to avoid ifdefery in the function itself.
- a check "if ( frametable_size >= FRAMETABLE_SIZE )" calling
  panic("RAM is too big to fit in a frametable area"), as we do not have any
  such check at the moment.
Something along the lines of the sketch below.
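
Rough sketch (names not final, e.g. the CONFIG_PAGE_INFO_SIZE spelling is
just a placeholder):

/* config.h */
#ifdef CONFIG_ARM_64
#define CONFIG_PAGE_INFO_SIZE 56
#else
#define CONFIG_PAGE_INFO_SIZE 32
#endif

/* setup_frametable_mappings() */
BUILD_BUG_ON(sizeof(struct page_info) != CONFIG_PAGE_INFO_SIZE);

if ( frametable_size >= FRAMETABLE_SIZE )
    panic("RAM is too big to fit in a frametable area\n");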

~Michal



* Re: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
  2023-01-13 10:11 ` [PATCH v4 05/14] xen/arm: Clean-up the memory layout Julien Grall
  2023-01-13 13:57   ` Henry Wang
@ 2023-01-24 19:30   ` Julien Grall
  1 sibling, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-24 19:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk, Michal Orzel

Hi,

On 13/01/2023 10:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> In a follow-up patch, the base address for the common mappings will
> vary between arm32 and arm64. To avoid any duplication, define
> every mapping in the common region from the previous one.
> 
> Take the opportunity to:
>      * add missing *_SIZE for FIXMAP_VIRT_* and XEN_VIRT_*
>      * switch to MB()/GB() to avoid hexadecimal (easier to read)
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

There was a context conflict with Michal's recent patch (see 
b2220f85256a). As this is minor, I have decided to handle it on commit.
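
For reference, "define every mapping in the common region from the
previous one" means the common region now chains its defines along these
lines (illustrative excerpt, see the patch for the exact values):

#define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))

#define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
#define FIXMAP_VIRT_SIZE        _AT(vaddr_t, MB(2))

#define BOOT_FDT_VIRT_START     (FIXMAP_VIRT_START + FIXMAP_VIRT_SIZE)
#define BOOT_FDT_VIRT_SIZE      _AT(vaddr_t, MB(4))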

Cheers,

-- 
Julien Grall



* Re: [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on
  2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
                   ` (13 preceding siblings ...)
  2023-01-13 10:11 ` [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables Julien Grall
@ 2023-01-24 19:35 ` Julien Grall
  14 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-24 19:35 UTC (permalink / raw)
  To: xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi,

On 13/01/2023 10:11, Julien Grall wrote:
> Julien Grall (14):
>    xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
>    xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB
>      flush by VA
>    xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
>    xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
>    xen/arm: Clean-up the memory layout
>    xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
>    xen/arm32: head: Jump to the runtime mapping in enable_mmu()
>    xen/arm32: head: Introduce an helper to flush the TLBs
>    xen/arm32: head: Remove restriction where to load Xen

I have committed up to this patch. I still need to go through the 
comments of the rest.

>    xen/arm32: head: Widen the use of the temporary mapping
>    xen/arm64: Rework the memory layout
>    xen/arm64: mm: Introduce helpers to prepare/enable/disable the
>      identity mapping
>    xen/arm64: mm: Rework switch_ttbr()
>    xen/arm64: smpboot: Directly switch to the runtime page-tables
> 

Cheers,

-- 
Julien Grall



* Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping
  2023-01-16  8:20   ` Michal Orzel
@ 2023-01-24 19:43     ` Julien Grall
  2023-01-27 19:19       ` Julien Grall
  0 siblings, 1 reply; 52+ messages in thread
From: Julien Grall @ 2023-01-24 19:43 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk



On 16/01/2023 08:20, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 13/01/2023 11:11, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, the temporary mapping is only used when the virtual
>> runtime region of Xen is clashing with the physical region.
>>
>> In follow-up patches, we will rework how secondary CPU bring-up works
>> and it will be convenient to use the fixmap area for accessing
>> the root page-table (it is per-cpu).
>>
>> Rework the code to use the temporary mapping when the Xen physical address
>> is not overlapping with the temporary mapping.
>>
>> This also has the advantage of simplifying the logic to identity map
>> Xen.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>
>> Even though this patch rewrites part of the previous patch, I decided
>> to keep them separate to help the review.
>>
>> The "folow-up patches" are still in draft at the moment. I still haven't
>> find a way to split them nicely and not require too much more work
>> in the coloring side.
>>
>> I have provided some medium-term goal in the cover letter.
>>
>>      Changes in v3:
>>          - Resolve conflicts after switching from "ldr rX, <label>" to
>>            "mov_w rX, <label>" in a previous patch
>>
>>      Changes in v2:
>>          - Patch added
>> ---
>>   xen/arch/arm/arm32/head.S | 82 +++++++--------------------------------
>>   1 file changed, 15 insertions(+), 67 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index 3800efb44169..ce858e9fc4da 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
>> @@ -459,7 +459,6 @@ ENDPROC(cpu_init)
>>   create_page_tables:
>>           /* Prepare the page-tables for mapping Xen */
>>           mov_w r0, XEN_VIRT_START
>> -        create_table_entry boot_pgtable, boot_second, r0, 1
>>           create_table_entry boot_second, boot_third, r0, 2
>>
>>           /* Setup boot_third: */
>> @@ -479,67 +478,37 @@ create_page_tables:
>>           cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
>>           blo   1b
>>
>> -        /*
>> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
>> -         * need an additional 1:1 mapping, the virtual mapping will
>> -         * suffice.
>> -         */
>> -        cmp   r9, #XEN_VIRT_START
>> -        moveq pc, lr
>> -
>>           /*
>>            * Setup the 1:1 mapping so we can turn the MMU on. Note that
>>            * only the first page of Xen will be part of the 1:1 mapping.
>> -         *
>> -         * In all the cases, we will link boot_third_id. So create the
>> -         * mapping in advance.
>>            */
>> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
>> +        create_table_entry boot_second_id, boot_third_id, r9, 2
>>           create_mapping_entry boot_third_id, r9, r9
>>
>>           /*
>> -         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
>> -         * then the 1:1 mapping will use its own set of page-tables from
>> -         * the second level.
>> +         * Find the first slot used. If the slot is not the same
>> +         * as XEN_TMP_FIRST_SLOT, then we will want to switch
> Do you mean TEMPORARY_AREA_FIRST_SLOT?

Yes. I have fixed it in my tree.

> 
>> +         * to the temporary mapping before jumping to the runtime
>> +         * virtual mapping.
>>            */
>>           get_table_slot r1, r9, 1     /* r1 := first slot */
>> -        cmp   r1, #XEN_FIRST_SLOT
>> -        beq   1f
>> -        create_table_entry boot_pgtable, boot_second_id, r9, 1
>> -        b     link_from_second_id
>> -
>> -1:
>> -        /*
>> -         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
>> -         * 1:1 mapping will use its own set of page-tables from the
>> -         * third level.
>> -         */
>> -        get_table_slot r1, r9, 2     /* r1 := second slot */
>> -        cmp   r1, #XEN_SECOND_SLOT
>> -        beq   virtphys_clash
>> -        create_table_entry boot_second, boot_third_id, r9, 2
>> -        b     link_from_third_id
>> +        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
>> +        bne   use_temporary_mapping
>>
>> -link_from_second_id:
>> -        create_table_entry boot_second_id, boot_third_id, r9, 2
>> -link_from_third_id:
>> -        /* Good news, we are not clashing with Xen virtual mapping */
>> +        mov_w r0, XEN_VIRT_START
>> +        create_table_entry boot_pgtable, boot_second, r0, 1
>>           mov   r12, #0                /* r12 := temporary mapping not created */
>>           mov   pc, lr
>>
>> -virtphys_clash:
>> +use_temporary_mapping:
>>           /*
>> -         * The identity map clashes with boot_third. Link boot_first_id and
>> -         * map Xen to a temporary mapping. See switch_to_runtime_mapping
>> -         * for more details.
>> +         * The identity mapping is not using the first slot
>> +         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
>> +         * See switch_to_runtime_mapping for more details.
>>            */
>> -        PRINT("- Virt and Phys addresses clash  -\r\n")
>>           PRINT("- Create temporary mapping -\r\n")
>>
>> -        /*
>> -         * This will override the link to boot_second in XEN_FIRST_SLOT.
>> -         * The page-tables are not live yet. So no need to use
>> -         * break-before-make.
>> -         */
>>           create_table_entry boot_pgtable, boot_second_id, r9, 1
>>           create_table_entry boot_second_id, boot_third_id, r9, 2
> Do we need to duplicate this if we just did the same in create_page_tables before branching to
> use_temporary_mapping?

Hmmm... Possibly not. I will give a try and let you know.

Cheers,

-- 
Julien Grall



* Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping
  2023-01-24 19:43     ` Julien Grall
@ 2023-01-27 19:19       ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-27 19:19 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi,

On 24/01/2023 19:43, Julien Grall wrote:
>>> -        /*
>>> -         * This will override the link to boot_second in 
>>> XEN_FIRST_SLOT.
>>> -         * The page-tables are not live yet. So no need to use
>>> -         * break-before-make.
>>> -         */
>>>           create_table_entry boot_pgtable, boot_second_id, r9, 1
>>>           create_table_entry boot_second_id, boot_third_id, r9, 2
>> Do we need to duplicate this if we just did the same in 
>> create_page_tables before branching to
>> use_temporary_mapping?
> 
> Hmmm... Possibly not. I will give a try and let you know.

I confirm this is not necessary. So I have removed the two lines.

Cheers,

-- 
Julien Grall



* Re: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
  2023-01-16  8:53   ` Michal Orzel
@ 2023-01-27 19:30     ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-27 19:30 UTC (permalink / raw)
  To: Michal Orzel, xen-devel
  Cc: Luca.Fancellu, Julien Grall, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Hi Michal,

On 16/01/2023 08:53, Michal Orzel wrote:
> On 13/01/2023 11:11, Julien Grall wrote:
>> +static void __init prepare_boot_identity_mapping(void)
>> +{
>> +    paddr_t id_addr = virt_to_maddr(_start);
>> +    lpae_t pte;
>> +    DECLARE_OFFSETS(id_offsets, id_addr);
>> +
>> +    /*
>> +     * We will be re-using the boot ID tables. They may not have been
>> +     * zeroed but they should be unlinked. So it is fine to use
>> +     * clear_page().
>> +     */
>> +    clear_page(boot_first_id);
>> +    clear_page(boot_second_id);
>> +    clear_page(boot_third_id);
>> +
>> +    if ( id_offsets[0] != 0 )
>> +        panic("Cannot handled ID mapping above 512GB\n");
> I might be lost but didn't we say before that we can load Xen in the first 2TB?
> Then, how does this check correspond to it?

I forgot to change the check after we decided to extend the reserved 
area from 512GB to 2TB. I will update it in the next version.
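
Most likely along these lines (a sketch; the exact constant name is
still to be decided):

    /* The reserved area now covers L0 slots [0..3], i.e. 2TB. */
    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
        panic("Cannot handle ID mapping above 2TB\n");

with IDENTITY_MAPPING_AREA_NR_L0 defined to 4 in config.h.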

Cheers,

-- 
Julien Grall



* Re: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables
  2023-01-16  9:06     ` Luca Fancellu
@ 2023-01-27 19:39       ` Julien Grall
  0 siblings, 0 replies; 52+ messages in thread
From: Julien Grall @ 2023-01-27 19:39 UTC (permalink / raw)
  To: Luca Fancellu
  Cc: Xen-devel, Julien Grall, Stefano Stabellini, Bertrand Marquis,
	Volodymyr Babchuk

On 16/01/2023 09:06, Luca Fancellu wrote:
> 
> Hi Julien,

Hi Luca,

>>
>> I’ve left the boards to test all night, so on Monday I will be 100% sure this series
>> is not introducing any issue.
> 
> The series passed the overnight tests on the neoverse board, raspberry pi 4, and the Juno board.

Thanks for testing!

Cheers,

-- 
Julien Grall



end of thread, other threads:[~2023-01-27 19:40 UTC | newest]

Thread overview: 52+ messages
-- links below jump to the message on this page --
2023-01-13 10:11 [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
2023-01-13 10:11 ` [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
2023-01-13 11:36   ` Henry Wang
2023-01-13 10:11 ` [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA Julien Grall
2023-01-13 13:22   ` Henry Wang
2023-01-13 17:56   ` Luca Fancellu
2023-01-16  8:36     ` Julien Grall
2023-01-13 10:11 ` [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush Julien Grall
2023-01-13 13:46   ` Henry Wang
2023-01-13 10:11 ` [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush Julien Grall
2023-01-13 13:53   ` Henry Wang
2023-01-13 10:11 ` [PATCH v4 05/14] xen/arm: Clean-up the memory layout Julien Grall
2023-01-13 13:57   ` Henry Wang
2023-01-24 19:30   ` Julien Grall
2023-01-13 10:11 ` [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>" Julien Grall
2023-01-13 10:45   ` Michal Orzel
2023-01-13 10:47     ` Julien Grall
2023-01-14  0:51   ` Henry Wang
2023-01-13 10:11 ` [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu() Julien Grall
2023-01-14  1:33   ` Henry Wang
2023-01-13 10:11 ` [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the TLBs Julien Grall
2023-01-13 10:46   ` Michal Orzel
2023-01-14  2:16   ` Henry Wang
2023-01-13 10:11 ` [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen Julien Grall
2023-01-13 14:58   ` Luca Fancellu
2023-01-16  8:43     ` Julien Grall
2023-01-16  8:14   ` Michal Orzel
2023-01-16  8:55     ` Julien Grall
2023-01-16  9:32       ` Michal Orzel
2023-01-13 10:11 ` [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping Julien Grall
2023-01-13 15:37   ` Luca Fancellu
2023-01-16  8:20   ` Michal Orzel
2023-01-24 19:43     ` Julien Grall
2023-01-27 19:19       ` Julien Grall
2023-01-13 10:11 ` [PATCH v4 11/14] xen/arm64: Rework the memory layout Julien Grall
2023-01-13 15:58   ` Luca Fancellu
2023-01-16  8:46   ` Michal Orzel
2023-01-16  9:29     ` Julien Grall
2023-01-16 10:59       ` Michal Orzel
2023-01-13 10:11 ` [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping Julien Grall
2023-01-13 16:26   ` Luca Fancellu
2023-01-16  8:53   ` Michal Orzel
2023-01-27 19:30     ` Julien Grall
2023-01-13 10:11 ` [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr() Julien Grall
2023-01-13 16:50   ` Luca Fancellu
2023-01-16  9:23   ` Michal Orzel
2023-01-16  9:32     ` Julien Grall
2023-01-13 10:11 ` [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables Julien Grall
2023-01-13 17:42   ` Luca Fancellu
2023-01-16  9:06     ` Luca Fancellu
2023-01-27 19:39       ` Julien Grall
2023-01-24 19:35 ` [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on Julien Grall
