xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm
@ 2019-09-17 18:12 Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used Julien Grall
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

Hi all,

This is part of the boot/memory rework for Xen on Arm, but not sent as
MM-PARTx as this is focusing on the boot code.

Similar to the memory code, the boot code does not follow the Arm Arm and
could lead to memory corruption or a TLB conflict abort. I am not aware
of any platform where Xen fails to boot because of this, yet it should be
fixed sooner rather than later.

While making the code more compliant, I have also taken the opportunity
to simplify the boot code and add more documentation.

After this series, the boot CPU and secondary CPU paths are mostly compliant
with the Arm Arm. The only non-compliant places I am aware of are:
    1) create_page_tables: Some rework is necessary to update the page-tables
       safely without the MMU on.
    2) The switches between boot and runtime page-tables (for both boot CPU
       and secondary CPUs) are not safe.

Both will be addressed in follow-up series. The boot code would also benefit
from another proofread for missing isb()/dsb().


For convenience I provided a branch based on staging:
   git://xenbits.xen.org/people/julieng/xen-unstable.git branch boot/v3

Cheers,

Julien Grall (8):
  xen/arm64: head: Remove 1:1 mapping as soon as it is not used
  xen/arm64: head: Rework and document setup_fixmap()
  xen/arm32: head: Remove 1:1 mapping as soon as it is not used
  xen/arm32: head: Rework and document setup_fixmap()
  xen/arm64: head: Introduce macros to create table and mapping entry
  xen/arm64: head: Use a page mapping for the 1:1 mapping in
    create_page_tables()
  xen/arm32: head: Introduce macros to create table and mapping entry
  xen/arm32: head: Use a page mapping for the 1:1 mapping in
    create_page_tables()

 xen/arch/arm/arm32/head.S | 324 +++++++++++++++++++++++++++++--------------
 xen/arch/arm/arm64/head.S | 347 +++++++++++++++++++++++++++++-----------------
 xen/arch/arm/mm.c         |   2 +
 3 files changed, 440 insertions(+), 233 deletions(-)

-- 
2.11.0



* [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-25 20:28   ` Stefano Stabellini
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 2/8] xen/arm64: head: Rework and document setup_fixmap() Julien Grall
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

The 1:1 mapping may clash with other parts of the Xen virtual memory
layout. At the moment, Xen is handling the clash by only creating a
mapping to the runtime virtual address before enabling the MMU.

The rest of the mappings (such as the fixmap) will be mapped after the
MMU is enabled. However, the code doing the mapping is not safe as it
replaces mappings without using the Break-Before-Make sequence.

As the 1:1 mapping can be anywhere in the memory, it is easier to remove
all the entries added as soon as the 1:1 mapping is not used rather than
adding the Break-Before-Make sequence everywhere.
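
For reference, the Break-Before-Make sequence that would otherwise be
required when replacing a valid entry looks roughly like the sketch below
(illustration only, not part of the patch; the barriers and TLB maintenance
are shown as comments since they are instructions, not C):

    #include <stdint.h>

    /* Replace a valid LPAE entry safely: break, flush, then make. */
    void bbm_replace(volatile uint64_t *entry, uint64_t new_desc)
    {
        *entry = 0;          /* break: write an invalid entry          */
                             /* dsb; tlbi <va>; dsb; isb               */
        *entry = new_desc;   /* make: write the new entry              */
                             /* dsb; isb                               */
    }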

It is difficult to track exactly where the 1:1 mapping was created
without a full rework of create_page_tables(). Instead, introduce a new
function remove_identity_mapping() that will look for the top-level entry
used by the 1:1 mapping and remove it.

The new function is only called for the boot CPU. Secondary CPUs will
switch directly to the runtime page-tables, so there is no need to
remove the 1:1 mapping. Note that this still doesn't make the secondary
CPU path safe, but it is not making it worse.

Note that the TLB flush sequence is the same sequence as described in
asm-arm/arm32/flushtlb.h, with a twist: per D5-2530 ARM DDI 0487D.a,
a dsb nsh is sufficient for a local flush. This part of the Arm Arm
was missed while reworking the header and therefore a more conservative
sequence was adopted there.
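
To illustrate the idea, here is a rough C sketch of the lookup (not part of
the patch; the constants are simplified stand-ins for the real Xen
definitions and the table symbols are passed in explicitly):

    #include <stdint.h>

    /* Simplified stand-ins (4KB granule, LPAE). */
    #define ZEROETH_SHIFT     39
    #define FIRST_SHIFT       30
    #define SECOND_SHIFT      21
    #define LPAE_ENTRY_MASK   0x1ffUL
    #define XEN_VIRT_START    0x00200000UL  /* illustrative value only */

    #define slot(va, shift)   (((uintptr_t)(va) >> (shift)) & LPAE_ENTRY_MASK)

    /*
     * Zap the single top-level entry that belongs exclusively to the 1:1
     * mapping of 'start'. Entries shared with the Xen mapping are left
     * untouched. In head.S this is followed by the local TLB flush
     * (dsb nshst; tlbi alle2; dsb nsh; isb).
     */
    void remove_identity_mapping(uint64_t *pgtable, uint64_t *first,
                                 uint64_t *second, uintptr_t start)
    {
        if ( slot(start, ZEROETH_SHIFT) != slot(XEN_VIRT_START, ZEROETH_SHIFT) )
            pgtable[slot(start, ZEROETH_SHIFT)] = 0;
        else if ( slot(start, FIRST_SHIFT) != slot(XEN_VIRT_START, FIRST_SHIFT) )
            first[slot(start, FIRST_SHIFT)] = 0;
        else if ( slot(start, SECOND_SHIFT) != slot(XEN_VIRT_START, SECOND_SHIFT) )
            second[slot(start, SECOND_SHIFT)] = 0;
    }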

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    It is very likely we will need to re-introduce the 1:1 mapping to cater
    for secondary CPU boot and suspend/resume. For now, the attempt is to make
    the boot CPU path fully Arm Arm compliant.

    Changes in v4:
        - Fix typo
        - Remove unnecessary comments
        - Update the commit message to mention the difference between
        the sequence described in tlbflush.h and the one used in the
        code.

    Changes in v3:
        - Avoid hardcoding slots

    Changes in v2:
        - s/ID map/1:1 mapping/
        - Rename remove_id_map() to remove_identity_mapping()
        - Add missing signed-off-by
---
 xen/arch/arm/arm64/head.S | 90 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 75 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index ba24b05fa2..4c9a69be63 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -33,6 +33,11 @@
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
+/* Convenience defines to get slot used by Xen mapping. */
+#define XEN_ZEROETH_SLOT    zeroeth_table_offset(XEN_VIRT_START)
+#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
+#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
+
 #define __HEAD_FLAG_PAGE_SIZE   ((PAGE_SHIFT - 10) / 2)
 
 #define __HEAD_FLAG_PHYS_BASE   1
@@ -312,6 +317,13 @@ real_start_efi:
         ldr   x0, =primary_switched
         br    x0
 primary_switched:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mapping
+         * afterwards.
+         */
+        bl    remove_identity_mapping
         bl    setup_fixmap
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -648,10 +660,67 @@ enable_mmu:
         ret
 ENDPROC(enable_mmu)
 
+/*
+ * Remove the 1:1 map from the page-tables. It is not easy to keep track
+ * where the 1:1 map was mapped, so we will look for the top-level entry
+ * exclusive to the 1:1 map and remove it.
+ *
+ * Inputs:
+ *   x19: paddr(start)
+ *
+ * Clobbers x0 - x1
+ */
+remove_identity_mapping:
+        /*
+         * Find the zeroeth slot used. Remove the entry from zeroeth
+         * table if the slot is not XEN_ZEROETH_SLOT.
+         */
+        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
+        cmp   x1, #XEN_ZEROETH_SLOT
+        beq   1f
+        /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */
+        ldr   x0, =boot_pgtable         /* x0 := root table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the first slot used. Remove the entry from the first
+         * table if the slot is not XEN_FIRST_SLOT.
+         */
+        lsr   x1, x19, #FIRST_SHIFT
+        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
+        cmp   x1, #XEN_FIRST_SLOT
+        beq   1f
+        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
+        ldr   x0, =boot_first           /* x0 := first table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the second slot used. Remove the entry from the second
+         * table if the slot is not XEN_SECOND_SLOT.
+         */
+        lsr   x1, x19, #SECOND_SHIFT
+        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := second slot */
+        cmp   x1, #XEN_SECOND_SLOT
+        beq   identity_mapping_removed
+        /* It is not in slot XEN_SECOND_SLOT, remove the entry. */
+        ldr   x0, =boot_second          /* x0 := second table */
+        str   xzr, [x0, x1, lsl #3]
+
+identity_mapping_removed:
+        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        ret
+ENDPROC(remove_identity_mapping)
+
 setup_fixmap:
-        /* Now we can install the fixmap and dtb mappings, since we
-         * don't need the 1:1 map any more */
-        dsb   sy
 #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
@@ -669,19 +738,10 @@ setup_fixmap:
         ldr   x1, =FIXMAP_ADDR(0)
         lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
         str   x2, [x4, x1]           /* Map it in the fixmap's slot */
-#endif
 
-        /*
-         * Flush the TLB in case the 1:1 mapping happens to clash with
-         * the virtual addresses used by the fixmap or DTB.
-         */
-        dsb   sy                     /* Ensure any page table updates made above
-                                      * have occurred. */
-
-        isb
-        tlbi  alle2
-        dsb   sy                     /* Ensure completion of TLB flush */
-        isb
+        /* Ensure any page table updates made above have occurred. */
+        dsb   nshst
+#endif
         ret
 ENDPROC(setup_fixmap)
 
-- 
2.11.0



* [Xen-devel] [PATCH v4 2/8] xen/arm64: head: Rework and document setup_fixmap()
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used Julien Grall
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

At the moment, the fixmap table is only hooked when earlyprintk is used.
This is fine today because in C land, the fixmap is not used by anyone
until the boot CPU switches to the runtime page-tables.

In the future, the boot CPU will not switch between page-tables to
avoid TLB incoherency. Thus, the fixmap table will always need to be
hooked before any use. Let's start doing it now in setup_fixmap().

Lastly, document the behavior and the main register usage within the
function.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - Fix typo in the commit message
        - Add Stefano's Acked-by

    Changes in v2:
        - Update the comment to reflect that we clobber x1 - x4 and not
        x0 - x1.
        - Add the list of input registers
        - s/ID map/1:1 mapping/
        - Reword the commit message
---
 xen/arch/arm/arm64/head.S | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 4c9a69be63..177cec4e45 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -720,8 +720,21 @@ identity_mapping_removed:
         ret
 ENDPROC(remove_identity_mapping)
 
+/*
+ * Map the UART in the fixmap (when earlyprintk is used) and hook the
+ * fixmap table in the page tables.
+ *
+ * The fixmap cannot be mapped in create_page_tables because it may
+ * clash with the 1:1 mapping.
+ *
+ * Inputs:
+ *   x20: Physical offset
+ *   x23: Early UART base physical address
+ *
+ * Clobbers x1 - x4
+ */
 setup_fixmap:
-#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
+#ifdef CONFIG_EARLY_PRINTK
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
         lsr   x2, x23, #THIRD_SHIFT
@@ -729,6 +742,7 @@ setup_fixmap:
         mov   x3, #PT_DEV_L3
         orr   x2, x2, x3             /* x2 := 4K dev map including UART */
         str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+#endif
 
         /* Map fixmap into boot_second */
         ldr   x4, =boot_second       /* x4 := vaddr (boot_second) */
@@ -741,7 +755,7 @@ setup_fixmap:
 
         /* Ensure any page table updates made above have occurred. */
         dsb   nshst
-#endif
+
         ret
 ENDPROC(setup_fixmap)
 
-- 
2.11.0



* [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 2/8] xen/arm64: head: Rework and document setup_fixmap() Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-25 20:33   ` Stefano Stabellini
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 4/8] xen/arm32: head: Rework and document setup_fixmap() Julien Grall
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

The 1:1 mapping may clash with other parts of the Xen virtual memory
layout. At the moment, Xen is handling the clash by only creating a
mapping to the runtime virtual address before enabling the MMU.

The rest of the mappings (such as the fixmap) will be mapped after the
MMU is enabled. However, the code doing the mapping is not safe as it
replaces mappings without using the Break-Before-Make sequence.

As the 1:1 mapping can be anywhere in the memory, it is easier to remove
all the entries added as soon as the 1:1 mapping is not used rather than
adding the Break-Before-Make sequence everywhere.

It is difficult to track exactly where the 1:1 mapping was created
without a full rework of create_page_tables(). Instead, introduce a new
function remove_identity_mapping() that will look for the top-level entry
used by the 1:1 mapping and remove it.

The new function is only called for the boot CPU. Secondary CPUs will
switch directly to the runtime page-tables, so there is no need to
remove the 1:1 mapping. Note that this still doesn't make the secondary
CPU path safe, but it is not making it worse.

Note that the TLB flush sequence is the same sequence as described in
asm-arm/arm32/flushtlb.h, with a twist: per G5-5532 ARM DDI 0487D.a,
a dsb nsh is sufficient for a local flush. Note the section is from the
AArch32 Armv8 spec; I wasn't able to find the exact same section in the
Armv7 spec, but it is hinted at, as local operations only apply to the
non-shareable domain. This was missed while reworking the header and
therefore a more conservative sequence was adopted there.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    It is very likely we will need to re-introduce the 1:1 mapping to cater
    for secondary CPU boot and suspend/resume. For now, the attempt is to make
    the boot CPU path fully Arm Arm compliant.

    Changes in v4:
        - Fix typo
        - Fix indentation
        - Remove unnecessary comments

    Changes in v3:
        - Remove unused label
        - Avoid hardcoding slots

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 84 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 67 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 999233452d..65b7e0d711 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -32,6 +32,10 @@
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)
 
+/* Convenience defines to get slot used by Xen mapping. */
+#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
+#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
+
 #if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
@@ -157,6 +161,13 @@ past_zImage:
         ldr   r0, =primary_switched
         mov   pc, r0
 primary_switched:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mapping
+         * afterwards.
+         */
+        bl    remove_identity_mapping
         bl    setup_fixmap
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -481,12 +492,61 @@ enable_mmu:
         mov   pc, lr
 ENDPROC(enable_mmu)
 
-setup_fixmap:
+/*
+ * Remove the 1:1 map from the page-tables. It is not easy to keep track
+ * where the 1:1 map was mapped, so we will look for the top-level entry
+ * exclusive to the 1:1 map and remove it.
+ *
+ * Inputs:
+ *   r9 : paddr(start)
+ *
+ * Clobbers r0 - r3
+ */
+remove_identity_mapping:
+        /* r2:r3 := invalid page-table entry */
+        mov   r2, #0x0
+        mov   r3, #0x0
         /*
-         * Now we can install the fixmap and dtb mappings, since we
-         * don't need the 1:1 map any more
+         * Find the first slot used. Remove the entry from the first
+         * table if the slot is not XEN_FIRST_SLOT.
          */
-        dsb
+        lsr   r1, r9, #FIRST_SHIFT
+        mov_w r0, LPAE_ENTRY_MASK
+        and   r1, r1, r0              /* r1 := first slot */
+        cmp   r1, #XEN_FIRST_SLOT
+        beq   1f
+        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
+        ldr   r0, =boot_pgtable      /* r0 := root table */
+        lsl   r1, r1, #3             /* r1 := Slot offset */
+        strd  r2, r3, [r0, r1]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the second slot used. Remove the entry from the second
+         * table if the slot is not XEN_SECOND_SLOT.
+         */
+        lsr   r1, r9, #SECOND_SHIFT
+        mov_w r0, LPAE_ENTRY_MASK
+        and   r1, r1, r0             /* r1 := second slot */
+        cmp   r1, #XEN_SECOND_SLOT
+        beq   identity_mapping_removed
+        /* It is not in slot XEN_SECOND_SLOT, remove the entry. */
+        ldr   r0, =boot_second       /* r0 := second table */
+        lsl   r1, r1, #3             /* r1 := Slot offset */
+        strd  r2, r3, [r0, r1]
+
+identity_mapping_removed:
+        /* See asm-arm/arm32/flushtlb.h for the explanation of the sequence. */
+        dsb   nshst
+        mcr   CP32(r0, TLBIALLH)
+        dsb   nsh
+        isb
+
+        mov   pc, lr
+ENDPROC(remove_identity_mapping)
+
+setup_fixmap:
 #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Add UART to the fixmap table */
         ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
@@ -496,7 +556,6 @@ setup_fixmap:
         orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
         mov   r3, #0x0
         strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
-1:
 
         /* Map fixmap into boot_second */
         ldr   r1, =boot_second       /* r1 := vaddr (boot_second) */
@@ -508,19 +567,10 @@ setup_fixmap:
         mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
         mov   r3, #0x0
         strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
-#endif
-
-        /*
-         * Flush the TLB in case the 1:1 mapping happens to clash with
-         * the virtual addresses used by the fixmap or DTB.
-         */
-        dsb                          /* Ensure any page table updates made above
-                                      * have occurred. */
 
-        isb
-        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
-        dsb                          /* Ensure completion of TLB flush */
-        isb
+        /* Ensure any page table updates made above have occurred. */
+        dsb   nshst
+#endif
         mov   pc, lr
 ENDPROC(setup_fixmap)
 
-- 
2.11.0



* [Xen-devel] [PATCH v4 4/8] xen/arm32: head: Rework and document setup_fixmap()
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
                   ` (2 preceding siblings ...)
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 5/8] xen/arm64: head: Introduce macros to create table and mapping entry Julien Grall
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

At the moment, the fixmap table is only hooked when earlyprintk is used.
This is fine today because in C land, the fixmap is not used by anyone
until the boot CPU switches to the runtime page-tables.

In the future, the boot CPU will not switch between page-tables to
avoid TLB incoherency. Thus, the fixmap table will always need to be
hooked before any use. Let's start doing it now in setup_fixmap().

Lastly, document the behavior and the main register usage within the
function.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - The unused label is now removed in the previous patch
        - Add Stefano's reviewed-by

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 65b7e0d711..f58d0fcb80 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -546,8 +546,21 @@ identity_mapping_removed:
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
 
+/*
+ * Map the UART in the fixmap (when earlyprintk is used) and hook the
+ * fixmap table in the page tables.
+ *
+ * The fixmap cannot be mapped in create_page_tables because it may
+ * clash with the 1:1 mapping.
+ *
+ * Inputs:
+ *   r10: Physical offset
+ *   r11: Early UART base physical address
+ *
+ * Clobbers r1 - r4
+ */
 setup_fixmap:
-#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK)
         /* Add UART to the fixmap table */
         ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
         lsr   r2, r11, #THIRD_SHIFT
@@ -556,6 +569,7 @@ setup_fixmap:
         orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
         mov   r3, #0x0
         strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+#endif
 
         /* Map fixmap into boot_second */
         ldr   r1, =boot_second       /* r1 := vaddr (boot_second) */
@@ -570,7 +584,7 @@ setup_fixmap:
 
         /* Ensure any page table updates made above have occurred. */
         dsb   nshst
-#endif
+
         mov   pc, lr
 ENDPROC(setup_fixmap)
 
-- 
2.11.0



* [Xen-devel] [PATCH v4 5/8] xen/arm64: head: Introduce macros to create table and mapping entry
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
                   ` (3 preceding siblings ...)
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 4/8] xen/arm32: head: Rework and document setup_fixmap() Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-25 20:14   ` Stefano Stabellini
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 6/8] xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

At the moment, any update to the boot pages is open-coded. This makes it
more difficult to understand the logic of a function, as each update
roughly requires 6 instructions.

To ease the readability, two new macros are introduced:
    - create_table_entry: Create a page-table entry in a given table.
    This can work at any level.
    - create_mapping_entry: Create a mapping entry in a given table.
    None of the users requires mapping at any level other than the 3rd
    (i.e. page granularity), so the macro only supports 3rd level
    mappings.

Furthermore, the two macros are capable of working independently of the
state of the MMU.

Lastly, take the opportunity to replace the open-coded version in
setup_fixmap() with the two new macros. The ones in create_page_tables()
will be replaced in a follow-up patch.
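
As a rough illustration of what the two macros write, here is a C sketch
(not part of the patch; the physical-offset handling and the exact
attribute encodings live in head.S and are passed in here as parameters):

    #include <stdint.h>

    #define LPAE_ENTRY_MASK   0x1ffUL
    #define THIRD_SHIFT       12              /* 4KB pages */
    #define PAGE_ALIGN_MASK   (~0xfffUL)      /* stand-in for THIRD_MASK */

    /* create_table_entry: point slot(virt) of ptbl at the next-level table. */
    void create_table_entry(uint64_t *ptbl, uint64_t tbl_paddr,
                            uintptr_t virt, unsigned int shift,
                            uint64_t pt_pt /* e.g. PT_PT */)
    {
        ptbl[(virt >> shift) & LPAE_ENTRY_MASK] = tbl_paddr | pt_pt;
    }

    /* create_mapping_entry: write a 3rd-level (4KB) mapping of phys. */
    void create_mapping_entry(uint64_t *ptbl, uintptr_t virt, uint64_t phys,
                              uint64_t type /* e.g. PT_MEM_L3 or PT_DEV_L3 */)
    {
        ptbl[(virt >> THIRD_SHIFT) & LPAE_ENTRY_MASK] =
            (phys & PAGE_ALIGN_MASK) | type;
    }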

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Fix typo
        - /tlb/ptlb/ in create_mapping_entry macro

    Changes in v3:
        - Patch added
---
 xen/arch/arm/arm64/head.S | 83 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 67 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 177cec4e45..2cce342217 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -492,6 +492,68 @@ cpu_init:
 ENDPROC(cpu_init)
 
 /*
+ * Macro to create a page table entry in \ptbl to \tbl
+ *
+ * ptbl:    table symbol where the entry will be created
+ * tbl:     table symbol to point to
+ * virt:    virtual address
+ * shift:   #imm page table shift
+ * tmp1:    scratch register
+ * tmp2:    scratch register
+ * tmp3:    scratch register
+ *
+ * Preserves \virt
+ * Clobbers \tmp1, \tmp2, \tmp3
+ *
+ * Also use x20 for the phys offset.
+ *
+ * Note that all parameters using registers should be distinct.
+ */
+.macro create_table_entry, ptbl, tbl, virt, shift, tmp1, tmp2, tmp3
+        lsr   \tmp1, \virt, #\shift
+        and   \tmp1, \tmp1, #LPAE_ENTRY_MASK/* \tmp1 := slot in \ptbl */
+
+        load_paddr \tmp2, \tbl
+        mov   \tmp3, #PT_PT                 /* \tmp3 := right for linear PT */
+        orr   \tmp3, \tmp3, \tmp2           /*          + \tbl paddr */
+
+        adr_l \tmp2, \ptbl
+
+        str   \tmp3, [\tmp2, \tmp1, lsl #3]
+.endm
+
+/*
+ * Macro to create a mapping entry in \ptbl to \phys. Only mapping in 3rd
+ * level table (i.e page granularity) is supported.
+ *
+ * ptbl:     table symbol where the entry will be created
+ * virt:    virtual address
+ * phys:    physical address (should be page aligned)
+ * tmp1:    scratch register
+ * tmp2:    scratch register
+ * tmp3:    scratch register
+ * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
+ *
+ * Preserves \virt, \phys
+ * Clobbers \tmp1, \tmp2, \tmp3
+ *
+ * Note that all parameters using registers should be distinct.
+ */
+.macro create_mapping_entry, ptbl, virt, phys, tmp1, tmp2, tmp3, type=PT_MEM_L3
+        and   \tmp3, \phys, #THIRD_MASK     /* \tmp3 := PAGE_ALIGNED(phys) */
+
+        lsr   \tmp1, \virt, #THIRD_SHIFT
+        and   \tmp1, \tmp1, #LPAE_ENTRY_MASK/* \tmp1 := slot in \ptbl */
+
+        mov   \tmp2, #\type                 /* \tmp2 := right for section PT */
+        orr   \tmp2, \tmp2, \tmp3           /*          + PAGE_ALIGNED(phys) */
+
+        adr_l \tmp3, \ptbl
+
+        str   \tmp2, [\tmp3, \tmp1, lsl #3]
+.endm
+
+/*
  * Rebuild the boot pagetable's first-level entries. The structure
  * is described in mm.c.
  *
@@ -731,28 +793,17 @@ ENDPROC(remove_identity_mapping)
  *   x20: Physical offset
  *   x23: Early UART base physical address
  *
- * Clobbers x1 - x4
+ * Clobbers x0 - x3
  */
 setup_fixmap:
 #ifdef CONFIG_EARLY_PRINTK
         /* Add UART to the fixmap table */
-        ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
-        lsr   x2, x23, #THIRD_SHIFT
-        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
-        mov   x3, #PT_DEV_L3
-        orr   x2, x2, x3             /* x2 := 4K dev map including UART */
-        str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+        ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
+        create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
 #endif
-
         /* Map fixmap into boot_second */
-        ldr   x4, =boot_second       /* x4 := vaddr (boot_second) */
-        load_paddr x2, xen_fixmap
-        mov   x3, #PT_PT
-        orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
-        ldr   x1, =FIXMAP_ADDR(0)
-        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
-        str   x2, [x4, x1]           /* Map it in the fixmap's slot */
-
+        ldr   x0, =FIXMAP_ADDR(0)
+        create_table_entry boot_second, xen_fixmap, x0, SECOND_SHIFT, x1, x2, x3
         /* Ensure any page table updates made above have occurred. */
         dsb   nshst
 
-- 
2.11.0



* [Xen-devel] [PATCH v4 6/8] xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables()
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
                   ` (4 preceding siblings ...)
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 5/8] xen/arm64: head: Introduce macros to create table and mapping entry Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-26  4:13   ` Stefano Stabellini
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 7/8] xen/arm32: head: Introduce macros to create table and mapping entry Julien Grall
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
  7 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

At the moment, the function create_page_tables() will use a 1GB/2MB
mapping for the identity mapping. As we don't know what is present
before and after Xen in memory, we may end up mapping
device/reserved-memory with cacheable attributes. This may result in
mismatched attributes, as other users may access the same region
differently.

To prevent any issues, we should only map the strict minimum in the
1:1 mapping. A check in xen.lds.S already guarantees anything
necessary for turning on the MMU fits in a page (at the moment 4K).

As only one page will be mapped for the 1:1 mapping, it is necessary
to pre-allocate a page for the 3rd level table.
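
The resulting logic for hooking the single-page 1:1 mapping can be sketched
in C as follows (illustration only, not part of the patch; the slot values
and addresses below are purely examples):

    #include <stdint.h>
    #include <stdio.h>

    #define ZEROETH_SHIFT     39
    #define FIRST_SHIFT       30
    #define SECOND_SHIFT      21
    #define LPAE_ENTRY_MASK   0x1ffUL
    #define XEN_VIRT_START    0x00200000UL   /* illustrative value only */

    #define slot(addr, shift) (((uintptr_t)(addr) >> (shift)) & LPAE_ENTRY_MASK)

    /* Illustrative slot values for XEN_VIRT_START above. */
    #define XEN_ZEROETH_SLOT  0
    #define XEN_FIRST_SLOT    0
    #define XEN_SECOND_SLOT   1

    /* Decide where the single-page 1:1 mapping of 'start' branches off. */
    const char *identity_map_path(uintptr_t start)
    {
        if ( start == XEN_VIRT_START )
            return "no 1:1 mapping needed (virt == phys)";
        if ( slot(start, ZEROETH_SHIFT) != XEN_ZEROETH_SLOT )
            return "boot_pgtable -> boot_first_id -> boot_second_id -> boot_third_id";
        if ( slot(start, FIRST_SHIFT) != XEN_FIRST_SLOT )
            return "boot_first -> boot_second_id -> boot_third_id";
        if ( slot(start, SECOND_SHIFT) != XEN_SECOND_SLOT )
            return "boot_second -> boot_third_id";
        return "virt/phys clash: cannot be handled yet";
    }

    int main(void)
    {
        /* e.g. Xen loaded at 2GB */
        printf("%s\n", identity_map_path(0x80000000UL));
        return 0;
    }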

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Don't pre-link the page-tables for the 1:1 mapping. Instead
        only link what's necessary.

    Changes in v3:
        - Patch added
---
 xen/arch/arm/arm64/head.S | 166 ++++++++++++++++++----------------------------
 xen/arch/arm/mm.c         |   2 +
 2 files changed, 68 insertions(+), 100 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 2cce342217..e5015f93a2 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -566,100 +566,17 @@ ENDPROC(cpu_init)
  *   x19: paddr(start)
  *   x20: phys offset
  *
- * Clobbers x0 - x4, x25
- *
- * Register usage within this function:
- *   x25: Identity map in place
+ * Clobbers x0 - x4
  */
 create_page_tables:
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   x19, #XEN_VIRT_START
-        cset  x25, eq                /* x25 := identity map in place, or not */
-
-        load_paddr x4, boot_pgtable
-
-        /* Setup boot_pgtable: */
-        load_paddr x1, boot_first
-
-        /* ... map boot_first in boot_pgtable[0] */
-        mov   x3, #PT_PT             /* x2 := table map of boot_first */
-        orr   x2, x1, x3             /*       + rights for linear PT */
-        str   x2, [x4, #0]           /* Map it in slot 0 */
-
-        /* ... map of paddr(start) in boot_pgtable+boot_first_id */
-        lsr   x1, x19, #ZEROETH_SHIFT/* Offset of base paddr in boot_pgtable */
-        cbz   x1, 1f                 /* It's in slot 0, map in boot_first
-                                      * or boot_second later on */
+        /* Prepare the page-tables for mapping Xen */
+        ldr   x0, =XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_first, x0, ZEROETH_SHIFT, x1, x2, x3
+        create_table_entry boot_first, boot_second, x0, FIRST_SHIFT, x1, x2, x3
+        create_table_entry boot_second, boot_third, x0, SECOND_SHIFT, x1, x2, x3
 
-        /*
-         * Level zero does not support superpage mappings, so we have
-         * to use an extra first level page in which we create a 1GB mapping.
-         */
-        load_paddr x2, boot_first_id
-
-        mov   x3, #PT_PT             /* x2 := table map of boot_first_id */
-        orr   x2, x2, x3             /*       + rights for linear PT */
-        str   x2, [x4, x1, lsl #3]
-
-        load_paddr x4, boot_first_id
-
-        lsr   x1, x19, #FIRST_SHIFT  /* x1 := Offset of base paddr in boot_first_id */
-        lsl   x2, x1, #FIRST_SHIFT   /* x2 := Base address for 1GB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section map */
-        orr   x2, x2, x3
-        and   x1, x1, #LPAE_ENTRY_MASK /* x1 := Slot offset */
-        str   x2, [x4, x1, lsl #3]   /* Mapping of paddr(start) */
-        mov   x25, #1                /* x25 := identity map now in place */
-
-1:      /* Setup boot_first: */
-        load_paddr x4, boot_first   /* Next level into boot_first */
-
-        /* ... map boot_second in boot_first[0] */
-        load_paddr x1, boot_second
-        mov   x3, #PT_PT             /* x2 := table map of boot_second */
-        orr   x2, x1, x3             /*       + rights for linear PT */
-        str   x2, [x4, #0]           /* Map it in slot 0 */
-
-        /* ... map of paddr(start) in boot_first */
-        cbnz  x25, 1f                /* x25 is set if already created */
-        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
-        and   x1, x2, #LPAE_ENTRY_MASK /* x1 := Slot to use */
-        cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
-
-        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section map */
-        orr   x2, x2, x3
-        str   x2, [x4, x1, lsl #3]   /* Create mapping of paddr(start)*/
-        mov   x25, #1                /* x25 := identity map now in place */
-
-1:      /* Setup boot_second: */
-        load_paddr x4, boot_second
-
-        /* ... map boot_third in boot_second[1] */
-        load_paddr x1, boot_third
-        mov   x3, #PT_PT             /* x2 := table map of boot_third */
-        orr   x2, x1, x3             /*       + rights for linear PT */
-        str   x2, [x4, #8]           /* Map it in slot 1 */
-
-        /* ... map of paddr(start) in boot_second */
-        cbnz  x25, 1f                /* x25 is set if already created */
-        lsr   x2, x19, #SECOND_SHIFT /* x2 := Offset of base paddr in boot_second */
-        and   x1, x2, #LPAE_ENTRY_MASK /* x1 := Slot to use */
-        cmp   x1, #1
-        b.eq  virtphys_clash         /* It's in slot 1, which we cannot handle */
-
-        lsl   x2, x2, #SECOND_SHIFT  /* Base address for 2MB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section map */
-        orr   x2, x2, x3
-        str   x2, [x4, x1, lsl #3]   /* Create mapping of paddr(start)*/
-        mov   x25, #1                /* x25 := identity map now in place */
-
-1:      /* Setup boot_third: */
-        load_paddr x4, boot_third
+        /* Map Xen */
+        adr_l x4, boot_third
 
         lsr   x2, x19, #THIRD_SHIFT  /* Base address for 4K mapping */
         lsl   x2, x2, #THIRD_SHIFT
@@ -674,21 +591,70 @@ create_page_tables:
         cmp   x1, #(LPAE_ENTRIES<<3) /* 512 entries per page */
         b.lt  1b
 
-        /* Defer fixmap and dtb mapping until after paging enabled, to
-         * avoid them clashing with the 1:1 mapping. */
+        /*
+         * If Xen is loaded at exactly XEN_VIRT_START then we don't
+         * need an additional 1:1 mapping, the virtual mapping will
+         * suffice.
+         */
+        cmp   x19, #XEN_VIRT_START
+        bne   1f
+        ret
+1:
+        /*
+         * Setup the 1:1 mapping so we can turn the MMU on. Note that
+         * only the first page of Xen will be part of the 1:1 mapping.
+         */
+
+        /*
+         * Find the zeroeth slot used. If the slot is not
+         * XEN_ZEROETH_SLOT, then the 1:1 mapping will use its own set of
+         * page-tables from the first level.
+         */
+        lsr   x0, x19, #ZEROETH_SHIFT   /* x0 := zeroeth slot */
+        cmp   x0, #XEN_ZEROETH_SLOT
+        beq   1f
+        create_table_entry boot_pgtable, boot_first_id, x19, ZEROETH_SHIFT, x0, x1, x2
+        b     link_from_first_id
+
+1:
+        /*
+         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
+         * then the 1:1 mapping will use its own set of page-tables from
+         * the second level.
+         */
+        lsr   x0, x19, #FIRST_SHIFT
+        and   x0, x0, #LPAE_ENTRY_MASK  /* x0 := first slot */
+        cmp   x0, #XEN_FIRST_SLOT
+        beq   1f
+        create_table_entry boot_first, boot_second_id, x19, FIRST_SHIFT, x0, x1, x2
+        b     link_from_second_id
 
-        /* boot pagetable setup complete */
+1:
+        /*
+         * Find the second slot used. If the slot is not XEN_SECOND_SLOT,
+         * then the 1:1 mapping will use its own set of page-tables from
+         * the third level. For slot XEN_SECOND_SLOT, Xen is not yet able
+         * to handle it.
+         */
+        lsr   x0, x19, #SECOND_SHIFT
+        and   x0, x0, #LPAE_ENTRY_MASK  /* x0 := second slot */
+        cmp   x0, #XEN_SECOND_SLOT
+        beq   virtphys_clash
+        create_table_entry boot_second, boot_third_id, x19, SECOND_SHIFT, x0, x1, x2
+        b     link_from_third_id
+
+link_from_first_id:
+        create_table_entry boot_first_id, boot_second_id, x19, FIRST_SHIFT, x0, x1, x2
+link_from_second_id:
+        create_table_entry boot_second_id, boot_third_id, x19, SECOND_SHIFT, x0, x1, x2
+link_from_third_id:
+        create_mapping_entry boot_third_id, x19, x19, x0, x1, x2
+        ret
 
-        cbnz  x25, 1f                /* Did we manage to create an identity mapping ? */
-        PRINT("Unable to build boot page tables - Failed to identity map Xen.\r\n")
-        b     fail
 virtphys_clash:
         /* Identity map clashes with boot_third, which we cannot handle yet */
         PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
         b     fail
-
-1:
-        ret
 ENDPROC(create_page_tables)
 
 /*
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 65552da4ba..72ffea7472 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -105,6 +105,8 @@ DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
 #ifdef CONFIG_ARM_64
 DEFINE_BOOT_PAGE_TABLE(boot_first);
 DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+DEFINE_BOOT_PAGE_TABLE(boot_third_id);
 #endif
 DEFINE_BOOT_PAGE_TABLE(boot_second);
 DEFINE_BOOT_PAGE_TABLE(boot_third);
-- 
2.11.0



* [Xen-devel] [PATCH v4 7/8] xen/arm32: head: Introduce macros to create table and mapping entry
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
                   ` (5 preceding siblings ...)
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 6/8] xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-25 20:22   ` Stefano Stabellini
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
  7 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

At the moment, any update to the boot pages is open-coded. This makes it
more difficult to understand the logic of a function, as each update
roughly requires 6 instructions.

To ease the readability, two new macros are introduced:
    - create_table_entry: Create a page-table entry in a given table.
    This can work at any level.
    - create_mapping_entry: Create a mapping entry in a given table.
    None of the users requires mapping at any level other than the 3rd
    (i.e. page granularity), so the macro only supports 3rd level
    mappings.

Unlike arm64, there is no easy way to have a PC-relative address within
the range +/- 4GB of the PC. In order to allow using the macros with the
MMU either on or off, the user needs to tell the state of the MMU.

Lastly, take the opportunity to replace the open-coded version in
setup_fixmap() with the two new macros. The ones in create_page_tables()
will be replaced in a follow-up patch.
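
The effect of the mmu parameter can be summarised with a small C sketch
(illustration only, not part of the patch):

    #include <stdint.h>

    /*
     * With the MMU off, the link-time (virtual) address of a symbol has to
     * be translated to its physical location by applying the physical
     * offset kept in r10. With the MMU on, the virtual address can be used
     * directly.
     */
    uintptr_t adr_l(uintptr_t sym_link_addr, uintptr_t phys_offset, int mmu_on)
    {
        return mmu_on ? sym_link_addr : sym_link_addr + phys_offset;
    }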

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    The adr_l hack is a bit ugly, but I can't find a nicer way to avoid
    code duplication and improve readability.

    Changes in v4:
        - Fix typo
        - s/tlb/ptlb/ in create_mapping_entry macro
        - Expand comment on top of addr_l macro
        - Re-order code in create_mapping_entry

    Changes in v3:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 111 ++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 92 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index f58d0fcb80..175f0c9760 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -50,6 +50,23 @@
 .endm
 
 /*
+ * There is no easy way to have a PC-relative address within the range
+ * +/- 4GB of the PC.
+ *
+ * This macro works around it by asking the user to tell whether the MMU
+ * has been turned on or not.
+ *
+ * When the MMU is turned off, we need to apply the physical offset
+ * (r10) in order to find the associated physical address.
+ */
+.macro adr_l, dst, sym, mmu
+        ldr   \dst, =\sym
+        .if \mmu == 0
+        add   \dst, \dst, r10
+        .endif
+.endm
+
+/*
  * Common register usage in this file:
  *   r0  -
  *   r1  -
@@ -342,6 +359,76 @@ cpu_init_done:
 ENDPROC(cpu_init)
 
 /*
+ * Macro to create a page table entry in \ptbl to \tbl
+ *
+ * ptbl:    table symbol where the entry will be created
+ * tbl:     table symbol to point to
+ * virt:    virtual address
+ * shift:   #imm page table shift
+ * mmu:     Is the MMU turned on/off. If not specified it will be off
+ *
+ * Preserves \virt
+ * Clobbers r1 - r4
+ *
+ * Also use r10 for the phys offset.
+ *
+ * Note that \virt should be in a register other than r1 - r4
+ */
+.macro create_table_entry, ptbl, tbl, virt, shift, mmu=0
+        lsr   r1, \virt, #\shift
+        mov_w r2, LPAE_ENTRY_MASK
+        and   r1, r1, r2             /* r1 := slot in \ptbl */
+        lsl   r1, r1, #3             /* r1 := slot offset in \ptbl */
+
+        ldr   r4, =\tbl
+        add   r4, r4, r10            /* r4 := paddr(\tbl) */
+
+        mov   r2, #PT_PT             /* r2:r3 := right for linear PT */
+        orr   r2, r2, r4             /*           + \tbl paddr */
+        mov   r3, #0
+
+        adr_l r4, \ptbl, \mmu
+
+        strd  r2, r3, [r4, r1]
+.endm
+
+/*
+ * Macro to create a mapping entry in \ptbl to \phys. Only mapping in 3rd
+ * level table (i.e page granularity) is supported.
+ *
+ * ptbl:     table symbol where the entry will be created
+ * virt:    virtual address
+ * phys:    physical address
+ * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
+ * mmu:     Is the MMU turned on/off. If not specified it will be off
+ *
+ * Preserves \virt, \phys
+ * Clobbers r1 - r4
+ *
+ * Also use r10 for the phys offset.
+ *
+ * Note that \virt and \phys should be in registers other than r1 - r4
+ * and be distinct.
+ */
+.macro create_mapping_entry, ptbl, virt, phys, type=PT_MEM_L3, mmu=0
+        mov_w r2, LPAE_ENTRY_MASK
+        lsr   r1, \virt, #THIRD_SHIFT
+        and   r1, r1, r2             /* r1 := slot in \ptbl */
+        lsl   r1, r1, #3             /* r1 := slot offset in \ptbl */
+
+        lsr   r4, \phys, #THIRD_SHIFT
+        lsl   r4, r4, #THIRD_SHIFT   /* r4 := PAGE_ALIGNED(phys) */
+
+        mov   r2, #\type             /* r2:r3 := right for section PT */
+        orr   r2, r2, r4             /*          + PAGE_ALIGNED(phys) */
+        mov   r3, #0
+
+        adr_l r4, \ptbl, \mmu
+
+        strd  r2, r3, [r4, r1]
+.endm
+
+/*
  * Rebuild the boot pagetable's first-level entries. The structure
  * is described in mm.c.
  *
@@ -557,31 +644,17 @@ ENDPROC(remove_identity_mapping)
  *   r10: Physical offset
  *   r11: Early UART base physical address
  *
- * Clobbers r1 - r4
+ * Clobbers r0 - r4
  */
 setup_fixmap:
 #if defined(CONFIG_EARLY_PRINTK)
         /* Add UART to the fixmap table */
-        ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
-        lsr   r2, r11, #THIRD_SHIFT
-        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
-        orr   r2, r2, #PT_UPPER(DEV_L3)
-        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
-        mov   r3, #0x0
-        strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+        ldr   r0, =EARLY_UART_VIRTUAL_ADDRESS
+        create_mapping_entry xen_fixmap, r0, r11, type=PT_DEV_L3, mmu=1
 #endif
-
         /* Map fixmap into boot_second */
-        ldr   r1, =boot_second       /* r1 := vaddr (boot_second) */
-        ldr   r2, =xen_fixmap
-        add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
-        orr   r2, r2, #PT_UPPER(PT)
-        orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
-        ldr   r4, =FIXMAP_ADDR(0)
-        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
-        mov   r3, #0x0
-        strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
-
+        mov_w r0, FIXMAP_ADDR(0)
+        create_table_entry boot_second, xen_fixmap, r0, SECOND_SHIFT, mmu=1
         /* Ensure any page table updates made above have occurred. */
         dsb   nshst
 
-- 
2.11.0



* [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables()
  2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
                   ` (6 preceding siblings ...)
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 7/8] xen/arm32: head: Introduce macros to create table and mapping entry Julien Grall
@ 2019-09-17 18:12 ` Julien Grall
  2019-09-26  4:24   ` Stefano Stabellini
  7 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2019-09-17 18:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini, Volodymyr Babchuk

At the moment, the function create_page_tables() will use a 1GB/2MB
mapping for the identity mapping. As we don't know what is present
before and after Xen in memory, we may end up mapping
device/reserved-memory with cacheable attributes. This may result in
mismatched attributes, as other users may access the same region
differently.

To prevent any issues, we should only map the strict minimum in the
1:1 mapping. A check in xen.lds.S already guarantees anything
necessary for turning on the MMU fits in a page (at the moment 4K).

As only one page will be mapped for the 1:1 mapping, it is necessary
to pre-allocate a page for the 3rd level table.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Changes in v4:
        - Use XEN_{FIRST, SECOND}_SLOT rather than hardcoded value
        - Don't pre-link the page-tables for the 1:1 mapping

    Changes in v3:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 121 +++++++++++++++++++---------------------------
 xen/arch/arm/mm.c         |   2 +-
 2 files changed, 50 insertions(+), 73 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 175f0c9760..7b5109db26 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -447,73 +447,13 @@ ENDPROC(cpu_init)
  *   r6 : Identity map in place
  */
 create_page_tables:
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   r9, #XEN_VIRT_START
-        moveq r6, #1                 /* r6 := identity map now in place */
-        movne r6, #0                 /* r6 := identity map not yet in place */
-
-        ldr   r4, =boot_pgtable
-        add   r4, r4, r10            /* r4 := paddr (boot_pagetable) */
-
-        /* Setup boot_pgtable: */
-        ldr   r1, =boot_second
-        add   r1, r1, r10            /* r1 := paddr (boot_second) */
-
-        /* ... map boot_second in boot_pgtable[0] */
-        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of boot_second */
-        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
-        mov   r3, #0x0
-        strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
-
-        /* ... map of paddr(start) in boot_pgtable */
-        lsrs  r1, r9, #FIRST_SHIFT   /* Offset of base paddr in boot_pgtable */
-        beq   1f                     /* If it is in slot 0 then map in boot_second
-                                      * later on */
-        lsl   r2, r1, #FIRST_SHIFT   /* Base address for 1GB mapping */
-        orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
-        orr   r2, r2, #PT_LOWER(MEM)
-        lsl   r1, r1, #3             /* r1 := Slot offset */
-        mov   r3, #0x0
-        strd  r2, r3, [r4, r1]       /* Mapping of paddr(start) */
-        mov   r6, #1                 /* r6 := identity map now in place */
-
-1:      /* Setup boot_second: */
-        ldr   r4, =boot_second
-        add   r4, r4, r10            /* r4 := paddr (boot_second) */
-
-        ldr   r1, =boot_third
-        add   r1, r1, r10            /* r1 := paddr (boot_third) */
-
-        /* ... map boot_third in boot_second[1] */
-        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of boot_third */
-        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
-        mov   r3, #0x0
-        strd  r2, r3, [r4, #8]       /* Map it in slot 1 */
-
-        /* ... map of paddr(start) in boot_second */
-        cmp   r6, #1                 /* r6 is set if already created */
-        beq   1f
-        lsr   r2, r9, #SECOND_SHIFT  /* Offset of base paddr in boot_second */
-        ldr   r3, =LPAE_ENTRY_MASK
-        and   r1, r2, r3
-        cmp   r1, #1
-        beq   virtphys_clash         /* It's in slot 1, which we cannot handle */
-
-        lsl   r2, r2, #SECOND_SHIFT  /* Base address for 2MB mapping */
-        orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
-        orr   r2, r2, #PT_LOWER(MEM)
-        mov   r3, #0x0
-        lsl   r1, r1, #3             /* r1 := Slot offset */
-        strd  r2, r3, [r4, r1]       /* Mapping of paddr(start) */
-        mov   r6, #1                 /* r6 := identity map now in place */
+        /* Prepare the page-tables for mapping Xen */
+        ldr   r0, =XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, FIRST_SHIFT
+        create_table_entry boot_second, boot_third, r0, SECOND_SHIFT
 
         /* Setup boot_third: */
-1:      ldr   r4, =boot_third
-        add   r4, r4, r10            /* r4 := paddr (boot_third) */
+        adr_l r4, boot_third, mmu=0
 
         lsr   r2, r9, #THIRD_SHIFT  /* Base address for 4K mapping */
         lsl   r2, r2, #THIRD_SHIFT
@@ -530,16 +470,53 @@ create_page_tables:
         blo   1b
 
         /*
-         * Defer fixmap and dtb mapping until after paging enabled, to
-         * avoid them clashing with the 1:1 mapping.
+         * If Xen is loaded at exactly XEN_VIRT_START then we don't
+         * need an additional 1:1 mapping, the virtual mapping will
+         * suffice.
          */
+        cmp   r9, #XEN_VIRT_START
+        moveq pc, lr
 
-        /* boot pagetable setup complete */
+1:
+        /*
+         * Setup the 1:1 mapping so we can turn the MMU on. Note that
+         * only the first page of Xen will be part of the 1:1 mapping.
+         */
+
+        /*
+         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
+         * then the 1:1 mapping will use its own set of page-tables from
+         * the second level.
+         */
+        lsr   r1, r9, #FIRST_SHIFT
+        mov_w r0, LPAE_ENTRY_MASK
+        and   r1, r1, r0              /* r1 := first slot */
+        cmp   r1, #XEN_FIRST_SLOT
+        beq   1f
+        create_table_entry boot_pgtable, boot_second_id, r9, FIRST_SHIFT
+        b     link_from_second_id
+
+1:
+        /*
+         * Find the second slot used. If the slot is not XEN_SECOND_SLOT,
+         * then the 1:1 mapping will use its own set of page-tables from
+         * the third level. For slot XEN_SECOND_SLOT, Xen is not yet able
+         * to handle it.
+         */
+        lsr   r1, r9, #SECOND_SHIFT
+        mov_w r0, LPAE_ENTRY_MASK
+        and   r1, r1, r0             /* r1 := second slot */
+        cmp   r1, #XEN_SECOND_SLOT
+        beq   virtphys_clash
+        create_table_entry boot_second, boot_third_id, r9, SECOND_SHIFT
+        b     link_from_third_id
+
+link_from_second_id:
+        create_table_entry boot_second_id, boot_third_id, r9, SECOND_SHIFT
+link_from_third_id:
+        create_mapping_entry boot_third_id, r9, r9
+        mov   pc, lr
 
-        cmp   r6, #1                /* Did we manage to create an identity mapping ? */
-        moveq pc, lr
-        PRINT("Unable to build boot page tables - Failed to identity map Xen.\r\n")
-        b     fail
 virtphys_clash:
         /* Identity map clashes with boot_third, which we cannot handle yet */
         PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 72ffea7472..9e0fdc39f9 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -105,9 +105,9 @@ DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
 #ifdef CONFIG_ARM_64
 DEFINE_BOOT_PAGE_TABLE(boot_first);
 DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
 DEFINE_BOOT_PAGE_TABLE(boot_second_id);
 DEFINE_BOOT_PAGE_TABLE(boot_third_id);
-#endif
 DEFINE_BOOT_PAGE_TABLE(boot_second);
 DEFINE_BOOT_PAGE_TABLE(boot_third);
 
-- 
2.11.0



* Re: [Xen-devel] [PATCH v4 5/8] xen/arm64: head: Introduce macros to create table and mapping entry
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 5/8] xen/arm64: head: Introduce macros to create table and mapping entry Julien Grall
@ 2019-09-25 20:14   ` Stefano Stabellini
  0 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2019-09-25 20:14 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

On Tue, 17 Sep 2019, Julien Grall wrote:
> At the moment, any update to the boot pages is open-coded. This makes
> it more difficult to understand the logic of a function, as each
> update roughly requires 6 instructions.
> 
> To ease the readability, two new macros are introduced:
>     - create_table_entry: Create a page-table entry in a given table.
>     This can work at any level.
>     - create_mapping_entry: Create a mapping entry in a given table.
>     None of the users needs to map at any level other than the 3rd
>     (i.e. page granularity), so the macro only supports 3rd level
>     mappings.
> 
> Furthermore, the two macros are capable of working independently of the
> state of the MMU.
> 
> Lastly, take the opportunity to replace the open-coded versions in
> setup_fixmap() with the two new macros. The ones in create_page_tables()
> will be replaced in a follow-up patch.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Fix typo
>         - /tlb/ptlb/ in create_mapping_entry macro
> 
>     Changes in v3:
>         - Patch added
> ---
>  xen/arch/arm/arm64/head.S | 83 ++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 67 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 177cec4e45..2cce342217 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -492,6 +492,68 @@ cpu_init:
>  ENDPROC(cpu_init)
>  
>  /*
> + * Macro to create a page table entry in \ptbl to \tbl
> + *
> + * ptbl:    table symbol where the entry will be created
> + * tbl:     table symbol to point to
> + * virt:    virtual address
> + * shift:   #imm page table shift
> + * tmp1:    scratch register
> + * tmp2:    scratch register
> + * tmp3:    scratch register
> + *
> + * Preserves \virt
> + * Clobbers \tmp1, \tmp2, \tmp3
> + *
> + * Also use x20 for the phys offset.
> + *
> + * Note that all parameters using registers should be distinct.
> + */
> +.macro create_table_entry, ptbl, tbl, virt, shift, tmp1, tmp2, tmp3
> +        lsr   \tmp1, \virt, #\shift
> +        and   \tmp1, \tmp1, #LPAE_ENTRY_MASK/* \tmp1 := slot in \tlb */
> +
> +        load_paddr \tmp2, \tbl
> +        mov   \tmp3, #PT_PT                 /* \tmp3 := right for linear PT */
> +        orr   \tmp3, \tmp3, \tmp2           /*          + \tlb paddr */
> +
> +        adr_l \tmp2, \ptbl
> +
> +        str   \tmp3, [\tmp2, \tmp1, lsl #3]
> +.endm
> +
> +/*
> + * Macro to create a mapping entry in \tbl to \phys. Only mapping in 3rd
> + * level table (i.e page granularity) is supported.
> + *
> + * ptbl:     table symbol where the entry will be created
> + * virt:    virtual address
> + * phys:    physical address (should be page aligned)
> + * tmp1:    scratch register
> + * tmp2:    scratch register
> + * tmp3:    scratch register
> + * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
> + *
> + * Preserves \virt, \phys
> + * Clobbers \tmp1, \tmp2, \tmp3
> + *
> + * Note that all parameters using registers should be distinct.
> + */
> +.macro create_mapping_entry, ptbl, virt, phys, tmp1, tmp2, tmp3, type=PT_MEM_L3
> +        and   \tmp3, \phys, #THIRD_MASK     /* \tmp3 := PAGE_ALIGNED(phys) */
> +
> +        lsr   \tmp1, \virt, #THIRD_SHIFT
> +        and   \tmp1, \tmp1, #LPAE_ENTRY_MASK/* \tmp1 := slot in \tlb */
> +
> +        mov   \tmp2, #\type                 /* \tmp2 := right for section PT */
> +        orr   \tmp2, \tmp2, \tmp3           /*          + PAGE_ALIGNED(phys) */
> +
> +        adr_l \tmp3, \ptbl
> +
> +        str   \tmp2, [\tmp3, \tmp1, lsl #3]
> +.endm
> +
> +/*
>   * Rebuild the boot pagetable's first-level entries. The structure
>   * is described in mm.c.
>   *
> @@ -731,28 +793,17 @@ ENDPROC(remove_identity_mapping)
>   *   x20: Physical offset
>   *   x23: Early UART base physical address
>   *
> - * Clobbers x1 - x4
> + * Clobbers x0 - x3
>   */
>  setup_fixmap:
>  #ifdef CONFIG_EARLY_PRINTK
>          /* Add UART to the fixmap table */
> -        ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
> -        lsr   x2, x23, #THIRD_SHIFT
> -        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
> -        mov   x3, #PT_DEV_L3
> -        orr   x2, x2, x3             /* x2 := 4K dev map including UART */
> -        str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> +        ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
> +        create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
>  #endif
> -
>          /* Map fixmap into boot_second */
> -        ldr   x4, =boot_second       /* x4 := vaddr (boot_second) */
> -        load_paddr x2, xen_fixmap
> -        mov   x3, #PT_PT
> -        orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
> -        ldr   x1, =FIXMAP_ADDR(0)
> -        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
> -        str   x2, [x4, x1]           /* Map it in the fixmap's slot */
> -
> +        ldr   x0, =FIXMAP_ADDR(0)
> +        create_table_entry boot_second, xen_fixmap, x0, SECOND_SHIFT, x1, x2, x3
>          /* Ensure any page table updates made above have occurred. */
>          dsb   nshst
>  
> -- 
> 2.11.0
> 
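
For readers more comfortable with C, the two assembly macros reviewed above boil down to computing a slot index and storing an attribute-tagged physical address into it. The sketch below is not Xen code: the attribute values are placeholders (the real PT_PT/PT_MEM_L3 encodings live in head.S) and the addresses in main() are invented.

    #include <stdint.h>

    #define LPAE_ENTRY_MASK  0x1ffULL
    #define THIRD_SHIFT      12
    #define PAGE_MASK        (~0xfffULL)   /* stands in for THIRD_MASK */
    #define PT_PT            0x3ULL        /* placeholder attribute bits */
    #define PT_MEM_L3        0x7ULL        /* placeholder attribute bits */

    typedef uint64_t lpae_t;

    /* create_table_entry: make the slot for 'virt' in 'ptbl' point at the
     * next-level table located at physical address 'tbl_paddr'. */
    static void create_table_entry(lpae_t *ptbl, uint64_t tbl_paddr,
                                   uint64_t virt, unsigned int shift)
    {
        ptbl[(virt >> shift) & LPAE_ENTRY_MASK] = tbl_paddr | PT_PT;
    }

    /* create_mapping_entry: install a 3rd-level (4K) mapping of 'virt' to 'phys'. */
    static void create_mapping_entry(lpae_t *ptbl, uint64_t virt, uint64_t phys,
                                     lpae_t type)
    {
        ptbl[(virt >> THIRD_SHIFT) & LPAE_ENTRY_MASK] = (phys & PAGE_MASK) | type;
    }

    int main(void)
    {
        static lpae_t boot_second[512], xen_fixmap[512];

        /* Mirror setup_fixmap(): link xen_fixmap into boot_second, then map a
         * (made-up) UART page into the fixmap. Addresses are illustrative. */
        create_table_entry(boot_second, (uint64_t)(uintptr_t)xen_fixmap,
                           0x00200000ULL, 21 /* SECOND_SHIFT */);
        create_mapping_entry(xen_fixmap, 0x00300000ULL, 0x1c090000ULL, PT_MEM_L3);
        return 0;
    }

The only real difference between the two macros is which shift selects the slot, and whether the stored value points at another table (PT_PT) or at the final page (PT_MEM_L3/PT_DEV_L3).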

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 7/8] xen/arm32: head: Introduce macros to create table and mapping entry
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 7/8] xen/arm32: head: Introduce macros to create table and mapping entry Julien Grall
@ 2019-09-25 20:22   ` Stefano Stabellini
  0 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2019-09-25 20:22 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

On Tue, 17 Sep 2019, Julien Grall wrote:
> At the moment, any update to the boot pages is open-coded. This makes
> it more difficult to understand the logic of a function, as each
> update roughly requires 6 instructions.
> 
> To ease the readability, two new macros are introduced:
>     - create_table_entry: Create a page-table entry in a given table.
>     This can work at any level.
>     - create_mapping_entry: Create a mapping entry in a given table.
>     None of the users needs to map at any level other than the 3rd
>     (i.e. page granularity), so the macro only supports 3rd level
>     mappings.
> 
> Unlike arm64, there is no easy way to have a PC-relative address within
> the range +/-4GB. In order to be able to use the macro in contexts with
> the MMU on or off, the user needs to tell the state of the MMU.
> 
> Lastly, take the opportunity to replace the open-coded versions in
> setup_fixmap() with the two new macros. The ones in create_page_tables()
> will be replaced in a follow-up patch.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     The adr_l hack is a bit ugly, but I can't find a nicer way to avoid
>     code duplication and improve readability.
> 
>     Changes in v4:
>         - Fix typo
>         - s/tlb/ptlb/ in create_mapping_entry macro
>         - Expand comment on top of addr_l macro
>         - Re-order code in create_mapping_entry
> 
>     Changes in v3:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S | 111 ++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 92 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index f58d0fcb80..175f0c9760 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -50,6 +50,23 @@
>  .endm
>  
>  /*
> + * There are no easy way to have a PC relative address within the range
> + * +/- 4GB of the PC.
> + *
> + * This macro workaround it by asking the user to tell whether the MMU
> + * has been turned on or not.
> + *
> + * When the MMU is turned off, we need to apply the physical offset
> + * (r10) in order to find the associated physical address.
> + */
> +.macro adr_l, dst, sym, mmu
> +        ldr   \dst, =\sym
> +        .if \mmu == 0
> +        add   \dst, \dst, r10
> +        .endif
> +.endm
> +
> +/*
>   * Common register usage in this file:
>   *   r0  -
>   *   r1  -
> @@ -342,6 +359,76 @@ cpu_init_done:
>  ENDPROC(cpu_init)
>  
>  /*
> + * Macro to create a page table entry in \ptbl to \tbl
> + *
> + * ptbl:    table symbol where the entry will be created
> + * tbl:     table symbol to point to
> + * virt:    virtual address
> + * shift:   #imm page table shift
> + * mmu:     Is the MMU turned on/off. If not specified it will be off
> + *
> + * Preserves \virt
> + * Clobbers r1 - r4
> + *
> + * Also use r10 for the phys offset.
> + *
> + * Note that \virt should be in a register other than r1 - r4
> + */
> +.macro create_table_entry, ptbl, tbl, virt, shift, mmu=0
> +        lsr   r1, \virt, #\shift
> +        mov_w r2, LPAE_ENTRY_MASK
> +        and   r1, r1, r2             /* r1 := slot in \tlb */
> +        lsl   r1, r1, #3             /* r1 := slot offset in \tlb */
> +
> +        ldr   r4, =\tbl
> +        add   r4, r4, r10            /* r4 := paddr(\tlb) */
> +
> +        mov   r2, #PT_PT             /* r2:r3 := right for linear PT */
> +        orr   r2, r2, r4             /*           + \tlb paddr */
> +        mov   r3, #0
> +
> +        adr_l r4, \ptbl, \mmu
> +
> +        strd  r2, r3, [r4, r1]
> +.endm
> +
> +/*
> + * Macro to create a mapping entry in \tbl to \paddr. Only mapping in 3rd
> + * level table (i.e page granularity) is supported.
> + *
> + * ptbl:     table symbol where the entry will be created
> + * virt:    virtual address
> + * phys:    physical address
> + * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
> + * mmu:     Is the MMU turned on/off. If not specified it will be off
> + *
> + * Preserves \virt, \phys
> + * Clobbers r1 - r4
> + *
> + * * Also use r10 for the phys offset.
> + *
> + * Note that \virt and \paddr should be in other registers than r1 - r4
> + * and be distinct.
> + */
> +.macro create_mapping_entry, ptbl, virt, phys, type=PT_MEM_L3, mmu=0
> +        mov_w r2, LPAE_ENTRY_MASK
> +        lsr   r1, \virt, #THIRD_SHIFT
> +        and   r1, r1, r2             /* r1 := slot in \tlb */
> +        lsl   r1, r1, #3             /* r1 := slot offset in \tlb */
> +
> +        lsr   r4, \phys, #THIRD_SHIFT
> +        lsl   r4, r4, #THIRD_SHIFT   /* r4 := PAGE_ALIGNED(phys) */
> +
> +        mov   r2, #\type             /* r2:r3 := right for section PT */
> +        orr   r2, r2, r4             /*          + PAGE_ALIGNED(phys) */
> +        mov   r3, #0
> +
> +        adr_l r4, \ptbl, \mmu
> +
> +        strd  r2, r3, [r4, r1]
> +.endm
> +
> +/*
>   * Rebuild the boot pagetable's first-level entries. The structure
>   * is described in mm.c.
>   *
> @@ -557,31 +644,17 @@ ENDPROC(remove_identity_mapping)
>   *   r10: Physical offset
>   *   r11: Early UART base physical address
>   *
> - * Clobbers r1 - r4
> + * Clobbers r0 - r4
>   */
>  setup_fixmap:
>  #if defined(CONFIG_EARLY_PRINTK)
>          /* Add UART to the fixmap table */
> -        ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
> -        lsr   r2, r11, #THIRD_SHIFT
> -        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
> -        orr   r2, r2, #PT_UPPER(DEV_L3)
> -        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> -        mov   r3, #0x0
> -        strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> +        ldr   r0, =EARLY_UART_VIRTUAL_ADDRESS
> +        create_mapping_entry xen_fixmap, r0, r11, type=PT_DEV_L3, mmu=1
>  #endif
> -
>          /* Map fixmap into boot_second */
> -        ldr   r1, =boot_second       /* r1 := vaddr (boot_second) */
> -        ldr   r2, =xen_fixmap
> -        add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
> -        orr   r2, r2, #PT_UPPER(PT)
> -        orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
> -        ldr   r4, =FIXMAP_ADDR(0)
> -        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
> -        mov   r3, #0x0
> -        strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
> -
> +        mov_w r0, FIXMAP_ADDR(0)
> +        create_table_entry boot_second, xen_fixmap, r0, SECOND_SHIFT, mmu=1
>          /* Ensure any page table updates made above have occurred. */
>          dsb   nshst
>  
> -- 
> 2.11.0
> 
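
The mmu parameter of adr_l is easiest to see as a conditional rebase. A tiny sketch, with made-up addresses, of what the macro effectively computes:

    #include <stdint.h>

    /*
     * phys_offset plays the role of r10 in head.S: paddr(start) - vaddr(start).
     * With the MMU off the CPU issues physical addresses, so a link-time
     * virtual address must be rebased by that offset; with the MMU on it is
     * used as-is.
     */
    static uint32_t adr_l(uint32_t sym_vaddr, uint32_t phys_offset, int mmu_on)
    {
        return mmu_on ? sym_vaddr : sym_vaddr + phys_offset;
    }

    int main(void)
    {
        /* Illustrative: Xen linked at 2MB but loaded at 0x80200000. */
        uint32_t off = 0x80200000u - 0x00200000u;
        return adr_l(0x00200000u, off, 0) == 0x80200000u ? 0 : 1;
    }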

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used Julien Grall
@ 2019-09-25 20:28   ` Stefano Stabellini
  0 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2019-09-25 20:28 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

On Tue, 17 Sep 2019, Julien Grall wrote:
> The 1:1 mapping may clash with other parts of the Xen virtual memory
> layout. At the moment, Xen is handling the clash by only creating a
> mapping to the runtime virtual address before enabling the MMU.
> 
> The rest of the mappings (such as the fixmap) will be mapped after the
> MMU is enabled. However, the code doing the mapping is not safe, as it
> replaces mappings without using the Break-Before-Make sequence.
> 
> As the 1:1 mapping can be anywhere in memory, it is easier to remove
> all the entries added as soon as the 1:1 mapping is not used rather than
> adding the Break-Before-Make sequence everywhere.
> 
> It is difficult to track where exactly the 1:1 mapping was created
> without a full rework of create_page_tables(). Instead, introduce a new
> function, remove_identity_mapping(), that will look for the top-level entry
> used by the 1:1 mapping and remove it.
> 
> The new function is only called for the boot CPU. Secondary CPUs will
> switch directly to the runtime page-tables, so there is no need to
> remove the 1:1 mapping. Note that this still doesn't make the secondary
> CPUs path safe, but it does not make it worse.
> 
> Note that the TLB flush sequence is the same sequence as described in
> asm-arm/arm32/flushtlb.h, with a twist. Per D5-2530 ARM DDI 0487D.a,
> a dsb nsh is sufficient for local flush. This part of the Arm Arm
> was missed while reworking the header and therefore a more conservative
> way were adopted.

NIT: was adopted

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
>     It is very likely we will need to re-introduce the 1:1 mapping to cater
>     for secondary CPU boot and suspend/resume. For now, the attempt is to make
>     the boot CPU path fully Arm Arm compliant.
> 
>     Changes in v4:
>         - Fix typo
>         - Remove unnecessary comments
>         - Update the commit message to mention the difference between
>         the sequence described in tlbflush.h and the one used in the
>         code.
> 
>     Changes in v3:
>         - Avoid hardcoding slots
> 
>     Changes in v2:
>         - s/ID map/1:1 mapping/
>         - Rename remove_id_map() to remove_identity_mapping()
>         - Add missing signed-off-by
> ---
>  xen/arch/arm/arm64/head.S | 90 +++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 75 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index ba24b05fa2..4c9a69be63 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -33,6 +33,11 @@
>  #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
>  #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
>  
> +/* Convenience defines to get slot used by Xen mapping. */
> +#define XEN_ZEROETH_SLOT    zeroeth_table_offset(XEN_VIRT_START)
> +#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
> +#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
> +
>  #define __HEAD_FLAG_PAGE_SIZE   ((PAGE_SHIFT - 10) / 2)
>  
>  #define __HEAD_FLAG_PHYS_BASE   1
> @@ -312,6 +317,13 @@ real_start_efi:
>          ldr   x0, =primary_switched
>          br    x0
>  primary_switched:
> +        /*
> +         * The 1:1 map may clash with other parts of the Xen virtual memory
> +         * layout. As it is not used anymore, remove it completely to
> +         * avoid having to worry about replacing existing mapping
> +         * afterwards.
> +         */
> +        bl    remove_identity_mapping
>          bl    setup_fixmap
>  #ifdef CONFIG_EARLY_PRINTK
>          /* Use a virtual address to access the UART. */
> @@ -648,10 +660,67 @@ enable_mmu:
>          ret
>  ENDPROC(enable_mmu)
>  
> +/*
> + * Remove the 1:1 map from the page-tables. It is not easy to keep track
> + * where the 1:1 map was mapped, so we will look for the top-level entry
> + * exclusive to the 1:1 map and remove it.
> + *
> + * Inputs:
> + *   x19: paddr(start)
> + *
> + * Clobbers x0 - x1
> + */
> +remove_identity_mapping:
> +        /*
> +         * Find the zeroeth slot used. Remove the entry from zeroeth
> +         * table if the slot is not XEN_ZEROETH_SLOT.
> +         */
> +        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
> +        cmp   x1, #XEN_ZEROETH_SLOT
> +        beq   1f
> +        /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */
> +        ldr   x0, =boot_pgtable         /* x0 := root table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     identity_mapping_removed
> +
> +1:
> +        /*
> +         * Find the first slot used. Remove the entry for the first
> +         * table if the slot is not XEN_FIRST_SLOT.
> +         */
> +        lsr   x1, x19, #FIRST_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
> +        cmp   x1, #XEN_FIRST_SLOT
> +        beq   1f
> +        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
> +        ldr   x0, =boot_first           /* x0 := first table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     identity_mapping_removed
> +
> +1:
> +        /*
> +         * Find the second slot used. Remove the entry for the first
> +         * table if the slot is not XEN_SECOND_SLOT.
> +         */
> +        lsr   x1, x19, #SECOND_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
> +        cmp   x1, #XEN_SECOND_SLOT
> +        beq   identity_mapping_removed
> +        /* It is not in slot 1, remove the entry */
> +        ldr   x0, =boot_second          /* x0 := second table */
> +        str   xzr, [x0, x1, lsl #3]
> +
> +identity_mapping_removed:
> +        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
> +        dsb   nshst
> +        tlbi  alle2
> +        dsb   nsh
> +        isb
> +
> +        ret
> +ENDPROC(remove_identity_mapping)
> +
>  setup_fixmap:
> -        /* Now we can install the fixmap and dtb mappings, since we
> -         * don't need the 1:1 map any more */
> -        dsb   sy
>  #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Add UART to the fixmap table */
>          ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
> @@ -669,19 +738,10 @@ setup_fixmap:
>          ldr   x1, =FIXMAP_ADDR(0)
>          lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
> -#endif
>  
> -        /*
> -         * Flush the TLB in case the 1:1 mapping happens to clash with
> -         * the virtual addresses used by the fixmap or DTB.
> -         */
> -        dsb   sy                     /* Ensure any page table updates made above
> -                                      * have occurred. */
> -
> -        isb
> -        tlbi  alle2
> -        dsb   sy                     /* Ensure completion of TLB flush */
> -        isb
> +        /* Ensure any page table updates made above have occurred. */
> +        dsb   nshst
> +#endif
>          ret
>  ENDPROC(setup_fixmap)
>  
> -- 
> 2.11.0
> 
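
Conceptually, remove_identity_mapping() is a short top-down walk; the assembly above is simply its unrolled form. A standalone C sketch follows (stand-in tables, illustrative shift values for the arm64 4K granule, made-up addresses in main()):

    #include <stdint.h>

    #define LPAE_ENTRY_MASK  0x1ffULL

    typedef uint64_t lpae_t;

    /* Stand-ins for the statically allocated boot tables in mm.c. */
    static lpae_t boot_pgtable[512], boot_first[512], boot_second[512];

    static unsigned int slot(uint64_t addr, unsigned int shift)
    {
        return (addr >> shift) & LPAE_ENTRY_MASK;
    }

    /*
     * Walk the boot tables top-down. The first level at which paddr(start)
     * uses a different slot than XEN_VIRT_START holds the entry that is
     * exclusive to the 1:1 mapping; zero it and stop. The assembly is the
     * unrolled form of this, followed by a local TLB flush.
     */
    static void remove_identity_mapping(uint64_t start, uint64_t xen_virt_start)
    {
        const struct { lpae_t *tbl; unsigned int shift; } level[] = {
            { boot_pgtable, 39 },   /* zeroeth (arm64 only) */
            { boot_first,   30 },   /* first */
            { boot_second,  21 },   /* second */
        };

        for ( unsigned int i = 0; i < sizeof(level) / sizeof(level[0]); i++ )
        {
            unsigned int s = slot(start, level[i].shift);

            if ( s != slot(xen_virt_start, level[i].shift) )
            {
                level[i].tbl[s] = 0;
                return;
            }
        }
    }

    int main(void)
    {
        remove_identity_mapping(0x80200000ULL, 0x00200000ULL);  /* illustrative */
        return 0;
    }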

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used Julien Grall
@ 2019-09-25 20:33   ` Stefano Stabellini
  2019-09-25 21:29     ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2019-09-25 20:33 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

On Tue, 17 Sep 2019, Julien Grall wrote:
> The 1:1 mapping may clash with other parts of the Xen virtual memory
> layout. At the moment, Xen is handling the clash by only creating a
> mapping to the runtime virtual address before enabling the MMU.
> 
> The rest of the mappings (such as the fixmap) will be mapped after the
> MMU is enabled. However, the code doing the mapping is not safe, as it
> replaces mappings without using the Break-Before-Make sequence.
> 
> As the 1:1 mapping can be anywhere in memory, it is easier to remove
> all the entries added as soon as the 1:1 mapping is not used rather than
> adding the Break-Before-Make sequence everywhere.
> 
> It is difficult to track where exactly the 1:1 mapping was created
> without a full rework of create_page_tables(). Instead, introduce a new
> function, remove_identity_mapping(), that will look for the top-level entry
> used by the 1:1 mapping and remove it.
> 
> The new function is only called for the boot CPU. Secondary CPUs will
> switch directly to the runtime page-tables, so there is no need to
> remove the 1:1 mapping. Note that this still doesn't make the secondary
> CPUs path safe, but it does not make it worse.
> 
> Note that the TLB flush sequence is same sequence as described in
> asm-arm/arm32/flushtlb.h with a twist. Per G5-5532 ARM DDI 0487D.a,
> a dsb nsh is sufficient for local flushed. Note the section is from the
> AArch32 Armv8 spec, I wasn't able to find the same exact section in the
> Armv7 spec but this is dotted as local operations only applies to
> non-shareable domain. This was missed while reworking the header and
> therefore a more conservative way were adopted.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     It is very likely we will need to re-introduce the 1:1 mapping to cater
>     for secondary CPU boot and suspend/resume. For now, the attempt is to make
>     the boot CPU path fully Arm Arm compliant.
> 
>     Changes in v4:
>         - Fix typo
>         - Fix indentation
>         - Remove unnecessary comments
> 
>     Changes in v3:
>         - Remove unused label
>         - Avoid hardcoding slots
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S | 84 +++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 67 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 999233452d..65b7e0d711 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -32,6 +32,10 @@
>  #define PT_UPPER(x) (PT_##x & 0xf00)
>  #define PT_LOWER(x) (PT_##x & 0x0ff)
>  
> +/* Convenience defines to get slot used by Xen mapping. */
> +#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
> +#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
> +
>  #if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
>  #include EARLY_PRINTK_INC
>  #endif
> @@ -157,6 +161,13 @@ past_zImage:
>          ldr   r0, =primary_switched
>          mov   pc, r0
>  primary_switched:
> +        /*
> +         * The 1:1 map may clash with other parts of the Xen virtual memory
> +         * layout. As it is not used anymore, remove it completely to
> +         * avoid having to worry about replacing existing mapping
> +         * afterwards.
> +         */
> +        bl    remove_identity_mapping
>          bl    setup_fixmap
>  #ifdef CONFIG_EARLY_PRINTK
>          /* Use a virtual address to access the UART. */
> @@ -481,12 +492,61 @@ enable_mmu:
>          mov   pc, lr
>  ENDPROC(enable_mmu)
>  
> -setup_fixmap:
> +/*
> + * Remove the 1:1 map from the page-tables. It is not easy to keep track
> + * where the 1:1 map was mapped, so we will look for the top-level entry
> + * exclusive to the 1:1 map and remove it.
> + *
> + * Inputs:
> + *   r9 : paddr(start)
> + *
> + * Clobbers r0 - r3
> + */
> +remove_identity_mapping:
> +        /* r2:r3 := invalid page-table entry */
> +        mov   r2, #0x0
> +        mov   r3, #0x0
>          /*
> -         * Now we can install the fixmap and dtb mappings, since we
> -         * don't need the 1:1 map any more
> +         * Find the first slot used. Remove the entry for the first
> +         * table if the slot is not XEN_FIRST_SLOT.
>           */
> -        dsb
> +        lsr   r1, r9, #FIRST_SHIFT
> +        mov_w r0, LPAE_ENTRY_MASK
> +        and   r1, r1, r0              /* r1 := first slot */
> +        cmp   r1, #XEN_FIRST_SLOT
> +        beq   1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   r0, =boot_pgtable      /* r0 := root table */
> +        lsl   r1, r1, #3             /* r1 := Slot offset */
> +        strd  r2, r3, [r0, r1]
> +        b     identity_mapping_removed
> +
> +1:
> +        /*
> +         * Find the second slot used. Remove the entry for the first
> +         * table if the slot is not XEN_SECOND_SLOT.
> +         */
> +        lsr   r1, r9, #SECOND_SHIFT
> +        mov_w r0, LPAE_ENTRY_MASK
> +        and   r1, r1, r0             /* r1 := second slot */
> +        cmp   r1, #XEN_SECOND_SLOT
> +        beq   identity_mapping_removed
> +        /* It is not in slot 1, remove the entry */
> +        ldr   r0, =boot_second       /* r0 := second table */
> +        lsl   r1, r1, #3             /* r1 := Slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +identity_mapping_removed:
> +        /* See asm-arm/arm32/flushtlb.h for the explanation of the sequence. */
> +        dsb   nshst
> +        mcr   CP32(r0, TLBIALLH)
> +        dsb   nsh
> +        isb
> +
> +        mov   pc, lr
> +ENDPROC(remove_identity_mapping)
> +
> +setup_fixmap:
>  #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Add UART to the fixmap table */
>          ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
> @@ -496,7 +556,6 @@ setup_fixmap:
>          orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
>          mov   r3, #0x0
>          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> -1:
>  
>          /* Map fixmap into boot_second */
>          ldr   r1, =boot_second       /* r1 := vaddr (boot_second) */
> @@ -508,19 +567,10 @@ setup_fixmap:
>          mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
>          mov   r3, #0x0
>          strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
> -#endif
> -
> -        /*
> -         * Flush the TLB in case the 1:1 mapping happens to clash with
> -         * the virtual addresses used by the fixmap or DTB.
> -         */
> -        dsb                          /* Ensure any page table updates made above
> -                                      * have occurred. */
>  
> -        isb
> -        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
> -        dsb                          /* Ensure completion of TLB flush */
> -        isb
> +        /* Ensure any page table updates made above have occurred. */
> +        dsb   nshst
> +#endif
>          mov   pc, lr
>  ENDPROC(setup_fixmap)
>  
> -- 
> 2.11.0
> 
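
The local TLB flush used at the end of remove_identity_mapping() can also be written as C with inline assembly. This is a sketch only: the sequence mirrors the one in the patch, and the TLBIALLH encoding (p15, 4, c8, c7, 0) is quoted from memory here and should be checked against the manual rather than taken as authoritative.

    /* arm32, GCC-style inline asm; illustration of the flush sequence above. */
    static inline void local_hyp_tlb_flush(void)
    {
        asm volatile(
            "dsb nshst\n\t"                  /* complete prior page-table writes */
            "mcr p15, 4, %0, c8, c7, 0\n\t"  /* TLBIALLH: invalidate hyp TLB */
            "dsb nsh\n\t"                    /* wait for the local invalidation */
            "isb"
            : : "r" (0) : "memory");
    }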

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used
  2019-09-25 20:33   ` Stefano Stabellini
@ 2019-09-25 21:29     ` Julien Grall
  0 siblings, 0 replies; 17+ messages in thread
From: Julien Grall @ 2019-09-25 21:29 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, nd, Volodymyr Babchuk

Hi,

On 25/09/2019 21:33, Stefano Stabellini wrote:
> On Tue, 17 Sep 2019, Julien Grall wrote:
>> The 1:1 mapping may clash with other parts of the Xen virtual memory
>> layout. At the moment, Xen is handling the clash by only creating a
>> mapping to the runtime virtual address before enabling the MMU.
>>
>> The rest of the mappings (such as the fixmap) will be mapped after the
>> MMU is enabled. However, the code doing the mapping is not safe as it
>> replace mapping without using the Break-Before-Make sequence.
>>
>> As the 1:1 mapping can be anywhere in the memory, it is easier to remove
>> all the entries added as soon as the 1:1 mapping is not used rather than
>> adding the Break-Before-Make sequence everywhere.
>>
>> It is difficult to track where exactly the 1:1 mapping was created
>> without a full rework of create_page_tables(). Instead, introduce a new
>> function remove_identity_mapping() will look where is the top-level entry
>> for the 1:1 mapping and remove it.
>>
>> The new function is only called for the boot CPU. Secondary CPUs will
>> switch directly to the runtime page-tables so there are no need to
>> remove the 1:1 mapping. Note that this still doesn't make the Secondary
>> CPUs path safe but it is not making it worst.
>>
>> Note that the TLB flush sequence is same sequence as described in
>> asm-arm/arm32/flushtlb.h with a twist. Per G5-5532 ARM DDI 0487D.a,
>> a dsb nsh is sufficient for local flushed. Note the section is from the
>> AArch32 Armv8 spec, I wasn't able to find the same exact section in the
>> Armv7 spec but this is dotted as local operations only applies to
>> non-shareable domain. This was missed while reworking the header and
>> therefore a more conservative way were adopted.

I guess the NIT mentioned in patch #1 should also be fixed here.

>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you for the review.

Cheers,

-- 
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 6/8] xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables()
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 6/8] xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
@ 2019-09-26  4:13   ` Stefano Stabellini
  0 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2019-09-26  4:13 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

On Tue, 17 Sep 2019, Julien Grall wrote:
> At the moment the function create_page_tables() will use a 1GB/2MB
> mapping for the identity mapping. As we don't know what is present
> before and after Xen in memory, we may end up mapping
> device/reserved-memory with cacheable attributes. This may result in
> mismatched attributes, as other users may access the same region
> differently.
> 
> To prevent any issues, we should only map the strict minimum in the
> 1:1 mapping. A check in xen.lds.S already guarantees anything
> necessary for turning on the MMU fits in a page (at the moment 4K).
> 
> As only one page will be mapped for the 1:1 mapping, it is necessary
> to pre-allocate a page for the 3rd level table.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Don't pre-link the page-tables for the 1:1 mapping. Instead
>         only link what's necessary.
> 
>     Changes in v3:
>         - Patch added
> ---
>  xen/arch/arm/arm64/head.S | 166 ++++++++++++++++++----------------------------
>  xen/arch/arm/mm.c         |   2 +
>  2 files changed, 68 insertions(+), 100 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 2cce342217..e5015f93a2 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -566,100 +566,17 @@ ENDPROC(cpu_init)
>   *   x19: paddr(start)
>   *   x20: phys offset
>   *
> - * Clobbers x0 - x4, x25
> - *
> - * Register usage within this function:
> - *   x25: Identity map in place
> + * Clobbers x0 - x4
>   */
>  create_page_tables:
> -        /*
> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> -         * need an additional 1:1 mapping, the virtual mapping will
> -         * suffice.
> -         */
> -        cmp   x19, #XEN_VIRT_START
> -        cset  x25, eq                /* x25 := identity map in place, or not */
> -
> -        load_paddr x4, boot_pgtable
> -
> -        /* Setup boot_pgtable: */
> -        load_paddr x1, boot_first
> -
> -        /* ... map boot_first in boot_pgtable[0] */
> -        mov   x3, #PT_PT             /* x2 := table map of boot_first */
> -        orr   x2, x1, x3             /*       + rights for linear PT */
> -        str   x2, [x4, #0]           /* Map it in slot 0 */
> -
> -        /* ... map of paddr(start) in boot_pgtable+boot_first_id */
> -        lsr   x1, x19, #ZEROETH_SHIFT/* Offset of base paddr in boot_pgtable */
> -        cbz   x1, 1f                 /* It's in slot 0, map in boot_first
> -                                      * or boot_second later on */
> +        /* Prepare the page-tables for mapping Xen */
> +        ldr   x0, =XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_first, x0, ZEROETH_SHIFT, x1, x2, x3
> +        create_table_entry boot_first, boot_second, x0, FIRST_SHIFT, x1, x2, x3
> +        create_table_entry boot_second, boot_third, x0, SECOND_SHIFT, x1, x2, x3
>  
> -        /*
> -         * Level zero does not support superpage mappings, so we have
> -         * to use an extra first level page in which we create a 1GB mapping.
> -         */
> -        load_paddr x2, boot_first_id
> -
> -        mov   x3, #PT_PT             /* x2 := table map of boot_first_id */
> -        orr   x2, x2, x3             /*       + rights for linear PT */
> -        str   x2, [x4, x1, lsl #3]
> -
> -        load_paddr x4, boot_first_id
> -
> -        lsr   x1, x19, #FIRST_SHIFT  /* x1 := Offset of base paddr in boot_first_id */
> -        lsl   x2, x1, #FIRST_SHIFT   /* x2 := Base address for 1GB mapping */
> -        mov   x3, #PT_MEM            /* x2 := Section map */
> -        orr   x2, x2, x3
> -        and   x1, x1, #LPAE_ENTRY_MASK /* x1 := Slot offset */
> -        str   x2, [x4, x1, lsl #3]   /* Mapping of paddr(start) */
> -        mov   x25, #1                /* x25 := identity map now in place */
> -
> -1:      /* Setup boot_first: */
> -        load_paddr x4, boot_first   /* Next level into boot_first */
> -
> -        /* ... map boot_second in boot_first[0] */
> -        load_paddr x1, boot_second
> -        mov   x3, #PT_PT             /* x2 := table map of boot_second */
> -        orr   x2, x1, x3             /*       + rights for linear PT */
> -        str   x2, [x4, #0]           /* Map it in slot 0 */
> -
> -        /* ... map of paddr(start) in boot_first */
> -        cbnz  x25, 1f                /* x25 is set if already created */
> -        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
> -        and   x1, x2, #LPAE_ENTRY_MASK /* x1 := Slot to use */
> -        cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
> -
> -        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
> -        mov   x3, #PT_MEM            /* x2 := Section map */
> -        orr   x2, x2, x3
> -        str   x2, [x4, x1, lsl #3]   /* Create mapping of paddr(start)*/
> -        mov   x25, #1                /* x25 := identity map now in place */
> -
> -1:      /* Setup boot_second: */
> -        load_paddr x4, boot_second
> -
> -        /* ... map boot_third in boot_second[1] */
> -        load_paddr x1, boot_third
> -        mov   x3, #PT_PT             /* x2 := table map of boot_third */
> -        orr   x2, x1, x3             /*       + rights for linear PT */
> -        str   x2, [x4, #8]           /* Map it in slot 1 */
> -
> -        /* ... map of paddr(start) in boot_second */
> -        cbnz  x25, 1f                /* x25 is set if already created */
> -        lsr   x2, x19, #SECOND_SHIFT /* x2 := Offset of base paddr in boot_second */
> -        and   x1, x2, #LPAE_ENTRY_MASK /* x1 := Slot to use */
> -        cmp   x1, #1
> -        b.eq  virtphys_clash         /* It's in slot 1, which we cannot handle */
> -
> -        lsl   x2, x2, #SECOND_SHIFT  /* Base address for 2MB mapping */
> -        mov   x3, #PT_MEM            /* x2 := Section map */
> -        orr   x2, x2, x3
> -        str   x2, [x4, x1, lsl #3]   /* Create mapping of paddr(start)*/
> -        mov   x25, #1                /* x25 := identity map now in place */
> -
> -1:      /* Setup boot_third: */
> -        load_paddr x4, boot_third
> +        /* Map Xen */
> +        adr_l x4, boot_third
>  
>          lsr   x2, x19, #THIRD_SHIFT  /* Base address for 4K mapping */
>          lsl   x2, x2, #THIRD_SHIFT
> @@ -674,21 +591,70 @@ create_page_tables:
>          cmp   x1, #(LPAE_ENTRIES<<3) /* 512 entries per page */
>          b.lt  1b
>  
> -        /* Defer fixmap and dtb mapping until after paging enabled, to
> -         * avoid them clashing with the 1:1 mapping. */
> +        /*
> +         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> +         * need an additional 1:1 mapping, the virtual mapping will
> +         * suffice.
> +         */
> +        cmp   x19, #XEN_VIRT_START
> +        bne   1f
> +        ret
> +1:
> +        /*
> +         * Setup the 1:1 mapping so we can turn the MMU on. Note that
> +         * only the first page of Xen will be part of the 1:1 mapping.
> +         */
> +
> +        /*
> +         * Find the zeroeth slot used. If the slot is not
> +         * XEN_ZEROETH_SLOT, then the 1:1 mapping will use its own set of
> +         * page-tables from the first level.
> +         */
> +        lsr   x0, x19, #ZEROETH_SHIFT   /* x0 := zeroeth slot */
> +        cmp   x0, #XEN_ZEROETH_SLOT
> +        beq   1f
> +        create_table_entry boot_pgtable, boot_first_id, x19, ZEROETH_SHIFT, x0, x1, x2
> +        b     link_from_first_id
> +
> +1:
> +        /*
> +         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> +         * then the 1:1 mapping will use its own set of page-tables from
> +         * the second level.
> +         */
> +        lsr   x0, x19, #FIRST_SHIFT
> +        and   x0, x0, #LPAE_ENTRY_MASK  /* x0 := first slot */
> +        cmp   x0, #XEN_FIRST_SLOT
> +        beq   1f
> +        create_table_entry boot_first, boot_second_id, x19, FIRST_SHIFT, x0, x1, x2
> +        b     link_from_second_id
>  
> -        /* boot pagetable setup complete */
> +1:
> +        /*
> +         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
> +         * 1:1 mapping will use its own set of page-tables from the
> +         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
> +         * it.
> +         */
> +        lsr   x0, x19, #SECOND_SHIFT
> +        and   x0, x0, #LPAE_ENTRY_MASK  /* x0 := first slot */
> +        cmp   x0, #XEN_SECOND_SLOT
> +        beq   virtphys_clash
> +        create_table_entry boot_second, boot_third_id, x19, SECOND_SHIFT, x0, x1, x2
> +        b     link_from_third_id
> +
> +link_from_first_id:
> +        create_table_entry boot_first_id, boot_second_id, x19, FIRST_SHIFT, x0, x1, x2
> +link_from_second_id:
> +        create_table_entry boot_second_id, boot_third_id, x19, SECOND_SHIFT, x0, x1, x2
> +link_from_third_id:
> +        create_mapping_entry boot_third_id, x19, x19, x0, x1, x2
> +        ret
>  
> -        cbnz  x25, 1f                /* Did we manage to create an identity mapping ? */
> -        PRINT("Unable to build boot page tables - Failed to identity map Xen.\r\n")
> -        b     fail
>  virtphys_clash:
>          /* Identity map clashes with boot_third, which we cannot handle yet */
>          PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
>          b     fail
> -
> -1:
> -        ret
>  ENDPROC(create_page_tables)
>  
>  /*
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 65552da4ba..72ffea7472 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -105,6 +105,8 @@ DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
>  #ifdef CONFIG_ARM_64
>  DEFINE_BOOT_PAGE_TABLE(boot_first);
>  DEFINE_BOOT_PAGE_TABLE(boot_first_id);
> +DEFINE_BOOT_PAGE_TABLE(boot_second_id);
> +DEFINE_BOOT_PAGE_TABLE(boot_third_id);
>  #endif
>  DEFINE_BOOT_PAGE_TABLE(boot_second);
>  DEFINE_BOOT_PAGE_TABLE(boot_third);
> -- 
> 2.11.0
> 
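
The attribute-mismatch argument in the commit message can be made concrete by looking at how much physical address space each mapping granularity pulls into the identity map around an (illustrative) load address. A small standalone C program, not Xen code:

    #include <stdint.h>
    #include <stdio.h>

    static void show_span(uint64_t start, unsigned int shift, const char *name)
    {
        uint64_t base = start & ~((1ULL << shift) - 1);

        printf("%s map: covers 0x%llx - 0x%llx\n", name,
               (unsigned long long)base,
               (unsigned long long)(base + (1ULL << shift) - 1));
    }

    int main(void)
    {
        uint64_t start = 0x80200000ULL;   /* example load address */

        show_span(start, 30, "1GB");      /* old first-level mapping */
        show_span(start, 21, "2MB");      /* old second-level mapping */
        show_span(start, 12, "4K");       /* new page-level mapping */
        return 0;
    }

Only the 4K case is guaranteed to stay within the Xen image, which is why the series insists on a page mapping plus the size check in xen.lds.S.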

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables()
  2019-09-17 18:12 ` [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
@ 2019-09-26  4:24   ` Stefano Stabellini
  2019-09-26 15:02     ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2019-09-26  4:24 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

On Tue, 17 Sep 2019, Julien Grall wrote:
> At the moment the function create_page_tables() will use a 1GB/2MB
> mapping for the identity mapping. As we don't know what is present
> before and after Xen in memory, we may end up mapping
> device/reserved-memory with cacheable attributes. This may result in
> mismatched attributes, as other users may access the same region
> differently.
> 
> To prevent any issues, we should only map the strict minimum in the
> 1:1 mapping. A check in xen.lds.S already guarantees anything
> necessary for turning on the MMU fits in a page (at the moment 4K).
> 
> As only one page will be mapped for the 1:1 mapping, it is necessary
> to pre-allocate a page for the 3rd level table.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
>     Changes in v4:
>         - Use XEN_{FIRST, SECOND}_SLOT rather than hardcoded value
>         - Don't pre-link the page-tables for the 1:1 mapping
> 
>     Changes in v3:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S | 121 +++++++++++++++++++---------------------------
>  xen/arch/arm/mm.c         |   2 +-
>  2 files changed, 50 insertions(+), 73 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 175f0c9760..7b5109db26 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -447,73 +447,13 @@ ENDPROC(cpu_init)
>   *   r6 : Identity map in place
>   */
>  create_page_tables:
> -        /*
> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> -         * need an additional 1:1 mapping, the virtual mapping will
> -         * suffice.
> -         */
> -        cmp   r9, #XEN_VIRT_START
> -        moveq r6, #1                 /* r6 := identity map now in place */
> -        movne r6, #0                 /* r6 := identity map not yet in place */
> -
> -        ldr   r4, =boot_pgtable
> -        add   r4, r4, r10            /* r4 := paddr (boot_pagetable) */
> -
> -        /* Setup boot_pgtable: */
> -        ldr   r1, =boot_second
> -        add   r1, r1, r10            /* r1 := paddr (boot_second) */
> -
> -        /* ... map boot_second in boot_pgtable[0] */
> -        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of boot_second */
> -        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
> -        mov   r3, #0x0
> -        strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
> -
> -        /* ... map of paddr(start) in boot_pgtable */
> -        lsrs  r1, r9, #FIRST_SHIFT   /* Offset of base paddr in boot_pgtable */
> -        beq   1f                     /* If it is in slot 0 then map in boot_second
> -                                      * later on */
> -        lsl   r2, r1, #FIRST_SHIFT   /* Base address for 1GB mapping */
> -        orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
> -        orr   r2, r2, #PT_LOWER(MEM)
> -        lsl   r1, r1, #3             /* r1 := Slot offset */
> -        mov   r3, #0x0
> -        strd  r2, r3, [r4, r1]       /* Mapping of paddr(start) */
> -        mov   r6, #1                 /* r6 := identity map now in place */
> -
> -1:      /* Setup boot_second: */
> -        ldr   r4, =boot_second
> -        add   r4, r4, r10            /* r4 := paddr (boot_second) */
> -
> -        ldr   r1, =boot_third
> -        add   r1, r1, r10            /* r1 := paddr (boot_third) */
> -
> -        /* ... map boot_third in boot_second[1] */
> -        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of boot_third */
> -        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
> -        mov   r3, #0x0
> -        strd  r2, r3, [r4, #8]       /* Map it in slot 1 */
> -
> -        /* ... map of paddr(start) in boot_second */
> -        cmp   r6, #1                 /* r6 is set if already created */
> -        beq   1f
> -        lsr   r2, r9, #SECOND_SHIFT  /* Offset of base paddr in boot_second */
> -        ldr   r3, =LPAE_ENTRY_MASK
> -        and   r1, r2, r3
> -        cmp   r1, #1
> -        beq   virtphys_clash         /* It's in slot 1, which we cannot handle */
> -
> -        lsl   r2, r2, #SECOND_SHIFT  /* Base address for 2MB mapping */
> -        orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
> -        orr   r2, r2, #PT_LOWER(MEM)
> -        mov   r3, #0x0
> -        lsl   r1, r1, #3             /* r1 := Slot offset */
> -        strd  r2, r3, [r4, r1]       /* Mapping of paddr(start) */
> -        mov   r6, #1                 /* r6 := identity map now in place */
> +        /* Prepare the page-tables for mapping Xen */
> +        ldr   r0, =XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, FIRST_SHIFT
> +        create_table_entry boot_second, boot_third, r0, SECOND_SHIFT
>  
>          /* Setup boot_third: */
> -1:      ldr   r4, =boot_third
> -        add   r4, r4, r10            /* r4 := paddr (boot_third) */
> +        adr_l r4, boot_third, mmu=0
>  
>          lsr   r2, r9, #THIRD_SHIFT  /* Base address for 4K mapping */
>          lsl   r2, r2, #THIRD_SHIFT
> @@ -530,16 +470,53 @@ create_page_tables:
>          blo   1b
>  
>          /*
> -         * Defer fixmap and dtb mapping until after paging enabled, to
> -         * avoid them clashing with the 1:1 mapping.
> +         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> +         * need an additional 1:1 mapping, the virtual mapping will
> +         * suffice.
>           */
> +        cmp   r9, #XEN_VIRT_START
> +        moveq pc, lr
>  
> -        /* boot pagetable setup complete */
> +1:

As far as I can tell, this 1 label is unused. If so, we should remove
it. With that gone:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> +        /*
> +         * Setup the 1:1 mapping so we can turn the MMU on. Note that
> +         * only the first page of Xen will be part of the 1:1 mapping.
> +         */
> +
> +        /*
> +         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> +         * then the 1:1 mapping will use its own set of page-tables from
> +         * the second level.
> +         */
> +        lsr   r1, r9, #FIRST_SHIFT
> +        mov_w r0, LPAE_ENTRY_MASK
> +        and   r1, r1, r0              /* r1 := first slot */
> +        cmp   r1, #XEN_FIRST_SLOT
> +        beq   1f
> +        create_table_entry boot_pgtable, boot_second_id, r9, FIRST_SHIFT
> +        b     link_from_second_id
> +
> +1:
> +        /*
> +         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
> +         * 1:1 mapping will use its own set of page-tables from the
> +         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
> +         * it.
> +         */
> +        lsr   r1, r9, #SECOND_SHIFT
> +        mov_w r0, LPAE_ENTRY_MASK
> +        and   r1, r1, r0             /* r1 := second slot */
> +        cmp   r1, #XEN_SECOND_SLOT
> +        beq   virtphys_clash
> +        create_table_entry boot_second, boot_third_id, r9, SECOND_SHIFT
> +        b     link_from_third_id
> +
> +link_from_second_id:
> +        create_table_entry boot_second_id, boot_third_id, r9, SECOND_SHIFT
> +link_from_third_id:
> +        create_mapping_entry boot_third_id, r9, r9
> +        mov   pc, lr
>  
> -        cmp   r6, #1                /* Did we manage to create an identity mapping ? */
> -        moveq pc, lr
> -        PRINT("Unable to build boot page tables - Failed to identity map Xen.\r\n")
> -        b     fail
>  virtphys_clash:
>          /* Identity map clashes with boot_third, which we cannot handle yet */
>          PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 72ffea7472..9e0fdc39f9 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -105,9 +105,9 @@ DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
>  #ifdef CONFIG_ARM_64
>  DEFINE_BOOT_PAGE_TABLE(boot_first);
>  DEFINE_BOOT_PAGE_TABLE(boot_first_id);
> +#endif
>  DEFINE_BOOT_PAGE_TABLE(boot_second_id);
>  DEFINE_BOOT_PAGE_TABLE(boot_third_id);
> -#endif
>  DEFINE_BOOT_PAGE_TABLE(boot_second);
>  DEFINE_BOOT_PAGE_TABLE(boot_third);
>  
> -- 
> 2.11.0
> 
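
The mm.c hunk above moves boot_second_id and boot_third_id out of the CONFIG_ARM_64 block so arm32 also gets statically allocated tables for the 1:1 mapping. As a rough idea of what such a definition provides (a simplified sketch; the real DEFINE_BOOT_PAGE_TABLE macro in Xen may differ, e.g. by placing the array in a dedicated section):

    #include <stdint.h>

    #define PAGE_SIZE     4096
    #define LPAE_ENTRIES  512

    typedef uint64_t lpae_t;

    /*
     * A boot page-table is a page-aligned array of 512 LPAE entries that is
     * part of the image, so it exists before any memory allocator is running.
     */
    #define DEFINE_BOOT_PAGE_TABLE(name) \
        lpae_t __attribute__((__aligned__(PAGE_SIZE))) name[LPAE_ENTRIES]

    DEFINE_BOOT_PAGE_TABLE(boot_second_id);
    DEFINE_BOOT_PAGE_TABLE(boot_third_id);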

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables()
  2019-09-26  4:24   ` Stefano Stabellini
@ 2019-09-26 15:02     ` Julien Grall
  0 siblings, 0 replies; 17+ messages in thread
From: Julien Grall @ 2019-09-26 15:02 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, Volodymyr Babchuk

Hi,

On 9/26/19 5:24 AM, Stefano Stabellini wrote:
>> @@ -530,16 +470,53 @@ create_page_tables:
>>           blo   1b
>>   
>>           /*
>> -         * Defer fixmap and dtb mapping until after paging enabled, to
>> -         * avoid them clashing with the 1:1 mapping.
>> +         * If Xen is loaded at exactly XEN_VIRT_START then we don't
>> +         * need an additional 1:1 mapping, the virtual mapping will
>> +         * suffice.
>>            */
>> +        cmp   r9, #XEN_VIRT_START
>> +        moveq pc, lr
>>   
>> -        /* boot pagetable setup complete */
>> +1:
> 
> As far as I can tell, this 1 label is unused. If so, we should remove
> it. With that gone:

Hmmm, yes it is.

> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2019-09-26 15:02 UTC | newest]

Thread overview: 17+ messages
2019-09-17 18:12 [Xen-devel] [PATCH v4 0/8] xen/arm: Rework head.S to make it more compliant with the Arm Arm Julien Grall
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 1/8] xen/arm64: head: Remove 1:1 mapping as soon as it is not used Julien Grall
2019-09-25 20:28   ` Stefano Stabellini
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 2/8] xen/arm64: head: Rework and document setup_fixmap() Julien Grall
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 3/8] xen/arm32: head: Remove 1:1 mapping as soon as it is not used Julien Grall
2019-09-25 20:33   ` Stefano Stabellini
2019-09-25 21:29     ` Julien Grall
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 4/8] xen/arm32: head: Rework and document setup_fixmap() Julien Grall
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 5/8] xen/arm64: head: Introduce macros to create table and mapping entry Julien Grall
2019-09-25 20:14   ` Stefano Stabellini
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 6/8] xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
2019-09-26  4:13   ` Stefano Stabellini
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 7/8] xen/arm32: head: Introduce macros to create table and mapping entry Julien Grall
2019-09-25 20:22   ` Stefano Stabellini
2019-09-17 18:12 ` [Xen-devel] [PATCH v4 8/8] xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables() Julien Grall
2019-09-26  4:24   ` Stefano Stabellini
2019-09-26 15:02     ` Julien Grall
