* [PATCH v2 0/6] xen/riscv: introduce identity mapping
@ 2023-06-19 13:34 Oleksii Kurochko
  2023-06-19 13:34 ` [PATCH v2 1/6] xen/riscv: add .sbss section to .bss Oleksii Kurochko
                   ` (6 more replies)
  0 siblings, 7 replies; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

The patch series introduces things necessary to implement identity mapping:
  1. Make identity mapping for the entire Xen.
  2. Enable MMU.
  3. Jump to the virtual address world.
  4. Remove identity mapping.

The series also introduces the calculation of the physical offset before the
MMU is enabled. Access to the phys_off variable is PC-relative, so if the
offset were calculated after the MMU is enabled and linker address != load
address, the access would cause an MMU fault.
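
The essence of it, as a rough C sketch ( the actual code is in patch 3 ):

    /* Calculated once, while still executing from the load address. */
    unsigned long __ro_after_init phys_offset;

    void __init calc_phys_offset(void)
    {
        phys_offset = (unsigned long)start - XEN_VIRT_START;
    }

    /* Used afterwards to translate between link- and load-time addresses. */
    #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
    #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)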

Two more things were done:
  * Added SPDX tags.
  * Moved the extern of cpu0_boot_stack to a header.

The reason for this patch series can be found here:
https://lore.kernel.org/xen-devel/4e336121-fc0c-b007-bf7b-430352563d55@citrix.com/

---
Changes in V2:
 - update the patch series message.
 - drop patches from the previous version of the patch series:
   * "xen/riscv: add __ASSEMBLY__ guards" ( merged )
   * xen/riscv: make sure that identity mapping isn't bigger then page size
     ( entire Xen is 1:1 mapped so there is no need for the checks from the patch )
 - add .sbss.* and put it before .bss*.
 - move out reset_stack() to .text section.
 - add '__ro_after_init' for phys_offset variable.
 - add '__init' for calc_phys_offset().
 - declare the phys_off variable as non-static as it will be used in head.S.
 - update definition of PGTBL_INITIAL_COUNT and the comment above.
 - code style fixes.
 - remove the id_addrs array because the entire Xen is mapped.
 - reverse the loop condition inside remove_identity_mapping().
 - fix page table walk in remove_identity_mapping().
 - save hart_id and dtb_addr before calling MMU-related C functions.
 - use the phys_offset variable instead of calculating the physical offset
   in head.S. ( this can easily be done as the entire Xen is 1:1 mapped now )
 - declare enable_mmu() as __init.
 - Update SPDX tags.
 - Add Reviewed-by/Suggested-by tags for some patches.
 - code style fixes.

Oleksii Kurochko (6):
  xen/riscv: add .sbss section to .bss
  xen/riscv: introduce reset_stack() function
  xen/riscv: introduce function for physical offset calculation
  xen/riscv: introduce identity mapping
  xen/riscv: add SPDX tags
  xen/riscv: move extern of cpu0_boot_stack to header

 xen/arch/riscv/include/asm/config.h       |   2 +
 xen/arch/riscv/include/asm/current.h      |   2 +
 xen/arch/riscv/include/asm/early_printk.h |   2 +
 xen/arch/riscv/include/asm/mm.h           |   9 +-
 xen/arch/riscv/include/asm/page-bits.h    |   2 +
 xen/arch/riscv/include/asm/page.h         |   2 +
 xen/arch/riscv/include/asm/traps.h        |   2 +
 xen/arch/riscv/include/asm/types.h        |   2 +
 xen/arch/riscv/mm.c                       | 104 +++++++++++++---------
 xen/arch/riscv/riscv64/head.S             |  46 +++++++++-
 xen/arch/riscv/setup.c                    |  16 +---
 xen/arch/riscv/xen.lds.S                  |   2 +-
 12 files changed, 136 insertions(+), 55 deletions(-)

-- 
2.40.1




* [PATCH v2 1/6] xen/riscv: add .sbss section to .bss
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
@ 2023-06-19 13:34 ` Oleksii Kurochko
  2023-06-19 13:34 ` [PATCH v2 2/6] xen/riscv: introduce reset_stack() function Oleksii Kurochko
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

Some variables are located in the .sbss section, which currently won't
be mapped after the MMU is enabled.
To avoid MMU faults, .sbss should be mapped as well.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V2:
  - add .sbss.*.
  - move .sbss* ahead of .bss*.
  - add Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
 xen/arch/riscv/xen.lds.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 878130f313..9064852173 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -149,7 +149,7 @@ SECTIONS
         *(.bss.percpu.read_mostly)
         . = ALIGN(SMP_CACHE_BYTES);
         __per_cpu_data_end = .;
-        *(.bss .bss.*)
+        *(.sbss .sbss.* .bss .bss.*)
         . = ALIGN(POINTER_ALIGN);
         __bss_end = .;
     } :text
-- 
2.40.1




* [PATCH v2 2/6] xen/riscv: introduce reset_stack() function
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
  2023-06-19 13:34 ` [PATCH v2 1/6] xen/riscv: add .sbss section to .bss Oleksii Kurochko
@ 2023-06-19 13:34 ` Oleksii Kurochko
  2023-07-06 11:17   ` Jan Beulich
  2023-06-19 13:34 ` [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation Oleksii Kurochko
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

The reason for introducing reset_stack() is that the stack should be
reset twice:
1. Before jumping to the C world at the start of the _start() function.
2. After jumping out of the 1:1 mapping world.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V2:
  - update the commit message.
  - move out reset_stack() from .text.header to .text.
  - add Reviewed-by: Alistair Francis <alistair.francis@wdc.com>.
---
 xen/arch/riscv/riscv64/head.S | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 8887f0cbd4..2c0304646a 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -27,8 +27,16 @@ ENTRY(start)
         add     t3, t3, __SIZEOF_POINTER__
         bltu    t3, t4, .L_clear_bss
 
+        jal     reset_stack
+
+        tail    start_xen
+
+        .section .text, "ax", %progbits
+
+ENTRY(reset_stack)
         la      sp, cpu0_boot_stack
         li      t0, STACK_SIZE
         add     sp, sp, t0
 
-        tail    start_xen
+        ret
+
-- 
2.40.1




* [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
  2023-06-19 13:34 ` [PATCH v2 1/6] xen/riscv: add .sbss section to .bss Oleksii Kurochko
  2023-06-19 13:34 ` [PATCH v2 2/6] xen/riscv: introduce reset_stack() function Oleksii Kurochko
@ 2023-06-19 13:34 ` Oleksii Kurochko
  2023-07-06 11:18   ` Jan Beulich
  2023-06-19 13:34 ` [PATCH v2 4/6] xen/riscv: introduce identity mapping Oleksii Kurochko
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

The function is introduced to calculate and save the physical
offset before the MMU is enabled, because access to start() is
PC-relative and, in case linker_addr != load_addr, calculating it
later would result in an incorrect value in phys_offset.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - add __ro_after_init for phys_offset variable.
  - remove double blank lines.
  - add __init for calc_phys_offset().
  - update the commit message.
  - declare the phys_off variable as non-static as it will be used in head.S.
---
 xen/arch/riscv/include/asm/mm.h |  2 ++
 xen/arch/riscv/mm.c             | 18 +++++++++++++++---
 xen/arch/riscv/riscv64/head.S   |  2 ++
 3 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 64293eacee..996041ce81 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -11,4 +11,6 @@ void setup_initial_pagetables(void);
 void enable_mmu(void);
 void cont_after_mmu_is_enabled(void);
 
+void calc_phys_offset(void);
+
 #endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 8ceed445cf..570033f17f 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -1,3 +1,4 @@
+#include <xen/cache.h>
 #include <xen/compiler.h>
 #include <xen/init.h>
 #include <xen/kernel.h>
@@ -19,9 +20,10 @@ struct mmu_desc {
 
 extern unsigned char cpu0_boot_stack[STACK_SIZE];
 
-#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
-#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
-#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+unsigned long __ro_after_init phys_offset;
+
+#define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
+#define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
 
 /*
  * It is expected that Xen won't be more then 2 MB.
@@ -273,3 +275,13 @@ void __init noreturn noinline enable_mmu()
     switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
                           cont_after_mmu_is_enabled);
 }
+
+/*
+ * calc_phys_offset() should be used before MMU is enabled because access to
+ * start() is PC-relative and in case when load_addr != linker_addr phys_offset
+ * will have an incorrect value
+ */
+void __init calc_phys_offset(void)
+{
+    phys_offset = (unsigned long)start - XEN_VIRT_START;
+}
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 2c0304646a..850fbb58a7 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -29,6 +29,8 @@ ENTRY(start)
 
         jal     reset_stack
 
+        jal     calc_phys_offset
+
         tail    start_xen
 
         .section .text, "ax", %progbits
-- 
2.40.1




* [PATCH v2 4/6] xen/riscv: introduce identity mapping
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
                   ` (2 preceding siblings ...)
  2023-06-19 13:34 ` [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation Oleksii Kurochko
@ 2023-06-19 13:34 ` Oleksii Kurochko
  2023-07-06 11:35   ` Jan Beulich
  2023-06-19 13:34 ` [PATCH v2 5/6] xen/riscv: add SPDX tags Oleksii Kurochko
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

The way the switch to virtual addresses was implemented in
commit e66003e7be ("xen/riscv: introduce setup_initial_pages")
isn't safe enough, as:
* enable_mmu() depends on hooking all exceptions
  and on taking a pagefault.
* Any exception other than a pagefault, or not taking a pagefault,
  causes it to malfunction, which means booting will fail depending
  on where Xen was loaded into memory.

Instead of that approach, it was decided to identity map the entire
Xen and, after switching to virtual addresses, to remove the identity
mapping from the page tables.
Since it is not easy to keep track of where the identity map was mapped,
we will look for the top-level entry exclusive to the identity
map and remove it.
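
In other words, remove_identity_mapping() compares, level by level, the
page-table index of the load address with the index of XEN_VIRT_START;
the first level at which the two indexes differ holds an entry used only
by the identity map, and that single entry is zapped.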

Fixes: e66003e7be ("xen/riscv: introduce setup_initial_pages")
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes in V2:
  - update definition of PGTBL_INITIAL_COUNT and the comment above.
  - code style fixes.
  - 1:1 mapping for entire Xen.
  - remove the id_addrs array because the entire Xen is mapped.
  - reverse the loop condition inside remove_identity_mapping().
  - fix page table walk in remove_identity_mapping().
  - update the commit message.
  - add Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
  - save hart_id and dtb_addr before calling MMU-related C functions.
  - use the phys_offset variable instead of calculating the physical offset
    in head.S. ( this can easily be done as the entire Xen is 1:1 mapped )
  - declare enable_mmu() as __init.
---
 xen/arch/riscv/include/asm/config.h |  2 +
 xen/arch/riscv/include/asm/mm.h     |  3 +-
 xen/arch/riscv/mm.c                 | 84 ++++++++++++++++-------------
 xen/arch/riscv/riscv64/head.S       | 34 ++++++++++++
 xen/arch/riscv/setup.c              | 14 +----
 5 files changed, 88 insertions(+), 49 deletions(-)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 38862df0b8..fa90ae0898 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef __RISCV_CONFIG_H__
 #define __RISCV_CONFIG_H__
 
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 996041ce81..500fdc9c5a 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -9,7 +9,8 @@
 void setup_initial_pagetables(void);
 
 void enable_mmu(void);
-void cont_after_mmu_is_enabled(void);
+
+void remove_identity_mapping(void);
 
 void calc_phys_offset(void);
 
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 570033f17f..2693b817c5 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
 #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
 #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
 
+/*
+ * Should be removed as soon as enough headers will be merged for inclusion of
+ * <xen/lib.h>.
+ */
+#define ARRAY_SIZE(arr)		(sizeof(arr) / sizeof((arr)[0]))
+
 /*
  * It is expected that Xen won't be more then 2 MB.
  * The check in xen.lds.S guarantees that.
@@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
  *
  * It might be needed one more page table in case when Xen load address
  * isn't 2 MB aligned.
+ *
+ * (CONFIG_PAGING_LEVELS - 1) page tables are needed for identity mapping.
  */
-#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
 
 pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
 stage1_pgtbl_root[PAGETABLE_ENTRIES];
@@ -75,6 +83,7 @@ static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
     unsigned int index;
     pte_t *pgtbl;
     unsigned long page_addr;
+    bool is_identity_mapping = map_start == pa_start;
 
     if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
     {
@@ -108,16 +117,18 @@ static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
             {
                 unsigned long paddr = (page_addr - map_start) + pa_start;
                 unsigned int permissions = PTE_LEAF_DEFAULT;
+                unsigned long addr = is_identity_mapping
+                                     ? page_addr : LINK_TO_LOAD(page_addr);
                 pte_t pte_to_be_written;
 
                 index = pt_index(0, page_addr);
 
-                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
-                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
-                    permissions =
-                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+                if ( is_kernel_text(addr) ||
+                     is_kernel_inittext(addr) )
+                        permissions =
+                            PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
 
-                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                if ( is_kernel_rodata(addr) )
                     permissions = PTE_READABLE | PTE_VALID;
 
                 pte_to_be_written = paddr_to_pte(paddr, permissions);
@@ -232,22 +243,18 @@ void __init setup_initial_pagetables(void)
                           linker_start,
                           linker_end,
                           load_start);
+
+    if ( linker_start == load_start )
+        return;
+
+    setup_initial_mapping(&mmu_desc,
+                          load_start,
+                          load_end,
+                          load_start);
 }
 
-void __init noreturn noinline enable_mmu()
+void __init enable_mmu(void)
 {
-    /*
-     * Calculate a linker time address of the mmu_is_enabled
-     * label and update CSR_STVEC with it.
-     * MMU is configured in a way where linker addresses are mapped
-     * on load addresses so in a case when linker addresses are not equal
-     * to load addresses, after MMU is enabled, it will cause
-     * an exception and jump to linker time addresses.
-     * Otherwise if load addresses are equal to linker addresses the code
-     * after mmu_is_enabled label will be executed without exception.
-     */
-    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
-
     /* Ensure page table writes precede loading the SATP */
     sfence_vma();
 
@@ -255,25 +262,30 @@ void __init noreturn noinline enable_mmu()
     csr_write(CSR_SATP,
               PFN_DOWN((unsigned long)stage1_pgtbl_root) |
               RV_STAGE1_MODE << SATP_MODE_SHIFT);
+}
 
-    asm volatile ( ".p2align 2" );
- mmu_is_enabled:
-    /*
-     * Stack should be re-inited as:
-     * 1. Right now an address of the stack is relative to load time
-     *    addresses what will cause an issue in case of load start address
-     *    isn't equal to linker start address.
-     * 2. Addresses in stack are all load time relative which can be an
-     *    issue in case when load start address isn't equal to linker
-     *    start address.
-     *
-     * We can't return to the caller because the stack was reseted
-     * and it may have stash some variable on the stack.
-     * Jump to a brand new function as the stack was reseted
-     */
+void __init remove_identity_mapping(void)
+{
+    unsigned int i;
+    pte_t *pgtbl;
+    unsigned int index, xen_index;
+    unsigned long load_addr = LINK_TO_LOAD(_start);
 
-    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
-                          cont_after_mmu_is_enabled);
+    for ( pgtbl = stage1_pgtbl_root, i = 0;
+          i <= (CONFIG_PAGING_LEVELS - 1);
+          i++ )
+    {
+        index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, load_addr);
+        xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, XEN_VIRT_START);
+
+        if ( index != xen_index )
+        {
+            pgtbl[index].pte = 0;
+            break;
+        }
+
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);
+    }
 }
 
 /*
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 850fbb58a7..41983ffe63 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -29,8 +29,42 @@ ENTRY(start)
 
         jal     reset_stack
 
+        /*
+         * save hart_id and dtb_base as a0 and a1 register can be used
+         * by C code ( f.e. setup_initial_pagetables will update a0 and
+         * a1 )
+         */
+        mv      s0, a0
+        mv      s1, a1
+
         jal     calc_phys_offset
 
+        jal     setup_initial_pagetables
+
+        jal     enable_mmu
+
+        la      t1, phys_offset
+        REG_L   t1, (t1)
+
+        /* Calculate proper VA after jump from 1:1 mapping */
+        la      t0, .L_primary_switched
+        sub     t0, t0, t1
+
+        /* Jump from 1:1 mapping world */
+        jr      t0
+
+.L_primary_switched:
+        /*
+         * cpu0_boot_stack address is 1:1 mapping related so it should be
+         * recalculated after jump from 1:1 mapping world as 1:1 mapping
+         * will be removed soon in start_xen().
+         */
+        jal     reset_stack
+
+        /* restore bootcpu_id and dtb address */
+        mv      a0, s0
+        mv      a1, s1
+
         tail    start_xen
 
         .section .text, "ax", %progbits
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 845d18d86f..c4ef0b3165 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -11,20 +11,10 @@ unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-    early_printk("Hello from C env\n");
-
-    setup_initial_pagetables();
-
-    enable_mmu();
-
-    for ( ;; )
-        asm volatile ("wfi");
+    remove_identity_mapping();
 
-    unreachable();
-}
+    early_printk("Hello from C env\n");
 
-void __init noreturn cont_after_mmu_is_enabled(void)
-{
     early_printk("All set up\n");
 
     for ( ;; )
-- 
2.40.1




* [PATCH v2 5/6] xen/riscv: add SPDX tags
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
                   ` (3 preceding siblings ...)
  2023-06-19 13:34 ` [PATCH v2 4/6] xen/riscv: introduce identity mapping Oleksii Kurochko
@ 2023-06-19 13:34 ` Oleksii Kurochko
  2023-06-22  2:13   ` Alistair Francis
  2023-06-19 13:34 ` [PATCH v2 6/6] xen/riscv: move extern of cpu0_boot_stack to header Oleksii Kurochko
  2023-07-03 11:49 ` [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii
  6 siblings, 1 reply; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes in V2:
  - change SPDX tags from GPL-2.0-or-later to GPL-2.0-only.
  - add Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>.
---
 xen/arch/riscv/include/asm/current.h      | 2 ++
 xen/arch/riscv/include/asm/early_printk.h | 2 ++
 xen/arch/riscv/include/asm/mm.h           | 2 ++
 xen/arch/riscv/include/asm/page-bits.h    | 2 ++
 xen/arch/riscv/include/asm/page.h         | 2 ++
 xen/arch/riscv/include/asm/traps.h        | 2 ++
 xen/arch/riscv/include/asm/types.h        | 2 ++
 xen/arch/riscv/mm.c                       | 2 ++
 xen/arch/riscv/setup.c                    | 2 ++
 9 files changed, 18 insertions(+)

diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
index d87e6717e0..d84f15dc50 100644
--- a/xen/arch/riscv/include/asm/current.h
+++ b/xen/arch/riscv/include/asm/current.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef __ASM_CURRENT_H
 #define __ASM_CURRENT_H
 
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
index 05106e160d..85e60df33a 100644
--- a/xen/arch/riscv/include/asm/early_printk.h
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef __EARLY_PRINTK_H__
 #define __EARLY_PRINTK_H__
 
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 500fdc9c5a..3f694a43ef 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef _ASM_RISCV_MM_H
 #define _ASM_RISCV_MM_H
 
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 4a3e33589a..8f1f474371 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef __RISCV_PAGE_BITS_H__
 #define __RISCV_PAGE_BITS_H__
 
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 22b36e03a4..95074e29b3 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef _ASM_RISCV_PAGE_H
 #define _ASM_RISCV_PAGE_H
 
diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
index f3fb6b25d1..3fef318478 100644
--- a/xen/arch/riscv/include/asm/traps.h
+++ b/xen/arch/riscv/include/asm/traps.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef __ASM_TRAPS_H__
 #define __ASM_TRAPS_H__
 
diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
index 0c0ce78c8f..c2cdfd9caa 100644
--- a/xen/arch/riscv/include/asm/types.h
+++ b/xen/arch/riscv/include/asm/types.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #ifndef __RISCV_TYPES_H__
 #define __RISCV_TYPES_H__
 
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 2693b817c5..663048c783 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #include <xen/cache.h>
 #include <xen/compiler.h>
 #include <xen/init.h>
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index c4ef0b3165..6593f601c1 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
 #include <xen/compile.h>
 #include <xen/init.h>
 
-- 
2.40.1




* [PATCH v2 6/6] xen/riscv: move extern of cpu0_boot_stack to header
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
                   ` (4 preceding siblings ...)
  2023-06-19 13:34 ` [PATCH v2 5/6] xen/riscv: add SPDX tags Oleksii Kurochko
@ 2023-06-19 13:34 ` Oleksii Kurochko
  2023-06-22  2:13   ` Alistair Francis
  2023-07-03 11:49 ` [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii
  6 siblings, 1 reply; 25+ messages in thread
From: Oleksii Kurochko @ 2023-06-19 13:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Julien Grall, Oleksii Kurochko,
	Bob Eshleman, Alistair Francis, Connor Davis

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes in V2:
  - add Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>.
---
 xen/arch/riscv/include/asm/mm.h | 2 ++
 xen/arch/riscv/mm.c             | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 3f694a43ef..085eaab7fb 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -8,6 +8,8 @@
 #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
 #define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
 
+extern unsigned char cpu0_boot_stack[];
+
 void setup_initial_pagetables(void);
 
 void enable_mmu(void);
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 663048c783..602b89aeed 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -20,8 +20,6 @@ struct mmu_desc {
     pte_t *pgtbl_base;
 };
 
-extern unsigned char cpu0_boot_stack[STACK_SIZE];
-
 unsigned long __ro_after_init phys_offset;
 
 #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
-- 
2.40.1




* Re: [PATCH v2 5/6] xen/riscv: add SPDX tags
  2023-06-19 13:34 ` [PATCH v2 5/6] xen/riscv: add SPDX tags Oleksii Kurochko
@ 2023-06-22  2:13   ` Alistair Francis
  0 siblings, 0 replies; 25+ messages in thread
From: Alistair Francis @ 2023-06-22  2:13 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: xen-devel, Jan Beulich, Andrew Cooper, Julien Grall,
	Bob Eshleman, Alistair Francis, Connor Davis

On Mon, Jun 19, 2023 at 11:35 PM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>   - change SPDX tags from GPL-2.0-or-later to GPL-2.0-only.
>   - add Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>.
> ---
>  xen/arch/riscv/include/asm/current.h      | 2 ++
>  xen/arch/riscv/include/asm/early_printk.h | 2 ++
>  xen/arch/riscv/include/asm/mm.h           | 2 ++
>  xen/arch/riscv/include/asm/page-bits.h    | 2 ++
>  xen/arch/riscv/include/asm/page.h         | 2 ++
>  xen/arch/riscv/include/asm/traps.h        | 2 ++
>  xen/arch/riscv/include/asm/types.h        | 2 ++
>  xen/arch/riscv/mm.c                       | 2 ++
>  xen/arch/riscv/setup.c                    | 2 ++
>  9 files changed, 18 insertions(+)
>
> diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
> index d87e6717e0..d84f15dc50 100644
> --- a/xen/arch/riscv/include/asm/current.h
> +++ b/xen/arch/riscv/include/asm/current.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef __ASM_CURRENT_H
>  #define __ASM_CURRENT_H
>
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> index 05106e160d..85e60df33a 100644
> --- a/xen/arch/riscv/include/asm/early_printk.h
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef __EARLY_PRINTK_H__
>  #define __EARLY_PRINTK_H__
>
> diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
> index 500fdc9c5a..3f694a43ef 100644
> --- a/xen/arch/riscv/include/asm/mm.h
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef _ASM_RISCV_MM_H
>  #define _ASM_RISCV_MM_H
>
> diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
> index 4a3e33589a..8f1f474371 100644
> --- a/xen/arch/riscv/include/asm/page-bits.h
> +++ b/xen/arch/riscv/include/asm/page-bits.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef __RISCV_PAGE_BITS_H__
>  #define __RISCV_PAGE_BITS_H__
>
> diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
> index 22b36e03a4..95074e29b3 100644
> --- a/xen/arch/riscv/include/asm/page.h
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef _ASM_RISCV_PAGE_H
>  #define _ASM_RISCV_PAGE_H
>
> diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
> index f3fb6b25d1..3fef318478 100644
> --- a/xen/arch/riscv/include/asm/traps.h
> +++ b/xen/arch/riscv/include/asm/traps.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef __ASM_TRAPS_H__
>  #define __ASM_TRAPS_H__
>
> diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
> index 0c0ce78c8f..c2cdfd9caa 100644
> --- a/xen/arch/riscv/include/asm/types.h
> +++ b/xen/arch/riscv/include/asm/types.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef __RISCV_TYPES_H__
>  #define __RISCV_TYPES_H__
>
> diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> index 2693b817c5..663048c783 100644
> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #include <xen/cache.h>
>  #include <xen/compiler.h>
>  #include <xen/init.h>
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index c4ef0b3165..6593f601c1 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #include <xen/compile.h>
>  #include <xen/init.h>
>
> --
> 2.40.1
>
>



* Re: [PATCH v2 6/6] xen/riscv: move extern of cpu0_boot_stack to header
  2023-06-19 13:34 ` [PATCH v2 6/6] xen/riscv: move extern of cpu0_boot_stack to header Oleksii Kurochko
@ 2023-06-22  2:13   ` Alistair Francis
  0 siblings, 0 replies; 25+ messages in thread
From: Alistair Francis @ 2023-06-22  2:13 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: xen-devel, Jan Beulich, Andrew Cooper, Julien Grall,
	Bob Eshleman, Alistair Francis, Connor Davis

On Mon, Jun 19, 2023 at 11:35 PM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>   - add Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>.
> ---
>  xen/arch/riscv/include/asm/mm.h | 2 ++
>  xen/arch/riscv/mm.c             | 2 --
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
> index 3f694a43ef..085eaab7fb 100644
> --- a/xen/arch/riscv/include/asm/mm.h
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -8,6 +8,8 @@
>  #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
>  #define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
>
> +extern unsigned char cpu0_boot_stack[];
> +
>  void setup_initial_pagetables(void);
>
>  void enable_mmu(void);
> diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> index 663048c783..602b89aeed 100644
> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -20,8 +20,6 @@ struct mmu_desc {
>      pte_t *pgtbl_base;
>  };
>
> -extern unsigned char cpu0_boot_stack[STACK_SIZE];
> -
>  unsigned long __ro_after_init phys_offset;
>
>  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
> --
> 2.40.1
>
>



* Re: [PATCH v2 0/6] xen/riscv: introduce identity mapping
  2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
                   ` (5 preceding siblings ...)
  2023-06-19 13:34 ` [PATCH v2 6/6] xen/riscv: move extern of cpu0_boot_stack to header Oleksii Kurochko
@ 2023-07-03 11:49 ` Oleksii
  6 siblings, 0 replies; 25+ messages in thread
From: Oleksii @ 2023-07-03 11:49 UTC (permalink / raw)
  To: xen-devel, Bob Eshleman, Alistair Francis
  Cc: Jan Beulich, Andrew Cooper, Connor Davis


Hello Bobby and Alistair,

Some patches of the current patch series were merged, but some still
require your Acked-by/Reviewed-by.

Could you please look at the following patches:
1. [PATCH v2 3/6] xen/riscv: introduce function for physical offset
calculation
2. [PATCH v2 4/6] xen/riscv: introduce identity mapping

Thank you so much for your attention and participation.

~ Oleksii

On Mon, 2023-06-19 at 16:34 +0300, Oleksii Kurochko wrote:
> The patch series introduces things necessary to implement identity
> mapping:
>   1. Make identity mapping for the entire Xen.
>   2. Enable MMU.
>   3. Jump to the virtual address world
>   4. Remove identity mapping.
> 
> The series also introduces the calculation of the physical offset
> before the MMU is enabled. Access to the phys_off variable is
> PC-relative, so if the offset were calculated after the MMU is enabled
> and linker address != load address, the access would cause an MMU
> fault.
> 
> Two more things were done:
>   * Added SPDX tags.
>   * Moved the extern of cpu0_boot_stack to a header.
> 
> The reason for this patch series can be found here:
> https://lore.kernel.org/xen-devel/4e336121-fc0c-b007-bf7b-430352563d55@citrix.com/
> 
> ---
> Changes in V2:
>  - update the patch series message.
>  - drop patches from the previous version of the patch series:
>    * xen/riscv: add __ASSEMBLY__ guards". ( merged )
>    * xen/riscv: make sure that identity mapping isn't bigger then
> page size
>      ( entire Xen is 1:1 mapped so there is no need for the checks
> from the patch )
>  - add .sbss.* and put it before .bss*.
>  - move out reset_stack() to .text section.
>  - add '__ro_after_init' for phys_offset variable.
>  - add '__init' for calc_phys_offset().
>  - declare the phys_off variable as non-static as it will be used in
> head.S.
>  - update definition of PGTBL_INITIAL_COUNT and the comment above.
>  - code style fixes.
>  - remove the id_addrs array because the entire Xen is mapped.
>  - reverse the loop condition inside remove_identity_mapping().
>  - fix page table walk in remove_identity_mapping().
>  - save hart_id and dtb_addr before calling MMU-related C functions.
>  - use the phys_offset variable instead of calculating the physical offset
>    in head.S. ( this can easily be done as the entire Xen is 1:1 mapped now )
>  - declare enable_mmu() as __init.
>  - Update SPDX tags.
>  - Add Reviewed-by/Suggested-by tags for some patches.
>  - code style fixes.
> 
> Oleksii Kurochko (6):
>   xen/riscv: add .sbss section to .bss
>   xen/riscv: introduce reset_stack() function
>   xen/riscv: introduce function for physical offset calculation
>   xen/riscv: introduce identity mapping
>   xen/riscv: add SPDX tags
>   xen/riscv: move extern of cpu0_boot_stack to header
> 
>  xen/arch/riscv/include/asm/config.h       |   2 +
>  xen/arch/riscv/include/asm/current.h      |   2 +
>  xen/arch/riscv/include/asm/early_printk.h |   2 +
>  xen/arch/riscv/include/asm/mm.h           |   9 +-
>  xen/arch/riscv/include/asm/page-bits.h    |   2 +
>  xen/arch/riscv/include/asm/page.h         |   2 +
>  xen/arch/riscv/include/asm/traps.h        |   2 +
>  xen/arch/riscv/include/asm/types.h        |   2 +
>  xen/arch/riscv/mm.c                       | 104 +++++++++++++-------
> --
>  xen/arch/riscv/riscv64/head.S             |  46 +++++++++-
>  xen/arch/riscv/setup.c                    |  16 +---
>  xen/arch/riscv/xen.lds.S                  |   2 +-
>  12 files changed, 136 insertions(+), 55 deletions(-)
> 




* Re: [PATCH v2 2/6] xen/riscv: introduce reset_stack() function
  2023-06-19 13:34 ` [PATCH v2 2/6] xen/riscv: introduce reset_stack() function Oleksii Kurochko
@ 2023-07-06 11:17   ` Jan Beulich
  2023-07-07  9:08     ` Oleksii
  0 siblings, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2023-07-06 11:17 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 19.06.2023 15:34, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -27,8 +27,16 @@ ENTRY(start)
>          add     t3, t3, __SIZEOF_POINTER__
>          bltu    t3, t4, .L_clear_bss
>  
> +        jal     reset_stack
> +
> +        tail    start_xen
> +
> +        .section .text, "ax", %progbits
> +
> +ENTRY(reset_stack)
>          la      sp, cpu0_boot_stack
>          li      t0, STACK_SIZE
>          add     sp, sp, t0
>  
> -        tail    start_xen
> +        ret
> +

Looking at patch 4 you will want to add a comment here to emphasize
that a0 and a1 have to remain unclobbered.

Jan



* Re: [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-06-19 13:34 ` [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation Oleksii Kurochko
@ 2023-07-06 11:18   ` Jan Beulich
  2023-07-07  9:12     ` Oleksii
  0 siblings, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2023-07-06 11:18 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 19.06.2023 15:34, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -29,6 +29,8 @@ ENTRY(start)
>  
>          jal     reset_stack
>  
> +        jal     calc_phys_offset
> +
>          tail    start_xen
>  
>          .section .text, "ax", %progbits

Since you call a C function, the code to save/restore a0/a1 needs to
move here (from patch 4).

Jan



* Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
  2023-06-19 13:34 ` [PATCH v2 4/6] xen/riscv: introduce identity mapping Oleksii Kurochko
@ 2023-07-06 11:35   ` Jan Beulich
  2023-07-07 10:37     ` Oleksii
  0 siblings, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2023-07-06 11:35 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 19.06.2023 15:34, Oleksii Kurochko wrote:
> Since it is not easy to keep track of where the identity map was mapped,
> we will look for the top-level entry exclusive to the identity
> map and remove it.

I think you mean "top-most" or some such, as it's not necessarily the
top-level entry you zap.

> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
>  #ifndef __RISCV_CONFIG_H__
>  #define __RISCV_CONFIG_H__
>  

Unrelated change?

> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
>  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
>  #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
>  
> +/*
> + * Should be removed as soon as enough headers will be merged for inclusion of
> + * <xen/lib.h>.
> + */
> +#define ARRAY_SIZE(arr)		(sizeof(arr) / sizeof((arr)[0]))
> +
>  /*
>   * It is expected that Xen won't be more then 2 MB.
>   * The check in xen.lds.S guarantees that.
> @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
>   *
>   * It might be needed one more page table in case when Xen load address
>   * isn't 2 MB aligned.
> + *
> + * (CONFIG_PAGING_LEVELS - 1) page tables are needed for identity mapping.
>   */
> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)

How come the extra page (see the comment sentence in context) isn't
needed for the identity-mapping case?

> @@ -255,25 +262,30 @@ void __init noreturn noinline enable_mmu()
>      csr_write(CSR_SATP,
>                PFN_DOWN((unsigned long)stage1_pgtbl_root) |
>                RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +}
>  
> -    asm volatile ( ".p2align 2" );
> - mmu_is_enabled:
> -    /*
> -     * Stack should be re-inited as:
> -     * 1. Right now an address of the stack is relative to load time
> -     *    addresses what will cause an issue in case of load start address
> -     *    isn't equal to linker start address.
> -     * 2. Addresses in stack are all load time relative which can be an
> -     *    issue in case when load start address isn't equal to linker
> -     *    start address.
> -     *
> -     * We can't return to the caller because the stack was reseted
> -     * and it may have stash some variable on the stack.
> -     * Jump to a brand new function as the stack was reseted
> -     */
> +void __init remove_identity_mapping(void)
> +{
> +    unsigned int i;
> +    pte_t *pgtbl;
> +    unsigned int index, xen_index;
> +    unsigned long load_addr = LINK_TO_LOAD(_start);
>  
> -    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
> -                          cont_after_mmu_is_enabled);
> +    for ( pgtbl = stage1_pgtbl_root, i = 0;
> +          i <= (CONFIG_PAGING_LEVELS - 1);

i < CONFIG_PAGING_LEVELS ? But maybe it would be easier for i to start
at CONFIG_PAGING_LEVELS and be decremented, simplifying ...

> +          i++ )
> +    {
> +        index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, load_addr);
> +        xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, XEN_VIRT_START);

... these two expressions?
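
Something along these lines ( untested, merely to illustrate ):

    unsigned int i = CONFIG_PAGING_LEVELS;
    unsigned int index, xen_index;
    pte_t *pgtbl = stage1_pgtbl_root;
    unsigned long load_addr = LINK_TO_LOAD(_start);

    while ( i-- > 0 )
    {
        index = pt_index(i, load_addr);
        xen_index = pt_index(i, XEN_VIRT_START);

        if ( index != xen_index )
        {
            pgtbl[index].pte = 0;
            break;
        }

        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);
    }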

> +        if ( index != xen_index )
> +        {
> +            pgtbl[index].pte = 0;
> +            break;
> +        }

Is this enough? When load and link address are pretty close (but not
overlapping), can't they share a leaf table, in which case you need
to clear more than just a single entry? The present overlap check
looks to be 4k-granular, not 2M (in which case this couldn't happen;
IOW adjusting the overlap check may also be a way out).
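
E.g. something like the below, assuming XEN_PT_LEVEL_SIZE(1) is the 2Mb
superpage size ( untested ):

    unsigned long mask = XEN_PT_LEVEL_SIZE(1) - 1;

    if ( (linker_start != load_start) &&
         ((linker_start & ~mask) <= (load_end & ~mask)) &&
         ((load_start & ~mask) <= (linker_end & ~mask)) )
    {
        early_printk("(XEN) linker and load address ranges overlap\n");
        die();
    }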

Jan



* Re: [PATCH v2 2/6] xen/riscv: introduce reset_stack() function
  2023-07-06 11:17   ` Jan Beulich
@ 2023-07-07  9:08     ` Oleksii
  2023-07-07  9:33       ` Jan Beulich
  0 siblings, 1 reply; 25+ messages in thread
From: Oleksii @ 2023-07-07  9:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On Thu, 2023-07-06 at 13:17 +0200, Jan Beulich wrote:
> On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -27,8 +27,16 @@ ENTRY(start)
> >          add     t3, t3, __SIZEOF_POINTER__
> >          bltu    t3, t4, .L_clear_bss
> >  
> > +        jal     reset_stack
> > +
> > +        tail    start_xen
> > +
> > +        .section .text, "ax", %progbits
> > +
> > +ENTRY(reset_stack)
> >          la      sp, cpu0_boot_stack
> >          li      t0, STACK_SIZE
> >          add     sp, sp, t0
> >  
> > -        tail    start_xen
> > +        ret
> > +
> 
> Looking at patch 4 you will want to add a comment here to emphasize
> that a0 and a1 have to remain unclobbered.
Thanks for the note. I'll add it in the new patch version

~ Oleksii



* Re: [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-07-06 11:18   ` Jan Beulich
@ 2023-07-07  9:12     ` Oleksii
  2023-07-07  9:17       ` Julien Grall
  2023-07-07  9:35       ` Jan Beulich
  0 siblings, 2 replies; 25+ messages in thread
From: Oleksii @ 2023-07-07  9:12 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On Thu, 2023-07-06 at 13:18 +0200, Jan Beulich wrote:
> On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -29,6 +29,8 @@ ENTRY(start)
> >  
> >          jal     reset_stack
> >  
> > +        jal     calc_phys_offset
> > +
> >          tail    start_xen
> >  
> >          .section .text, "ax", %progbits
> 
> Since you call a C function, the code to save/restore a0/a1 needs to
> move here (from patch 4).
Thanks. It makes sense.
It would be better to move the save/restore of a0/a1 ( from patch 4 )
code here.

The only reason I didn't do that before is that calc_phys_offset
doesn't touch them and it is guaranteed that it will not ( as it
doesn't have arguments ).

~ Oleksii




* Re: [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-07-07  9:12     ` Oleksii
@ 2023-07-07  9:17       ` Julien Grall
  2023-07-10  8:11         ` Oleksii
  2023-07-07  9:35       ` Jan Beulich
  1 sibling, 1 reply; 25+ messages in thread
From: Julien Grall @ 2023-07-07  9:17 UTC (permalink / raw)
  To: Oleksii, Jan Beulich
  Cc: Andrew Cooper, Bob Eshleman, Alistair Francis, Connor Davis, xen-devel



On 07/07/2023 10:12, Oleksii wrote:
> On Thu, 2023-07-06 at 13:18 +0200, Jan Beulich wrote:
>> On 19.06.2023 15:34, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/riscv64/head.S
>>> +++ b/xen/arch/riscv/riscv64/head.S
>>> @@ -29,6 +29,8 @@ ENTRY(start)
>>>   
>>>           jal     reset_stack
>>>   
>>> +        jal     calc_phys_offset
>>> +
>>>           tail    start_xen
>>>   
>>>           .section .text, "ax", %progbits
>>
>> Since you call a C function, the code to save/restore a0/a1 needs to
>> move here (from patch 4).
> Thanks. It makes sense.
> It would be better to move the save/restore of a0/a1 ( from patch 4 )
> code here.
> 
> The only reason I didn't do that before is that calc_phys_offset
> doesn't touch them and it is guaranteed that it will not ( as it
> doesn't have arguments ).

IIUC, the calling convention requires a0/a1 to be caller saved. So even 
if they are not used for arguments, such callee is still free to use 
them for internal purpose.
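
(In the RISC-V integer calling convention only sp and s0-s11 need to be
preserved across a call; a0-a7 and t0-t6 may be freely clobbered even by
a function taking no arguments.)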

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 2/6] xen/riscv: introduce reset_stack() function
  2023-07-07  9:08     ` Oleksii
@ 2023-07-07  9:33       ` Jan Beulich
  2023-07-10  8:09         ` Oleksii
  0 siblings, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2023-07-07  9:33 UTC (permalink / raw)
  To: Oleksii
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 07.07.2023 11:08, Oleksii wrote:
> On Thu, 2023-07-06 at 13:17 +0200, Jan Beulich wrote:
>> On 19.06.2023 15:34, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/riscv64/head.S
>>> +++ b/xen/arch/riscv/riscv64/head.S
>>> @@ -27,8 +27,16 @@ ENTRY(start)
>>>          add     t3, t3, __SIZEOF_POINTER__
>>>          bltu    t3, t4, .L_clear_bss
>>>  
>>> +        jal     reset_stack
>>> +
>>> +        tail    start_xen
>>> +
>>> +        .section .text, "ax", %progbits
>>> +
>>> +ENTRY(reset_stack)
>>>          la      sp, cpu0_boot_stack
>>>          li      t0, STACK_SIZE
>>>          add     sp, sp, t0
>>>  
>>> -        tail    start_xen
>>> +        ret
>>> +
>>
>> Looking at patch 4 you will want to add a comment here to emphasize
>> that a0 and a1 have to remain unclobbered.
> Thanks for the note. I'll add it in the new patch version

Having seen how things end up by the end of the series, there's an
alternative: You could save a0 and a1 ahead of the 1st call to
reset_stack, rather than immediately afterwards.

Jan



* Re: [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-07-07  9:12     ` Oleksii
  2023-07-07  9:17       ` Julien Grall
@ 2023-07-07  9:35       ` Jan Beulich
  2023-07-10  8:15         ` Oleksii
  1 sibling, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2023-07-07  9:35 UTC (permalink / raw)
  To: Oleksii
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 07.07.2023 11:12, Oleksii wrote:
> On Thu, 2023-07-06 at 13:18 +0200, Jan Beulich wrote:
>> On 19.06.2023 15:34, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/riscv64/head.S
>>> +++ b/xen/arch/riscv/riscv64/head.S
>>> @@ -29,6 +29,8 @@ ENTRY(start)
>>>  
>>>          jal     reset_stack
>>>  
>>> +        jal     calc_phys_offset
>>> +
>>>          tail    start_xen
>>>  
>>>          .section .text, "ax", %progbits
>>
>> Since you call a C function, the code to save/restore a0/a1 needs to
>> move here (from patch 4).
> Thanks. It makes sense.
> It would be better to move the save/restore of a0/a1 ( from patch 4 )
> code here.
> 
> The only reason I didn't do that before is that calc_phys_offset
> doesn't touch them and it is guaranteed that it will not ( as it
> doesn't have arguments ).

How does a function not having parameters guarantee that registers
used for parameter passing aren't touched? Inside a function, the
compiler is free to use argument-passing registers just like other
temporary ones; their values don't need preserving, from all I know
(otherwise the RISC-V ABI would be different to all other ABIs I
know of).

Jan



* Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
  2023-07-06 11:35   ` Jan Beulich
@ 2023-07-07 10:37     ` Oleksii
  2023-07-07 10:51       ` Jan Beulich
  0 siblings, 1 reply; 25+ messages in thread
From: Oleksii @ 2023-07-07 10:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On Thu, 2023-07-06 at 13:35 +0200, Jan Beulich wrote:
> On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > Since it is not easy to keep track where the identity map was
> > mapped,
> > so we will look for the top-level entry exclusive to the identity
> > map and remove it.
> 
> I think you mean "top-most" or some such, as it's not necessarily the
> top-level entry you zap.
Yeah, 'top-most' is more appropriate in this context.
> 
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -1,3 +1,5 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +
> >  #ifndef __RISCV_CONFIG_H__
> >  #define __RISCV_CONFIG_H__
> >  
> 
> Unrelated change?
It  should be part of [PATCH v2 5/6] xen/riscv: introduce identity
mapping.

> 
> > --- a/xen/arch/riscv/mm.c
> > +++ b/xen/arch/riscv/mm.c
> > @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
> >  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
> >  #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
> >  
> > +/*
> > + * Should be removed as soon as enough headers will be merged for
> > inclusion of
> > + * <xen/lib.h>.
> > + */
> > +#define ARRAY_SIZE(arr)                (sizeof(arr) /
> > sizeof((arr)[0]))
> > +
> >  /*
> >   * It is expected that Xen won't be more then 2 MB.
> >   * The check in xen.lds.S guarantees that.
> > @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
> >   *
> >   * It might be needed one more page table in case when Xen load
> > address
> >   * isn't 2 MB aligned.
> > + *
> > + * (CONFIG_PAGING_LEVELS - 1) page tables are needed for identity
> > mapping.
> >   */
> > -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> > +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
> 
> How come the extra page (see the comment sentence in context) isn't
> needed for the identity-mapping case?
It is needed to allocate no more than two 'nonroot' page tables ( L0 and
L1 in case of Sv39 ), as the 'root' page table ( L2 in case of Sv39 ) is
always re-used.

The same ( only 'nonroot' page tables might need to be allocated )
holds for any MMU mode.

> 
> > @@ -255,25 +262,30 @@ void __init noreturn noinline enable_mmu()
> >      csr_write(CSR_SATP,
> >                PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> >                RV_STAGE1_MODE << SATP_MODE_SHIFT);
> > +}
> >  
> > -    asm volatile ( ".p2align 2" );
> > - mmu_is_enabled:
> > -    /*
> > -     * Stack should be re-inited as:
> > -     * 1. Right now an address of the stack is relative to load
> > time
> > -     *    addresses what will cause an issue in case of load start
> > address
> > -     *    isn't equal to linker start address.
> > -     * 2. Addresses in stack are all load time relative which can
> > be an
> > -     *    issue in case when load start address isn't equal to
> > linker
> > -     *    start address.
> > -     *
> > -     * We can't return to the caller because the stack was reseted
> > -     * and it may have stash some variable on the stack.
> > -     * Jump to a brand new function as the stack was reseted
> > -     */
> > +void __init remove_identity_mapping(void)
> > +{
> > +    unsigned int i;
> > +    pte_t *pgtbl;
> > +    unsigned int index, xen_index;
> > +    unsigned long load_addr = LINK_TO_LOAD(_start);
> >  
> > -    switch_stack_and_jump((unsigned long)cpu0_boot_stack +
> > STACK_SIZE,
> > -                          cont_after_mmu_is_enabled);
> > +    for ( pgtbl = stage1_pgtbl_root, i = 0;
> > +          i <= (CONFIG_PAGING_LEVELS - 1);
> 
> i < CONFIG_PAGING_LEVELS ? But maybe it would be easier for i to
> start
> at CONFIG_PAGING_LEVELS and be decremented, simplifying ...
> 
> > +          i++ )
> > +    {
> > +        index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, load_addr);
> > +        xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i,
> > XEN_VIRT_START);
> 
> ... these two expressions?
It makes sense. I'll update this part of the code.

> 
> > +        if ( index != xen_index )
> > +        {
> > +            pgtbl[index].pte = 0;
> > +            break;
> > +        }
> 
> Is this enough? When load and link address are pretty close (but not
> overlapping), can't they share a leaf table, in which case you need
> to clear more than just a single entry? The present overlap check
> looks to be 4k-granular, not 2M (in which case this couldn't happen;
> IOW adjusting the overlap check may also be a way out).
At the start of setup_initial_pagetables() there is code which checks
that the load and link address ranges don't overlap:

    if ( (linker_start != load_start) &&
         (linker_start <= load_end) && (load_start <= linker_end) )
    {
        early_printk("(XEN) linker and load address ranges overlap\n");
        die();
    }

So the smallest possible difference between the load and link addresses
is 4kb. Otherwise the load and link address ranges are equal ( as we
can't map less than 4kb ).

Let's take concrete examples:
  Load address range is   0x8020_0000 - 0x8020_0FFF
  Linker address range is 0x8020_1000 - 0x8020_1FFF
  MMU mode: Sv39 ( so we have 3 page tables )

  So we have:
    * L2 index = 2, L1 index = 1, L0 index = 0 for load address
    * L2 index = 2, L1 index = 1, L0 index = 1 for linker address
  Thereby we have two different L0 tables for load and linker address 
ranges.
  And it looks like it is pretty safe to remove only one L0 index that
was used for identity mapping.

Is it possible that I missed something?
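
(For double-checking the index arithmetic above, here is a tiny
standalone sketch assuming the usual Sv39 split of a 12-bit page offset
and 9 index bits per level; vpn() is only an illustration of what
pt_index() computes, not the real helper:)

    #include <stdio.h>

    /* Sv39 split: 12-bit page offset, 9 index bits per level. */
    static unsigned int vpn(unsigned long va, int level)
    {
        return (va >> (12 + 9 * level)) & 0x1ff;
    }

    int main(void)
    {
        unsigned long load = 0x80200000UL, link = 0x80201000UL;
        int l;

        for ( l = 2; l >= 0; l-- )
            printf("L%d: load index %u, linker index %u\n",
                   l, vpn(load, l), vpn(link, l));

        /* Prints 2/2, 1/1 and 0/1 -- only the L0 indexes differ. */
        return 0;
    }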

~ Oleksii


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
  2023-07-07 10:37     ` Oleksii
@ 2023-07-07 10:51       ` Jan Beulich
  2023-07-10  8:53         ` Oleksii
  0 siblings, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2023-07-07 10:51 UTC (permalink / raw)
  To: Oleksii
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 07.07.2023 12:37, Oleksii wrote:
> On Thu, 2023-07-06 at 13:35 +0200, Jan Beulich wrote:
>> On 19.06.2023 15:34, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/include/asm/config.h
>>> +++ b/xen/arch/riscv/include/asm/config.h
>>> @@ -1,3 +1,5 @@
>>> +/* SPDX-License-Identifier: GPL-2.0-only */
>>> +
>>>  #ifndef __RISCV_CONFIG_H__
>>>  #define __RISCV_CONFIG_H__
>>>  
>>
>> Unrelated change?
> It  should be part of [PATCH v2 5/6] xen/riscv: introduce identity
> mapping.

Hmm, here we're discussing "[PATCH v2 4/6] xen/riscv: introduce identity
mapping". I'm confused, I guess.

>>> --- a/xen/arch/riscv/mm.c
>>> +++ b/xen/arch/riscv/mm.c
>>> @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
>>>  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
>>>  #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
>>>  
>>> +/*
>>> + * Should be removed as soon as enough headers will be merged for
>>> inclusion of
>>> + * <xen/lib.h>.
>>> + */
>>> +#define ARRAY_SIZE(arr)                (sizeof(arr) /
>>> sizeof((arr)[0]))
>>> +
>>>  /*
>>>   * It is expected that Xen won't be more then 2 MB.
>>>   * The check in xen.lds.S guarantees that.
>>> @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
>>>   *
>>>   * It might be needed one more page table in case when Xen load
>>> address
>>>   * isn't 2 MB aligned.
>>> + *
>>> + * (CONFIG_PAGING_LEVELS - 1) page tables are needed for identity
>>> mapping.
>>>   */
>>> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
>>> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
>>
>> How come the extra page (see the comment sentence in context) isn't
>> needed for the identity-mapping case?
> It is needed to allocate no more than two 'nonroot' page tables (L0 and
> L1 in case of Sv39 ) as page 'root' table ( L2 in case of Sv39 ) is
> always re-used.
> 
> The same ( only 'nonroot' page tables might be needed to allocate )
> works for any MMU mode.

Of course, but if you cross a 2Mb boundary you'll need 2 L0 tables.

>>> @@ -255,25 +262,30 @@ void __init noreturn noinline enable_mmu()
>>>      csr_write(CSR_SATP,
>>>                PFN_DOWN((unsigned long)stage1_pgtbl_root) |
>>>                RV_STAGE1_MODE << SATP_MODE_SHIFT);
>>> +}
>>>  
>>> -    asm volatile ( ".p2align 2" );
>>> - mmu_is_enabled:
>>> -    /*
>>> -     * Stack should be re-inited as:
>>> -     * 1. Right now an address of the stack is relative to load
>>> time
>>> -     *    addresses what will cause an issue in case of load start
>>> address
>>> -     *    isn't equal to linker start address.
>>> -     * 2. Addresses in stack are all load time relative which can
>>> be an
>>> -     *    issue in case when load start address isn't equal to
>>> linker
>>> -     *    start address.
>>> -     *
>>> -     * We can't return to the caller because the stack was reseted
>>> -     * and it may have stash some variable on the stack.
>>> -     * Jump to a brand new function as the stack was reseted
>>> -     */
>>> +void __init remove_identity_mapping(void)
>>> +{
>>> +    unsigned int i;
>>> +    pte_t *pgtbl;
>>> +    unsigned int index, xen_index;
>>> +    unsigned long load_addr = LINK_TO_LOAD(_start);
>>>  
>>> -    switch_stack_and_jump((unsigned long)cpu0_boot_stack +
>>> STACK_SIZE,
>>> -                          cont_after_mmu_is_enabled);
>>> +    for ( pgtbl = stage1_pgtbl_root, i = 0;
>>> +          i <= (CONFIG_PAGING_LEVELS - 1);
>>
>> i < CONFIG_PAGING_LEVELS ? But maybe it would be easier for i to
>> start
>> at CONFIG_PAGING_LEVELS and be decremented, simplifying ...
>>
>>> +          i++ )
>>> +    {
>>> +        index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, load_addr);
>>> +        xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i,
>>> XEN_VIRT_START);
>>
>> ... these two expressions?
> It makes sense. I'll update this part of the code.
> 
>>
>>> +        if ( index != xen_index )
>>> +        {
>>> +            pgtbl[index].pte = 0;
>>> +            break;
>>> +        }
>>
>> Is this enough? When load and link address are pretty close (but not
>> overlapping), can't they share a leaf table, in which case you need
>> to clear more than just a single entry? The present overlap check
>> looks to be 4k-granular, not 2M (in which case this couldn't happen;
>> IOW adjusting the overlap check may also be a way out).
> At the start of setup_initial_pagetables() there is a code which checks
> that load and link address don't overlap:
> 
>     if ( (linker_start != load_start) &&
>          (linker_start <= load_end) && (load_start <= linker_end) )
>     {
>         early_printk("(XEN) linker and load address ranges overlap\n");
>         die();
>     }
> 
> So the closest difference between load and link address can be 4kb.
> Otherwise load and link address ranges are equal ( as we can't map less
> then 4kb).
> 
> Let's take concrete examples:
>   Load address range is   0x8020_0000 - 0x8020_0FFF
>   Linker address range is 0x8020_1000 - 0x8020_1FFF
>   MMU mode: Sv39 ( so we have 3 page tables )
> 
>   So we have:
>     * L2 index = 2, L1 index = 1, L0 index = 0 for load address
>     * L2 index = 2, L1 index = 1, L0 index = 1 for linker address
>   Thereby we have two different L0 tables for load and linker address 
> ranges.
>   And it looks like it is pretty safe to remove only one L0 index that
> was used for identity mapping.
> 
> Is it possible that I missed something?

Looks as if you are thinking of only a Xen which fits in 4k. The code
here wants to cope with Xen getting quite a bit bigger.

Jan


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 2/6] xen/riscv: introduce reset_stack() function
  2023-07-07  9:33       ` Jan Beulich
@ 2023-07-10  8:09         ` Oleksii
  0 siblings, 0 replies; 25+ messages in thread
From: Oleksii @ 2023-07-10  8:09 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On Fri, 2023-07-07 at 11:33 +0200, Jan Beulich wrote:
> On 07.07.2023 11:08, Oleksii wrote:
> > On Thu, 2023-07-06 at 13:17 +0200, Jan Beulich wrote:
> > > On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > > > --- a/xen/arch/riscv/riscv64/head.S
> > > > +++ b/xen/arch/riscv/riscv64/head.S
> > > > @@ -27,8 +27,16 @@ ENTRY(start)
> > > >          add     t3, t3, __SIZEOF_POINTER__
> > > >          bltu    t3, t4, .L_clear_bss
> > > >  
> > > > +        jal     reset_stack
> > > > +
> > > > +        tail    start_xen
> > > > +
> > > > +        .section .text, "ax", %progbits
> > > > +
> > > > +ENTRY(reset_stack)
> > > >          la      sp, cpu0_boot_stack
> > > >          li      t0, STACK_SIZE
> > > >          add     sp, sp, t0
> > > >  
> > > > -        tail    start_xen
> > > > +        ret
> > > > +
> > > 
> > > Looking at patch 4 you will want to add a comment here to
> > > emphasize
> > > that a0 and a1 have to remain unclobbered.
> > Thanks for a note. I'll add it in the new patch version
> 
> Having seen how things end up by the end of the series, there's an
> alternative: You could save a0 and a1 ahead of the 1st call to
> reset_stack, rather than immediately afterwards.
It makes sense. So let's stick to saving a0 and a1 before the first
call to reset_stack().

~ Oleksii


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-07-07  9:17       ` Julien Grall
@ 2023-07-10  8:11         ` Oleksii
  0 siblings, 0 replies; 25+ messages in thread
From: Oleksii @ 2023-07-10  8:11 UTC (permalink / raw)
  To: Julien Grall, Jan Beulich
  Cc: Andrew Cooper, Bob Eshleman, Alistair Francis, Connor Davis, xen-devel

On Fri, 2023-07-07 at 10:17 +0100, Julien Grall wrote:
> 
> 
> On 07/07/2023 10:12, Oleksii wrote:
> > On Thu, 2023-07-06 at 13:18 +0200, Jan Beulich wrote:
> > > On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > > > --- a/xen/arch/riscv/riscv64/head.S
> > > > +++ b/xen/arch/riscv/riscv64/head.S
> > > > @@ -29,6 +29,8 @@ ENTRY(start)
> > > >   
> > > >           jal     reset_stack
> > > >   
> > > > +        jal     calc_phys_offset
> > > > +
> > > >           tail    start_xen
> > > >   
> > > >           .section .text, "ax", %progbits
> > > 
> > > Since you call a C function, the code to save/restore a0/a1 needs
> > > to
> > > move here (from patch 4).
> > Thanks. It makes sense.
> > It would be better to move save/restore a0/a1 ( from patch 4 )code
> > here.
> > 
> > The only one reason I didn't do that before that calc_phys_offset
> > doesn't touch that and it is guaranteed that it will not ( as it
> > doesn't have arguments )
> 
> IIUC, the calling convention requires a0/a1 to be caller saved. So
> even 
> if they are not used for arguments, such callee is still free to use 
> them for internal purpose.
You are right.

I haven't seen the compiler use them when a function is declared with a
'void' parameter list, but I agree that we have to follow the calling
convention to be sure everything is fine.

~ Oleksii


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation
  2023-07-07  9:35       ` Jan Beulich
@ 2023-07-10  8:15         ` Oleksii
  0 siblings, 0 replies; 25+ messages in thread
From: Oleksii @ 2023-07-10  8:15 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On Fri, 2023-07-07 at 11:35 +0200, Jan Beulich wrote:
> On 07.07.2023 11:12, Oleksii wrote:
> > On Thu, 2023-07-06 at 13:18 +0200, Jan Beulich wrote:
> > > On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > > > --- a/xen/arch/riscv/riscv64/head.S
> > > > +++ b/xen/arch/riscv/riscv64/head.S
> > > > @@ -29,6 +29,8 @@ ENTRY(start)
> > > >  
> > > >          jal     reset_stack
> > > >  
> > > > +        jal     calc_phys_offset
> > > > +
> > > >          tail    start_xen
> > > >  
> > > >          .section .text, "ax", %progbits
> > > 
> > > Since you call a C function, the code to save/restore a0/a1 needs
> > > to
> > > move here (from patch 4).
> > Thanks. It makes sense.
> > It would be better to move save/restore a0/a1 ( from patch 4 )code
> > here.
> > 
> > The only one reason I didn't do that before that calc_phys_offset
> > doesn't touch that and it is guaranteed that it will not ( as it
> > doesn't have arguments )
> 
> How does a function not having parameters guarantee that registers
> used for parameter passing aren't touched? Inside a function, the
> compiler is free to use argument-passing registers just like other
> temporary ones; their values don't need preserving, from all I know
> (otherwise the RISC-V ABI would be different to all other ABIs I
> know of).
Well, you are right that it isn't guaranteed, and the calling
convention says that argument registers should be saved/restored
around a function call.

But I haven't yet seen the compiler touch argument registers when a
function is declared with a 'void' parameter list, so 'guarantee'
wasn't the right word.

Thanks for the note.

~ Oleksii



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
  2023-07-07 10:51       ` Jan Beulich
@ 2023-07-10  8:53         ` Oleksii
  2023-07-10  9:34           ` Jan Beulich
  0 siblings, 1 reply; 25+ messages in thread
From: Oleksii @ 2023-07-10  8:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On Fri, 2023-07-07 at 12:51 +0200, Jan Beulich wrote:
> On 07.07.2023 12:37, Oleksii wrote:
> > On Thu, 2023-07-06 at 13:35 +0200, Jan Beulich wrote:
> > > On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > > > --- a/xen/arch/riscv/include/asm/config.h
> > > > +++ b/xen/arch/riscv/include/asm/config.h
> > > > @@ -1,3 +1,5 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0-only */
> > > > +
> > > >  #ifndef __RISCV_CONFIG_H__
> > > >  #define __RISCV_CONFIG_H__
> > > >  
> > > 
> > > Unrelated change?
> > It  should be part of [PATCH v2 5/6] xen/riscv: introduce identity
> > mapping.
> 
> Hmm, here we're discussing "[PATCH v2 4/6] xen/riscv: introduce
> identity
> mapping". I'm confused, I guess.
Sorry for the confusion. I meant the patch "[PATCH v2 5/6] xen/riscv:
add SPDX tags".

> 
> > > > --- a/xen/arch/riscv/mm.c
> > > > +++ b/xen/arch/riscv/mm.c
> > > > @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
> > > >  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) -
> > > > phys_offset)
> > > >  #define LINK_TO_LOAD(addr) ((unsigned long)(addr) +
> > > > phys_offset)
> > > >  
> > > > +/*
> > > > + * Should be removed as soon as enough headers will be merged
> > > > for
> > > > inclusion of
> > > > + * <xen/lib.h>.
> > > > + */
> > > > +#define ARRAY_SIZE(arr)                (sizeof(arr) /
> > > > sizeof((arr)[0]))
> > > > +
> > > >  /*
> > > >   * It is expected that Xen won't be more then 2 MB.
> > > >   * The check in xen.lds.S guarantees that.
> > > > @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
> > > >   *
> > > >   * It might be needed one more page table in case when Xen
> > > > load
> > > > address
> > > >   * isn't 2 MB aligned.
> > > > + *
> > > > + * (CONFIG_PAGING_LEVELS - 1) page tables are needed for
> > > > identity
> > > > mapping.
> > > >   */
> > > > -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> > > > +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 +
> > > > 1)
> > > 
> > > How come the extra page (see the comment sentence in context)
> > > isn't
> > > needed for the identity-mapping case?
> > It is needed to allocate no more than two 'nonroot' page tables (L0
> > and
> > L1 in case of Sv39 ) as page 'root' table ( L2 in case of Sv39 ) is
> > always re-used.
> > 
> > The same ( only 'nonroot' page tables might be needed to allocate )
> > works for any MMU mode.
> 
> Of course, but if you cross a 2Mb boundary you'll need 2 L0 tables.
Yes, crossing a 2 MB boundary will require 2 L0 tables.

So the number of required page tables depends on the Xen size and the
load address alignment, because each 2 MB of the image needs its own L0
table.

Sure, this is not needed now (xen.lds.S has a check of the Xen size),
but if someone grows the Xen binary to 4 MB, the number of page tables
will have to be updated too.
Should we take that into account?
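
To make the dependency concrete (my own sketch, not something from the
patch): with 2 MB covered per L0 table and a load address that is only
guaranteed to be 4 KB aligned, a safe upper bound would be:

    /*
     * Illustration only.  Each L0 table covers 2 MB of VA space; a
     * 4 KB-aligned (but not necessarily 2 MB-aligned) image can
     * straddle one extra 2 MB boundary, hence the "+ 1".
     */
    #define L0_TABLE_COVERAGE   (2UL << 20)
    #define L0_TABLES(size)     (((size) + L0_TABLE_COVERAGE - 1) / \
                                 L0_TABLE_COVERAGE + 1)

    /* L0_TABLES(2 MB) == 2, L0_TABLES(4 MB) == 3 */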

> 
> > > > @@ -255,25 +262,30 @@ void __init noreturn noinline
> > > > enable_mmu()
> > > >      csr_write(CSR_SATP,
> > > >                PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> > > >                RV_STAGE1_MODE << SATP_MODE_SHIFT);
> > > > +}
> > > >  
> > > > -    asm volatile ( ".p2align 2" );
> > > > - mmu_is_enabled:
> > > > -    /*
> > > > -     * Stack should be re-inited as:
> > > > -     * 1. Right now an address of the stack is relative to
> > > > load
> > > > time
> > > > -     *    addresses what will cause an issue in case of load
> > > > start
> > > > address
> > > > -     *    isn't equal to linker start address.
> > > > -     * 2. Addresses in stack are all load time relative which
> > > > can
> > > > be an
> > > > -     *    issue in case when load start address isn't equal to
> > > > linker
> > > > -     *    start address.
> > > > -     *
> > > > -     * We can't return to the caller because the stack was
> > > > reseted
> > > > -     * and it may have stash some variable on the stack.
> > > > -     * Jump to a brand new function as the stack was reseted
> > > > -     */
> > > > +void __init remove_identity_mapping(void)
> > > > +{
> > > > +    unsigned int i;
> > > > +    pte_t *pgtbl;
> > > > +    unsigned int index, xen_index;
> > > > +    unsigned long load_addr = LINK_TO_LOAD(_start);
> > > >  
> > > > -    switch_stack_and_jump((unsigned long)cpu0_boot_stack +
> > > > STACK_SIZE,
> > > > -                          cont_after_mmu_is_enabled);
> > > > +    for ( pgtbl = stage1_pgtbl_root, i = 0;
> > > > +          i <= (CONFIG_PAGING_LEVELS - 1);
> > > 
> > > i < CONFIG_PAGING_LEVELS ? But maybe it would be easier for i to
> > > start
> > > at CONFIG_PAGING_LEVELS and be decremented, simplifying ...
> > > 
> > > > +          i++ )
> > > > +    {
> > > > +        index = pt_index(CONFIG_PAGING_LEVELS - 1 - i,
> > > > load_addr);
> > > > +        xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i,
> > > > XEN_VIRT_START);
> > > 
> > > ... these two expressions?
> > It makes sense. I'll update this part of the code.
> > 
> > > 
> > > > +        if ( index != xen_index )
> > > > +        {
> > > > +            pgtbl[index].pte = 0;
> > > > +            break;
> > > > +        }
> > > 
> > > Is this enough? When load and link address are pretty close (but
> > > not
> > > overlapping), can't they share a leaf table, in which case you
> > > need
> > > to clear more than just a single entry? The present overlap check
> > > looks to be 4k-granular, not 2M (in which case this couldn't
> > > happen;
> > > IOW adjusting the overlap check may also be a way out).
> > At the start of setup_initial_pagetables() there is a code which
> > checks
> > that load and link address don't overlap:
> > 
> >     if ( (linker_start != load_start) &&
> >          (linker_start <= load_end) && (load_start <= linker_end) )
> >     {
> >         early_printk("(XEN) linker and load address ranges
> > overlap\n");
> >         die();
> >     }
> > 
> > So the closest difference between load and link address can be 4kb.
> > Otherwise load and link address ranges are equal ( as we can't map
> > less
> > then 4kb).
> > 
> > Let's take concrete examples:
> >   Load address range is   0x8020_0000 - 0x8020_0FFF
> >   Linker address range is 0x8020_1000 - 0x8020_1FFF
> >   MMU mode: Sv39 ( so we have 3 page tables )
> > 
> >   So we have:
> >     * L2 index = 2, L1 index = 1, L0 index = 0 for load address
> >     * L2 index = 2, L1 index = 1, L0 index = 1 for linker address
> >   Thereby we have two different L0 tables for load and linker
> > address 
> > ranges.
> >   And it looks like it is pretty safe to remove only one L0 index
> > that
> > was used for identity mapping.
> > 
> > Is it possible that I missed something?
> 
> Looks as if you are thinking of only a Xen which fits in 4k. The code
> here wants to cope with Xen getting quite a bit bigger.

Yeah, I missed that when I tried to come up with an example.

So it will probably be more robust to go through the whole identity
mapping recursively and unmap each PTE individually.
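
One possible shape of that (a rough sketch only, assuming the boot-time
1:1 mapping consists purely of 4 KB leaf mappings; pte_to_table() is a
made-up placeholder for reading the next-level table out of an entry):

    static void __init unmap_identity_range(unsigned long load_start,
                                            unsigned long load_end)
    {
        unsigned long va;

        for ( va = load_start; va < load_end; va += PAGE_SIZE )
        {
            pte_t *pgtbl = stage1_pgtbl_root;
            unsigned int level;

            /* Walk from the root down to the 4 KB leaf for this VA. */
            for ( level = CONFIG_PAGING_LEVELS - 1; level > 0; level-- )
                pgtbl = pte_to_table(pgtbl[pt_index(level, va)]);

            /* Drop the leaf entry of the 1:1 mapping. */
            pgtbl[pt_index(0, va)].pte = 0;
        }
    }

This would not free intermediate tables that serve only the 1:1
mapping; whether that matters is a separate question.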


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
  2023-07-10  8:53         ` Oleksii
@ 2023-07-10  9:34           ` Jan Beulich
  0 siblings, 0 replies; 25+ messages in thread
From: Jan Beulich @ 2023-07-10  9:34 UTC (permalink / raw)
  To: Oleksii
  Cc: Andrew Cooper, Julien Grall, Bob Eshleman, Alistair Francis,
	Connor Davis, xen-devel

On 10.07.2023 10:53, Oleksii wrote:
> On Fri, 2023-07-07 at 12:51 +0200, Jan Beulich wrote:
>> On 07.07.2023 12:37, Oleksii wrote:
>>> On Thu, 2023-07-06 at 13:35 +0200, Jan Beulich wrote:
>>>> On 19.06.2023 15:34, Oleksii Kurochko wrote:
>>>>> --- a/xen/arch/riscv/mm.c
>>>>> +++ b/xen/arch/riscv/mm.c
>>>>> @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
>>>>>  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) -
>>>>> phys_offset)
>>>>>  #define LINK_TO_LOAD(addr) ((unsigned long)(addr) +
>>>>> phys_offset)
>>>>>  
>>>>> +/*
>>>>> + * Should be removed as soon as enough headers will be merged
>>>>> for
>>>>> inclusion of
>>>>> + * <xen/lib.h>.
>>>>> + */
>>>>> +#define ARRAY_SIZE(arr)                (sizeof(arr) /
>>>>> sizeof((arr)[0]))
>>>>> +
>>>>>  /*
>>>>>   * It is expected that Xen won't be more then 2 MB.
>>>>>   * The check in xen.lds.S guarantees that.
>>>>> @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
>>>>>   *
>>>>>   * It might be needed one more page table in case when Xen
>>>>> load
>>>>> address
>>>>>   * isn't 2 MB aligned.
>>>>> + *
>>>>> + * (CONFIG_PAGING_LEVELS - 1) page tables are needed for
>>>>> identity
>>>>> mapping.
>>>>>   */
>>>>> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
>>>>> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 +
>>>>> 1)
>>>>
>>>> How come the extra page (see the comment sentence in context)
>>>> isn't
>>>> needed for the identity-mapping case?
>>> It is needed to allocate no more than two 'nonroot' page tables (L0
>>> and
>>> L1 in case of Sv39 ) as page 'root' table ( L2 in case of Sv39 ) is
>>> always re-used.
>>>
>>> The same ( only 'nonroot' page tables might be needed to allocate )
>>> works for any MMU mode.
>>
>> Of course, but if you cross a 2Mb boundary you'll need 2 L0 tables.
> Yes, crossing a 2 MB boundary will require 2 L0 tables.
> 
> So the number of required page tables depends on the Xen size and the
> load address alignment, because each 2 MB of the image needs its own
> L0 table.
> 
> Sure, this is not needed now (xen.lds.S has a check of the Xen size),
> but if someone grows the Xen binary to 4 MB, the number of page tables
> will have to be updated too.
> Should we take that into account?

Perhaps by way of a BUILD_BUG_ON(), yes. We want to avoid setting up
a (runtime) trap for someone to fall into.
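
For illustration only (not a concrete proposal), such a check might
look roughly like the following, somewhere in
setup_initial_pagetables(); XEN_IMG_SIZE is a placeholder for whatever
constant ends up describing the mapped image size, and MB() is the
usual helper:

    /* Sketch: turn the "Xen fits in 2 MB" assumption behind
     * PGTBL_INITIAL_COUNT into a build failure, not a runtime one. */
    BUILD_BUG_ON(XEN_IMG_SIZE > MB(2));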

Jan


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2023-07-10  9:34 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-19 13:34 [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii Kurochko
2023-06-19 13:34 ` [PATCH v2 1/6] xen/riscv: add .sbss section to .bss Oleksii Kurochko
2023-06-19 13:34 ` [PATCH v2 2/6] xen/riscv: introduce reset_stack() function Oleksii Kurochko
2023-07-06 11:17   ` Jan Beulich
2023-07-07  9:08     ` Oleksii
2023-07-07  9:33       ` Jan Beulich
2023-07-10  8:09         ` Oleksii
2023-06-19 13:34 ` [PATCH v2 3/6] xen/riscv: introduce function for physical offset calculation Oleksii Kurochko
2023-07-06 11:18   ` Jan Beulich
2023-07-07  9:12     ` Oleksii
2023-07-07  9:17       ` Julien Grall
2023-07-10  8:11         ` Oleksii
2023-07-07  9:35       ` Jan Beulich
2023-07-10  8:15         ` Oleksii
2023-06-19 13:34 ` [PATCH v2 4/6] xen/riscv: introduce identity mapping Oleksii Kurochko
2023-07-06 11:35   ` Jan Beulich
2023-07-07 10:37     ` Oleksii
2023-07-07 10:51       ` Jan Beulich
2023-07-10  8:53         ` Oleksii
2023-07-10  9:34           ` Jan Beulich
2023-06-19 13:34 ` [PATCH v2 5/6] xen/riscv: add SPDX tags Oleksii Kurochko
2023-06-22  2:13   ` Alistair Francis
2023-06-19 13:34 ` [PATCH v2 6/6] xen/riscv: move extern of cpu0_boot_stack to header Oleksii Kurochko
2023-06-22  2:13   ` Alistair Francis
2023-07-03 11:49 ` [PATCH v2 0/6] xen/riscv: introduce identity mapping Oleksii
