* [PATCH] arm64: relax Image placement requirement
@ 2015-03-20 17:02 Ard Biesheuvel
2015-03-20 17:49 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
0 siblings, 2 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-20 17:02 UTC (permalink / raw)
To: linux-arm-kernel
Hi all,
This is another possible approach to fixing the current, flawed
EFI Image placement logic: instead of fixing that logic and the
documentation, why not relax the requirement that the kernel
Image not be placed across a 512 MB boundary?
---------------->8-----------------
This patch changes the early page table code so that two adjacent
entries are used to map two pages worth of block entries at the
lowest level, both for idmap_pg_dir and swapper_pg_dir.
The purpose is to allow the kernel Image to cross a 512 MB or 1 GB
alignment boundary (depending on page size), which is something that
is not specifically banned by the current wording of the boot protocol.
(The boot protocol stipulates that the kernel must be placed within
512 MB of the beginning of RAM. However, the beginning of RAM is not
necessarily aligned to 512 MB)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/include/asm/page.h | 4 ++--
arch/arm64/kernel/head.S | 17 +++++++++++++----
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 3d02b1869eb8..d2189c359364 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -43,8 +43,8 @@
#define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS - 1)
#endif
-#define SWAPPER_DIR_SIZE (SWAPPER_PGTABLE_LEVELS * PAGE_SIZE)
-#define IDMAP_DIR_SIZE (3 * PAGE_SIZE)
+#define SWAPPER_DIR_SIZE ((SWAPPER_PGTABLE_LEVELS + 1) * PAGE_SIZE)
+#define IDMAP_DIR_SIZE (4 * PAGE_SIZE)
#ifndef __ASSEMBLY__
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0dbdb4f3634f..9c3f95f30421 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -274,18 +274,21 @@ ENDPROC(preserve_boot_args)
* virt: virtual address
* shift: #imm page table shift
* ptrs: #imm pointers per table page
+ * offset: #imm offset into the lowest translation level, in pages
*
* Preserves: virt
* Corrupts: tmp1, tmp2
* Returns: tbl -> next level table page address
*/
- .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
+ .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2, offset=0
lsr \tmp1, \virt, #\shift
+ .if \offset
+ add \tmp1, \tmp1, #\offset
+ .endif
and \tmp1, \tmp1, #\ptrs - 1 // table index
- add \tmp2, \tbl, #PAGE_SIZE
+ add \tmp2, \tbl, #(\offset + 1) * PAGE_SIZE
orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type
str \tmp2, [\tbl, \tmp1, lsl #3]
- add \tbl, \tbl, #PAGE_SIZE // next level table page
.endm
/*
@@ -297,9 +300,14 @@ ENDPROC(preserve_boot_args)
*/
.macro create_pgd_entry, tbl, virt, tmp1, tmp2
create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2
-#if SWAPPER_PGTABLE_LEVELS == 3
+#if SWAPPER_PGTABLE_LEVELS != 3
+ create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2, 1
+#else
+ add \tbl, \tbl, #PAGE_SIZE // next level table page
create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2
+ create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2, 1
#endif
+ add \tbl, \tbl, #PAGE_SIZE // next level table page
.endm
/*
@@ -396,6 +404,7 @@ __create_page_tables:
str_l x5, idmap_t0sz, x6
create_table_entry x0, x3, EXTRA_SHIFT, EXTRA_PTRS, x5, x6
+ add x0, x0, #PAGE_SIZE // next level table page
1:
#endif
--
1.8.3.2
* [PATCH] arm64: relax Image placement requirement
2015-03-20 17:02 [PATCH] arm64: relax Image placement requirement Ard Biesheuvel
@ 2015-03-20 17:49 ` Ard Biesheuvel
2015-03-22 15:36 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
1 sibling, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-20 17:49 UTC (permalink / raw)
To: linux-arm-kernel
On 20 March 2015 at 18:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Hi all,
>
> This is another possible approach to fixing the current, flawed
> EFI Image placement logic: instead of fixing that logic and the
> documentation, why not relax the requirement that the kernel
> Image not be placed across a 512 MB boundary?
>
Hmm, it seems this still needs a pinch of refinement, but we could at
least discuss the general idea.
--
Ard.
> ---------------->8-----------------
>
> This patch changes the early page table code so that two adjacent
> entries are used to map two pages worth of block entries at the
> lowest level, both for idmap_pg_dir and swapper_pg_dir.
>
> The purpose is to allow the kernel Image to cross a 512 MB or 1 GB
> alignment boundary (depending on page size), which is something that
> is not specifically banned by the current wording of the boot protocol.
> (The boot protocol stipulates that the kernel must be placed within
> 512 MB of the beginning of RAM. However, the beginning of RAM is not
> necessarily aligned to 512 MB)
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> arch/arm64/include/asm/page.h | 4 ++--
> arch/arm64/kernel/head.S | 17 +++++++++++++----
> 2 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 3d02b1869eb8..d2189c359364 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -43,8 +43,8 @@
> #define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS - 1)
> #endif
>
> -#define SWAPPER_DIR_SIZE (SWAPPER_PGTABLE_LEVELS * PAGE_SIZE)
> -#define IDMAP_DIR_SIZE (3 * PAGE_SIZE)
> +#define SWAPPER_DIR_SIZE ((SWAPPER_PGTABLE_LEVELS + 1) * PAGE_SIZE)
> +#define IDMAP_DIR_SIZE (4 * PAGE_SIZE)
>
> #ifndef __ASSEMBLY__
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 0dbdb4f3634f..9c3f95f30421 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -274,18 +274,21 @@ ENDPROC(preserve_boot_args)
> * virt: virtual address
> * shift: #imm page table shift
> * ptrs: #imm pointers per table page
> + * offset: #imm offset into the lowest translation level, in pages
> *
> * Preserves: virt
> * Corrupts: tmp1, tmp2
> * Returns: tbl -> next level table page address
> */
> - .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
> + .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2, offset=0
> lsr \tmp1, \virt, #\shift
> + .if \offset
> + add \tmp1, \tmp1, #\offset
> + .endif
> and \tmp1, \tmp1, #\ptrs - 1 // table index
> - add \tmp2, \tbl, #PAGE_SIZE
> + add \tmp2, \tbl, #(\offset + 1) * PAGE_SIZE
> orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type
> str \tmp2, [\tbl, \tmp1, lsl #3]
> - add \tbl, \tbl, #PAGE_SIZE // next level table page
> .endm
>
> /*
> @@ -297,9 +300,14 @@ ENDPROC(preserve_boot_args)
> */
> .macro create_pgd_entry, tbl, virt, tmp1, tmp2
> create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2
> -#if SWAPPER_PGTABLE_LEVELS == 3
> +#if SWAPPER_PGTABLE_LEVELS != 3
> + create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2, 1
> +#else
> + add \tbl, \tbl, #PAGE_SIZE // next level table page
> create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2
> + create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2, 1
> #endif
> + add \tbl, \tbl, #PAGE_SIZE // next level table page
> .endm
>
> /*
> @@ -396,6 +404,7 @@ __create_page_tables:
> str_l x5, idmap_t0sz, x6
>
> create_table_entry x0, x3, EXTRA_SHIFT, EXTRA_PTRS, x5, x6
> + add x0, x0, #PAGE_SIZE // next level table page
> 1:
> #endif
>
> --
> 1.8.3.2
>
* [PATCH] arm64: relax Image placement requirement
2015-03-20 17:49 ` Ard Biesheuvel
@ 2015-03-22 15:36 ` Ard Biesheuvel
0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-22 15:36 UTC (permalink / raw)
To: linux-arm-kernel
On 20 March 2015 at 18:49, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 20 March 2015 at 18:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> Hi all,
>>
>> This is another possible approach to fixing the current, flawed
>> EFI Image placement logic: instead of fixing that logic and the
>> documentation, why not relax the requirement that the kernel
>> Image not be placed across a 512 MB boundary?
>>
>
> Hmm, it seems this still needs a pinch of refinement, but we could at
> least discuss the general idea.
>
Replying to self again:
it appears that, in order to relax the restriction that no 512 MB
alignment boundary may be crossed, it would be sufficient to shrink
the ID mapping to a single page, containing only the couple of
instructions we execute between enabling the MMU and jumping into the
virtually remapped kernel text. Since mapping a single page can never
cross such a boundary, no extended translation tables are needed.
--
Ard.
>> ---------------->8-----------------
>>
>> This patch changes the early page table code so that two adjacent
>> entries are used to map two pages worth of block entries at the
>> lowest level, both for idmap_pg_dir and swapper_pg_dir.
>>
>> The purpose is to allow the kernel Image to cross a 512 MB or 1 GB
>> alignment boundary (depending on page size), which is something that
>> is not specifically banned by the current wording of the boot protocol.
>> (The boot protocol stipulates that the kernel must be placed within
>> 512 MB of the beginning of RAM. However, the beginning of RAM is not
>> necessarily aligned to 512 MB)
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>> arch/arm64/include/asm/page.h | 4 ++--
>> arch/arm64/kernel/head.S | 17 +++++++++++++----
>> 2 files changed, 15 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
>> index 3d02b1869eb8..d2189c359364 100644
>> --- a/arch/arm64/include/asm/page.h
>> +++ b/arch/arm64/include/asm/page.h
>> @@ -43,8 +43,8 @@
>> #define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS - 1)
>> #endif
>>
>> -#define SWAPPER_DIR_SIZE (SWAPPER_PGTABLE_LEVELS * PAGE_SIZE)
>> -#define IDMAP_DIR_SIZE (3 * PAGE_SIZE)
>> +#define SWAPPER_DIR_SIZE ((SWAPPER_PGTABLE_LEVELS + 1) * PAGE_SIZE)
>> +#define IDMAP_DIR_SIZE (4 * PAGE_SIZE)
>>
>> #ifndef __ASSEMBLY__
>>
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index 0dbdb4f3634f..9c3f95f30421 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S
>> @@ -274,18 +274,21 @@ ENDPROC(preserve_boot_args)
>> * virt: virtual address
>> * shift: #imm page table shift
>> * ptrs: #imm pointers per table page
>> + * offset: #imm offset into the lowest translation level, in pages
>> *
>> * Preserves: virt
>> * Corrupts: tmp1, tmp2
>> * Returns: tbl -> next level table page address
>> */
>> - .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
>> + .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2, offset=0
>> lsr \tmp1, \virt, #\shift
>> + .if \offset
>> + add \tmp1, \tmp1, #\offset
>> + .endif
>> and \tmp1, \tmp1, #\ptrs - 1 // table index
>> - add \tmp2, \tbl, #PAGE_SIZE
>> + add \tmp2, \tbl, #(\offset + 1) * PAGE_SIZE
>> orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type
>> str \tmp2, [\tbl, \tmp1, lsl #3]
>> - add \tbl, \tbl, #PAGE_SIZE // next level table page
>> .endm
>>
>> /*
>> @@ -297,9 +300,14 @@ ENDPROC(preserve_boot_args)
>> */
>> .macro create_pgd_entry, tbl, virt, tmp1, tmp2
>> create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2
>> -#if SWAPPER_PGTABLE_LEVELS == 3
>> +#if SWAPPER_PGTABLE_LEVELS != 3
>> + create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2, 1
>> +#else
>> + add \tbl, \tbl, #PAGE_SIZE // next level table page
>> create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2
>> + create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2, 1
>> #endif
>> + add \tbl, \tbl, #PAGE_SIZE // next level table page
>> .endm
>>
>> /*
>> @@ -396,6 +404,7 @@ __create_page_tables:
>> str_l x5, idmap_t0sz, x6
>>
>> create_table_entry x0, x3, EXTRA_SHIFT, EXTRA_PTRS, x5, x6
>> + add x0, x0, #PAGE_SIZE // next level table page
>> 1:
>> #endif
>>
>> --
>> 1.8.3.2
>>
* [PATCH v2 0/3] arm64: relax Image placement requirement
2015-03-20 17:02 [PATCH] arm64: relax Image placement requirement Ard Biesheuvel
2015-03-20 17:49 ` Ard Biesheuvel
@ 2015-03-23 9:07 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 1/3] arm64: remove soft_restart() and friends Ard Biesheuvel
` (3 more replies)
1 sibling, 4 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 9:07 UTC (permalink / raw)
To: linux-arm-kernel
So as it turns out, the 512 MB alignment boundary restriction appears
to have been introduced by accident when increasing the ID map to cover
the entire kernel Image.
So this reverts that change, by reducing the ID map to something that
can never cross a 512 MB boundary by construction.
Patch #1 removes some functions that are unused, so that I don't have
to worry about them in patch #2.
Patch #2 introduces the reduced ID map, using a separate linker section
that contains the code that manipulates the state of the MMU.
Patch #3 removes the sleep_idmap_phys global, which always points to
the ID map anyway.
Ard Biesheuvel (3):
arm64: remove soft_restart() and friends
arm64: reduce ID map to a single page
arm64: drop sleep_idmap_phys
arch/arm64/include/asm/mmu.h | 1 -
arch/arm64/include/asm/proc-fns.h | 3 ---
arch/arm64/include/asm/system_misc.h | 1 -
arch/arm64/kernel/head.S | 13 +++++++------
arch/arm64/kernel/process.c | 12 +-----------
arch/arm64/kernel/sleep.S | 9 ++++-----
arch/arm64/kernel/suspend.c | 3 ---
arch/arm64/kernel/vmlinux.lds.S | 11 ++++++++++-
arch/arm64/mm/mmu.c | 11 -----------
arch/arm64/mm/proc.S | 33 ---------------------------------
10 files changed, 22 insertions(+), 75 deletions(-)
--
1.8.3.2
* [PATCH v2 1/3] arm64: remove soft_restart() and friends
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
@ 2015-03-23 9:07 ` Ard Biesheuvel
2015-03-23 9:32 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 2/3] arm64: reduce ID map to a single page Ard Biesheuvel
` (2 subsequent siblings)
3 siblings, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 9:07 UTC (permalink / raw)
To: linux-arm-kernel
The implementation of soft_restart() and its subordinate functions
setup_mm_for_reboot(), cpu_soft_restart() and cpu_reset() have
non-trivial dependencies on the state of the caches and the MMU,
making them fragile from a maintenance pov. However, soft_restart()
isn't actually called anywhere, so just remove the whole bunch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/include/asm/mmu.h | 1 -
arch/arm64/include/asm/proc-fns.h | 3 ---
arch/arm64/include/asm/system_misc.h | 1 -
arch/arm64/kernel/process.c | 12 +-----------
arch/arm64/mm/mmu.c | 11 -----------
arch/arm64/mm/proc.S | 33 ---------------------------------
6 files changed, 1 insertion(+), 60 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 3d311761e3c2..99064c075fc6 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -28,7 +28,6 @@ typedef struct {
#define ASID(mm) ((mm)->context.id & 0xffff)
extern void paging_init(void);
-extern void setup_mm_for_reboot(void);
extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
extern void init_mem_pgprot(void);
extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
index 4d9ede7b6361..ab334467c6a4 100644
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -31,9 +31,6 @@ struct cpu_suspend_ctx;
extern void cpu_cache_off(void);
extern void cpu_do_idle(void);
extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
-extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
-void cpu_soft_restart(phys_addr_t cpu_reset,
- unsigned long addr) __attribute__((noreturn));
extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
index 7a18fabbe0f6..659fbf5925de 100644
--- a/arch/arm64/include/asm/system_misc.h
+++ b/arch/arm64/include/asm/system_misc.h
@@ -41,7 +41,6 @@ struct mm_struct;
extern void show_pte(struct mm_struct *mm, unsigned long addr);
extern void __show_regs(struct pt_regs *);
-void soft_restart(unsigned long);
extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
#define UDBG_UNDEFINED (1 << 0)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index c6b1f3b96f45..c506bee6b613 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -58,14 +58,6 @@ unsigned long __stack_chk_guard __read_mostly;
EXPORT_SYMBOL(__stack_chk_guard);
#endif
-void soft_restart(unsigned long addr)
-{
- setup_mm_for_reboot();
- cpu_soft_restart(virt_to_phys(cpu_reset), addr);
- /* Should never get here */
- BUG();
-}
-
/*
* Function pointers to optional machine specific functions
*/
@@ -136,9 +128,7 @@ void machine_power_off(void)
/*
* Restart requires that the secondary CPUs stop performing any activity
- * while the primary CPU resets the system. Systems with a single CPU can
- * use soft_restart() as their machine descriptor's .restart hook, since that
- * will cause the only available CPU to reset. Systems with multiple CPUs must
+ * while the primary CPU resets the system. Systems with multiple CPUs must
* provide a HW restart implementation, to ensure that all CPUs reset at once.
* This is required so that any code running after reset on the primary CPU
* doesn't have to co-ordinate with other CPUs to ensure they aren't still
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 428aaf86c95b..15ac396229d5 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -460,17 +460,6 @@ void __init paging_init(void)
}
/*
- * Enable the identity mapping to allow the MMU disabling.
- */
-void setup_mm_for_reboot(void)
-{
- cpu_set_reserved_ttbr0();
- flush_tlb_all();
- cpu_set_idmap_tcr_t0sz();
- cpu_switch_mm(idmap_pg_dir, &init_mm);
-}
-
-/*
* Check whether a kernel address is valid (derived from arch/x86/).
*/
int kern_addr_valid(unsigned long addr)
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index cdd754e19b9b..a2059ec61463 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -59,39 +59,6 @@ ENTRY(cpu_cache_off)
ENDPROC(cpu_cache_off)
/*
- * cpu_reset(loc)
- *
- * Perform a soft reset of the system. Put the CPU into the same state
- * as it would be if it had been reset, and branch to what would be the
- * reset vector. It must be executed with the flat identity mapping.
- *
- * - loc - location to jump to for soft reset
- */
- .align 5
-ENTRY(cpu_reset)
- mrs x1, sctlr_el1
- bic x1, x1, #1
- msr sctlr_el1, x1 // disable the MMU
- isb
- ret x0
-ENDPROC(cpu_reset)
-
-ENTRY(cpu_soft_restart)
- /* Save address of cpu_reset() and reset address */
- mov x19, x0
- mov x20, x1
-
- /* Turn D-cache off */
- bl cpu_cache_off
-
- /* Push out all dirty data, and ensure cache is empty */
- bl flush_cache_all
-
- mov x0, x20
- ret x19
-ENDPROC(cpu_soft_restart)
-
-/*
* cpu_do_idle()
*
* Idle the processor (wait for interrupt).
--
1.8.3.2
* [PATCH v2 2/3] arm64: reduce ID map to a single page
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 1/3] arm64: remove soft_restart() and friends Ard Biesheuvel
@ 2015-03-23 9:07 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 3/3] arm64: drop sleep_idmap_phys Ard Biesheuvel
2015-03-23 10:45 ` [PATCH v2 0/3] arm64: relax Image placement requirement Lorenzo Pieralisi
3 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 9:07 UTC (permalink / raw)
To: linux-arm-kernel
Commit ea8c2e112445 ("arm64: Extend the idmap to the whole kernel
image") changed the early page table code so that the entire kernel
Image is covered by the identity map. This allows functions that
need to enable or disable the MMU to reside anywhere in the kernel
Image.
However, this change has the unfortunate side effect that the Image
cannot cross a physical 512 MB alignment boundary anymore, since the
early page table code cannot deal with the Image crossing a /virtual/
512 MB alignment boundary.
So instead, reduce the ID map to a single page, that is populated by
the contents of the .idmap.text section. Only two functions reside
there at the moment: __enable_mmu() and cpu_resume_mmu(). If new
code is introduced that needs to manipulate the MMU state, it
should be added to this section as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/kernel/head.S | 13 +++++++------
arch/arm64/kernel/sleep.S | 2 ++
arch/arm64/kernel/vmlinux.lds.S | 11 ++++++++++-
3 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 51c9811e683c..5b78bc211e65 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -382,7 +382,7 @@ __create_page_tables:
* Create the identity mapping.
*/
mov x0, x25 // idmap_pg_dir
- adrp x3, KERNEL_START // __pa(KERNEL_START)
+ adrp x3, __idmap_text_start // __pa(__idmap_text_start)
#ifndef CONFIG_ARM64_VA_BITS_48
#define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3)
@@ -405,11 +405,11 @@ __create_page_tables:
/*
* Calculate the maximum allowed value for TCR_EL1.T0SZ so that the
- * entire kernel image can be ID mapped. As T0SZ == (64 - #bits used),
+ * entire ID map region can be mapped. As T0SZ == (64 - #bits used),
* this number conveniently equals the number of leading zeroes in
- * the physical address of KERNEL_END.
+ * the physical address of __idmap_text_end.
*/
- adrp x5, KERNEL_END
+ adrp x5, __idmap_text_end
clz x5, x5
cmp x5, TCR_T0SZ(VA_BITS) // default T0SZ small enough?
b.ge 1f // .. then skip additional level
@@ -421,8 +421,8 @@ __create_page_tables:
#endif
create_pgd_entry x0, x3, x5, x6
- mov x5, x3 // __pa(KERNEL_START)
- adr_l x6, KERNEL_END // __pa(KERNEL_END)
+ mov x5, x3
+ mov x6, x3
create_block_map x0, x7, x3, x5, x6
/*
@@ -665,6 +665,7 @@ ENDPROC(__secondary_switched)
*
* other registers depend on the function called upon completion
*/
+ .section ".idmap.text", #alloc, #execinstr
__enable_mmu:
ldr x5, =vectors
msr vbar_el1, x5
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ede186cdd452..04dc9aa2831e 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -130,12 +130,14 @@ ENDPROC(__cpu_suspend_enter)
/*
* x0 must contain the sctlr value retrieved from restored context
*/
+ .pushsection ".idmap.text", #alloc, #execinstr
ENTRY(cpu_resume_mmu)
ldr x3, =cpu_resume_after_mmu
msr sctlr_el1, x0 // restore sctlr_el1
isb
br x3 // global jump to virtual address
ENDPROC(cpu_resume_mmu)
+ .popsection
cpu_resume_after_mmu:
mov x0, #0 // return zero on success
ldp x19, x20, [sp, #16]
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index a2c29865c3fe..98073332e2d0 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -38,6 +38,12 @@ jiffies = jiffies_64;
*(.hyp.text) \
VMLINUX_SYMBOL(__hyp_text_end) = .;
+#define IDMAP_TEXT \
+ . = ALIGN(SZ_4K); \
+ VMLINUX_SYMBOL(__idmap_text_start) = .; \
+ *(.idmap.text) \
+ VMLINUX_SYMBOL(__idmap_text_end) = .;
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from stext to _edata, must be a round multiple of the PE/COFF
@@ -95,6 +101,7 @@ SECTIONS
SCHED_TEXT
LOCK_TEXT
HYPERVISOR_TEXT
+ IDMAP_TEXT
*(.fixup)
*(.gnu.warning)
. = ALIGN(16);
@@ -167,11 +174,13 @@ SECTIONS
}
/*
- * The HYP init code can't be more than a page long,
+ * The HYP init code and ID map text can't be longer than a page each,
* and should not cross a page boundary.
*/
ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
"HYP init code too big or misaligned")
+ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
+ "ID map text too big or misaligned")
/*
* If padding is applied before .head.text, virt<->phys conversions will fail.
--
1.8.3.2
* [PATCH v2 3/3] arm64: drop sleep_idmap_phys
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 1/3] arm64: remove soft_restart() and friends Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 2/3] arm64: reduce ID map to a single page Ard Biesheuvel
@ 2015-03-23 9:07 ` Ard Biesheuvel
2015-03-23 15:58 ` Lorenzo Pieralisi
2015-03-23 10:45 ` [PATCH v2 0/3] arm64: relax Image placement requirement Lorenzo Pieralisi
3 siblings, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 9:07 UTC (permalink / raw)
To: linux-arm-kernel
The global variable sleep_idmap_phys always points to idmap_pg_dir,
so we can just use that value directly in the CPU resume path.
Also unclutters the load of sleep_save_sp::save_ptr_stash_phys.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/kernel/sleep.S | 7 ++-----
arch/arm64/kernel/suspend.c | 3 ---
2 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 04dc9aa2831e..3c98fe1c1c2f 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -164,15 +164,12 @@ ENTRY(cpu_resume)
#else
mov x7, xzr
#endif
- adrp x0, sleep_save_sp
- add x0, x0, #:lo12:sleep_save_sp
- ldr x0, [x0, #SLEEP_SAVE_SP_PHYS]
+ ldr_l x0, sleep_save_sp + SLEEP_SAVE_SP_PHYS
ldr x0, [x0, x7, lsl #3]
/* load sp from context */
ldr x2, [x0, #CPU_CTX_SP]
- adrp x1, sleep_idmap_phys
/* load physical address of identity map page table in x1 */
- ldr x1, [x1, #:lo12:sleep_idmap_phys]
+ adrp x1, idmap_pg_dir
mov sp, x2
/*
* cpu_do_resume expects x0 to contain context physical address
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index d7daf45ae7a2..f6073c27d65f 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -118,7 +118,6 @@ int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
}
struct sleep_save_sp sleep_save_sp;
-phys_addr_t sleep_idmap_phys;
static int __init cpu_suspend_init(void)
{
@@ -132,9 +131,7 @@ static int __init cpu_suspend_init(void)
sleep_save_sp.save_ptr_stash = ctx_ptr;
sleep_save_sp.save_ptr_stash_phys = virt_to_phys(ctx_ptr);
- sleep_idmap_phys = virt_to_phys(idmap_pg_dir);
__flush_dcache_area(&sleep_save_sp, sizeof(struct sleep_save_sp));
- __flush_dcache_area(&sleep_idmap_phys, sizeof(sleep_idmap_phys));
return 0;
}
--
1.8.3.2
* [PATCH v2 1/3] arm64: remove soft_restart() and friends
2015-03-23 9:07 ` [PATCH v2 1/3] arm64: remove soft_restart() and friends Ard Biesheuvel
@ 2015-03-23 9:32 ` Ard Biesheuvel
0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 9:32 UTC (permalink / raw)
To: linux-arm-kernel
On 23 March 2015 at 10:07, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> The implementation of soft_restart() and its subordinate functions
> setup_mm_for_reboot(), cpu_soft_restart() and cpu_reset() have
> non-trivial dependencies on the state of the caches and the MMU,
> making them fragile from a maintenance pov. However, soft_restart()
> isn't actually called anywhere, so just remove the whole bunch.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Hmm, I guess kexec will be using soft_restart() so never mind this patch.
> ---
> arch/arm64/include/asm/mmu.h | 1 -
> arch/arm64/include/asm/proc-fns.h | 3 ---
> arch/arm64/include/asm/system_misc.h | 1 -
> arch/arm64/kernel/process.c | 12 +-----------
> arch/arm64/mm/mmu.c | 11 -----------
> arch/arm64/mm/proc.S | 33 ---------------------------------
> 6 files changed, 1 insertion(+), 60 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 3d311761e3c2..99064c075fc6 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -28,7 +28,6 @@ typedef struct {
> #define ASID(mm) ((mm)->context.id & 0xffff)
>
> extern void paging_init(void);
> -extern void setup_mm_for_reboot(void);
> extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
> extern void init_mem_pgprot(void);
> extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> index 4d9ede7b6361..ab334467c6a4 100644
> --- a/arch/arm64/include/asm/proc-fns.h
> +++ b/arch/arm64/include/asm/proc-fns.h
> @@ -31,9 +31,6 @@ struct cpu_suspend_ctx;
> extern void cpu_cache_off(void);
> extern void cpu_do_idle(void);
> extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> -extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
> -void cpu_soft_restart(phys_addr_t cpu_reset,
> - unsigned long addr) __attribute__((noreturn));
> extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
> extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
>
> diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
> index 7a18fabbe0f6..659fbf5925de 100644
> --- a/arch/arm64/include/asm/system_misc.h
> +++ b/arch/arm64/include/asm/system_misc.h
> @@ -41,7 +41,6 @@ struct mm_struct;
> extern void show_pte(struct mm_struct *mm, unsigned long addr);
> extern void __show_regs(struct pt_regs *);
>
> -void soft_restart(unsigned long);
> extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
>
> #define UDBG_UNDEFINED (1 << 0)
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index c6b1f3b96f45..c506bee6b613 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -58,14 +58,6 @@ unsigned long __stack_chk_guard __read_mostly;
> EXPORT_SYMBOL(__stack_chk_guard);
> #endif
>
> -void soft_restart(unsigned long addr)
> -{
> - setup_mm_for_reboot();
> - cpu_soft_restart(virt_to_phys(cpu_reset), addr);
> - /* Should never get here */
> - BUG();
> -}
> -
> /*
> * Function pointers to optional machine specific functions
> */
> @@ -136,9 +128,7 @@ void machine_power_off(void)
>
> /*
> * Restart requires that the secondary CPUs stop performing any activity
> - * while the primary CPU resets the system. Systems with a single CPU can
> - * use soft_restart() as their machine descriptor's .restart hook, since that
> - * will cause the only available CPU to reset. Systems with multiple CPUs must
> + * while the primary CPU resets the system. Systems with multiple CPUs must
> * provide a HW restart implementation, to ensure that all CPUs reset at once.
> * This is required so that any code running after reset on the primary CPU
> * doesn't have to co-ordinate with other CPUs to ensure they aren't still
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 428aaf86c95b..15ac396229d5 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -460,17 +460,6 @@ void __init paging_init(void)
> }
>
> /*
> - * Enable the identity mapping to allow the MMU disabling.
> - */
> -void setup_mm_for_reboot(void)
> -{
> - cpu_set_reserved_ttbr0();
> - flush_tlb_all();
> - cpu_set_idmap_tcr_t0sz();
> - cpu_switch_mm(idmap_pg_dir, &init_mm);
> -}
> -
> -/*
> * Check whether a kernel address is valid (derived from arch/x86/).
> */
> int kern_addr_valid(unsigned long addr)
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index cdd754e19b9b..a2059ec61463 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -59,39 +59,6 @@ ENTRY(cpu_cache_off)
> ENDPROC(cpu_cache_off)
>
> /*
> - * cpu_reset(loc)
> - *
> - * Perform a soft reset of the system. Put the CPU into the same state
> - * as it would be if it had been reset, and branch to what would be the
> - * reset vector. It must be executed with the flat identity mapping.
> - *
> - * - loc - location to jump to for soft reset
> - */
> - .align 5
> -ENTRY(cpu_reset)
> - mrs x1, sctlr_el1
> - bic x1, x1, #1
> - msr sctlr_el1, x1 // disable the MMU
> - isb
> - ret x0
> -ENDPROC(cpu_reset)
> -
> -ENTRY(cpu_soft_restart)
> - /* Save address of cpu_reset() and reset address */
> - mov x19, x0
> - mov x20, x1
> -
> - /* Turn D-cache off */
> - bl cpu_cache_off
> -
> - /* Push out all dirty data, and ensure cache is empty */
> - bl flush_cache_all
> -
> - mov x0, x20
> - ret x19
> -ENDPROC(cpu_soft_restart)
> -
> -/*
> * cpu_do_idle()
> *
> * Idle the processor (wait for interrupt).
> --
> 1.8.3.2
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2 0/3] arm64: relax Image placement requirement
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 3/3] arm64: drop sleep_idmap_phys Ard Biesheuvel
@ 2015-03-23 10:45 ` Lorenzo Pieralisi
2015-03-23 10:59 ` Ard Biesheuvel
3 siblings, 1 reply; 12+ messages in thread
From: Lorenzo Pieralisi @ 2015-03-23 10:45 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Mar 23, 2015 at 09:07:23AM +0000, Ard Biesheuvel wrote:
> So as it turns out, the 512 MB alignment boundary restriction appears
> to have been introduced by accident when increasing the ID map to cover
> the entire kernel Image.
>
> So this reverts that change, by reducing the ID map to something that
> can never cross a 512 MB boundary by construction.
>
> Patch #1 removes some unused functions, so that I don't have to worry
> about them in patch #2.
>
> Patch #2 introduces the reduced ID map, using a separate linker section
> that contains code that manipulates the state of the MMU.
>
> Patch #3 removes the sleep_idmap_phys global, which always points to
> the ID map anyway.
Patches 1 and 2 do not apply for me; what tree/commit are they based on?
Please let me know so that I can give patch 3 a go.
Thanks,
Lorenzo
>
>
> Ard Biesheuvel (3):
> arm64: remove soft_restart() and friends
> arm64: reduce ID map to a single page
> arm64: drop sleep_idmap_phys
>
> arch/arm64/include/asm/mmu.h | 1 -
> arch/arm64/include/asm/proc-fns.h | 3 ---
> arch/arm64/include/asm/system_misc.h | 1 -
> arch/arm64/kernel/head.S | 13 +++++++------
> arch/arm64/kernel/process.c | 12 +-----------
> arch/arm64/kernel/sleep.S | 9 ++++-----
> arch/arm64/kernel/suspend.c | 3 ---
> arch/arm64/kernel/vmlinux.lds.S | 11 ++++++++++-
> arch/arm64/mm/mmu.c | 11 -----------
> arch/arm64/mm/proc.S | 33 ---------------------------------
> 10 files changed, 22 insertions(+), 75 deletions(-)
>
> --
> 1.8.3.2
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>
* [PATCH v2 0/3] arm64: relax Image placement requirement
2015-03-23 10:45 ` [PATCH v2 0/3] arm64: relax Image placement requirement Lorenzo Pieralisi
@ 2015-03-23 10:59 ` Ard Biesheuvel
0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 10:59 UTC (permalink / raw)
To: linux-arm-kernel
On 23 March 2015 at 11:45, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
> On Mon, Mar 23, 2015 at 09:07:23AM +0000, Ard Biesheuvel wrote:
>> So as it turns out, the 512 MB alignment boundary restriction appears
>> to have been introduced by accident when increasing the ID map to cover
>> the entire kernel Image.
>>
>> So this reverts that change, by reducing the ID map to something that
>> can never cross a 512 MB boundary by construction.
>>
>> Patch #1 removes some unused functions, so that I don't have to worry
>> about them in patch #2.
>>
>> Patch #2 introduces the reduced ID map, using a separate linker section
>> that contains code that manipulates the state of the MMU.
>>
>> Patch #3 removes the sleep_idmap_phys global, which always points to
>> the ID map anyway.
>
> Patches 1 and 2 do not apply for me; what tree/commit are they based on?
>
> Please let me know so that I can give patch 3 a go.
>
They should apply onto for-next/core, but I pushed them to a public branch here:
https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-idmap
Regards,
Ard.
>> Ard Biesheuvel (3):
>> arm64: remove soft_restart() and friends
>> arm64: reduce ID map to a single page
>> arm64: drop sleep_idmap_phys
>>
>> arch/arm64/include/asm/mmu.h | 1 -
>> arch/arm64/include/asm/proc-fns.h | 3 ---
>> arch/arm64/include/asm/system_misc.h | 1 -
>> arch/arm64/kernel/head.S | 13 +++++++------
>> arch/arm64/kernel/process.c | 12 +-----------
>> arch/arm64/kernel/sleep.S | 9 ++++-----
>> arch/arm64/kernel/suspend.c | 3 ---
>> arch/arm64/kernel/vmlinux.lds.S | 11 ++++++++++-
>> arch/arm64/mm/mmu.c | 11 -----------
>> arch/arm64/mm/proc.S | 33 ---------------------------------
>> 10 files changed, 22 insertions(+), 75 deletions(-)
>>
>> --
>> 1.8.3.2
>>
>>
>>
* [PATCH v2 3/3] arm64: drop sleep_idmap_phys
2015-03-23 9:07 ` [PATCH v2 3/3] arm64: drop sleep_idmap_phys Ard Biesheuvel
@ 2015-03-23 15:58 ` Lorenzo Pieralisi
2015-03-23 16:29 ` Ard Biesheuvel
0 siblings, 1 reply; 12+ messages in thread
From: Lorenzo Pieralisi @ 2015-03-23 15:58 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Mar 23, 2015 at 09:07:26AM +0000, Ard Biesheuvel wrote:
> The global variable sleep_idmap_phys always points to idmap_pg_dir,
> so we can just use that value directly in the CPU resume path.
>
> This also unclutters the load of sleep_save_sp::save_ptr_stash_phys.
>
Nit: I understand why you want to make more than one change at once,
but I think you should reword the subject for completeness.
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> arch/arm64/kernel/sleep.S | 7 ++-----
> arch/arm64/kernel/suspend.c | 3 ---
> 2 files changed, 2 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index 04dc9aa2831e..3c98fe1c1c2f 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -164,15 +164,12 @@ ENTRY(cpu_resume)
> #else
> mov x7, xzr
> #endif
> - adrp x0, sleep_save_sp
> - add x0, x0, #:lo12:sleep_save_sp
> - ldr x0, [x0, #SLEEP_SAVE_SP_PHYS]
> + ldr_l x0, sleep_save_sp + SLEEP_SAVE_SP_PHYS
> ldr x0, [x0, x7, lsl #3]
> /* load sp from context */
> ldr x2, [x0, #CPU_CTX_SP]
> - adrp x1, sleep_idmap_phys
> /* load physical address of identity map page table in x1 */
> - ldr x1, [x1, #:lo12:sleep_idmap_phys]
> + adrp x1, idmap_pg_dir
> mov sp, x2
> /*
> * cpu_do_resume expects x0 to contain context physical address
> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
> index d7daf45ae7a2..f6073c27d65f 100644
> --- a/arch/arm64/kernel/suspend.c
> +++ b/arch/arm64/kernel/suspend.c
> @@ -118,7 +118,6 @@ int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
> }
>
> struct sleep_save_sp sleep_save_sp;
> -phys_addr_t sleep_idmap_phys;
>
> static int __init cpu_suspend_init(void)
> {
> @@ -132,9 +131,7 @@ static int __init cpu_suspend_init(void)
>
> sleep_save_sp.save_ptr_stash = ctx_ptr;
> sleep_save_sp.save_ptr_stash_phys = virt_to_phys(ctx_ptr);
> - sleep_idmap_phys = virt_to_phys(idmap_pg_dir);
> __flush_dcache_area(&sleep_save_sp, sizeof(struct sleep_save_sp));
> - __flush_dcache_area(&sleep_idmap_phys, sizeof(sleep_idmap_phys));
>
> return 0;
> }
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
* [PATCH v2 3/3] arm64: drop sleep_idmap_phys
2015-03-23 15:58 ` Lorenzo Pieralisi
@ 2015-03-23 16:29 ` Ard Biesheuvel
0 siblings, 0 replies; 12+ messages in thread
From: Ard Biesheuvel @ 2015-03-23 16:29 UTC (permalink / raw)
To: linux-arm-kernel
On 23 March 2015 at 16:58, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
> On Mon, Mar 23, 2015 at 09:07:26AM +0000, Ard Biesheuvel wrote:
>> The global variable sleep_idmap_phys always points to idmap_pg_dir,
>> so we can just use that value directly in the CPU resume path.
>>
>> This also unclutters the load of sleep_save_sp::save_ptr_stash_phys.
>>
>
> Nit: I understand why you want to make more than one change at once,
> but I think you should reword the subject for completeness.
>
ok
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>> arch/arm64/kernel/sleep.S | 7 ++-----
>> arch/arm64/kernel/suspend.c | 3 ---
>> 2 files changed, 2 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
>> index 04dc9aa2831e..3c98fe1c1c2f 100644
>> --- a/arch/arm64/kernel/sleep.S
>> +++ b/arch/arm64/kernel/sleep.S
>> @@ -164,15 +164,12 @@ ENTRY(cpu_resume)
>> #else
>> mov x7, xzr
>> #endif
>> - adrp x0, sleep_save_sp
>> - add x0, x0, #:lo12:sleep_save_sp
>> - ldr x0, [x0, #SLEEP_SAVE_SP_PHYS]
>> + ldr_l x0, sleep_save_sp + SLEEP_SAVE_SP_PHYS
>> ldr x0, [x0, x7, lsl #3]
>> /* load sp from context */
>> ldr x2, [x0, #CPU_CTX_SP]
>> - adrp x1, sleep_idmap_phys
>> /* load physical address of identity map page table in x1 */
>> - ldr x1, [x1, #:lo12:sleep_idmap_phys]
>> + adrp x1, idmap_pg_dir
>> mov sp, x2
>> /*
>> * cpu_do_resume expects x0 to contain context physical address
>> diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
>> index d7daf45ae7a2..f6073c27d65f 100644
>> --- a/arch/arm64/kernel/suspend.c
>> +++ b/arch/arm64/kernel/suspend.c
>> @@ -118,7 +118,6 @@ int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
>> }
>>
>> struct sleep_save_sp sleep_save_sp;
>> -phys_addr_t sleep_idmap_phys;
>>
>> static int __init cpu_suspend_init(void)
>> {
>> @@ -132,9 +131,7 @@ static int __init cpu_suspend_init(void)
>>
>> sleep_save_sp.save_ptr_stash = ctx_ptr;
>> sleep_save_sp.save_ptr_stash_phys = virt_to_phys(ctx_ptr);
>> - sleep_idmap_phys = virt_to_phys(idmap_pg_dir);
>> __flush_dcache_area(&sleep_save_sp, sizeof(struct sleep_save_sp));
>> - __flush_dcache_area(&sleep_idmap_phys, sizeof(sleep_idmap_phys));
>>
>> return 0;
>> }
>
> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Thanks,
Ard.
end of thread, other threads:[~2015-03-23 16:29 UTC | newest]
Thread overview: 12+ messages
2015-03-20 17:02 [PATCH] arm64: relax Image placement requirement Ard Biesheuvel
2015-03-20 17:49 ` Ard Biesheuvel
2015-03-22 15:36 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 0/3] " Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 1/3] arm64: remove soft_restart() and friends Ard Biesheuvel
2015-03-23 9:32 ` Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 2/3] arm64: reduce ID map to a single page Ard Biesheuvel
2015-03-23 9:07 ` [PATCH v2 3/3] arm64: drop sleep_idmap_phys Ard Biesheuvel
2015-03-23 15:58 ` Lorenzo Pieralisi
2015-03-23 16:29 ` Ard Biesheuvel
2015-03-23 10:45 ` [PATCH v2 0/3] arm64: relax Image placement requirement Lorenzo Pieralisi
2015-03-23 10:59 ` Ard Biesheuvel
This is a public inbox; see the mirroring instructions for how to clone
and mirror all data and code used for this inbox, as well as the URLs
for its NNTP newsgroup(s).