* [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 @ 2017-01-03 17:21 Laura Abbott 2017-01-03 17:21 ` [PATCHv6 01/11] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott ` (11 more replies) 0 siblings, 12 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel, Christoffer Dall, Marc Zyngier, Lorenzo Pieralisi, xen-devel, Boris Ostrovsky, David Vrabel, Juergen Gross, Eric Biederman, kexec, Alexander Potapenko, Dmitry Vyukov, kasan-dev, Andrey Ryabinin, Kees Cook Happy New Year! This is a very minor rebase from v5. It only moves a few headers around. I think this series should be ready to be queued up for 4.11. Thanks, Laura Laura Abbott (11): lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL mm/cma: Cleanup highmem check arm64: Move some macros under #ifndef __ASSEMBLY__ arm64: Add cast for virt_to_pfn mm: Introduce lm_alias arm64: Use __pa_symbol for kernel symbols drivers: firmware: psci: Use __pa_symbol for kernel symbol kexec: Switch to __pa_symbol mm/kasan: Switch to using __pa_symbol and lm_alias mm/usercopy: Switch to using lm_alias arm64: Add support for CONFIG_DEBUG_VIRTUAL arch/arm64/Kconfig | 1 + arch/arm64/include/asm/kvm_mmu.h | 4 +- arch/arm64/include/asm/memory.h | 66 +++++++++++++++++++++---------- arch/arm64/include/asm/mmu_context.h | 6 +-- arch/arm64/include/asm/pgtable.h | 2 +- arch/arm64/kernel/acpi_parking_protocol.c | 3 +- arch/arm64/kernel/cpu-reset.h | 2 +- arch/arm64/kernel/cpufeature.c | 3 +- arch/arm64/kernel/hibernate.c | 20 +++------- arch/arm64/kernel/insn.c | 2 +- arch/arm64/kernel/psci.c | 3 +- arch/arm64/kernel/setup.c | 9 +++-- arch/arm64/kernel/smp_spin_table.c | 3 +- arch/arm64/kernel/vdso.c | 8 +++- arch/arm64/mm/Makefile | 2 + arch/arm64/mm/init.c | 12 +++--- 
arch/arm64/mm/kasan_init.c | 22 +++++++---- arch/arm64/mm/mmu.c | 33 ++++++++++------ arch/arm64/mm/physaddr.c | 30 ++++++++++++++ arch/x86/Kconfig | 1 + drivers/firmware/psci.c | 2 +- include/linux/mm.h | 4 ++ kernel/kexec_core.c | 2 +- lib/Kconfig.debug | 5 ++- mm/cma.c | 15 +++---- mm/kasan/kasan_init.c | 15 +++---- mm/usercopy.c | 4 +- 27 files changed, 180 insertions(+), 99 deletions(-) create mode 100644 arch/arm64/mm/physaddr.c -- 2.7.4 ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCHv6 01/11] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 02/11] mm/cma: Cleanup highmem check Laura Abbott ` (10 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: Laura Abbott, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel DEBUG_VIRTUAL currently depends on DEBUG_KERNEL && X86. arm64 is getting the same support. Rather than add a list of architectures, switch this to ARCH_HAS_DEBUG_VIRTUAL and let architectures select it as appropriate. Acked-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- arch/x86/Kconfig | 1 + lib/Kconfig.debug | 5 ++++- 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index e487493..f1d4e8f 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -46,6 +46,7 @@ config X86 select ARCH_CLOCKSOURCE_DATA select ARCH_DISCARD_MEMBLOCK select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI + select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_FAST_MULTIPLIER diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index b06848a..2aed316 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -622,9 +622,12 @@ config DEBUG_VM_PGFLAGS If unsure, say N. +config ARCH_HAS_DEBUG_VIRTUAL + bool + config DEBUG_VIRTUAL bool "Debug VM translations" - depends on DEBUG_KERNEL && X86 + depends on DEBUG_KERNEL && ARCH_HAS_DEBUG_VIRTUAL help Enable some costly sanity checks in virtual to page code. 
This can catch mistakes with virt_to_page() and friends. -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 02/11] mm/cma: Cleanup highmem check 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott 2017-01-03 17:21 ` [PATCHv6 01/11] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 03/11] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott ` (9 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Marek Szyprowski, Joonsoo Kim, Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, linux-arm-kernel 6b101e2a3ce4 ("mm/CMA: fix boot regression due to physical address of high_memory") added checks to use __pa_nodebug on x86 since CONFIG_DEBUG_VIRTUAL complains about high_memory not being linearly mapped. arm64 is now getting support for CONFIG_DEBUG_VIRTUAL as well. Rather than add an explosion of arches to the #ifdef, switch to an alternate method to calculate the physical start of highmem using the page before highmem starts. This avoids the need for the #ifdef and extra __pa_nodebug calls. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- mm/cma.c | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/mm/cma.c b/mm/cma.c index c960459..94b3460 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -235,18 +235,13 @@ int __init cma_declare_contiguous(phys_addr_t base, phys_addr_t highmem_start; int ret = 0; -#ifdef CONFIG_X86 /* - * high_memory isn't direct mapped memory so retrieving its physical - * address isn't appropriate. But it would be useful to check the - * physical address of the highmem boundary so it's justifiable to get - * the physical address from it. 
On x86 there is a validation check for - * this case, so the following workaround is needed to avoid it. + * We can't use __pa(high_memory) directly, since high_memory + * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly) + * complain. Find the boundary by adding one to the last valid + * address. */ - highmem_start = __pa_nodebug(high_memory); -#else - highmem_start = __pa(high_memory); -#endif + highmem_start = __pa(high_memory - 1) + 1; pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n", __func__, &size, &base, &limit, &alignment); -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 03/11] arm64: Move some macros under #ifndef __ASSEMBLY__ 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott 2017-01-03 17:21 ` [PATCHv6 01/11] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott 2017-01-03 17:21 ` [PATCHv6 02/11] mm/cma: Cleanup highmem check Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 04/11] arm64: Add cast for virt_to_pfn Laura Abbott ` (8 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel Several macros for various x_to_y exist outside the bounds of an __ASSEMBLY__ guard. Move them in preparation for support for CONFIG_DEBUG_VIRTUAL. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index bfe6328..f80a8e4 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -102,25 +102,6 @@ #endif /* - * Physical vs virtual RAM address space conversion. These are - * private definitions which should NOT be used outside memory.h - * files. Use virt_to_phys/phys_to_virt/__pa/__va instead. - */ -#define __virt_to_phys(x) ({ \ - phys_addr_t __x = (phys_addr_t)(x); \ - __x & BIT(VA_BITS - 1) ? 
(__x & ~PAGE_OFFSET) + PHYS_OFFSET : \ - (__x - kimage_voffset); }) - -#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET) -#define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset)) - -/* - * Convert a page to/from a physical address - */ -#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page))) -#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys))) - -/* * Memory types available. */ #define MT_DEVICE_nGnRnE 0 @@ -187,6 +168,25 @@ static inline unsigned long kaslr_offset(void) #define PHYS_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT) /* + * Physical vs virtual RAM address space conversion. These are + * private definitions which should NOT be used outside memory.h + * files. Use virt_to_phys/phys_to_virt/__pa/__va instead. + */ +#define __virt_to_phys(x) ({ \ + phys_addr_t __x = (phys_addr_t)(x); \ + __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET : \ + (__x - kimage_voffset); }) + +#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET) +#define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset)) + +/* + * Convert a page to/from a physical address + */ +#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page))) +#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys))) + +/* * Note: Drivers should NOT use these. They are the wrong * translation for translating DMA addresses. Use the driver * DMA support - see dma-mapping.h. -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 04/11] arm64: Add cast for virt_to_pfn 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (2 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 03/11] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 05/11] mm: Introduce lm_alias Laura Abbott ` (7 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel virt_to_pfn lacks a cast at the top level. Don't rely on __virt_to_phys and explicitly cast to unsigned long. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- arch/arm64/include/asm/memory.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index f80a8e4..cd6e3ee 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -209,7 +209,7 @@ static inline void *phys_to_virt(phys_addr_t x) #define __pa(x) __virt_to_phys((unsigned long)(x)) #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT) -#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys(x)) +#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x))) /* * virt_to_page(k) convert a _valid_ virtual address to struct page * -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 05/11] mm: Introduce lm_alias 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (3 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 04/11] arm64: Add cast for virt_to_pfn Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 06/11] arm64: Use __pa_symbol for kernel symbols Laura Abbott ` (6 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas, Christoffer Dall, Marc Zyngier, Lorenzo Pieralisi Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel Certain architectures may have the kernel image mapped separately to alias the linear map. Introduce a macro lm_alias to translate a kernel image symbol into its linear alias. This is used in part with work to add CONFIG_DEBUG_VIRTUAL support for arm64. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- include/linux/mm.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index fe6b403..5dc9c46 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -76,6 +76,10 @@ extern int mmap_rnd_compat_bits __read_mostly; #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x))) #endif +#ifndef lm_alias +#define lm_alias(x) __va(__pa_symbol(x)) +#endif + /* * To prevent common memory management code establishing * a zero page mapping on a read fault. -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 06/11] arm64: Use __pa_symbol for kernel symbols 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (4 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 05/11] mm: Introduce lm_alias Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 07/11] drivers: firmware: psci: Use __pa_symbol for kernel symbol Laura Abbott ` (5 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas, Christoffer Dall, Marc Zyngier, Lorenzo Pieralisi Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel __pa_symbol is technically the macro that should be used for kernel symbols. Switch to this as a prerequisite for DEBUG_VIRTUAL which will do bounds checking. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- arch/arm64/include/asm/kvm_mmu.h | 4 ++-- arch/arm64/include/asm/memory.h | 1 + arch/arm64/include/asm/mmu_context.h | 6 +++--- arch/arm64/include/asm/pgtable.h | 2 +- arch/arm64/kernel/acpi_parking_protocol.c | 3 ++- arch/arm64/kernel/cpu-reset.h | 2 +- arch/arm64/kernel/cpufeature.c | 3 ++- arch/arm64/kernel/hibernate.c | 20 +++++-------------- arch/arm64/kernel/insn.c | 2 +- arch/arm64/kernel/psci.c | 3 ++- arch/arm64/kernel/setup.c | 9 +++++---- arch/arm64/kernel/smp_spin_table.c | 3 ++- arch/arm64/kernel/vdso.c | 8 ++++++-- arch/arm64/mm/init.c | 12 ++++++----- arch/arm64/mm/kasan_init.c | 22 ++++++++++++++------- arch/arm64/mm/mmu.c | 33 ++++++++++++++++++++----------- 16 files changed, 76 insertions(+), 57 deletions(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index 6f72fe8..55772c1 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ 
b/arch/arm64/include/asm/kvm_mmu.h @@ -47,7 +47,7 @@ * If the page is in the bottom half, we have to use the top half. If * the page is in the top half, we have to use the bottom half: * - * T = __virt_to_phys(__hyp_idmap_text_start) + * T = __pa_symbol(__hyp_idmap_text_start) * if (T & BIT(VA_BITS - 1)) * HYP_VA_MIN = 0 //idmap in upper half * else @@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud) kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE); } -#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x)) +#define kvm_virt_to_phys(x) __pa_symbol(x) void kvm_set_way_flush(struct kvm_vcpu *vcpu); void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled); diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index cd6e3ee..0ff237a 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -210,6 +210,7 @@ static inline void *phys_to_virt(phys_addr_t x) #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT) #define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x))) +#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x)) /* * virt_to_page(k) convert a _valid_ virtual address to struct page * diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 0363fe8..63e9982 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -45,7 +45,7 @@ static inline void contextidr_thread_switch(struct task_struct *next) */ static inline void cpu_set_reserved_ttbr0(void) { - unsigned long ttbr = virt_to_phys(empty_zero_page); + unsigned long ttbr = __pa_symbol(empty_zero_page); write_sysreg(ttbr, ttbr0_el1); isb(); @@ -114,7 +114,7 @@ static inline void cpu_install_idmap(void) local_flush_tlb_all(); cpu_set_idmap_tcr_t0sz(); - cpu_switch_mm(idmap_pg_dir, &init_mm); + cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm); } /* @@ -129,7 +129,7 @@ static inline 
void cpu_replace_ttbr1(pgd_t *pgd) phys_addr_t pgd_phys = virt_to_phys(pgd); - replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1); + replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1); cpu_install_idmap(); replace_phys(pgd_phys); diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index ffbb9a5..090134c 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val); * for zero-mapped memory areas etc.. */ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; -#define ZERO_PAGE(vaddr) pfn_to_page(PHYS_PFN(__pa(empty_zero_page))) +#define ZERO_PAGE(vaddr) phys_to_page(__pa_symbol(empty_zero_page)) #define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte)) diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c index a32b401..1f5655c 100644 --- a/arch/arm64/kernel/acpi_parking_protocol.c +++ b/arch/arm64/kernel/acpi_parking_protocol.c @@ -17,6 +17,7 @@ * along with this program. If not, see <http://www.gnu.org/licenses/>. */ #include <linux/acpi.h> +#include <linux/mm.h> #include <linux/types.h> #include <asm/cpu_ops.h> @@ -109,7 +110,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu) * that read this address need to convert this address to the * Boot-Loader's endianness before jumping. 
*/ - writeq_relaxed(__pa(secondary_entry), &mailbox->entry_point); + writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point); writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id); arch_send_wakeup_ipi_mask(cpumask_of(cpu)); diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h index d4e9ecb..6c2b1b4 100644 --- a/arch/arm64/kernel/cpu-reset.h +++ b/arch/arm64/kernel/cpu-reset.h @@ -24,7 +24,7 @@ static inline void __noreturn cpu_soft_restart(unsigned long el2_switch, el2_switch = el2_switch && !is_kernel_in_hyp_mode() && is_hyp_mode_available(); - restart = (void *)virt_to_phys(__cpu_soft_restart); + restart = (void *)__pa_symbol(__cpu_soft_restart); cpu_install_idmap(); restart(el2_switch, entry, arg0, arg1, arg2); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index fdf8f04..0ec6a1e 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -23,6 +23,7 @@ #include <linux/sort.h> #include <linux/stop_machine.h> #include <linux/types.h> +#include <linux/mm.h> #include <asm/cpu.h> #include <asm/cpufeature.h> #include <asm/cpu_ops.h> @@ -737,7 +738,7 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry, int __unused) { - phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start); + phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start); /* * Activate the lower HYP offset only if: diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index fe301cb..3e94a45 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -50,9 +50,6 @@ */ extern int in_suspend; -/* Find a symbols alias in the linear map */ -#define LMADDR(x) phys_to_virt(virt_to_phys(x)) - /* Do we need to reset el2? 
*/ #define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) @@ -102,8 +99,8 @@ static inline void arch_hdr_invariants(struct arch_hibernate_hdr_invariants *i) int pfn_is_nosave(unsigned long pfn) { - unsigned long nosave_begin_pfn = virt_to_pfn(&__nosave_begin); - unsigned long nosave_end_pfn = virt_to_pfn(&__nosave_end - 1); + unsigned long nosave_begin_pfn = sym_to_pfn(&__nosave_begin); + unsigned long nosave_end_pfn = sym_to_pfn(&__nosave_end - 1); return (pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn); } @@ -125,12 +122,12 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size) return -EOVERFLOW; arch_hdr_invariants(&hdr->invariants); - hdr->ttbr1_el1 = virt_to_phys(swapper_pg_dir); + hdr->ttbr1_el1 = __pa_symbol(swapper_pg_dir); hdr->reenter_kernel = _cpu_resume; /* We can't use __hyp_get_vectors() because kvm may still be loaded */ if (el2_reset_needed()) - hdr->__hyp_stub_vectors = virt_to_phys(__hyp_stub_vectors); + hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors); else hdr->__hyp_stub_vectors = 0; @@ -460,7 +457,6 @@ int swsusp_arch_resume(void) void *zero_page; size_t exit_size; pgd_t *tmp_pg_dir; - void *lm_restore_pblist; phys_addr_t phys_hibernate_exit; void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *, void *, phys_addr_t, phys_addr_t); @@ -481,12 +477,6 @@ int swsusp_arch_resume(void) goto out; /* - * Since we only copied the linear map, we need to find restore_pblist's - * linear map address. - */ - lm_restore_pblist = LMADDR(restore_pblist); - - /* * We need a zero page that is zero before & after resume in order to * to break before make on the ttbr1 page tables. 
*/ @@ -537,7 +527,7 @@ int swsusp_arch_resume(void) } hibernate_exit(virt_to_phys(tmp_pg_dir), resume_hdr.ttbr1_el1, - resume_hdr.reenter_kernel, lm_restore_pblist, + resume_hdr.reenter_kernel, restore_pblist, resume_hdr.__hyp_stub_vectors, virt_to_phys(zero_page)); out: diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c index 94b62c1..682f1a6 100644 --- a/arch/arm64/kernel/insn.c +++ b/arch/arm64/kernel/insn.c @@ -96,7 +96,7 @@ static void __kprobes *patch_map(void *addr, int fixmap) if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX)) page = vmalloc_to_page(addr); else if (!module) - page = pfn_to_page(PHYS_PFN(__pa(addr))); + page = phys_to_page(__pa_symbol(addr)); else return addr; diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c index 42816be..e8edbf1 100644 --- a/arch/arm64/kernel/psci.c +++ b/arch/arm64/kernel/psci.c @@ -20,6 +20,7 @@ #include <linux/smp.h> #include <linux/delay.h> #include <linux/psci.h> +#include <linux/mm.h> #include <uapi/linux/psci.h> @@ -45,7 +46,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu) static int cpu_psci_cpu_boot(unsigned int cpu) { - int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry)); + int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry)); if (err) pr_err("failed to boot CPU%d (%d)\n", cpu, err); diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index b051367..669fc9f 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -42,6 +42,7 @@ #include <linux/of_fdt.h> #include <linux/efi.h> #include <linux/psci.h> +#include <linux/mm.h> #include <asm/acpi.h> #include <asm/fixmap.h> @@ -199,10 +200,10 @@ static void __init request_standard_resources(void) struct memblock_region *region; struct resource *res; - kernel_code.start = virt_to_phys(_text); - kernel_code.end = virt_to_phys(__init_begin - 1); - kernel_data.start = virt_to_phys(_sdata); - kernel_data.end = virt_to_phys(_end - 1); + kernel_code.start 
= __pa_symbol(_text); + kernel_code.end = __pa_symbol(__init_begin - 1); + kernel_data.start = __pa_symbol(_sdata); + kernel_data.end = __pa_symbol(_end - 1); for_each_memblock(memory, region) { res = alloc_bootmem_low(sizeof(*res)); diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c index 9a00eee..9303465 100644 --- a/arch/arm64/kernel/smp_spin_table.c +++ b/arch/arm64/kernel/smp_spin_table.c @@ -21,6 +21,7 @@ #include <linux/of.h> #include <linux/smp.h> #include <linux/types.h> +#include <linux/mm.h> #include <asm/cacheflush.h> #include <asm/cpu_ops.h> @@ -98,7 +99,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu) * boot-loader's endianess before jumping. This is mandated by * the boot protocol. */ - writeq_relaxed(__pa(secondary_holding_pen), release_addr); + writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr); __flush_dcache_area((__force void *)release_addr, sizeof(*release_addr)); diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c index a2c2478..41b6e31 100644 --- a/arch/arm64/kernel/vdso.c +++ b/arch/arm64/kernel/vdso.c @@ -123,6 +123,7 @@ static int __init vdso_init(void) { int i; struct page **vdso_pagelist; + unsigned long pfn; if (memcmp(&vdso_start, "\177ELF", 4)) { pr_err("vDSO is not a valid ELF object!\n"); @@ -140,11 +141,14 @@ static int __init vdso_init(void) return -ENOMEM; /* Grab the vDSO data page. */ - vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data))); + vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data)); + /* Grab the vDSO code pages. 
*/ + pfn = sym_to_pfn(&vdso_start); + for (i = 0; i < vdso_pages; i++) - vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i); + vdso_pagelist[i + 1] = pfn_to_page(pfn + i); vdso_spec[0].pages = &vdso_pagelist[0]; vdso_spec[1].pages = &vdso_pagelist[1]; diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 212c4d1..8af2ad6 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -36,6 +36,7 @@ #include <linux/efi.h> #include <linux/swiotlb.h> #include <linux/vmalloc.h> +#include <linux/mm.h> #include <asm/boot.h> #include <asm/fixmap.h> @@ -209,8 +210,8 @@ void __init arm64_memblock_init(void) * linear mapping. Take care not to clip the kernel which may be * high in memory. */ - memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)), - ULLONG_MAX); + memblock_remove(max_t(u64, memstart_addr + linear_region_size, + __pa_symbol(_end)), ULLONG_MAX); if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) { /* ensure that memstart_addr remains sufficiently aligned */ memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size, @@ -225,7 +226,7 @@ void __init arm64_memblock_init(void) */ if (memory_limit != (phys_addr_t)ULLONG_MAX) { memblock_mem_limit_remove_map(memory_limit); - memblock_add(__pa(_text), (u64)(_end - _text)); + memblock_add(__pa_symbol(_text), (u64)(_end - _text)); } if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) { @@ -278,7 +279,7 @@ void __init arm64_memblock_init(void) * Register the kernel text, kernel data, initrd, and initial * pagetables with memblock. 
*/ - memblock_reserve(__pa(_text), _end - _text); + memblock_reserve(__pa_symbol(_text), _end - _text); #ifdef CONFIG_BLK_DEV_INITRD if (initrd_start) { memblock_reserve(initrd_start, initrd_end - initrd_start); @@ -483,7 +484,8 @@ void __init mem_init(void) void free_initmem(void) { - free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)), + free_reserved_area(lm_alias(__init_begin), + lm_alias(__init_end), 0, "unused kernel"); /* * Unmap the __init region but leave the VM area in place. This diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c index 757009d..201d918 100644 --- a/arch/arm64/mm/kasan_init.c +++ b/arch/arm64/mm/kasan_init.c @@ -15,6 +15,7 @@ #include <linux/kernel.h> #include <linux/memblock.h> #include <linux/start_kernel.h> +#include <linux/mm.h> #include <asm/mmu_context.h> #include <asm/kernel-pgtable.h> @@ -26,6 +27,13 @@ static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE); +/* + * The p*d_populate functions call virt_to_phys implicitly so they can't be used + * directly on kernel symbols (bm_p*d). All the early functions are called too + * early to use lm_alias so __p*d_populate functions must be used to populate + * with the physical address from __pa_symbol. 
+ */ + static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long end) { @@ -33,12 +41,12 @@ static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long next; if (pmd_none(*pmd)) - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte); + __pmd_populate(pmd, __pa_symbol(kasan_zero_pte), PMD_TYPE_TABLE); pte = pte_offset_kimg(pmd, addr); do { next = addr + PAGE_SIZE; - set_pte(pte, pfn_pte(virt_to_pfn(kasan_zero_page), + set_pte(pte, pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL)); } while (pte++, addr = next, addr != end && pte_none(*pte)); } @@ -51,7 +59,7 @@ static void __init kasan_early_pmd_populate(pud_t *pud, unsigned long next; if (pud_none(*pud)) - pud_populate(&init_mm, pud, kasan_zero_pmd); + __pud_populate(pud, __pa_symbol(kasan_zero_pmd), PMD_TYPE_TABLE); pmd = pmd_offset_kimg(pud, addr); do { @@ -68,7 +76,7 @@ static void __init kasan_early_pud_populate(pgd_t *pgd, unsigned long next; if (pgd_none(*pgd)) - pgd_populate(&init_mm, pgd, kasan_zero_pud); + __pgd_populate(pgd, __pa_symbol(kasan_zero_pud), PUD_TYPE_TABLE); pud = pud_offset_kimg(pgd, addr); do { @@ -148,7 +156,7 @@ void __init kasan_init(void) */ memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir)); dsb(ishst); - cpu_replace_ttbr1(tmp_pg_dir); + cpu_replace_ttbr1(lm_alias(tmp_pg_dir)); clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END); @@ -199,10 +207,10 @@ void __init kasan_init(void) */ for (i = 0; i < PTRS_PER_PTE; i++) set_pte(&kasan_zero_pte[i], - pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO)); + pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO)); memset(kasan_zero_page, 0, PAGE_SIZE); - cpu_replace_ttbr1(swapper_pg_dir); + cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); /* At this point kasan is fully initialized. 
Enable error messages */ init_task.kasan_depth = 0; diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 17243e4..a434157 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -28,6 +28,7 @@ #include <linux/memblock.h> #include <linux/fs.h> #include <linux/io.h> +#include <linux/mm.h> #include <asm/barrier.h> #include <asm/cputype.h> @@ -359,8 +360,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt, static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end) { - unsigned long kernel_start = __pa(_text); - unsigned long kernel_end = __pa(__init_begin); + unsigned long kernel_start = __pa_symbol(_text); + unsigned long kernel_end = __pa_symbol(__init_begin); /* * Take care not to create a writable alias for the @@ -427,14 +428,14 @@ void mark_rodata_ro(void) unsigned long section_size; section_size = (unsigned long)_etext - (unsigned long)_text; - create_mapping_late(__pa(_text), (unsigned long)_text, + create_mapping_late(__pa_symbol(_text), (unsigned long)_text, section_size, PAGE_KERNEL_ROX); /* * mark .rodata as read only. Use __init_begin rather than __end_rodata * to cover NOTES and EXCEPTION_TABLE. 
*/ section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata; - create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata, + create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata, section_size, PAGE_KERNEL_RO); /* flush the TLBs after updating live kernel mappings */ @@ -446,7 +447,7 @@ void mark_rodata_ro(void) static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end, pgprot_t prot, struct vm_struct *vma) { - phys_addr_t pa_start = __pa(va_start); + phys_addr_t pa_start = __pa_symbol(va_start); unsigned long size = va_end - va_start; BUG_ON(!PAGE_ALIGNED(pa_start)); @@ -494,7 +495,7 @@ static void __init map_kernel(pgd_t *pgd) */ BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES)); set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START), - __pud(__pa(bm_pmd) | PUD_TYPE_TABLE)); + __pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE)); pud_clear_fixmap(); } else { BUG(); @@ -525,7 +526,7 @@ void __init paging_init(void) */ cpu_replace_ttbr1(__va(pgd_phys)); memcpy(swapper_pg_dir, pgd, PAGE_SIZE); - cpu_replace_ttbr1(swapper_pg_dir); + cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); pgd_clear_fixmap(); memblock_free(pgd_phys, PAGE_SIZE); @@ -534,7 +535,7 @@ void __init paging_init(void) * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd * allocated with it. */ - memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE, + memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE, SWAPPER_DIR_SIZE - PAGE_SIZE); } @@ -645,6 +646,12 @@ static inline pte_t * fixmap_pte(unsigned long addr) return &bm_pte[pte_index(addr)]; } +/* + * The p*d_populate functions call virt_to_phys implicitly so they can't be used + * directly on kernel symbols (bm_p*d). This function is called too early to use + * lm_alias so __p*d_populate functions must be used to populate with the + * physical address from __pa_symbol. 
+ */ void __init early_fixmap_init(void) { pgd_t *pgd; @@ -654,7 +661,7 @@ void __init early_fixmap_init(void) pgd = pgd_offset_k(addr); if (CONFIG_PGTABLE_LEVELS > 3 && - !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) { + !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) { /* * We only end up here if the kernel mapping and the fixmap * share the top level pgd entry, which should only happen on @@ -663,12 +670,14 @@ void __init early_fixmap_init(void) BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES)); pud = pud_offset_kimg(pgd, addr); } else { - pgd_populate(&init_mm, pgd, bm_pud); + if (pgd_none(*pgd)) + __pgd_populate(pgd, __pa_symbol(bm_pud), PUD_TYPE_TABLE); pud = fixmap_pud(addr); } - pud_populate(&init_mm, pud, bm_pmd); + if (pud_none(*pud)) + __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE); pmd = fixmap_pmd(addr); - pmd_populate_kernel(&init_mm, pmd, bm_pte); + __pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE); /* * The boot-ioremap range spans multiple pmds, for which -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 07/11] drivers: firmware: psci: Use __pa_symbol for kernel symbol 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (5 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 06/11] arm64: Use __pa_symbol for kernel symbols Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 08/11] kexec: Switch to __pa_symbol Laura Abbott ` (4 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas, Lorenzo Pieralisi Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, Christoffer Dall, Marc Zyngier, linux-arm-kernel __pa_symbol is technically the macro that should be used for kernel symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which will do bounds checking. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- drivers/firmware/psci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c index 6c60a50..66a8793 100644 --- a/drivers/firmware/psci.c +++ b/drivers/firmware/psci.c @@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index) u32 *state = __this_cpu_read(psci_power_state); return psci_ops.cpu_suspend(state[index - 1], - virt_to_phys(cpu_resume)); + __pa_symbol(cpu_resume)); } int psci_cpu_suspend_enter(unsigned long index) -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 08/11] kexec: Switch to __pa_symbol 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (6 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 07/11] drivers: firmware: psci: Use __pa_symbol for kernel symbol Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 09/11] mm/kasan: Switch to using __pa_symbol and lm_alias Laura Abbott ` (3 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas, Eric Biederman Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel, kexec __pa_symbol is the correct api to get the physical address of kernel symbols. Switch to it to allow for better debug checking. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- kernel/kexec_core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index 5617cc4..a01974e 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -1399,7 +1399,7 @@ void __weak arch_crash_save_vmcoreinfo(void) phys_addr_t __weak paddr_vmcoreinfo_note(void) { - return __pa((unsigned long)(char *)&vmcoreinfo_note); + return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note); } static int __init crash_save_vmcoreinfo_init(void) -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 09/11] mm/kasan: Switch to using __pa_symbol and lm_alias 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (7 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 08/11] kexec: Switch to __pa_symbol Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 10/11] mm/usercopy: Switch to using lm_alias Laura Abbott ` (2 subsequent siblings) 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas, Andrey Ryabinin Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel, Alexander Potapenko, Dmitry Vyukov, kasan-dev __pa_symbol is the correct API to find the physical address of symbols. Switch to it to allow for debugging APIs to work correctly. Other functions such as p*d_populate may call __pa internally. Ensure that the address passed is in the linear region by calling lm_alias. 
Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- mm/kasan/kasan_init.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/mm/kasan/kasan_init.c b/mm/kasan/kasan_init.c index 3f9a41c..31238da 100644 --- a/mm/kasan/kasan_init.c +++ b/mm/kasan/kasan_init.c @@ -15,6 +15,7 @@ #include <linux/kasan.h> #include <linux/kernel.h> #include <linux/memblock.h> +#include <linux/mm.h> #include <linux/pfn.h> #include <asm/page.h> @@ -49,7 +50,7 @@ static void __init zero_pte_populate(pmd_t *pmd, unsigned long addr, pte_t *pte = pte_offset_kernel(pmd, addr); pte_t zero_pte; - zero_pte = pfn_pte(PFN_DOWN(__pa(kasan_zero_page)), PAGE_KERNEL); + zero_pte = pfn_pte(PFN_DOWN(__pa_symbol(kasan_zero_page)), PAGE_KERNEL); zero_pte = pte_wrprotect(zero_pte); while (addr + PAGE_SIZE <= end) { @@ -69,7 +70,7 @@ static void __init zero_pmd_populate(pud_t *pud, unsigned long addr, next = pmd_addr_end(addr, end); if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) { - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte); + pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte)); continue; } @@ -92,9 +93,9 @@ static void __init zero_pud_populate(pgd_t *pgd, unsigned long addr, if (IS_ALIGNED(addr, PUD_SIZE) && end - addr >= PUD_SIZE) { pmd_t *pmd; - pud_populate(&init_mm, pud, kasan_zero_pmd); + pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd)); pmd = pmd_offset(pud, addr); - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte); + pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte)); continue; } @@ -135,11 +136,11 @@ void __init kasan_populate_zero_shadow(const void *shadow_start, * puds,pmds, so pgd_populate(), pud_populate() * is noops. 
*/ - pgd_populate(&init_mm, pgd, kasan_zero_pud); + pgd_populate(&init_mm, pgd, lm_alias(kasan_zero_pud)); pud = pud_offset(pgd, addr); - pud_populate(&init_mm, pud, kasan_zero_pmd); + pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd)); pmd = pmd_offset(pud, addr); - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte); + pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte)); continue; } -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 10/11] mm/usercopy: Switch to using lm_alias 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (8 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 09/11] mm/kasan: Switch to using __pa_symbol and lm_alias Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 17:21 ` [PATCHv6 11/11] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott 2017-01-03 22:56 ` [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Florian Fainelli 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas, Kees Cook Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel The usercopy checking code currently calls __va(__pa(...)) to check for aliases on symbols. Switch to using lm_alias instead. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Laura Abbott <labbott@redhat.com> --- mm/usercopy.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/usercopy.c b/mm/usercopy.c index 3c8da0a..8345299 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -108,13 +108,13 @@ static inline const char *check_kernel_text_object(const void *ptr, * __pa() is not just the reverse of __va(). This can be detected * and checked: */ - textlow_linear = (unsigned long)__va(__pa(textlow)); + textlow_linear = (unsigned long)lm_alias(textlow); /* No different mapping: we're done. */ if (textlow_linear == textlow) return NULL; /* Check the secondary mapping... */ - texthigh_linear = (unsigned long)__va(__pa(texthigh)); + texthigh_linear = (unsigned long)lm_alias(texthigh); if (overlaps(ptr, n, textlow_linear, texthigh_linear)) return "<linear kernel text>"; -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCHv6 11/11] arm64: Add support for CONFIG_DEBUG_VIRTUAL 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (9 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 10/11] mm/usercopy: Switch to using lm_alias Laura Abbott @ 2017-01-03 17:21 ` Laura Abbott 2017-01-03 22:56 ` [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Florian Fainelli 11 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-03 17:21 UTC (permalink / raw) To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel x86 has an option CONFIG_DEBUG_VIRTUAL to do additional checks on virt_to_phys calls. The goal is to catch users who are calling virt_to_phys on non-linear addresses immediately. This includes callers using virt_to_phys on image addresses instead of __pa_symbol. As features such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly important. Add checks to catch bad virt_to_phys usage. 
Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/memory.h | 31 ++++++++++++++++++++++++++++--- arch/arm64/mm/Makefile | 2 ++ arch/arm64/mm/physaddr.c | 30 ++++++++++++++++++++++++++++++ 4 files changed, 61 insertions(+), 3 deletions(-) create mode 100644 arch/arm64/mm/physaddr.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 1117421..359bca2 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -6,6 +6,7 @@ config ARM64 select ACPI_MCFG if ACPI select ACPI_SPCR_TABLE if ACPI select ARCH_CLOCKSOURCE_DATA + select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI select ARCH_HAS_ELF_RANDOMIZE diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 0ff237a..7011f08 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -172,10 +172,33 @@ static inline unsigned long kaslr_offset(void) * private definitions which should NOT be used outside memory.h * files. Use virt_to_phys/phys_to_virt/__pa/__va instead. */ -#define __virt_to_phys(x) ({ \ + + +/* + * The linear kernel range starts in the middle of the virtual adddress + * space. Testing the top bit for the start of the region is a + * sufficient check. + */ +#define __is_lm_address(addr) (!!((addr) & BIT(VA_BITS - 1))) + +#define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET) +#define __kimg_to_phys(addr) ((addr) - kimage_voffset) + +#define __virt_to_phys_nodebug(x) ({ \ phys_addr_t __x = (phys_addr_t)(x); \ - __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET : \ - (__x - kimage_voffset); }) + __is_lm_address(__x) ? 
__lm_to_phys(__x) : \ + __kimg_to_phys(__x); \ +}) + +#define __pa_symbol_nodebug(x) __kimg_to_phys((phys_addr_t)(x)) + +#ifdef CONFIG_DEBUG_VIRTUAL +extern phys_addr_t __virt_to_phys(unsigned long x); +extern phys_addr_t __phys_addr_symbol(unsigned long x); +#else +#define __virt_to_phys(x) __virt_to_phys_nodebug(x) +#define __phys_addr_symbol(x) __pa_symbol_nodebug(x) +#endif #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET) #define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset)) @@ -207,6 +230,8 @@ static inline void *phys_to_virt(phys_addr_t x) * Drivers should NOT use these either. */ #define __pa(x) __virt_to_phys((unsigned long)(x)) +#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0)) +#define __pa_nodebug(x) __virt_to_phys_nodebug((unsigned long)(x)) #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT) #define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x))) diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index e703fb9..9b0ba19 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -6,6 +6,8 @@ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_ARM64_PTDUMP_CORE) += dump.o obj-$(CONFIG_ARM64_PTDUMP_DEBUGFS) += ptdump_debugfs.o obj-$(CONFIG_NUMA) += numa.o +obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o +KASAN_SANITIZE_physaddr.o += n obj-$(CONFIG_KASAN) += kasan_init.o KASAN_SANITIZE_kasan_init.o := n diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c new file mode 100644 index 0000000..91371da --- /dev/null +++ b/arch/arm64/mm/physaddr.c @@ -0,0 +1,30 @@ +#include <linux/bug.h> +#include <linux/export.h> +#include <linux/types.h> +#include <linux/mmdebug.h> +#include <linux/mm.h> + +#include <asm/memory.h> + +phys_addr_t __virt_to_phys(unsigned long x) +{ + WARN(!__is_lm_address(x), + "virt_to_phys used for non-linear address: %pK (%pS)\n", + (void *)x, + (void *)x); + + return 
__virt_to_phys_nodebug(x); +} +EXPORT_SYMBOL(__virt_to_phys); + +phys_addr_t __phys_addr_symbol(unsigned long x) +{ + /* + * This is bounds checking against the kernel image only. + * __pa_symbol should only be used on kernel symbol addresses. + */ + VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START || + x > (unsigned long) KERNEL_END); + return __pa_symbol_nodebug(x); +} +EXPORT_SYMBOL(__phys_addr_symbol); -- 2.7.4 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott ` (10 preceding siblings ...) 2017-01-03 17:21 ` [PATCHv6 11/11] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott @ 2017-01-03 22:56 ` Florian Fainelli 2017-01-03 23:25 ` Laura Abbott 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 11 siblings, 2 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-03 22:56 UTC (permalink / raw) To: Laura Abbott, Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: linux-mm, Alexander Potapenko, H. Peter Anvin, Thomas Gleixner, Marek Szyprowski, Lorenzo Pieralisi, x86, kasan-dev, Ingo Molnar, linux-arm-kernel, xen-devel, David Vrabel, Kees Cook, Marc Zyngier, Andrey Ryabinin, Boris Ostrovsky, Andrew Morton, Dmitry Vyukov, Juergen Gross, kexec, linux-kernel, Eric Biederman, Joonsoo Kim, Christoffer Dall On 01/03/2017 09:21 AM, Laura Abbott wrote: > Happy New Year! > > This is a very minor rebase from v5. It only moves a few headers around. > I think this series should be ready to be queued up for 4.11. FWIW: Tested-by: Florian Fainelli <f.fainelli@gmail.com> How do we get this series included? I would like to get the ARM 32-bit counterpart included as well (will resubmit rebased shortly), but I have no clue which tree this should be going through. Thanks! 
> > Thanks, > Laura > > Laura Abbott (11): > lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL > mm/cma: Cleanup highmem check > arm64: Move some macros under #ifndef __ASSEMBLY__ > arm64: Add cast for virt_to_pfn > mm: Introduce lm_alias > arm64: Use __pa_symbol for kernel symbols > drivers: firmware: psci: Use __pa_symbol for kernel symbol > kexec: Switch to __pa_symbol > mm/kasan: Switch to using __pa_symbol and lm_alias > mm/usercopy: Switch to using lm_alias > arm64: Add support for CONFIG_DEBUG_VIRTUAL > > arch/arm64/Kconfig | 1 + > arch/arm64/include/asm/kvm_mmu.h | 4 +- > arch/arm64/include/asm/memory.h | 66 +++++++++++++++++++++---------- > arch/arm64/include/asm/mmu_context.h | 6 +-- > arch/arm64/include/asm/pgtable.h | 2 +- > arch/arm64/kernel/acpi_parking_protocol.c | 3 +- > arch/arm64/kernel/cpu-reset.h | 2 +- > arch/arm64/kernel/cpufeature.c | 3 +- > arch/arm64/kernel/hibernate.c | 20 +++------- > arch/arm64/kernel/insn.c | 2 +- > arch/arm64/kernel/psci.c | 3 +- > arch/arm64/kernel/setup.c | 9 +++-- > arch/arm64/kernel/smp_spin_table.c | 3 +- > arch/arm64/kernel/vdso.c | 8 +++- > arch/arm64/mm/Makefile | 2 + > arch/arm64/mm/init.c | 12 +++--- > arch/arm64/mm/kasan_init.c | 22 +++++++---- > arch/arm64/mm/mmu.c | 33 ++++++++++------ > arch/arm64/mm/physaddr.c | 30 ++++++++++++++ > arch/x86/Kconfig | 1 + > drivers/firmware/psci.c | 2 +- > include/linux/mm.h | 4 ++ > kernel/kexec_core.c | 2 +- > lib/Kconfig.debug | 5 ++- > mm/cma.c | 15 +++---- > mm/kasan/kasan_init.c | 15 +++---- > mm/usercopy.c | 4 +- > 27 files changed, 180 insertions(+), 99 deletions(-) > create mode 100644 arch/arm64/mm/physaddr.c > -- Florian ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 2017-01-03 22:56 ` [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Florian Fainelli @ 2017-01-03 23:25 ` Laura Abbott 2017-01-04 11:44 ` Will Deacon 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 1 sibling, 1 reply; 32+ messages in thread From: Laura Abbott @ 2017-01-03 23:25 UTC (permalink / raw) To: Florian Fainelli, Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas Cc: linux-mm, Alexander Potapenko, H. Peter Anvin, Thomas Gleixner, Marek Szyprowski, Lorenzo Pieralisi, x86, kasan-dev, Ingo Molnar, linux-arm-kernel, xen-devel, David Vrabel, Kees Cook, Marc Zyngier, Andrey Ryabinin, Boris Ostrovsky, Andrew Morton, Dmitry Vyukov, Juergen Gross, kexec, linux-kernel, Eric Biederman, Joonsoo Kim, Christoffer Dall On 01/03/2017 02:56 PM, Florian Fainelli wrote: > On 01/03/2017 09:21 AM, Laura Abbott wrote: >> Happy New Year! >> >> This is a very minor rebase from v5. It only moves a few headers around. >> I think this series should be ready to be queued up for 4.11. > > FWIW: > > Tested-by: Florian Fainelli <f.fainelli@gmail.com> > Thanks! > How do we get this series included? I would like to get the ARM 32-bit > counterpart included as well (will resubmit rebased shortly), but I have > no clue which tree this should be going through. > I was assuming this would go through the arm64 tree unless Catalin/Will have an objection to that. > Thanks! 
> >> >> Thanks, >> Laura >> >> Laura Abbott (11): >> lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL >> mm/cma: Cleanup highmem check >> arm64: Move some macros under #ifndef __ASSEMBLY__ >> arm64: Add cast for virt_to_pfn >> mm: Introduce lm_alias >> arm64: Use __pa_symbol for kernel symbols >> drivers: firmware: psci: Use __pa_symbol for kernel symbol >> kexec: Switch to __pa_symbol >> mm/kasan: Switch to using __pa_symbol and lm_alias >> mm/usercopy: Switch to using lm_alias >> arm64: Add support for CONFIG_DEBUG_VIRTUAL >> >> arch/arm64/Kconfig | 1 + >> arch/arm64/include/asm/kvm_mmu.h | 4 +- >> arch/arm64/include/asm/memory.h | 66 +++++++++++++++++++++---------- >> arch/arm64/include/asm/mmu_context.h | 6 +-- >> arch/arm64/include/asm/pgtable.h | 2 +- >> arch/arm64/kernel/acpi_parking_protocol.c | 3 +- >> arch/arm64/kernel/cpu-reset.h | 2 +- >> arch/arm64/kernel/cpufeature.c | 3 +- >> arch/arm64/kernel/hibernate.c | 20 +++------- >> arch/arm64/kernel/insn.c | 2 +- >> arch/arm64/kernel/psci.c | 3 +- >> arch/arm64/kernel/setup.c | 9 +++-- >> arch/arm64/kernel/smp_spin_table.c | 3 +- >> arch/arm64/kernel/vdso.c | 8 +++- >> arch/arm64/mm/Makefile | 2 + >> arch/arm64/mm/init.c | 12 +++--- >> arch/arm64/mm/kasan_init.c | 22 +++++++---- >> arch/arm64/mm/mmu.c | 33 ++++++++++------ >> arch/arm64/mm/physaddr.c | 30 ++++++++++++++ >> arch/x86/Kconfig | 1 + >> drivers/firmware/psci.c | 2 +- >> include/linux/mm.h | 4 ++ >> kernel/kexec_core.c | 2 +- >> lib/Kconfig.debug | 5 ++- >> mm/cma.c | 15 +++---- >> mm/kasan/kasan_init.c | 15 +++---- >> mm/usercopy.c | 4 +- >> 27 files changed, 180 insertions(+), 99 deletions(-) >> create mode 100644 arch/arm64/mm/physaddr.c >> > > ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 2017-01-03 23:25 ` Laura Abbott @ 2017-01-04 11:44 ` Will Deacon 2017-01-04 22:30 ` Florian Fainelli 0 siblings, 1 reply; 32+ messages in thread From: Will Deacon @ 2017-01-04 11:44 UTC (permalink / raw) To: Laura Abbott Cc: Florian Fainelli, Mark Rutland, Ard Biesheuvel, Catalin Marinas, linux-mm, Alexander Potapenko, H. Peter Anvin, Thomas Gleixner, Marek Szyprowski, Lorenzo Pieralisi, x86, kasan-dev, Ingo Molnar, linux-arm-kernel, xen-devel, David Vrabel, Kees Cook, Marc Zyngier, Andrey Ryabinin, Boris Ostrovsky, Andrew Morton, Dmitry Vyukov, Juergen Gross, kexec, linux-kernel, Eric Biederman, Joonsoo Kim, Christoffer Dall On Tue, Jan 03, 2017 at 03:25:53PM -0800, Laura Abbott wrote: > On 01/03/2017 02:56 PM, Florian Fainelli wrote: > > On 01/03/2017 09:21 AM, Laura Abbott wrote: > >> Happy New Year! > >> > >> This is a very minor rebase from v5. It only moves a few headers around. > >> I think this series should be ready to be queued up for 4.11. > > > > FWIW: > > > > Tested-by: Florian Fainelli <f.fainelli@gmail.com> > > > > Thanks! > > > How do we get this series included? I would like to get the ARM 32-bit > > counterpart included as well (will resubmit rebased shortly), but I have > > no clue which tree this should be going through. > > > > I was assuming this would go through the arm64 tree unless Catalin/Will > have an objection to that. Yup, I was planning to pick it up for 4.11. Florian -- does your series depend on this? If so, then I'll need to co-ordinate with Russell (probably via a shared branch that we both pull) if you're aiming for 4.11 too. Will ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 2017-01-04 11:44 ` Will Deacon @ 2017-01-04 22:30 ` Florian Fainelli 2017-01-10 12:41 ` Will Deacon 0 siblings, 1 reply; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 22:30 UTC (permalink / raw) To: Will Deacon, Laura Abbott Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, linux-mm, Alexander Potapenko, H. Peter Anvin, Thomas Gleixner, Marek Szyprowski, Lorenzo Pieralisi, x86, kasan-dev, Ingo Molnar, linux-arm-kernel, xen-devel, David Vrabel, Kees Cook, Marc Zyngier, Andrey Ryabinin, Boris Ostrovsky, Andrew Morton, Dmitry Vyukov, Juergen Gross, kexec, linux-kernel, Eric Biederman, Joonsoo Kim, Christoffer Dall On 01/04/2017 03:44 AM, Will Deacon wrote: > On Tue, Jan 03, 2017 at 03:25:53PM -0800, Laura Abbott wrote: >> On 01/03/2017 02:56 PM, Florian Fainelli wrote: >>> On 01/03/2017 09:21 AM, Laura Abbott wrote: >>>> Happy New Year! >>>> >>>> This is a very minor rebase from v5. It only moves a few headers around. >>>> I think this series should be ready to be queued up for 4.11. >>> >>> FWIW: >>> >>> Tested-by: Florian Fainelli <f.fainelli@gmail.com> >>> >> >> Thanks! >> >>> How do we get this series included? I would like to get the ARM 32-bit >>> counterpart included as well (will resubmit rebased shortly), but I have >>> no clue which tree this should be going through. >>> >> >> I was assuming this would go through the arm64 tree unless Catalin/Will >> have an objection to that. > > Yup, I was planning to pick it up for 4.11. > > Florian -- does your series depend on this? If so, then I'll need to > co-ordinate with Russell (probably via a shared branch that we both pull) > if you're aiming for 4.11 too. Yes, pretty much everything in Laura's patch series is relevant, except the arm64 bits. I will get v6 out now addressing Laura's and Hartley's feedback and then, if you could holler when and where you have applied these, I can coordinate with Russell about how to get these included. 
Thanks and happy new year! -- Florian ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 2017-01-04 22:30 ` Florian Fainelli @ 2017-01-10 12:41 ` Will Deacon 0 siblings, 0 replies; 32+ messages in thread From: Will Deacon @ 2017-01-10 12:41 UTC (permalink / raw) To: Florian Fainelli Cc: Laura Abbott, Mark Rutland, Ard Biesheuvel, Catalin Marinas, linux-mm, Alexander Potapenko, H. Peter Anvin, Thomas Gleixner, Marek Szyprowski, Lorenzo Pieralisi, x86, kasan-dev, Ingo Molnar, linux-arm-kernel, xen-devel, David Vrabel, Kees Cook, Marc Zyngier, Andrey Ryabinin, Boris Ostrovsky, Andrew Morton, Dmitry Vyukov, Juergen Gross, kexec, linux-kernel, Eric Biederman, Joonsoo Kim, Christoffer Dall On Wed, Jan 04, 2017 at 02:30:50PM -0800, Florian Fainelli wrote: > On 01/04/2017 03:44 AM, Will Deacon wrote: > > On Tue, Jan 03, 2017 at 03:25:53PM -0800, Laura Abbott wrote: > >> On 01/03/2017 02:56 PM, Florian Fainelli wrote: > >>> On 01/03/2017 09:21 AM, Laura Abbott wrote: > >>>> Happy New Year! > >>>> > >>>> This is a very minor rebase from v5. It only moves a few headers around. > >>>> I think this series should be ready to be queued up for 4.11. > >>> > >>> FWIW: > >>> > >>> Tested-by: Florian Fainelli <f.fainelli@gmail.com> > >>> > >> > >> Thanks! > >> > >>> How do we get this series included? I would like to get the ARM 32-bit > >>> counterpart included as well (will resubmit rebased shortly), but I have > >>> no clue which tree this should be going through. > >>> > >> > >> I was assuming this would go through the arm64 tree unless Catalin/Will > >> have an objection to that. > > > > Yup, I was planning to pick it up for 4.11. > > > > Florian -- does your series depend on this? If so, then I'll need to > > co-ordinate with Russell (probably via a shared branch that we both pull) > > if you're aiming for 4.11 too. > > Yes, pretty much everything in Laura's patch series is relevant, except > the arm64 bits. Ok, then. Laura -- could you please reorder your patches so that the non-arm64 bits come first? 
That way, I can put those on a separate branch and have it pulled by both arm64 and rmk, so that the prerequisites are shared between the architectures. Thanks, Will ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL 2017-01-03 22:56 ` [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Florian Fainelli 2017-01-03 23:25 ` Laura Abbott @ 2017-01-04 1:14 ` Florian Fainelli 2017-01-04 1:14 ` [PATCH v5 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli ` (4 more replies) 1 sibling, 5 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 1:14 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon This patch series builds on top of Laura's [PATCHv6 00/10] CONFIG_DEBUG_VIRTUAL for arm64 to add support for CONFIG_DEBUG_VIRTUAL for ARM. This was tested on a Brahma B15 platform (ARMv7 + HIGHMEM + LPAE). Note that the treewide changes would involve a huge CC list, which is why it has been purposely trimmed to just focusing on the DEBUG_VIRTUAL aspect. Catalin, provided that you take Laura's series, I suppose I would submit this one through Russell's patch system if that's okay with everyone? Thanks! 
Changes in v5: - rebased against Laura's [PATCHv6 00/10] CONFIG_DEBUG_VIRTUAL for arm64 and v4.10-rc2 - added Russell's acked-by for patches 2 through 4 Changes in v4: - added Boris' ack for the first patch - reworked the virtual address check based on Laura's suggestion to make the code more readable Changes in v3: - fix build failures reported by Kbuild test robot Changes in v2: - Modified MTD LART driver not to create symbol conflicts with KERNEL_START - Fixed patch that defines and uses KERNEL_START/END - Fixed __pa_symbol()'s definition - Inline __pa_symbol() check within the VIRTUAL_BUG_ON statement - Simplified check for virtual addresses - Added a tree-wide patch changing SMP/PM implementations to use __pa_symbol(), build tested against multi_v{5,7}_defconfig Florian Fainelli (4): mtd: lart: Rename partition defines to be prefixed with PART_ ARM: Define KERNEL_START and KERNEL_END ARM: Add support for CONFIG_DEBUG_VIRTUAL ARM: treewide: Replace uses of virt_to_phys with __pa_symbol arch/arm/Kconfig | 1 + arch/arm/common/mcpm_entry.c | 12 +++---- arch/arm/include/asm/memory.h | 23 +++++++++++-- arch/arm/mach-alpine/platsmp.c | 2 +- arch/arm/mach-axxia/platsmp.c | 2 +- arch/arm/mach-bcm/bcm63xx_smp.c | 2 +- arch/arm/mach-bcm/platsmp-brcmstb.c | 2 +- arch/arm/mach-bcm/platsmp.c | 4 +-- arch/arm/mach-berlin/platsmp.c | 2 +- arch/arm/mach-exynos/firmware.c | 4 +-- arch/arm/mach-exynos/mcpm-exynos.c | 2 +- arch/arm/mach-exynos/platsmp.c | 4 +-- arch/arm/mach-exynos/pm.c | 6 ++-- arch/arm/mach-exynos/suspend.c | 6 ++-- arch/arm/mach-hisi/platmcpm.c | 2 +- arch/arm/mach-hisi/platsmp.c | 6 ++-- arch/arm/mach-imx/platsmp.c | 2 +- arch/arm/mach-imx/pm-imx6.c | 2 +- arch/arm/mach-imx/src.c | 2 +- arch/arm/mach-mediatek/platsmp.c | 2 +- arch/arm/mach-mvebu/pm.c | 2 +- arch/arm/mach-mvebu/pmsu.c | 2 +- arch/arm/mach-mvebu/system-controller.c | 2 +- arch/arm/mach-omap2/control.c | 8 ++--- arch/arm/mach-omap2/omap-mpuss-lowpower.c | 12 +++---- 
arch/arm/mach-omap2/omap-smp.c | 4 +-- arch/arm/mach-prima2/platsmp.c | 2 +- arch/arm/mach-prima2/pm.c | 2 +- arch/arm/mach-pxa/palmz72.c | 2 +- arch/arm/mach-pxa/pxa25x.c | 2 +- arch/arm/mach-pxa/pxa27x.c | 2 +- arch/arm/mach-pxa/pxa3xx.c | 2 +- arch/arm/mach-realview/platsmp-dt.c | 2 +- arch/arm/mach-rockchip/platsmp.c | 4 +-- arch/arm/mach-rockchip/pm.c | 2 +- arch/arm/mach-s3c24xx/mach-jive.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2410.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2416.c | 2 +- arch/arm/mach-s3c64xx/pm.c | 2 +- arch/arm/mach-s5pv210/pm.c | 2 +- arch/arm/mach-sa1100/pm.c | 2 +- arch/arm/mach-shmobile/platsmp-apmu.c | 6 ++-- arch/arm/mach-shmobile/platsmp-scu.c | 4 +-- arch/arm/mach-socfpga/platsmp.c | 4 +-- arch/arm/mach-spear/platsmp.c | 2 +- arch/arm/mach-sti/platsmp.c | 2 +- arch/arm/mach-sunxi/platsmp.c | 4 +-- arch/arm/mach-tango/platsmp.c | 2 +- arch/arm/mach-tango/pm.c | 2 +- arch/arm/mach-tegra/reset.c | 4 +-- arch/arm/mach-ux500/platsmp.c | 2 +- arch/arm/mach-vexpress/dcscb.c | 2 +- arch/arm/mach-vexpress/platsmp.c | 2 +- arch/arm/mach-vexpress/tc2_pm.c | 4 +-- arch/arm/mach-zx/platsmp.c | 4 +-- arch/arm/mach-zynq/platsmp.c | 2 +- arch/arm/mm/Makefile | 1 + arch/arm/mm/init.c | 7 ++-- arch/arm/mm/mmu.c | 6 +--- arch/arm/mm/physaddr.c | 55 +++++++++++++++++++++++++++++++ drivers/mtd/devices/lart.c | 24 +++++++------- 61 files changed, 179 insertions(+), 110 deletions(-) create mode 100644 arch/arm/mm/physaddr.c -- 2.9.3 ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v5 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli @ 2017-01-04 1:14 ` Florian Fainelli 2017-01-04 1:14 ` [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli ` (3 subsequent siblings) 4 siblings, 0 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 1:14 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon In preparation for defining KERNEL_START on ARM, rename KERNEL_START to PART_KERNEL_START, and to be consistent, do this for all partition-related constants. Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- drivers/mtd/devices/lart.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/mtd/devices/lart.c b/drivers/mtd/devices/lart.c index 82bd00af5cc3..268aae45b514 100644 --- a/drivers/mtd/devices/lart.c +++ b/drivers/mtd/devices/lart.c @@ -75,18 +75,18 @@ static char module_name[] = "lart"; /* blob */ #define NUM_BLOB_BLOCKS FLASH_NUMBLOCKS_16m_PARAM -#define BLOB_START 0x00000000 -#define BLOB_LEN (NUM_BLOB_BLOCKS * FLASH_BLOCKSIZE_PARAM) +#define PART_BLOB_START 0x00000000 +#define PART_BLOB_LEN (NUM_BLOB_BLOCKS * FLASH_BLOCKSIZE_PARAM) /* kernel */ #define NUM_KERNEL_BLOCKS 7 -#define KERNEL_START (BLOB_START + BLOB_LEN) -#define KERNEL_LEN (NUM_KERNEL_BLOCKS * FLASH_BLOCKSIZE_MAIN) +#define PART_KERNEL_START (PART_BLOB_START + PART_BLOB_LEN) +#define PART_KERNEL_LEN (NUM_KERNEL_BLOCKS * FLASH_BLOCKSIZE_MAIN) /* initial ramdisk */ #define NUM_INITRD_BLOCKS 24 -#define 
INITRD_START (KERNEL_START + KERNEL_LEN) -#define INITRD_LEN (NUM_INITRD_BLOCKS * FLASH_BLOCKSIZE_MAIN) +#define PART_INITRD_START (PART_KERNEL_START + PART_KERNEL_LEN) +#define PART_INITRD_LEN (NUM_INITRD_BLOCKS * FLASH_BLOCKSIZE_MAIN) /* * See section 4.0 in "3 Volt Fast Boot Block Flash Memory" Intel Datasheet @@ -587,20 +587,20 @@ static struct mtd_partition lart_partitions[] = { /* blob */ { .name = "blob", - .offset = BLOB_START, - .size = BLOB_LEN, + .offset = PART_BLOB_START, + .size = PART_BLOB_LEN, }, /* kernel */ { .name = "kernel", - .offset = KERNEL_START, /* MTDPART_OFS_APPEND */ - .size = KERNEL_LEN, + .offset = PART_KERNEL_START, /* MTDPART_OFS_APPEND */ + .size = PART_KERNEL_LEN, }, /* initial ramdisk / file system */ { .name = "file system", - .offset = INITRD_START, /* MTDPART_OFS_APPEND */ - .size = INITRD_LEN, /* MTDPART_SIZ_FULL */ + .offset = PART_INITRD_START, /* MTDPART_OFS_APPEND */ + .size = PART_INITRD_LEN, /* MTDPART_SIZ_FULL */ } }; #define NUM_PARTITIONS ARRAY_SIZE(lart_partitions) -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 2017-01-04 1:14 ` [PATCH v5 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli @ 2017-01-04 1:14 ` Florian Fainelli 2017-01-04 15:58 ` Hartley Sweeten 2017-01-04 1:14 ` [PATCH v5 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli ` (2 subsequent siblings) 4 siblings, 1 reply; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 1:14 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of common constants: KERNEL_START and KERNEL_END which abstract CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where relevant. 
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- arch/arm/include/asm/memory.h | 7 +++++++ arch/arm/mm/init.c | 7 ++----- arch/arm/mm/mmu.c | 6 +----- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index 76cbd9c674df..bee7511c5098 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -111,6 +111,13 @@ #endif /* !CONFIG_MMU */ +#ifdef CONFIG_XIP_KERNEL +#define KERNEL_START _sdata +#else +#define KERNEL_START _stext +#endif +#define KERNEL_END _end + /* * We fix the TCM memories max 32 KiB ITCM resp DTCM at these * locations diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 370581aeb871..c87d0d5b65f2 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -230,11 +230,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align) void __init arm_memblock_init(const struct machine_desc *mdesc) { /* Register the kernel text, kernel data and initrd with memblock. 
*/ -#ifdef CONFIG_XIP_KERNEL - memblock_reserve(__pa(_sdata), _end - _sdata); -#else - memblock_reserve(__pa(_stext), _end - _stext); -#endif + memblock_reserve(__pa(KERNEL_START), _end - KERNEL_START); + #ifdef CONFIG_BLK_DEV_INITRD /* FDT scan will populate initrd_start */ if (initrd_start && !phys_initrd_size) { diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 4001dd15818d..f0fd1a2db036 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1437,11 +1437,7 @@ static void __init kmap_init(void) static void __init map_lowmem(void) { struct memblock_region *reg; -#ifdef CONFIG_XIP_KERNEL - phys_addr_t kernel_x_start = round_down(__pa(_sdata), SECTION_SIZE); -#else - phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE); -#endif + phys_addr_t kernel_x_start = round_down(__pa(KERNEL_START), SECTION_SIZE); phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE); /* Map all the lowmem memory banks. */ -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* RE: [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END 2017-01-04 1:14 ` [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli @ 2017-01-04 15:58 ` Hartley Sweeten 2017-01-04 17:36 ` Florian Fainelli 0 siblings, 1 reply; 32+ messages in thread From: Hartley Sweeten @ 2017-01-04 15:58 UTC (permalink / raw) To: Florian Fainelli, linux-arm-kernel, catalin.marinas Cc: nicolas.pitre, mark.rutland, matt, will.deacon, stefan, chris.brandt, linux-mtd, cyrille.pitchen, panand, boris.brezillon, pawel.moll, richard, linux, ben, vladimir.murzin, keescook, arnd, labbott, jonathan.austin, ard.biesheuvel, linux-kernel, computersforpeace, dwmw2, kirill.shutemov, js07.lee On Tuesday, January 03, 2017 6:14 PM, Florian Fainelli wrote: > > In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of > common constants: KERNEL_START and KERNEL_END which abstract > CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where > relevant. > > Acked-by: Russell King <rmk+kernel@armlinux.org.uk> > Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> > --- > arch/arm/include/asm/memory.h | 7 +++++++ > arch/arm/mm/init.c | 7 ++----- > arch/arm/mm/mmu.c | 6 +----- > 3 files changed, 10 insertions(+), 10 deletions(-) > > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h > index 76cbd9c674df..bee7511c5098 100644 > --- a/arch/arm/include/asm/memory.h > +++ b/arch/arm/include/asm/memory.h > @@ -111,6 +111,13 @@ > > #endif /* !CONFIG_MMU */ > > +#ifdef CONFIG_XIP_KERNEL > +#define KERNEL_START _sdata > +#else > +#define KERNEL_START _stext > +#endif > +#define KERNEL_END _end > + > /* > * We fix the TCM memories max 32 KiB ITCM resp DTCM at these > * locations > diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c > index 370581aeb871..c87d0d5b65f2 100644 > --- a/arch/arm/mm/init.c > +++ b/arch/arm/mm/init.c > @@ -230,11 +230,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align) > void __init 
arm_memblock_init(const struct machine_desc *mdesc) > { > /* Register the kernel text, kernel data and initrd with memblock. */ > -#ifdef CONFIG_XIP_KERNEL > - memblock_reserve(__pa(_sdata), _end - _sdata); > -#else > - memblock_reserve(__pa(_stext), _end - _stext); > -#endif > + memblock_reserve(__pa(KERNEL_START), _end - KERNEL_START); Shouldn't the '_end' above be 'KERNEL_END'? Hartley ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END 2017-01-04 15:58 ` Hartley Sweeten @ 2017-01-04 17:36 ` Florian Fainelli 0 siblings, 0 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 17:36 UTC (permalink / raw) To: Hartley Sweeten, linux-arm-kernel, catalin.marinas Cc: nicolas.pitre, mark.rutland, matt, will.deacon, stefan, chris.brandt, linux-mtd, cyrille.pitchen, panand, boris.brezillon, pawel.moll, richard, linux, ben, vladimir.murzin, keescook, arnd, labbott, jonathan.austin, ard.biesheuvel, linux-kernel, computersforpeace, dwmw2, kirill.shutemov, js07.lee On 01/04/2017 07:58 AM, Hartley Sweeten wrote: > On Tuesday, January 03, 2017 6:14 PM, Florian Fainelli wrote: >> >> In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of >> common constants: KERNEL_START and KERNEL_END which abstract >> CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where >> relevant. >> >> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> >> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> >> --- >> arch/arm/include/asm/memory.h | 7 +++++++ >> arch/arm/mm/init.c | 7 ++----- >> arch/arm/mm/mmu.c | 6 +----- >> 3 files changed, 10 insertions(+), 10 deletions(-) >> >> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h >> index 76cbd9c674df..bee7511c5098 100644 >> --- a/arch/arm/include/asm/memory.h >> +++ b/arch/arm/include/asm/memory.h >> @@ -111,6 +111,13 @@ >> >> #endif /* !CONFIG_MMU */ >> >> +#ifdef CONFIG_XIP_KERNEL >> +#define KERNEL_START _sdata >> +#else >> +#define KERNEL_START _stext >> +#endif >> +#define KERNEL_END _end >> + >> /* >> * We fix the TCM memories max 32 KiB ITCM resp DTCM at these >> * locations >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c >> index 370581aeb871..c87d0d5b65f2 100644 >> --- a/arch/arm/mm/init.c >> +++ b/arch/arm/mm/init.c >> @@ -230,11 +230,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align) >> void __init 
arm_memblock_init(const struct machine_desc *mdesc) >> { >> /* Register the kernel text, kernel data and initrd with memblock. */ >> -#ifdef CONFIG_XIP_KERNEL >> - memblock_reserve(__pa(_sdata), _end - _sdata); >> -#else >> - memblock_reserve(__pa(_stext), _end - _stext); >> -#endif >> + memblock_reserve(__pa(KERNEL_START), _end - KERNEL_START); > > Shouldn't the '_end' above be 'KERNEL_END'? I sort of intentionally didn't change that line in order not to make it exceed 80 columns and have checkpatch whine about it, but if you think this is clearer, I can add this change, since I need to respin to address Laura's feedback anyway. Thanks! -- Florian ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v5 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 2017-01-04 1:14 ` [PATCH v5 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli 2017-01-04 1:14 ` [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli @ 2017-01-04 1:14 ` Florian Fainelli 2017-01-04 17:20 ` Laura Abbott 2017-01-04 1:14 ` [PATCH v5 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 4 siblings, 1 reply; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 1:14 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon x86 has an option: CONFIG_DEBUG_VIRTUAL to do additional checks on virt_to_phys calls. The goal is to catch users who are calling virt_to_phys on non-linear addresses immediately. This includes callers using __virt_to_phys() on image addresses instead of __pa_symbol(). This is a generally useful debug feature to spot bad code (particularly in drivers). 
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- arch/arm/Kconfig | 1 + arch/arm/include/asm/memory.h | 16 +++++++++++-- arch/arm/mm/Makefile | 1 + arch/arm/mm/physaddr.c | 55 +++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 71 insertions(+), 2 deletions(-) create mode 100644 arch/arm/mm/physaddr.c diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 5fab553fd03a..4700294f4e09 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -2,6 +2,7 @@ config ARM bool default y select ARCH_CLOCKSOURCE_DATA + select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index bee7511c5098..d90300193adf 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -213,7 +213,7 @@ extern const void *__pv_table_begin, *__pv_table_end; : "r" (x), "I" (__PV_BITS_31_24) \ : "cc") -static inline phys_addr_t __virt_to_phys(unsigned long x) +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x) { phys_addr_t t; @@ -245,7 +245,7 @@ static inline unsigned long __phys_to_virt(phys_addr_t x) #define PHYS_OFFSET PLAT_PHYS_OFFSET #define PHYS_PFN_OFFSET ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT)) -static inline phys_addr_t __virt_to_phys(unsigned long x) +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x) { return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET; } @@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x) ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \ PHYS_PFN_OFFSET) +#define __pa_symbol_nodebug(x) __virt_to_phys_nodebug((x)) + +#ifdef CONFIG_DEBUG_VIRTUAL +extern phys_addr_t __virt_to_phys(unsigned long x); +extern phys_addr_t __phys_addr_symbol(unsigned long x); +#else +#define __virt_to_phys(x) __virt_to_phys_nodebug(x) +#define __phys_addr_symbol(x) 
__pa_symbol_nodebug(x) +#endif + /* * These are *only* valid on the kernel direct mapped RAM memory. * Note: Drivers should NOT use these. They are the wrong @@ -283,9 +293,11 @@ static inline void *phys_to_virt(phys_addr_t x) * Drivers should NOT use these either. */ #define __pa(x) __virt_to_phys((unsigned long)(x)) +#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0)) #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) #define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT) + extern long long arch_phys_to_idmap_offset; /* diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile index e8698241ece9..b3dea80715b4 100644 --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -14,6 +14,7 @@ endif obj-$(CONFIG_ARM_PTDUMP) += dump.o obj-$(CONFIG_MODULES) += proc-syms.o +obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o obj-$(CONFIG_HIGHMEM) += highmem.o diff --git a/arch/arm/mm/physaddr.c b/arch/arm/mm/physaddr.c new file mode 100644 index 000000000000..f10bdcbcb155 --- /dev/null +++ b/arch/arm/mm/physaddr.c @@ -0,0 +1,55 @@ +#include <linux/bug.h> +#include <linux/export.h> +#include <linux/types.h> +#include <linux/mmdebug.h> +#include <linux/mm.h> + +#include <asm/sections.h> +#include <asm/memory.h> +#include <asm/fixmap.h> +#include <asm/dma.h> + +#include "mm.h" + +static inline bool __virt_addr_valid(unsigned long x) +{ + /* high_memory does not get immediately defined, and there + * are early callers of __pa() against PAGE_OFFSET + */ + if (!high_memory && x >= PAGE_OFFSET) + return true; + + if (high_memory && x >= PAGE_OFFSET && x < (unsigned long)high_memory) + return true; + + /* ARM uses the default per-CPU allocation routing which forces us to + * have an explicit check here to avoid a false positive + */ + if (x == MAX_DMA_ADDRESS) + return true; + + return false; +} + +phys_addr_t __virt_to_phys(unsigned long x) +{ + WARN(!__virt_addr_valid(x), + "virt_to_phys used for 
non-linear address: %pK (%pS)\n", + (void *)x, + (void *)x); + + return __virt_to_phys_nodebug(x); +} +EXPORT_SYMBOL(__virt_to_phys); + +phys_addr_t __phys_addr_symbol(unsigned long x) +{ + /* This is bounds checking against the kernel image only. + * __pa_symbol should only be used on kernel symbol addresses. + */ + VIRTUAL_BUG_ON(x < (unsigned long)KERNEL_START || + x > (unsigned long)KERNEL_END); + + return __pa_symbol_nodebug(x); +} +EXPORT_SYMBOL(__phys_addr_symbol); -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH v5 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL 2017-01-04 1:14 ` [PATCH v5 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli @ 2017-01-04 17:20 ` Laura Abbott 0 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-04 17:20 UTC (permalink / raw) To: Florian Fainelli, linux-arm-kernel, catalin.marinas Cc: linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon On 01/03/2017 05:14 PM, Florian Fainelli wrote: > x86 has an option: CONFIG_DEBUG_VIRTUAL to do additional checks on > virt_to_phys calls. The goal is to catch users who are calling > virt_to_phys on non-linear addresses immediately. This includes callers > using __virt_to_phys() on image addresses instead of __pa_symbol(). This > is a generally useful debug feature to spot bad code (particularly in > drivers). 
> > Acked-by: Russell King <rmk+kernel@armlinux.org.uk> > Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> This mostly looks good with a few comments below > --- > arch/arm/Kconfig | 1 + > arch/arm/include/asm/memory.h | 16 +++++++++++-- > arch/arm/mm/Makefile | 1 + > arch/arm/mm/physaddr.c | 55 +++++++++++++++++++++++++++++++++++++++++++ > 4 files changed, 71 insertions(+), 2 deletions(-) > create mode 100644 arch/arm/mm/physaddr.c > > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig > index 5fab553fd03a..4700294f4e09 100644 > --- a/arch/arm/Kconfig > +++ b/arch/arm/Kconfig > @@ -2,6 +2,7 @@ config ARM > bool > default y > select ARCH_CLOCKSOURCE_DATA > + select ARCH_HAS_DEBUG_VIRTUAL > select ARCH_HAS_DEVMEM_IS_ALLOWED > select ARCH_HAS_ELF_RANDOMIZE > select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h > index bee7511c5098..d90300193adf 100644 > --- a/arch/arm/include/asm/memory.h > +++ b/arch/arm/include/asm/memory.h > @@ -213,7 +213,7 @@ extern const void *__pv_table_begin, *__pv_table_end; > : "r" (x), "I" (__PV_BITS_31_24) \ > : "cc") > > -static inline phys_addr_t __virt_to_phys(unsigned long x) > +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x) > { > phys_addr_t t; > > @@ -245,7 +245,7 @@ static inline unsigned long __phys_to_virt(phys_addr_t x) > #define PHYS_OFFSET PLAT_PHYS_OFFSET > #define PHYS_PFN_OFFSET ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT)) > > -static inline phys_addr_t __virt_to_phys(unsigned long x) > +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x) > { > return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET; > } > @@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x) > ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \ > PHYS_PFN_OFFSET) > > +#define __pa_symbol_nodebug(x) __virt_to_phys_nodebug((x)) > + > +#ifdef CONFIG_DEBUG_VIRTUAL > +extern phys_addr_t __virt_to_phys(unsigned 
long x); > +extern phys_addr_t __phys_addr_symbol(unsigned long x); > +#else > +#define __virt_to_phys(x) __virt_to_phys_nodebug(x) > +#define __phys_addr_symbol(x) __pa_symbol_nodebug(x) > +#endif > + > /* > * These are *only* valid on the kernel direct mapped RAM memory. > * Note: Drivers should NOT use these. They are the wrong > @@ -283,9 +293,11 @@ static inline void *phys_to_virt(phys_addr_t x) > * Drivers should NOT use these either. > */ > #define __pa(x) __virt_to_phys((unsigned long)(x)) > +#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0)) > #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) > #define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT) > > + Extra blank here > extern long long arch_phys_to_idmap_offset; > > /* > diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile > index e8698241ece9..b3dea80715b4 100644 > --- a/arch/arm/mm/Makefile > +++ b/arch/arm/mm/Makefile > @@ -14,6 +14,7 @@ endif > > obj-$(CONFIG_ARM_PTDUMP) += dump.o > obj-$(CONFIG_MODULES) += proc-syms.o > +obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o > > obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o > obj-$(CONFIG_HIGHMEM) += highmem.o > diff --git a/arch/arm/mm/physaddr.c b/arch/arm/mm/physaddr.c > new file mode 100644 > index 000000000000..f10bdcbcb155 > --- /dev/null > +++ b/arch/arm/mm/physaddr.c > @@ -0,0 +1,55 @@ > +#include <linux/bug.h> > +#include <linux/export.h> > +#include <linux/types.h> > +#include <linux/mmdebug.h> > +#include <linux/mm.h> > + > +#include <asm/sections.h> > +#include <asm/memory.h> > +#include <asm/fixmap.h> > +#include <asm/dma.h> > + > +#include "mm.h" > + > +static inline bool __virt_addr_valid(unsigned long x) > +{ > + /* high_memory does not get immediately defined, and there > + * are early callers of __pa() against PAGE_OFFSET > + */ Nit: All the comments in this file should have the text starting on the next line after the /* > + if (!high_memory && x >= PAGE_OFFSET) > + return true; > + > + if 
(high_memory && x >= PAGE_OFFSET && x < (unsigned long)high_memory) > + return true; > + > + /* ARM uses the default per-CPU allocation routing which forces us to > + * have an explicit check here to avoid a false positive > + */ This comment isn't fully descriptive, MAX_DMA_ADDRESS could be used in more places than just per-CPU allocation. Suggestion: /* * MAX_DMA_ADDRESS is a virtual address that may not correspond to an actual * physical address. Enough code relies on __pa(MAX_DMA_ADDRESS) that we just * need to work around it and always return true. */ > + if (x == MAX_DMA_ADDRESS) > + return true; > + > + return false; > +} > + > +phys_addr_t __virt_to_phys(unsigned long x) > +{ > + WARN(!__virt_addr_valid(x), > + "virt_to_phys used for non-linear address: %pK (%pS)\n", > + (void *)x, > + (void *)x); > + > + return __virt_to_phys_nodebug(x); > +} > +EXPORT_SYMBOL(__virt_to_phys); > + > +phys_addr_t __phys_addr_symbol(unsigned long x) > +{ > + /* This is bounds checking against the kernel image only. > + * __pa_symbol should only be used on kernel symbol addresses. > + */ > + VIRTUAL_BUG_ON(x < (unsigned long)KERNEL_START || > + x > (unsigned long)KERNEL_END); > + > + return __pa_symbol_nodebug(x); > +} > +EXPORT_SYMBOL(__phys_addr_symbol); > With those comments, you can add Acked-by: Laura Abbott <labbott@redhat.com> ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v5 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli ` (2 preceding siblings ...) 2017-01-04 1:14 ` [PATCH v5 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli @ 2017-01-04 1:14 ` Florian Fainelli 2017-01-04 17:31 ` Laura Abbott 2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 4 siblings, 1 reply; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 1:14 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon All low-level PM/SMP code using virt_to_phys() should actually use __pa_symbol() against kernel symbols. Update code where relevant to move away from virt_to_phys(). 
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- arch/arm/common/mcpm_entry.c | 12 ++++++------ arch/arm/mach-alpine/platsmp.c | 2 +- arch/arm/mach-axxia/platsmp.c | 2 +- arch/arm/mach-bcm/bcm63xx_smp.c | 2 +- arch/arm/mach-bcm/platsmp-brcmstb.c | 2 +- arch/arm/mach-bcm/platsmp.c | 4 ++-- arch/arm/mach-berlin/platsmp.c | 2 +- arch/arm/mach-exynos/firmware.c | 4 ++-- arch/arm/mach-exynos/mcpm-exynos.c | 2 +- arch/arm/mach-exynos/platsmp.c | 4 ++-- arch/arm/mach-exynos/pm.c | 6 +++--- arch/arm/mach-exynos/suspend.c | 6 +++--- arch/arm/mach-hisi/platmcpm.c | 2 +- arch/arm/mach-hisi/platsmp.c | 6 +++--- arch/arm/mach-imx/platsmp.c | 2 +- arch/arm/mach-imx/pm-imx6.c | 2 +- arch/arm/mach-imx/src.c | 2 +- arch/arm/mach-mediatek/platsmp.c | 2 +- arch/arm/mach-mvebu/pm.c | 2 +- arch/arm/mach-mvebu/pmsu.c | 2 +- arch/arm/mach-mvebu/system-controller.c | 2 +- arch/arm/mach-omap2/control.c | 8 ++++---- arch/arm/mach-omap2/omap-mpuss-lowpower.c | 12 ++++++------ arch/arm/mach-omap2/omap-smp.c | 4 ++-- arch/arm/mach-prima2/platsmp.c | 2 +- arch/arm/mach-prima2/pm.c | 2 +- arch/arm/mach-pxa/palmz72.c | 2 +- arch/arm/mach-pxa/pxa25x.c | 2 +- arch/arm/mach-pxa/pxa27x.c | 2 +- arch/arm/mach-pxa/pxa3xx.c | 2 +- arch/arm/mach-realview/platsmp-dt.c | 2 +- arch/arm/mach-rockchip/platsmp.c | 4 ++-- arch/arm/mach-rockchip/pm.c | 2 +- arch/arm/mach-s3c24xx/mach-jive.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2410.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2416.c | 2 +- arch/arm/mach-s3c64xx/pm.c | 2 +- arch/arm/mach-s5pv210/pm.c | 2 +- arch/arm/mach-sa1100/pm.c | 2 +- arch/arm/mach-shmobile/platsmp-apmu.c | 6 +++--- arch/arm/mach-shmobile/platsmp-scu.c | 4 ++-- arch/arm/mach-socfpga/platsmp.c | 4 ++-- arch/arm/mach-spear/platsmp.c | 2 +- arch/arm/mach-sti/platsmp.c | 2 +- arch/arm/mach-sunxi/platsmp.c | 4 ++-- arch/arm/mach-tango/platsmp.c | 2 +- arch/arm/mach-tango/pm.c | 2 +- arch/arm/mach-tegra/reset.c | 4 ++-- 
arch/arm/mach-ux500/platsmp.c | 2 +- arch/arm/mach-vexpress/dcscb.c | 2 +- arch/arm/mach-vexpress/platsmp.c | 2 +- arch/arm/mach-vexpress/tc2_pm.c | 4 ++-- arch/arm/mach-zx/platsmp.c | 4 ++-- arch/arm/mach-zynq/platsmp.c | 2 +- 54 files changed, 86 insertions(+), 86 deletions(-) diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c index a923524d1040..cf062472e07b 100644 --- a/arch/arm/common/mcpm_entry.c +++ b/arch/arm/common/mcpm_entry.c @@ -144,7 +144,7 @@ extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER]; void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr) { - unsigned long val = ptr ? virt_to_phys(ptr) : 0; + unsigned long val = ptr ? __pa_symbol(ptr) : 0; mcpm_entry_vectors[cluster][cpu] = val; sync_cache_w(&mcpm_entry_vectors[cluster][cpu]); } @@ -299,8 +299,8 @@ void mcpm_cpu_power_down(void) * the kernel as if the power_up method just had deasserted reset * on the CPU. */ - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); - phys_reset(virt_to_phys(mcpm_entry_point)); + phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset); + phys_reset(__pa_symbol(mcpm_entry_point)); /* should never get here */ BUG(); @@ -388,8 +388,8 @@ static int __init nocache_trampoline(unsigned long _arg) __mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN); __mcpm_cpu_down(cpu, cluster); - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); - phys_reset(virt_to_phys(mcpm_entry_point)); + phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset); + phys_reset(__pa_symbol(mcpm_entry_point)); BUG(); } @@ -449,7 +449,7 @@ int __init mcpm_sync_init( sync_cache_w(&mcpm_sync); if (power_up_setup) { - mcpm_power_up_setup_phys = virt_to_phys(power_up_setup); + mcpm_power_up_setup_phys = __pa_symbol(power_up_setup); sync_cache_w(&mcpm_power_up_setup_phys); } diff --git a/arch/arm/mach-alpine/platsmp.c b/arch/arm/mach-alpine/platsmp.c index dd77ea25e7ca..6dc6d491f88a 100644 --- 
a/arch/arm/mach-alpine/platsmp.c +++ b/arch/arm/mach-alpine/platsmp.c @@ -27,7 +27,7 @@ static int alpine_boot_secondary(unsigned int cpu, struct task_struct *idle) { phys_addr_t addr; - addr = virt_to_phys(secondary_startup); + addr = __pa_symbol(secondary_startup); if (addr > (phys_addr_t)(uint32_t)(-1)) { pr_err("FAIL: resume address over 32bit (%pa)", &addr); diff --git a/arch/arm/mach-axxia/platsmp.c b/arch/arm/mach-axxia/platsmp.c index ffbd71d45008..502e3df69f69 100644 --- a/arch/arm/mach-axxia/platsmp.c +++ b/arch/arm/mach-axxia/platsmp.c @@ -25,7 +25,7 @@ static void write_release_addr(u32 release_phys) { u32 *virt = (u32 *) phys_to_virt(release_phys); - writel_relaxed(virt_to_phys(secondary_startup), virt); + writel_relaxed(__pa_symbol(secondary_startup), virt); /* Make sure this store is visible to other CPUs */ smp_wmb(); __cpuc_flush_dcache_area(virt, sizeof(u32)); diff --git a/arch/arm/mach-bcm/bcm63xx_smp.c b/arch/arm/mach-bcm/bcm63xx_smp.c index 9b6727ed68cd..f5fb10b4376f 100644 --- a/arch/arm/mach-bcm/bcm63xx_smp.c +++ b/arch/arm/mach-bcm/bcm63xx_smp.c @@ -135,7 +135,7 @@ static int bcm63138_smp_boot_secondary(unsigned int cpu, } /* Write the secondary init routine to the BootLUT reset vector */ - val = virt_to_phys(secondary_startup); + val = __pa_symbol(secondary_startup); writel_relaxed(val, bootlut_base + BOOTLUT_RESET_VECT); /* Power up the core, will jump straight to its reset vector when we diff --git a/arch/arm/mach-bcm/platsmp-brcmstb.c b/arch/arm/mach-bcm/platsmp-brcmstb.c index 40dc8448445e..12379960e982 100644 --- a/arch/arm/mach-bcm/platsmp-brcmstb.c +++ b/arch/arm/mach-bcm/platsmp-brcmstb.c @@ -151,7 +151,7 @@ static void brcmstb_cpu_boot(u32 cpu) * Set the reset vector to point to the secondary_startup * routine */ - cpu_set_boot_addr(cpu, virt_to_phys(secondary_startup)); + cpu_set_boot_addr(cpu, __pa_symbol(secondary_startup)); /* Unhalt the cpu */ cpu_rst_cfg_set(cpu, 0); diff --git a/arch/arm/mach-bcm/platsmp.c 
b/arch/arm/mach-bcm/platsmp.c index 3ac3a9bc663c..582886d0d02f 100644 --- a/arch/arm/mach-bcm/platsmp.c +++ b/arch/arm/mach-bcm/platsmp.c @@ -116,7 +116,7 @@ static int nsp_write_lut(unsigned int cpu) return -ENOMEM; } - secondary_startup_phy = virt_to_phys(secondary_startup); + secondary_startup_phy = __pa_symbol(secondary_startup); BUG_ON(secondary_startup_phy > (phys_addr_t)U32_MAX); writel_relaxed(secondary_startup_phy, sku_rom_lut); @@ -189,7 +189,7 @@ static int kona_boot_secondary(unsigned int cpu, struct task_struct *idle) * Secondary cores will start in secondary_startup(), * defined in "arch/arm/kernel/head.S" */ - boot_func = virt_to_phys(secondary_startup); + boot_func = __pa_symbol(secondary_startup); BUG_ON(boot_func & BOOT_ADDR_CPUID_MASK); BUG_ON(boot_func > (phys_addr_t)U32_MAX); diff --git a/arch/arm/mach-berlin/platsmp.c b/arch/arm/mach-berlin/platsmp.c index 93f90688db18..1167b0ed92c8 100644 --- a/arch/arm/mach-berlin/platsmp.c +++ b/arch/arm/mach-berlin/platsmp.c @@ -92,7 +92,7 @@ static void __init berlin_smp_prepare_cpus(unsigned int max_cpus) * Write the secondary startup address into the SW reset address * vector. This is used by boot_inst. 
*/ - writel(virt_to_phys(secondary_startup), vectors_base + SW_RESET_ADDR); + writel(__pa_symbol(secondary_startup), vectors_base + SW_RESET_ADDR); iounmap(vectors_base); unmap_scu: diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c index fd6da5419b51..e81a78b125d9 100644 --- a/arch/arm/mach-exynos/firmware.c +++ b/arch/arm/mach-exynos/firmware.c @@ -41,7 +41,7 @@ static int exynos_do_idle(unsigned long mode) case FW_DO_IDLE_AFTR: if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) exynos_save_cp15(); - writel_relaxed(virt_to_phys(exynos_cpu_resume_ns), + writel_relaxed(__pa_symbol(exynos_cpu_resume_ns), sysram_ns_base_addr + 0x24); writel_relaxed(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20); if (soc_is_exynos3250()) { @@ -135,7 +135,7 @@ static int exynos_suspend(void) exynos_save_cp15(); writel(EXYNOS_SLEEP_MAGIC, sysram_ns_base_addr + EXYNOS_BOOT_FLAG); - writel(virt_to_phys(exynos_cpu_resume_ns), + writel(__pa_symbol(exynos_cpu_resume_ns), sysram_ns_base_addr + EXYNOS_BOOT_ADDR); return cpu_suspend(0, exynos_cpu_suspend); diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c index f086bf615b29..214a9cfa92e9 100644 --- a/arch/arm/mach-exynos/mcpm-exynos.c +++ b/arch/arm/mach-exynos/mcpm-exynos.c @@ -221,7 +221,7 @@ static void exynos_mcpm_setup_entry_point(void) */ __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */ __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */ - __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8); + __raw_writel(__pa_symbol(mcpm_entry_point), ns_sram_base_addr + 8); } static struct syscore_ops exynos_mcpm_syscore_ops = { diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c index 98ffe1e62ad5..9f4949f7ed88 100644 --- a/arch/arm/mach-exynos/platsmp.c +++ b/arch/arm/mach-exynos/platsmp.c @@ -353,7 +353,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) smp_rmb(); - boot_addr = 
virt_to_phys(exynos4_secondary_startup); + boot_addr = __pa_symbol(exynos4_secondary_startup); ret = exynos_set_boot_addr(core_id, boot_addr); if (ret) @@ -443,7 +443,7 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus) mpidr = cpu_logical_map(i); core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); - boot_addr = virt_to_phys(exynos4_secondary_startup); + boot_addr = __pa_symbol(exynos4_secondary_startup); ret = exynos_set_boot_addr(core_id, boot_addr); if (ret) diff --git a/arch/arm/mach-exynos/pm.c b/arch/arm/mach-exynos/pm.c index 487295f4a56b..1a7e5b5d08d8 100644 --- a/arch/arm/mach-exynos/pm.c +++ b/arch/arm/mach-exynos/pm.c @@ -132,7 +132,7 @@ static void exynos_set_wakeupmask(long mask) static void exynos_cpu_set_boot_vector(long flags) { - writel_relaxed(virt_to_phys(exynos_cpu_resume), + writel_relaxed(__pa_symbol(exynos_cpu_resume), exynos_boot_vector_addr()); writel_relaxed(flags, exynos_boot_vector_flag()); } @@ -238,7 +238,7 @@ static int exynos_cpu0_enter_aftr(void) abort: if (cpu_online(1)) { - unsigned long boot_addr = virt_to_phys(exynos_cpu_resume); + unsigned long boot_addr = __pa_symbol(exynos_cpu_resume); /* * Set the boot vector to something non-zero @@ -330,7 +330,7 @@ static int exynos_cpu1_powerdown(void) static void exynos_pre_enter_aftr(void) { - unsigned long boot_addr = virt_to_phys(exynos_cpu_resume); + unsigned long boot_addr = __pa_symbol(exynos_cpu_resume); (void)exynos_set_boot_addr(1, boot_addr); } diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c index 06332f626565..97765be2cc12 100644 --- a/arch/arm/mach-exynos/suspend.c +++ b/arch/arm/mach-exynos/suspend.c @@ -344,7 +344,7 @@ static void exynos_pm_prepare(void) exynos_pm_enter_sleep_mode(); /* ensure at least INFORM0 has the resume address */ - pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); + pmu_raw_writel(__pa_symbol(exynos_cpu_resume), S5P_INFORM0); } static void exynos3250_pm_prepare(void) @@ -361,7 +361,7 @@ static void 
exynos3250_pm_prepare(void) exynos_pm_enter_sleep_mode(); /* ensure at least INFORM0 has the resume address */ - pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); + pmu_raw_writel(__pa_symbol(exynos_cpu_resume), S5P_INFORM0); } static void exynos5420_pm_prepare(void) @@ -386,7 +386,7 @@ static void exynos5420_pm_prepare(void) /* ensure at least INFORM0 has the resume address */ if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) - pmu_raw_writel(virt_to_phys(mcpm_entry_point), S5P_INFORM0); + pmu_raw_writel(__pa_symbol(mcpm_entry_point), S5P_INFORM0); tmp = pmu_raw_readl(EXYNOS5_ARM_L2_OPTION); tmp &= ~EXYNOS5_USE_RETENTION; diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c index 4b653a8cb75c..a6c117622d67 100644 --- a/arch/arm/mach-hisi/platmcpm.c +++ b/arch/arm/mach-hisi/platmcpm.c @@ -327,7 +327,7 @@ static int __init hip04_smp_init(void) */ writel_relaxed(hip04_boot_method[0], relocation); writel_relaxed(0xa5a5a5a5, relocation + 4); /* magic number */ - writel_relaxed(virt_to_phys(secondary_startup), relocation + 8); + writel_relaxed(__pa_symbol(secondary_startup), relocation + 8); writel_relaxed(0, relocation + 12); iounmap(relocation); diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c index e1d67648d5d0..91bb02dec20f 100644 --- a/arch/arm/mach-hisi/platsmp.c +++ b/arch/arm/mach-hisi/platsmp.c @@ -28,7 +28,7 @@ void hi3xxx_set_cpu_jump(int cpu, void *jump_addr) cpu = cpu_logical_map(cpu); if (!cpu || !ctrl_base) return; - writel_relaxed(virt_to_phys(jump_addr), ctrl_base + ((cpu - 1) << 2)); + writel_relaxed(__pa_symbol(jump_addr), ctrl_base + ((cpu - 1) << 2)); } int hi3xxx_get_cpu_jump(int cpu) @@ -118,7 +118,7 @@ static int hix5hd2_boot_secondary(unsigned int cpu, struct task_struct *idle) { phys_addr_t jumpaddr; - jumpaddr = virt_to_phys(secondary_startup); + jumpaddr = __pa_symbol(secondary_startup); hix5hd2_set_scu_boot_addr(HIX5HD2_BOOT_ADDRESS, jumpaddr); hix5hd2_set_cpu(cpu, true); 
arch_send_wakeup_ipi_mask(cpumask_of(cpu)); @@ -156,7 +156,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle) struct device_node *node; - jumpaddr = virt_to_phys(secondary_startup); + jumpaddr = __pa_symbol(secondary_startup); hip01_set_boot_addr(HIP01_BOOT_ADDRESS, jumpaddr); node = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl"); diff --git a/arch/arm/mach-imx/platsmp.c b/arch/arm/mach-imx/platsmp.c index 711dbbd5badd..c2d1b329fba1 100644 --- a/arch/arm/mach-imx/platsmp.c +++ b/arch/arm/mach-imx/platsmp.c @@ -117,7 +117,7 @@ static void __init ls1021a_smp_prepare_cpus(unsigned int max_cpus) dcfg_base = of_iomap(np, 0); BUG_ON(!dcfg_base); - paddr = virt_to_phys(secondary_startup); + paddr = __pa_symbol(secondary_startup); writel_relaxed(cpu_to_be32(paddr), dcfg_base + DCFG_CCSR_SCRATCHRW1); iounmap(dcfg_base); diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c index 1515e498d348..e61b1d1027e1 100644 --- a/arch/arm/mach-imx/pm-imx6.c +++ b/arch/arm/mach-imx/pm-imx6.c @@ -499,7 +499,7 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata) memset(suspend_ocram_base, 0, sizeof(*pm_info)); pm_info = suspend_ocram_base; pm_info->pbase = ocram_pbase; - pm_info->resume_addr = virt_to_phys(v7_cpu_resume); + pm_info->resume_addr = __pa_symbol(v7_cpu_resume); pm_info->pm_info_size = sizeof(*pm_info); /* diff --git a/arch/arm/mach-imx/src.c b/arch/arm/mach-imx/src.c index 70b083fe934a..495d85d0fe7e 100644 --- a/arch/arm/mach-imx/src.c +++ b/arch/arm/mach-imx/src.c @@ -99,7 +99,7 @@ void imx_enable_cpu(int cpu, bool enable) void imx_set_cpu_jump(int cpu, void *jump_addr) { cpu = cpu_logical_map(cpu); - writel_relaxed(virt_to_phys(jump_addr), + writel_relaxed(__pa_symbol(jump_addr), src_base + SRC_GPR1 + cpu * 8); } diff --git a/arch/arm/mach-mediatek/platsmp.c b/arch/arm/mach-mediatek/platsmp.c index b821e34474b6..726eb69bb655 100644 --- a/arch/arm/mach-mediatek/platsmp.c +++ 
b/arch/arm/mach-mediatek/platsmp.c @@ -122,7 +122,7 @@ static void __init __mtk_smp_prepare_cpus(unsigned int max_cpus, int trustzone) * write the address of slave startup address into the system-wide * jump register */ - writel_relaxed(virt_to_phys(secondary_startup_arm), + writel_relaxed(__pa_symbol(secondary_startup_arm), mtk_smp_base + mtk_smp_info->jump_reg); } diff --git a/arch/arm/mach-mvebu/pm.c b/arch/arm/mach-mvebu/pm.c index 2990c5269b18..c487be61d6d8 100644 --- a/arch/arm/mach-mvebu/pm.c +++ b/arch/arm/mach-mvebu/pm.c @@ -110,7 +110,7 @@ static void mvebu_pm_store_armadaxp_bootinfo(u32 *store_addr) { phys_addr_t resume_pc; - resume_pc = virt_to_phys(armada_370_xp_cpu_resume); + resume_pc = __pa_symbol(armada_370_xp_cpu_resume); /* * The bootloader expects the first two words to be a magic diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c index f39bd51bce18..27a78c80e5b1 100644 --- a/arch/arm/mach-mvebu/pmsu.c +++ b/arch/arm/mach-mvebu/pmsu.c @@ -112,7 +112,7 @@ static const struct of_device_id of_pmsu_table[] = { void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr) { - writel(virt_to_phys(boot_addr), pmsu_mp_base + + writel(__pa_symbol(boot_addr), pmsu_mp_base + PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu)); } diff --git a/arch/arm/mach-mvebu/system-controller.c b/arch/arm/mach-mvebu/system-controller.c index 76cbc82a7407..04d9ebe6a90a 100644 --- a/arch/arm/mach-mvebu/system-controller.c +++ b/arch/arm/mach-mvebu/system-controller.c @@ -153,7 +153,7 @@ void mvebu_system_controller_set_cpu_boot_addr(void *boot_addr) if (of_machine_is_compatible("marvell,armada375")) mvebu_armada375_smp_wa_init(); - writel(virt_to_phys(boot_addr), system_controller_base + + writel(__pa_symbol(boot_addr), system_controller_base + mvebu_sc->resume_boot_addr); } #endif diff --git a/arch/arm/mach-omap2/control.c b/arch/arm/mach-omap2/control.c index 1662071bb2cc..bd8089ff929f 100644 --- a/arch/arm/mach-omap2/control.c +++ b/arch/arm/mach-omap2/control.c 
@@ -315,15 +315,15 @@ void omap3_save_scratchpad_contents(void) scratchpad_contents.boot_config_ptr = 0x0; if (cpu_is_omap3630()) scratchpad_contents.public_restore_ptr = - virt_to_phys(omap3_restore_3630); + __pa_symbol(omap3_restore_3630); else if (omap_rev() != OMAP3430_REV_ES3_0 && omap_rev() != OMAP3430_REV_ES3_1 && omap_rev() != OMAP3430_REV_ES3_1_2) scratchpad_contents.public_restore_ptr = - virt_to_phys(omap3_restore); + __pa_symbol(omap3_restore); else scratchpad_contents.public_restore_ptr = - virt_to_phys(omap3_restore_es3); + __pa_symbol(omap3_restore_es3); if (omap_type() == OMAP2_DEVICE_TYPE_GP) scratchpad_contents.secure_ram_restore_ptr = 0x0; @@ -395,7 +395,7 @@ void omap3_save_scratchpad_contents(void) sdrc_block_contents.flags = 0x0; sdrc_block_contents.block_size = 0x0; - arm_context_addr = virt_to_phys(omap3_arm_context); + arm_context_addr = __pa_symbol(omap3_arm_context); /* Copy all the contents to the scratchpad location */ scratchpad_address = OMAP2_L4_IO_ADDRESS(OMAP343X_SCRATCHPAD); diff --git a/arch/arm/mach-omap2/omap-mpuss-lowpower.c b/arch/arm/mach-omap2/omap-mpuss-lowpower.c index 7d62ad48c7c9..113ab2dd2ee9 100644 --- a/arch/arm/mach-omap2/omap-mpuss-lowpower.c +++ b/arch/arm/mach-omap2/omap-mpuss-lowpower.c @@ -273,7 +273,7 @@ int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state) cpu_clear_prev_logic_pwrst(cpu); pwrdm_set_next_pwrst(pm_info->pwrdm, power_state); pwrdm_set_logic_retst(pm_info->pwrdm, cpu_logic_state); - set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.resume)); + set_cpu_wakeup_addr(cpu, __pa_symbol(omap_pm_ops.resume)); omap_pm_ops.scu_prepare(cpu, power_state); l2x0_pwrst_prepare(cpu, save_state); @@ -325,7 +325,7 @@ int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state) pwrdm_clear_all_prev_pwrst(pm_info->pwrdm); pwrdm_set_next_pwrst(pm_info->pwrdm, power_state); - set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.hotplug_restart)); + set_cpu_wakeup_addr(cpu, 
__pa_symbol(omap_pm_ops.hotplug_restart)); omap_pm_ops.scu_prepare(cpu, power_state); /* @@ -467,13 +467,13 @@ void __init omap4_mpuss_early_init(void) sar_base = omap4_get_sar_ram_base(); if (cpu_is_omap443x()) - startup_pa = virt_to_phys(omap4_secondary_startup); + startup_pa = __pa_symbol(omap4_secondary_startup); else if (cpu_is_omap446x()) - startup_pa = virt_to_phys(omap4460_secondary_startup); + startup_pa = __pa_symbol(omap4460_secondary_startup); else if ((__boot_cpu_mode & MODE_MASK) == HYP_MODE) - startup_pa = virt_to_phys(omap5_secondary_hyp_startup); + startup_pa = __pa_symbol(omap5_secondary_hyp_startup); else - startup_pa = virt_to_phys(omap5_secondary_startup); + startup_pa = __pa_symbol(omap5_secondary_startup); if (cpu_is_omap44xx()) writel_relaxed(startup_pa, sar_base + diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c index b4de3da6dffa..003353b0b794 100644 --- a/arch/arm/mach-omap2/omap-smp.c +++ b/arch/arm/mach-omap2/omap-smp.c @@ -316,9 +316,9 @@ static void __init omap4_smp_prepare_cpus(unsigned int max_cpus) * A barrier is added to ensure that write buffer is drained */ if (omap_secure_apis_support()) - omap_auxcoreboot_addr(virt_to_phys(cfg.startup_addr)); + omap_auxcoreboot_addr(__pa_symbol(cfg.startup_addr)); else - writel_relaxed(virt_to_phys(cfg.startup_addr), + writel_relaxed(__pa_symbol(cfg.startup_addr), base + OMAP_AUX_CORE_BOOT_1); } diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c index 0875b99add18..75ef5d4be554 100644 --- a/arch/arm/mach-prima2/platsmp.c +++ b/arch/arm/mach-prima2/platsmp.c @@ -65,7 +65,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle) * waiting for. 
This would wake up the secondary core from WFE */ #define SIRFSOC_CPU1_JUMPADDR_OFFSET 0x2bc - __raw_writel(virt_to_phys(sirfsoc_secondary_startup), + __raw_writel(__pa_symbol(sirfsoc_secondary_startup), clk_base + SIRFSOC_CPU1_JUMPADDR_OFFSET); #define SIRFSOC_CPU1_WAKEMAGIC_OFFSET 0x2b8 diff --git a/arch/arm/mach-prima2/pm.c b/arch/arm/mach-prima2/pm.c index 83e94c95e314..b0bcf1ff02dd 100644 --- a/arch/arm/mach-prima2/pm.c +++ b/arch/arm/mach-prima2/pm.c @@ -54,7 +54,7 @@ static void sirfsoc_set_sleep_mode(u32 mode) static int sirfsoc_pre_suspend_power_off(void) { - u32 wakeup_entry = virt_to_phys(cpu_resume); + u32 wakeup_entry = __pa_symbol(cpu_resume); sirfsoc_rtc_iobrg_writel(wakeup_entry, sirfsoc_pwrc_base + SIRFSOC_PWRC_SCRATCH_PAD1); diff --git a/arch/arm/mach-pxa/palmz72.c b/arch/arm/mach-pxa/palmz72.c index 9c308de158c6..29630061e700 100644 --- a/arch/arm/mach-pxa/palmz72.c +++ b/arch/arm/mach-pxa/palmz72.c @@ -249,7 +249,7 @@ static int palmz72_pm_suspend(void) store_ptr = *PALMZ72_SAVE_DWORD; /* Setting PSPR to a proper value */ - PSPR = virt_to_phys(&palmz72_resume_info); + PSPR = __pa_symbol(&palmz72_resume_info); return 0; } diff --git a/arch/arm/mach-pxa/pxa25x.c b/arch/arm/mach-pxa/pxa25x.c index c725baf119e1..ba431fad5c47 100644 --- a/arch/arm/mach-pxa/pxa25x.c +++ b/arch/arm/mach-pxa/pxa25x.c @@ -85,7 +85,7 @@ static void pxa25x_cpu_pm_enter(suspend_state_t state) static int pxa25x_cpu_pm_prepare(void) { /* set resume return address */ - PSPR = virt_to_phys(cpu_resume); + PSPR = __pa_symbol(cpu_resume); return 0; } diff --git a/arch/arm/mach-pxa/pxa27x.c b/arch/arm/mach-pxa/pxa27x.c index c0185c5c5a08..9b69be4e9fe3 100644 --- a/arch/arm/mach-pxa/pxa27x.c +++ b/arch/arm/mach-pxa/pxa27x.c @@ -168,7 +168,7 @@ static int pxa27x_cpu_pm_valid(suspend_state_t state) static int pxa27x_cpu_pm_prepare(void) { /* set resume return address */ - PSPR = virt_to_phys(cpu_resume); + PSPR = __pa_symbol(cpu_resume); return 0; } diff --git 
a/arch/arm/mach-pxa/pxa3xx.c b/arch/arm/mach-pxa/pxa3xx.c index 87acc96388c7..0cc9f124c9ac 100644 --- a/arch/arm/mach-pxa/pxa3xx.c +++ b/arch/arm/mach-pxa/pxa3xx.c @@ -123,7 +123,7 @@ static void pxa3xx_cpu_pm_suspend(void) PSPR = 0x5c014000; /* overwrite with the resume address */ - *p = virt_to_phys(cpu_resume); + *p = __pa_symbol(cpu_resume); cpu_suspend(0, pxa3xx_finish_suspend); diff --git a/arch/arm/mach-realview/platsmp-dt.c b/arch/arm/mach-realview/platsmp-dt.c index 70ca99eb52c6..c242423bf8db 100644 --- a/arch/arm/mach-realview/platsmp-dt.c +++ b/arch/arm/mach-realview/platsmp-dt.c @@ -76,7 +76,7 @@ static void __init realview_smp_prepare_cpus(unsigned int max_cpus) } /* Put the boot address in this magic register */ regmap_write(map, REALVIEW_SYS_FLAGSSET_OFFSET, - virt_to_phys(versatile_secondary_startup)); + __pa_symbol(versatile_secondary_startup)); } static const struct smp_operations realview_dt_smp_ops __initconst = { diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c index 4d827a069d49..3abafdbdd7f4 100644 --- a/arch/arm/mach-rockchip/platsmp.c +++ b/arch/arm/mach-rockchip/platsmp.c @@ -156,7 +156,7 @@ static int rockchip_boot_secondary(unsigned int cpu, struct task_struct *idle) */ mdelay(1); /* ensure the cpus other than cpu0 to startup */ - writel(virt_to_phys(secondary_startup), sram_base_addr + 8); + writel(__pa_symbol(secondary_startup), sram_base_addr + 8); writel(0xDEADBEAF, sram_base_addr + 4); dsb_sev(); } @@ -195,7 +195,7 @@ static int __init rockchip_smp_prepare_sram(struct device_node *node) } /* set the boot function for the sram code */ - rockchip_boot_fn = virt_to_phys(secondary_startup); + rockchip_boot_fn = __pa_symbol(secondary_startup); /* copy the trampoline to sram, that runs during startup of the core */ memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz); diff --git a/arch/arm/mach-rockchip/pm.c b/arch/arm/mach-rockchip/pm.c index bee8c8051929..0592534e0b88 100644 --- 
a/arch/arm/mach-rockchip/pm.c +++ b/arch/arm/mach-rockchip/pm.c @@ -62,7 +62,7 @@ static inline u32 rk3288_l2_config(void) static void rk3288_config_bootdata(void) { rkpm_bootdata_cpusp = rk3288_bootram_phy + (SZ_4K - 8); - rkpm_bootdata_cpu_code = virt_to_phys(cpu_resume); + rkpm_bootdata_cpu_code = __pa_symbol(cpu_resume); rkpm_bootdata_l2ctlr_f = 1; rkpm_bootdata_l2ctlr = rk3288_l2_config(); diff --git a/arch/arm/mach-s3c24xx/mach-jive.c b/arch/arm/mach-s3c24xx/mach-jive.c index 895aca225952..f5b5c49b56ac 100644 --- a/arch/arm/mach-s3c24xx/mach-jive.c +++ b/arch/arm/mach-s3c24xx/mach-jive.c @@ -484,7 +484,7 @@ static int jive_pm_suspend(void) * correct address to resume from. */ __raw_writel(0x2BED, S3C2412_INFORM0); - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2412_INFORM1); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2412_INFORM1); return 0; } diff --git a/arch/arm/mach-s3c24xx/pm-s3c2410.c b/arch/arm/mach-s3c24xx/pm-s3c2410.c index 20e481d8a33a..a4588daeddb0 100644 --- a/arch/arm/mach-s3c24xx/pm-s3c2410.c +++ b/arch/arm/mach-s3c24xx/pm-s3c2410.c @@ -45,7 +45,7 @@ static void s3c2410_pm_prepare(void) { /* ensure at least GSTATUS3 has the resume address */ - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2410_GSTATUS3); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2410_GSTATUS3); S3C_PMDBG("GSTATUS3 0x%08x\n", __raw_readl(S3C2410_GSTATUS3)); S3C_PMDBG("GSTATUS4 0x%08x\n", __raw_readl(S3C2410_GSTATUS4)); diff --git a/arch/arm/mach-s3c24xx/pm-s3c2416.c b/arch/arm/mach-s3c24xx/pm-s3c2416.c index c0e328e37bd6..b5bbf0d5985c 100644 --- a/arch/arm/mach-s3c24xx/pm-s3c2416.c +++ b/arch/arm/mach-s3c24xx/pm-s3c2416.c @@ -48,7 +48,7 @@ static void s3c2416_pm_prepare(void) * correct address to resume from. 
*/ __raw_writel(0x2BED, S3C2412_INFORM0); - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2412_INFORM1); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2412_INFORM1); } static int s3c2416_pm_add(struct device *dev, struct subsys_interface *sif) diff --git a/arch/arm/mach-s3c64xx/pm.c b/arch/arm/mach-s3c64xx/pm.c index 59d91b83b03d..945a9d1e1a71 100644 --- a/arch/arm/mach-s3c64xx/pm.c +++ b/arch/arm/mach-s3c64xx/pm.c @@ -304,7 +304,7 @@ static void s3c64xx_pm_prepare(void) wake_irqs, ARRAY_SIZE(wake_irqs)); /* store address of resume. */ - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C64XX_INFORM0); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C64XX_INFORM0); /* ensure previous wakeup state is cleared before sleeping */ __raw_writel(__raw_readl(S3C64XX_WAKEUP_STAT), S3C64XX_WAKEUP_STAT); diff --git a/arch/arm/mach-s5pv210/pm.c b/arch/arm/mach-s5pv210/pm.c index 21b4b13c5ab7..2d5f08015e34 100644 --- a/arch/arm/mach-s5pv210/pm.c +++ b/arch/arm/mach-s5pv210/pm.c @@ -69,7 +69,7 @@ static void s5pv210_pm_prepare(void) __raw_writel(s5pv210_irqwake_intmask, S5P_WAKEUP_MASK); /* ensure at least INFORM0 has the resume address */ - __raw_writel(virt_to_phys(s5pv210_cpu_resume), S5P_INFORM0); + __raw_writel(__pa_symbol(s5pv210_cpu_resume), S5P_INFORM0); tmp = __raw_readl(S5P_SLEEP_CFG); tmp &= ~(S5P_SLEEP_CFG_OSC_EN | S5P_SLEEP_CFG_USBOSC_EN); diff --git a/arch/arm/mach-sa1100/pm.c b/arch/arm/mach-sa1100/pm.c index 34853d5dfda2..9a7079f565bd 100644 --- a/arch/arm/mach-sa1100/pm.c +++ b/arch/arm/mach-sa1100/pm.c @@ -73,7 +73,7 @@ static int sa11x0_pm_enter(suspend_state_t state) RCSR = RCSR_HWR | RCSR_SWR | RCSR_WDR | RCSR_SMR; /* set resume return address */ - PSPR = virt_to_phys(cpu_resume); + PSPR = __pa_symbol(cpu_resume); /* go zzz */ cpu_suspend(0, sa1100_finish_suspend); diff --git a/arch/arm/mach-shmobile/platsmp-apmu.c b/arch/arm/mach-shmobile/platsmp-apmu.c index 0c6bb458b7a4..71729b8d1900 100644 --- a/arch/arm/mach-shmobile/platsmp-apmu.c +++ 
b/arch/arm/mach-shmobile/platsmp-apmu.c @@ -171,7 +171,7 @@ static void apmu_parse_dt(void (*fn)(struct resource *res, int cpu, int bit)) static void __init shmobile_smp_apmu_setup_boot(void) { /* install boot code shared by all CPUs */ - shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); + shmobile_boot_fn = __pa_symbol(shmobile_smp_boot); } void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, @@ -185,7 +185,7 @@ void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, int shmobile_smp_apmu_boot_secondary(unsigned int cpu, struct task_struct *idle) { /* For this particular CPU register boot vector */ - shmobile_smp_hook(cpu, virt_to_phys(secondary_startup), 0); + shmobile_smp_hook(cpu, __pa_symbol(secondary_startup), 0); return apmu_wrap(cpu, apmu_power_on); } @@ -301,7 +301,7 @@ int shmobile_smp_apmu_cpu_kill(unsigned int cpu) #if defined(CONFIG_SUSPEND) static int shmobile_smp_apmu_do_suspend(unsigned long cpu) { - shmobile_smp_hook(cpu, virt_to_phys(cpu_resume), 0); + shmobile_smp_hook(cpu, __pa_symbol(cpu_resume), 0); shmobile_smp_apmu_cpu_shutdown(cpu); cpu_do_idle(); /* WFI selects Core Standby */ return 1; diff --git a/arch/arm/mach-shmobile/platsmp-scu.c b/arch/arm/mach-shmobile/platsmp-scu.c index d1ecaf37d142..f1a1efde4beb 100644 --- a/arch/arm/mach-shmobile/platsmp-scu.c +++ b/arch/arm/mach-shmobile/platsmp-scu.c @@ -24,7 +24,7 @@ static void __iomem *shmobile_scu_base; static int shmobile_scu_cpu_prepare(unsigned int cpu) { /* For this particular CPU register SCU SMP boot vector */ - shmobile_smp_hook(cpu, virt_to_phys(shmobile_boot_scu), + shmobile_smp_hook(cpu, __pa_symbol(shmobile_boot_scu), shmobile_scu_base_phys); return 0; } @@ -33,7 +33,7 @@ void __init shmobile_smp_scu_prepare_cpus(phys_addr_t scu_base_phys, unsigned int max_cpus) { /* install boot code shared by all CPUs */ - shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); + shmobile_boot_fn = __pa_symbol(shmobile_smp_boot); /* enable SCU and cache coherency on 
booting CPU */ shmobile_scu_base_phys = scu_base_phys; diff --git a/arch/arm/mach-socfpga/platsmp.c b/arch/arm/mach-socfpga/platsmp.c index 07945748b571..0ee76772b507 100644 --- a/arch/arm/mach-socfpga/platsmp.c +++ b/arch/arm/mach-socfpga/platsmp.c @@ -40,7 +40,7 @@ static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle) memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff)); flush_cache_all(); @@ -63,7 +63,7 @@ static int socfpga_a10_boot_secondary(unsigned int cpu, struct task_struct *idle SOCFPGA_A10_RSTMGR_MODMPURST); memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), sys_manager_base_addr + (socfpga_cpu1start_addr & 0x00000fff)); flush_cache_all(); diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c index 8d1e2d551786..39038a03836a 100644 --- a/arch/arm/mach-spear/platsmp.c +++ b/arch/arm/mach-spear/platsmp.c @@ -117,7 +117,7 @@ static void __init spear13xx_smp_prepare_cpus(unsigned int max_cpus) * (presently it is in SRAM). The BootMonitor waits until it receives a * soft interrupt, and then the secondary CPU branches to this address. 
*/ - __raw_writel(virt_to_phys(spear13xx_secondary_startup), SYS_LOCATION); + __raw_writel(__pa_symbol(spear13xx_secondary_startup), SYS_LOCATION); } const struct smp_operations spear13xx_smp_ops __initconst = { diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c index ea5a2277ee46..231f19e17436 100644 --- a/arch/arm/mach-sti/platsmp.c +++ b/arch/arm/mach-sti/platsmp.c @@ -103,7 +103,7 @@ static void __init sti_smp_prepare_cpus(unsigned int max_cpus) u32 __iomem *cpu_strt_ptr; u32 release_phys; int cpu; - unsigned long entry_pa = virt_to_phys(sti_secondary_startup); + unsigned long entry_pa = __pa_symbol(sti_secondary_startup); np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-scu"); diff --git a/arch/arm/mach-sunxi/platsmp.c b/arch/arm/mach-sunxi/platsmp.c index 6642267812c9..8fb5088464db 100644 --- a/arch/arm/mach-sunxi/platsmp.c +++ b/arch/arm/mach-sunxi/platsmp.c @@ -80,7 +80,7 @@ static int sun6i_smp_boot_secondary(unsigned int cpu, spin_lock(&cpu_lock); /* Set CPU boot address */ - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), cpucfg_membase + CPUCFG_PRIVATE0_REG); /* Assert the CPU core in reset */ @@ -162,7 +162,7 @@ static int sun8i_smp_boot_secondary(unsigned int cpu, spin_lock(&cpu_lock); /* Set CPU boot address */ - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), cpucfg_membase + CPUCFG_PRIVATE0_REG); /* Assert the CPU core in reset */ diff --git a/arch/arm/mach-tango/platsmp.c b/arch/arm/mach-tango/platsmp.c index 98c62a4a8623..2f0c6c050fed 100644 --- a/arch/arm/mach-tango/platsmp.c +++ b/arch/arm/mach-tango/platsmp.c @@ -5,7 +5,7 @@ static int tango_boot_secondary(unsigned int cpu, struct task_struct *idle) { - tango_set_aux_boot_addr(virt_to_phys(secondary_startup)); + tango_set_aux_boot_addr(__pa_symbol(secondary_startup)); tango_start_aux_core(cpu); return 0; } diff --git a/arch/arm/mach-tango/pm.c b/arch/arm/mach-tango/pm.c index b05c6d6f99d0..406c0814eb6e 
100644 --- a/arch/arm/mach-tango/pm.c +++ b/arch/arm/mach-tango/pm.c @@ -5,7 +5,7 @@ static int tango_pm_powerdown(unsigned long arg) { - tango_suspend(virt_to_phys(cpu_resume)); + tango_suspend(__pa_symbol(cpu_resume)); return -EIO; /* tango_suspend has failed */ } diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c index 6fd9db54887e..dc558892753c 100644 --- a/arch/arm/mach-tegra/reset.c +++ b/arch/arm/mach-tegra/reset.c @@ -94,14 +94,14 @@ void __init tegra_cpu_reset_handler_init(void) __tegra_cpu_reset_handler_data[TEGRA_RESET_MASK_PRESENT] = *((u32 *)cpu_possible_mask); __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_SECONDARY] = - virt_to_phys((void *)secondary_startup); + __pa_symbol((void *)secondary_startup); #endif #ifdef CONFIG_PM_SLEEP __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_LP1] = TEGRA_IRAM_LPx_RESUME_AREA; __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_LP2] = - virt_to_phys((void *)tegra_resume); + __pa_symbol((void *)tegra_resume); #endif tegra_cpu_reset_handler_enable(); diff --git a/arch/arm/mach-ux500/platsmp.c b/arch/arm/mach-ux500/platsmp.c index 8f2f615ff958..8c8f26389067 100644 --- a/arch/arm/mach-ux500/platsmp.c +++ b/arch/arm/mach-ux500/platsmp.c @@ -54,7 +54,7 @@ static void wakeup_secondary(void) * backup ram register at offset 0x1FF0, which is what boot rom code * is waiting for. This will wake up the secondary core from WFE. */ - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), backupram + UX500_CPU1_JUMPADDR_OFFSET); writel(0xA1FEED01, backupram + UX500_CPU1_WAKEMAGIC_OFFSET); diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c index 5cedcf572104..ee2a0faafaa1 100644 --- a/arch/arm/mach-vexpress/dcscb.c +++ b/arch/arm/mach-vexpress/dcscb.c @@ -166,7 +166,7 @@ static int __init dcscb_init(void) * Future entries into the kernel can now go * through the cluster entry vectors. 
*/ - vexpress_flags_set(virt_to_phys(mcpm_entry_point)); + vexpress_flags_set(__pa_symbol(mcpm_entry_point)); return 0; } diff --git a/arch/arm/mach-vexpress/platsmp.c b/arch/arm/mach-vexpress/platsmp.c index 98e29dee91e8..742499bac6d0 100644 --- a/arch/arm/mach-vexpress/platsmp.c +++ b/arch/arm/mach-vexpress/platsmp.c @@ -79,7 +79,7 @@ static void __init vexpress_smp_dt_prepare_cpus(unsigned int max_cpus) * until it receives a soft interrupt, and then the * secondary CPU branches to this address. */ - vexpress_flags_set(virt_to_phys(versatile_secondary_startup)); + vexpress_flags_set(__pa_symbol(versatile_secondary_startup)); } const struct smp_operations vexpress_smp_dt_ops __initconst = { diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c index 1aa4ccece69f..9b5f3c427086 100644 --- a/arch/arm/mach-vexpress/tc2_pm.c +++ b/arch/arm/mach-vexpress/tc2_pm.c @@ -54,7 +54,7 @@ static int tc2_pm_cpu_powerup(unsigned int cpu, unsigned int cluster) if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) return -EINVAL; ve_spc_set_resume_addr(cluster, cpu, - virt_to_phys(mcpm_entry_point)); + __pa_symbol(mcpm_entry_point)); ve_spc_cpu_wakeup_irq(cluster, cpu, true); return 0; } @@ -159,7 +159,7 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) static void tc2_pm_cpu_suspend_prepare(unsigned int cpu, unsigned int cluster) { - ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point)); + ve_spc_set_resume_addr(cluster, cpu, __pa_symbol(mcpm_entry_point)); } static void tc2_pm_cpu_is_up(unsigned int cpu, unsigned int cluster) diff --git a/arch/arm/mach-zx/platsmp.c b/arch/arm/mach-zx/platsmp.c index 0297f92084e0..afb9a82dedc3 100644 --- a/arch/arm/mach-zx/platsmp.c +++ b/arch/arm/mach-zx/platsmp.c @@ -76,7 +76,7 @@ void __init zx_smp_prepare_cpus(unsigned int max_cpus) * until it receives a soft interrupt, and then the * secondary CPU branches to this address. 
*/ - __raw_writel(virt_to_phys(zx_secondary_startup), + __raw_writel(__pa_symbol(zx_secondary_startup), aonsysctrl_base + AON_SYS_CTRL_RESERVED1); iounmap(aonsysctrl_base); @@ -94,7 +94,7 @@ void __init zx_smp_prepare_cpus(unsigned int max_cpus) /* Map the first 4 KB IRAM for suspend usage */ sys_iram = __arm_ioremap_exec(ZX_IRAM_BASE, PAGE_SIZE, false); - zx_secondary_startup_pa = virt_to_phys(zx_secondary_startup); + zx_secondary_startup_pa = __pa_symbol(zx_secondary_startup); fncpy(sys_iram, &zx_resume_jump, zx_suspend_iram_sz); } diff --git a/arch/arm/mach-zynq/platsmp.c b/arch/arm/mach-zynq/platsmp.c index 7cd9865bdeb7..caa6d5fe9078 100644 --- a/arch/arm/mach-zynq/platsmp.c +++ b/arch/arm/mach-zynq/platsmp.c @@ -89,7 +89,7 @@ EXPORT_SYMBOL(zynq_cpun_start); static int zynq_boot_secondary(unsigned int cpu, struct task_struct *idle) { - return zynq_cpun_start(virt_to_phys(secondary_startup), cpu); + return zynq_cpun_start(__pa_symbol(secondary_startup), cpu); } /* -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH v5 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol 2017-01-04 1:14 ` [PATCH v5 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli @ 2017-01-04 17:31 ` Laura Abbott 0 siblings, 0 replies; 32+ messages in thread From: Laura Abbott @ 2017-01-04 17:31 UTC (permalink / raw) To: Florian Fainelli, linux-arm-kernel, catalin.marinas Cc: linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2, will.deacon On 01/03/2017 05:14 PM, Florian Fainelli wrote: > All low-level PM/SMP code using virt_to_phys() should actually use > __pa_symbol() against kernel symbols. Update code where relevant to move > away from virt_to_phys(). > Reviewed-by: Laura Abbott <labbott@redhat.com> > Acked-by: Russell King <rmk+kernel@armlinux.org.uk> > Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> > --- > arch/arm/common/mcpm_entry.c | 12 ++++++------ > arch/arm/mach-alpine/platsmp.c | 2 +- > arch/arm/mach-axxia/platsmp.c | 2 +- > arch/arm/mach-bcm/bcm63xx_smp.c | 2 +- > arch/arm/mach-bcm/platsmp-brcmstb.c | 2 +- > arch/arm/mach-bcm/platsmp.c | 4 ++-- > arch/arm/mach-berlin/platsmp.c | 2 +- > arch/arm/mach-exynos/firmware.c | 4 ++-- > arch/arm/mach-exynos/mcpm-exynos.c | 2 +- > arch/arm/mach-exynos/platsmp.c | 4 ++-- > arch/arm/mach-exynos/pm.c | 6 +++--- > arch/arm/mach-exynos/suspend.c | 6 +++--- > arch/arm/mach-hisi/platmcpm.c | 2 +- > arch/arm/mach-hisi/platsmp.c | 6 +++--- > arch/arm/mach-imx/platsmp.c | 2 +- > arch/arm/mach-imx/pm-imx6.c | 2 +- > arch/arm/mach-imx/src.c | 2 +- > arch/arm/mach-mediatek/platsmp.c | 2 +- > arch/arm/mach-mvebu/pm.c | 2 +- > arch/arm/mach-mvebu/pmsu.c | 2 +- > arch/arm/mach-mvebu/system-controller.c | 2 +- > arch/arm/mach-omap2/control.c | 8 ++++---- > 
arch/arm/mach-omap2/omap-mpuss-lowpower.c | 12 ++++++------ > arch/arm/mach-omap2/omap-smp.c | 4 ++-- > arch/arm/mach-prima2/platsmp.c | 2 +- > arch/arm/mach-prima2/pm.c | 2 +- > arch/arm/mach-pxa/palmz72.c | 2 +- > arch/arm/mach-pxa/pxa25x.c | 2 +- > arch/arm/mach-pxa/pxa27x.c | 2 +- > arch/arm/mach-pxa/pxa3xx.c | 2 +- > arch/arm/mach-realview/platsmp-dt.c | 2 +- > arch/arm/mach-rockchip/platsmp.c | 4 ++-- > arch/arm/mach-rockchip/pm.c | 2 +- > arch/arm/mach-s3c24xx/mach-jive.c | 2 +- > arch/arm/mach-s3c24xx/pm-s3c2410.c | 2 +- > arch/arm/mach-s3c24xx/pm-s3c2416.c | 2 +- > arch/arm/mach-s3c64xx/pm.c | 2 +- > arch/arm/mach-s5pv210/pm.c | 2 +- > arch/arm/mach-sa1100/pm.c | 2 +- > arch/arm/mach-shmobile/platsmp-apmu.c | 6 +++--- > arch/arm/mach-shmobile/platsmp-scu.c | 4 ++-- > arch/arm/mach-socfpga/platsmp.c | 4 ++-- > arch/arm/mach-spear/platsmp.c | 2 +- > arch/arm/mach-sti/platsmp.c | 2 +- > arch/arm/mach-sunxi/platsmp.c | 4 ++-- > arch/arm/mach-tango/platsmp.c | 2 +- > arch/arm/mach-tango/pm.c | 2 +- > arch/arm/mach-tegra/reset.c | 4 ++-- > arch/arm/mach-ux500/platsmp.c | 2 +- > arch/arm/mach-vexpress/dcscb.c | 2 +- > arch/arm/mach-vexpress/platsmp.c | 2 +- > arch/arm/mach-vexpress/tc2_pm.c | 4 ++-- > arch/arm/mach-zx/platsmp.c | 4 ++-- > arch/arm/mach-zynq/platsmp.c | 2 +- > 54 files changed, 86 insertions(+), 86 deletions(-) > > diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c > index a923524d1040..cf062472e07b 100644 > --- a/arch/arm/common/mcpm_entry.c > +++ b/arch/arm/common/mcpm_entry.c > @@ -144,7 +144,7 @@ extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER]; > > void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr) > { > - unsigned long val = ptr ? virt_to_phys(ptr) : 0; > + unsigned long val = ptr ? 
__pa_symbol(ptr) : 0; > mcpm_entry_vectors[cluster][cpu] = val; > sync_cache_w(&mcpm_entry_vectors[cluster][cpu]); > } > @@ -299,8 +299,8 @@ void mcpm_cpu_power_down(void) > * the kernel as if the power_up method just had deasserted reset > * on the CPU. > */ > - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); > - phys_reset(virt_to_phys(mcpm_entry_point)); > + phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset); > + phys_reset(__pa_symbol(mcpm_entry_point)); > > /* should never get here */ > BUG(); > @@ -388,8 +388,8 @@ static int __init nocache_trampoline(unsigned long _arg) > __mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN); > __mcpm_cpu_down(cpu, cluster); > > - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); > - phys_reset(virt_to_phys(mcpm_entry_point)); > + phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset); > + phys_reset(__pa_symbol(mcpm_entry_point)); > BUG(); > } > > @@ -449,7 +449,7 @@ int __init mcpm_sync_init( > sync_cache_w(&mcpm_sync); > > if (power_up_setup) { > - mcpm_power_up_setup_phys = virt_to_phys(power_up_setup); > + mcpm_power_up_setup_phys = __pa_symbol(power_up_setup); > sync_cache_w(&mcpm_power_up_setup_phys); > } > > diff --git a/arch/arm/mach-alpine/platsmp.c b/arch/arm/mach-alpine/platsmp.c > index dd77ea25e7ca..6dc6d491f88a 100644 > --- a/arch/arm/mach-alpine/platsmp.c > +++ b/arch/arm/mach-alpine/platsmp.c > @@ -27,7 +27,7 @@ static int alpine_boot_secondary(unsigned int cpu, struct task_struct *idle) > { > phys_addr_t addr; > > - addr = virt_to_phys(secondary_startup); > + addr = __pa_symbol(secondary_startup); > > if (addr > (phys_addr_t)(uint32_t)(-1)) { > pr_err("FAIL: resume address over 32bit (%pa)", &addr); > diff --git a/arch/arm/mach-axxia/platsmp.c b/arch/arm/mach-axxia/platsmp.c > index ffbd71d45008..502e3df69f69 100644 > --- a/arch/arm/mach-axxia/platsmp.c > +++ b/arch/arm/mach-axxia/platsmp.c > @@ -25,7 +25,7 @@ > static void write_release_addr(u32 
release_phys) > { > u32 *virt = (u32 *) phys_to_virt(release_phys); > - writel_relaxed(virt_to_phys(secondary_startup), virt); > + writel_relaxed(__pa_symbol(secondary_startup), virt); > /* Make sure this store is visible to other CPUs */ > smp_wmb(); > __cpuc_flush_dcache_area(virt, sizeof(u32)); > diff --git a/arch/arm/mach-bcm/bcm63xx_smp.c b/arch/arm/mach-bcm/bcm63xx_smp.c > index 9b6727ed68cd..f5fb10b4376f 100644 > --- a/arch/arm/mach-bcm/bcm63xx_smp.c > +++ b/arch/arm/mach-bcm/bcm63xx_smp.c > @@ -135,7 +135,7 @@ static int bcm63138_smp_boot_secondary(unsigned int cpu, > } > > /* Write the secondary init routine to the BootLUT reset vector */ > - val = virt_to_phys(secondary_startup); > + val = __pa_symbol(secondary_startup); > writel_relaxed(val, bootlut_base + BOOTLUT_RESET_VECT); > > /* Power up the core, will jump straight to its reset vector when we > diff --git a/arch/arm/mach-bcm/platsmp-brcmstb.c b/arch/arm/mach-bcm/platsmp-brcmstb.c > index 40dc8448445e..12379960e982 100644 > --- a/arch/arm/mach-bcm/platsmp-brcmstb.c > +++ b/arch/arm/mach-bcm/platsmp-brcmstb.c > @@ -151,7 +151,7 @@ static void brcmstb_cpu_boot(u32 cpu) > * Set the reset vector to point to the secondary_startup > * routine > */ > - cpu_set_boot_addr(cpu, virt_to_phys(secondary_startup)); > + cpu_set_boot_addr(cpu, __pa_symbol(secondary_startup)); > > /* Unhalt the cpu */ > cpu_rst_cfg_set(cpu, 0); > diff --git a/arch/arm/mach-bcm/platsmp.c b/arch/arm/mach-bcm/platsmp.c > index 3ac3a9bc663c..582886d0d02f 100644 > --- a/arch/arm/mach-bcm/platsmp.c > +++ b/arch/arm/mach-bcm/platsmp.c > @@ -116,7 +116,7 @@ static int nsp_write_lut(unsigned int cpu) > return -ENOMEM; > } > > - secondary_startup_phy = virt_to_phys(secondary_startup); > + secondary_startup_phy = __pa_symbol(secondary_startup); > BUG_ON(secondary_startup_phy > (phys_addr_t)U32_MAX); > > writel_relaxed(secondary_startup_phy, sku_rom_lut); > @@ -189,7 +189,7 @@ static int kona_boot_secondary(unsigned int cpu, struct task_struct 
*idle) > * Secondary cores will start in secondary_startup(), > * defined in "arch/arm/kernel/head.S" > */ > - boot_func = virt_to_phys(secondary_startup); > + boot_func = __pa_symbol(secondary_startup); > BUG_ON(boot_func & BOOT_ADDR_CPUID_MASK); > BUG_ON(boot_func > (phys_addr_t)U32_MAX); > > diff --git a/arch/arm/mach-berlin/platsmp.c b/arch/arm/mach-berlin/platsmp.c > index 93f90688db18..1167b0ed92c8 100644 > --- a/arch/arm/mach-berlin/platsmp.c > +++ b/arch/arm/mach-berlin/platsmp.c > @@ -92,7 +92,7 @@ static void __init berlin_smp_prepare_cpus(unsigned int max_cpus) > * Write the secondary startup address into the SW reset address > * vector. This is used by boot_inst. > */ > - writel(virt_to_phys(secondary_startup), vectors_base + SW_RESET_ADDR); > + writel(__pa_symbol(secondary_startup), vectors_base + SW_RESET_ADDR); > > iounmap(vectors_base); > unmap_scu: > diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c > index fd6da5419b51..e81a78b125d9 100644 > --- a/arch/arm/mach-exynos/firmware.c > +++ b/arch/arm/mach-exynos/firmware.c > @@ -41,7 +41,7 @@ static int exynos_do_idle(unsigned long mode) > case FW_DO_IDLE_AFTR: > if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) > exynos_save_cp15(); > - writel_relaxed(virt_to_phys(exynos_cpu_resume_ns), > + writel_relaxed(__pa_symbol(exynos_cpu_resume_ns), > sysram_ns_base_addr + 0x24); > writel_relaxed(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20); > if (soc_is_exynos3250()) { > @@ -135,7 +135,7 @@ static int exynos_suspend(void) > exynos_save_cp15(); > > writel(EXYNOS_SLEEP_MAGIC, sysram_ns_base_addr + EXYNOS_BOOT_FLAG); > - writel(virt_to_phys(exynos_cpu_resume_ns), > + writel(__pa_symbol(exynos_cpu_resume_ns), > sysram_ns_base_addr + EXYNOS_BOOT_ADDR); > > return cpu_suspend(0, exynos_cpu_suspend); > diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c > index f086bf615b29..214a9cfa92e9 100644 > --- a/arch/arm/mach-exynos/mcpm-exynos.c > +++ 
b/arch/arm/mach-exynos/mcpm-exynos.c > @@ -221,7 +221,7 @@ static void exynos_mcpm_setup_entry_point(void) > */ > __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */ > __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */ > - __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8); > + __raw_writel(__pa_symbol(mcpm_entry_point), ns_sram_base_addr + 8); > } > > static struct syscore_ops exynos_mcpm_syscore_ops = { > diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c > index 98ffe1e62ad5..9f4949f7ed88 100644 > --- a/arch/arm/mach-exynos/platsmp.c > +++ b/arch/arm/mach-exynos/platsmp.c > @@ -353,7 +353,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) > > smp_rmb(); > > - boot_addr = virt_to_phys(exynos4_secondary_startup); > + boot_addr = __pa_symbol(exynos4_secondary_startup); > > ret = exynos_set_boot_addr(core_id, boot_addr); > if (ret) > @@ -443,7 +443,7 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus) > > mpidr = cpu_logical_map(i); > core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); > - boot_addr = virt_to_phys(exynos4_secondary_startup); > + boot_addr = __pa_symbol(exynos4_secondary_startup); > > ret = exynos_set_boot_addr(core_id, boot_addr); > if (ret) > diff --git a/arch/arm/mach-exynos/pm.c b/arch/arm/mach-exynos/pm.c > index 487295f4a56b..1a7e5b5d08d8 100644 > --- a/arch/arm/mach-exynos/pm.c > +++ b/arch/arm/mach-exynos/pm.c > @@ -132,7 +132,7 @@ static void exynos_set_wakeupmask(long mask) > > static void exynos_cpu_set_boot_vector(long flags) > { > - writel_relaxed(virt_to_phys(exynos_cpu_resume), > + writel_relaxed(__pa_symbol(exynos_cpu_resume), > exynos_boot_vector_addr()); > writel_relaxed(flags, exynos_boot_vector_flag()); > } > @@ -238,7 +238,7 @@ static int exynos_cpu0_enter_aftr(void) > > abort: > if (cpu_online(1)) { > - unsigned long boot_addr = virt_to_phys(exynos_cpu_resume); > + unsigned long boot_addr = __pa_symbol(exynos_cpu_resume); > 
> /* > * Set the boot vector to something non-zero > @@ -330,7 +330,7 @@ static int exynos_cpu1_powerdown(void) > > static void exynos_pre_enter_aftr(void) > { > - unsigned long boot_addr = virt_to_phys(exynos_cpu_resume); > + unsigned long boot_addr = __pa_symbol(exynos_cpu_resume); > > (void)exynos_set_boot_addr(1, boot_addr); > } > diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c > index 06332f626565..97765be2cc12 100644 > --- a/arch/arm/mach-exynos/suspend.c > +++ b/arch/arm/mach-exynos/suspend.c > @@ -344,7 +344,7 @@ static void exynos_pm_prepare(void) > exynos_pm_enter_sleep_mode(); > > /* ensure at least INFORM0 has the resume address */ > - pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); > + pmu_raw_writel(__pa_symbol(exynos_cpu_resume), S5P_INFORM0); > } > > static void exynos3250_pm_prepare(void) > @@ -361,7 +361,7 @@ static void exynos3250_pm_prepare(void) > exynos_pm_enter_sleep_mode(); > > /* ensure at least INFORM0 has the resume address */ > - pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); > + pmu_raw_writel(__pa_symbol(exynos_cpu_resume), S5P_INFORM0); > } > > static void exynos5420_pm_prepare(void) > @@ -386,7 +386,7 @@ static void exynos5420_pm_prepare(void) > > /* ensure at least INFORM0 has the resume address */ > if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) > - pmu_raw_writel(virt_to_phys(mcpm_entry_point), S5P_INFORM0); > + pmu_raw_writel(__pa_symbol(mcpm_entry_point), S5P_INFORM0); > > tmp = pmu_raw_readl(EXYNOS5_ARM_L2_OPTION); > tmp &= ~EXYNOS5_USE_RETENTION; > diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c > index 4b653a8cb75c..a6c117622d67 100644 > --- a/arch/arm/mach-hisi/platmcpm.c > +++ b/arch/arm/mach-hisi/platmcpm.c > @@ -327,7 +327,7 @@ static int __init hip04_smp_init(void) > */ > writel_relaxed(hip04_boot_method[0], relocation); > writel_relaxed(0xa5a5a5a5, relocation + 4); /* magic number */ > - writel_relaxed(virt_to_phys(secondary_startup), 
relocation + 8); > + writel_relaxed(__pa_symbol(secondary_startup), relocation + 8); > writel_relaxed(0, relocation + 12); > iounmap(relocation); > > diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c > index e1d67648d5d0..91bb02dec20f 100644 > --- a/arch/arm/mach-hisi/platsmp.c > +++ b/arch/arm/mach-hisi/platsmp.c > @@ -28,7 +28,7 @@ void hi3xxx_set_cpu_jump(int cpu, void *jump_addr) > cpu = cpu_logical_map(cpu); > if (!cpu || !ctrl_base) > return; > - writel_relaxed(virt_to_phys(jump_addr), ctrl_base + ((cpu - 1) << 2)); > + writel_relaxed(__pa_symbol(jump_addr), ctrl_base + ((cpu - 1) << 2)); > } > > int hi3xxx_get_cpu_jump(int cpu) > @@ -118,7 +118,7 @@ static int hix5hd2_boot_secondary(unsigned int cpu, struct task_struct *idle) > { > phys_addr_t jumpaddr; > > - jumpaddr = virt_to_phys(secondary_startup); > + jumpaddr = __pa_symbol(secondary_startup); > hix5hd2_set_scu_boot_addr(HIX5HD2_BOOT_ADDRESS, jumpaddr); > hix5hd2_set_cpu(cpu, true); > arch_send_wakeup_ipi_mask(cpumask_of(cpu)); > @@ -156,7 +156,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle) > struct device_node *node; > > > - jumpaddr = virt_to_phys(secondary_startup); > + jumpaddr = __pa_symbol(secondary_startup); > hip01_set_boot_addr(HIP01_BOOT_ADDRESS, jumpaddr); > > node = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl"); > diff --git a/arch/arm/mach-imx/platsmp.c b/arch/arm/mach-imx/platsmp.c > index 711dbbd5badd..c2d1b329fba1 100644 > --- a/arch/arm/mach-imx/platsmp.c > +++ b/arch/arm/mach-imx/platsmp.c > @@ -117,7 +117,7 @@ static void __init ls1021a_smp_prepare_cpus(unsigned int max_cpus) > dcfg_base = of_iomap(np, 0); > BUG_ON(!dcfg_base); > > - paddr = virt_to_phys(secondary_startup); > + paddr = __pa_symbol(secondary_startup); > writel_relaxed(cpu_to_be32(paddr), dcfg_base + DCFG_CCSR_SCRATCHRW1); > > iounmap(dcfg_base); > diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c > index 
1515e498d348..e61b1d1027e1 100644 > --- a/arch/arm/mach-imx/pm-imx6.c > +++ b/arch/arm/mach-imx/pm-imx6.c > @@ -499,7 +499,7 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata) > memset(suspend_ocram_base, 0, sizeof(*pm_info)); > pm_info = suspend_ocram_base; > pm_info->pbase = ocram_pbase; > - pm_info->resume_addr = virt_to_phys(v7_cpu_resume); > + pm_info->resume_addr = __pa_symbol(v7_cpu_resume); > pm_info->pm_info_size = sizeof(*pm_info); > > /* > diff --git a/arch/arm/mach-imx/src.c b/arch/arm/mach-imx/src.c > index 70b083fe934a..495d85d0fe7e 100644 > --- a/arch/arm/mach-imx/src.c > +++ b/arch/arm/mach-imx/src.c > @@ -99,7 +99,7 @@ void imx_enable_cpu(int cpu, bool enable) > void imx_set_cpu_jump(int cpu, void *jump_addr) > { > cpu = cpu_logical_map(cpu); > - writel_relaxed(virt_to_phys(jump_addr), > + writel_relaxed(__pa_symbol(jump_addr), > src_base + SRC_GPR1 + cpu * 8); > } > > diff --git a/arch/arm/mach-mediatek/platsmp.c b/arch/arm/mach-mediatek/platsmp.c > index b821e34474b6..726eb69bb655 100644 > --- a/arch/arm/mach-mediatek/platsmp.c > +++ b/arch/arm/mach-mediatek/platsmp.c > @@ -122,7 +122,7 @@ static void __init __mtk_smp_prepare_cpus(unsigned int max_cpus, int trustzone) > * write the address of slave startup address into the system-wide > * jump register > */ > - writel_relaxed(virt_to_phys(secondary_startup_arm), > + writel_relaxed(__pa_symbol(secondary_startup_arm), > mtk_smp_base + mtk_smp_info->jump_reg); > } > > diff --git a/arch/arm/mach-mvebu/pm.c b/arch/arm/mach-mvebu/pm.c > index 2990c5269b18..c487be61d6d8 100644 > --- a/arch/arm/mach-mvebu/pm.c > +++ b/arch/arm/mach-mvebu/pm.c > @@ -110,7 +110,7 @@ static void mvebu_pm_store_armadaxp_bootinfo(u32 *store_addr) > { > phys_addr_t resume_pc; > > - resume_pc = virt_to_phys(armada_370_xp_cpu_resume); > + resume_pc = __pa_symbol(armada_370_xp_cpu_resume); > > /* > * The bootloader expects the first two words to be a magic > diff --git a/arch/arm/mach-mvebu/pmsu.c 
b/arch/arm/mach-mvebu/pmsu.c > index f39bd51bce18..27a78c80e5b1 100644 > --- a/arch/arm/mach-mvebu/pmsu.c > +++ b/arch/arm/mach-mvebu/pmsu.c > @@ -112,7 +112,7 @@ static const struct of_device_id of_pmsu_table[] = { > > void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr) > { > - writel(virt_to_phys(boot_addr), pmsu_mp_base + > + writel(__pa_symbol(boot_addr), pmsu_mp_base + > PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu)); > } > > diff --git a/arch/arm/mach-mvebu/system-controller.c b/arch/arm/mach-mvebu/system-controller.c > index 76cbc82a7407..04d9ebe6a90a 100644 > --- a/arch/arm/mach-mvebu/system-controller.c > +++ b/arch/arm/mach-mvebu/system-controller.c > @@ -153,7 +153,7 @@ void mvebu_system_controller_set_cpu_boot_addr(void *boot_addr) > if (of_machine_is_compatible("marvell,armada375")) > mvebu_armada375_smp_wa_init(); > > - writel(virt_to_phys(boot_addr), system_controller_base + > + writel(__pa_symbol(boot_addr), system_controller_base + > mvebu_sc->resume_boot_addr); > } > #endif > diff --git a/arch/arm/mach-omap2/control.c b/arch/arm/mach-omap2/control.c > index 1662071bb2cc..bd8089ff929f 100644 > --- a/arch/arm/mach-omap2/control.c > +++ b/arch/arm/mach-omap2/control.c > @@ -315,15 +315,15 @@ void omap3_save_scratchpad_contents(void) > scratchpad_contents.boot_config_ptr = 0x0; > if (cpu_is_omap3630()) > scratchpad_contents.public_restore_ptr = > - virt_to_phys(omap3_restore_3630); > + __pa_symbol(omap3_restore_3630); > else if (omap_rev() != OMAP3430_REV_ES3_0 && > omap_rev() != OMAP3430_REV_ES3_1 && > omap_rev() != OMAP3430_REV_ES3_1_2) > scratchpad_contents.public_restore_ptr = > - virt_to_phys(omap3_restore); > + __pa_symbol(omap3_restore); > else > scratchpad_contents.public_restore_ptr = > - virt_to_phys(omap3_restore_es3); > + __pa_symbol(omap3_restore_es3); > > if (omap_type() == OMAP2_DEVICE_TYPE_GP) > scratchpad_contents.secure_ram_restore_ptr = 0x0; > @@ -395,7 +395,7 @@ void omap3_save_scratchpad_contents(void) > 
sdrc_block_contents.flags = 0x0; > sdrc_block_contents.block_size = 0x0; > > - arm_context_addr = virt_to_phys(omap3_arm_context); > + arm_context_addr = __pa_symbol(omap3_arm_context); > > /* Copy all the contents to the scratchpad location */ > scratchpad_address = OMAP2_L4_IO_ADDRESS(OMAP343X_SCRATCHPAD); > diff --git a/arch/arm/mach-omap2/omap-mpuss-lowpower.c b/arch/arm/mach-omap2/omap-mpuss-lowpower.c > index 7d62ad48c7c9..113ab2dd2ee9 100644 > --- a/arch/arm/mach-omap2/omap-mpuss-lowpower.c > +++ b/arch/arm/mach-omap2/omap-mpuss-lowpower.c > @@ -273,7 +273,7 @@ int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state) > cpu_clear_prev_logic_pwrst(cpu); > pwrdm_set_next_pwrst(pm_info->pwrdm, power_state); > pwrdm_set_logic_retst(pm_info->pwrdm, cpu_logic_state); > - set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.resume)); > + set_cpu_wakeup_addr(cpu, __pa_symbol(omap_pm_ops.resume)); > omap_pm_ops.scu_prepare(cpu, power_state); > l2x0_pwrst_prepare(cpu, save_state); > > @@ -325,7 +325,7 @@ int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state) > > pwrdm_clear_all_prev_pwrst(pm_info->pwrdm); > pwrdm_set_next_pwrst(pm_info->pwrdm, power_state); > - set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.hotplug_restart)); > + set_cpu_wakeup_addr(cpu, __pa_symbol(omap_pm_ops.hotplug_restart)); > omap_pm_ops.scu_prepare(cpu, power_state); > > /* > @@ -467,13 +467,13 @@ void __init omap4_mpuss_early_init(void) > sar_base = omap4_get_sar_ram_base(); > > if (cpu_is_omap443x()) > - startup_pa = virt_to_phys(omap4_secondary_startup); > + startup_pa = __pa_symbol(omap4_secondary_startup); > else if (cpu_is_omap446x()) > - startup_pa = virt_to_phys(omap4460_secondary_startup); > + startup_pa = __pa_symbol(omap4460_secondary_startup); > else if ((__boot_cpu_mode & MODE_MASK) == HYP_MODE) > - startup_pa = virt_to_phys(omap5_secondary_hyp_startup); > + startup_pa = __pa_symbol(omap5_secondary_hyp_startup); > else > - startup_pa = 
virt_to_phys(omap5_secondary_startup); > + startup_pa = __pa_symbol(omap5_secondary_startup); > > if (cpu_is_omap44xx()) > writel_relaxed(startup_pa, sar_base + > diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c > index b4de3da6dffa..003353b0b794 100644 > --- a/arch/arm/mach-omap2/omap-smp.c > +++ b/arch/arm/mach-omap2/omap-smp.c > @@ -316,9 +316,9 @@ static void __init omap4_smp_prepare_cpus(unsigned int max_cpus) > * A barrier is added to ensure that write buffer is drained > */ > if (omap_secure_apis_support()) > - omap_auxcoreboot_addr(virt_to_phys(cfg.startup_addr)); > + omap_auxcoreboot_addr(__pa_symbol(cfg.startup_addr)); > else > - writel_relaxed(virt_to_phys(cfg.startup_addr), > + writel_relaxed(__pa_symbol(cfg.startup_addr), > base + OMAP_AUX_CORE_BOOT_1); > } > > diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c > index 0875b99add18..75ef5d4be554 100644 > --- a/arch/arm/mach-prima2/platsmp.c > +++ b/arch/arm/mach-prima2/platsmp.c > @@ -65,7 +65,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle) > * waiting for. 
This would wake up the secondary core from WFE > */ > #define SIRFSOC_CPU1_JUMPADDR_OFFSET 0x2bc > - __raw_writel(virt_to_phys(sirfsoc_secondary_startup), > + __raw_writel(__pa_symbol(sirfsoc_secondary_startup), > clk_base + SIRFSOC_CPU1_JUMPADDR_OFFSET); > > #define SIRFSOC_CPU1_WAKEMAGIC_OFFSET 0x2b8 > diff --git a/arch/arm/mach-prima2/pm.c b/arch/arm/mach-prima2/pm.c > index 83e94c95e314..b0bcf1ff02dd 100644 > --- a/arch/arm/mach-prima2/pm.c > +++ b/arch/arm/mach-prima2/pm.c > @@ -54,7 +54,7 @@ static void sirfsoc_set_sleep_mode(u32 mode) > > static int sirfsoc_pre_suspend_power_off(void) > { > - u32 wakeup_entry = virt_to_phys(cpu_resume); > + u32 wakeup_entry = __pa_symbol(cpu_resume); > > sirfsoc_rtc_iobrg_writel(wakeup_entry, sirfsoc_pwrc_base + > SIRFSOC_PWRC_SCRATCH_PAD1); > diff --git a/arch/arm/mach-pxa/palmz72.c b/arch/arm/mach-pxa/palmz72.c > index 9c308de158c6..29630061e700 100644 > --- a/arch/arm/mach-pxa/palmz72.c > +++ b/arch/arm/mach-pxa/palmz72.c > @@ -249,7 +249,7 @@ static int palmz72_pm_suspend(void) > store_ptr = *PALMZ72_SAVE_DWORD; > > /* Setting PSPR to a proper value */ > - PSPR = virt_to_phys(&palmz72_resume_info); > + PSPR = __pa_symbol(&palmz72_resume_info); > > return 0; > } > diff --git a/arch/arm/mach-pxa/pxa25x.c b/arch/arm/mach-pxa/pxa25x.c > index c725baf119e1..ba431fad5c47 100644 > --- a/arch/arm/mach-pxa/pxa25x.c > +++ b/arch/arm/mach-pxa/pxa25x.c > @@ -85,7 +85,7 @@ static void pxa25x_cpu_pm_enter(suspend_state_t state) > static int pxa25x_cpu_pm_prepare(void) > { > /* set resume return address */ > - PSPR = virt_to_phys(cpu_resume); > + PSPR = __pa_symbol(cpu_resume); > return 0; > } > > diff --git a/arch/arm/mach-pxa/pxa27x.c b/arch/arm/mach-pxa/pxa27x.c > index c0185c5c5a08..9b69be4e9fe3 100644 > --- a/arch/arm/mach-pxa/pxa27x.c > +++ b/arch/arm/mach-pxa/pxa27x.c > @@ -168,7 +168,7 @@ static int pxa27x_cpu_pm_valid(suspend_state_t state) > static int pxa27x_cpu_pm_prepare(void) > { > /* set resume return address */ > - PSPR 
= virt_to_phys(cpu_resume); > + PSPR = __pa_symbol(cpu_resume); > return 0; > } > > diff --git a/arch/arm/mach-pxa/pxa3xx.c b/arch/arm/mach-pxa/pxa3xx.c > index 87acc96388c7..0cc9f124c9ac 100644 > --- a/arch/arm/mach-pxa/pxa3xx.c > +++ b/arch/arm/mach-pxa/pxa3xx.c > @@ -123,7 +123,7 @@ static void pxa3xx_cpu_pm_suspend(void) > PSPR = 0x5c014000; > > /* overwrite with the resume address */ > - *p = virt_to_phys(cpu_resume); > + *p = __pa_symbol(cpu_resume); > > cpu_suspend(0, pxa3xx_finish_suspend); > > diff --git a/arch/arm/mach-realview/platsmp-dt.c b/arch/arm/mach-realview/platsmp-dt.c > index 70ca99eb52c6..c242423bf8db 100644 > --- a/arch/arm/mach-realview/platsmp-dt.c > +++ b/arch/arm/mach-realview/platsmp-dt.c > @@ -76,7 +76,7 @@ static void __init realview_smp_prepare_cpus(unsigned int max_cpus) > } > /* Put the boot address in this magic register */ > regmap_write(map, REALVIEW_SYS_FLAGSSET_OFFSET, > - virt_to_phys(versatile_secondary_startup)); > + __pa_symbol(versatile_secondary_startup)); > } > > static const struct smp_operations realview_dt_smp_ops __initconst = { > diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c > index 4d827a069d49..3abafdbdd7f4 100644 > --- a/arch/arm/mach-rockchip/platsmp.c > +++ b/arch/arm/mach-rockchip/platsmp.c > @@ -156,7 +156,7 @@ static int rockchip_boot_secondary(unsigned int cpu, struct task_struct *idle) > */ > mdelay(1); /* ensure the cpus other than cpu0 to startup */ > > - writel(virt_to_phys(secondary_startup), sram_base_addr + 8); > + writel(__pa_symbol(secondary_startup), sram_base_addr + 8); > writel(0xDEADBEAF, sram_base_addr + 4); > dsb_sev(); > } > @@ -195,7 +195,7 @@ static int __init rockchip_smp_prepare_sram(struct device_node *node) > } > > /* set the boot function for the sram code */ > - rockchip_boot_fn = virt_to_phys(secondary_startup); > + rockchip_boot_fn = __pa_symbol(secondary_startup); > > /* copy the trampoline to sram, that runs during startup of the core */ > 
memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz); > diff --git a/arch/arm/mach-rockchip/pm.c b/arch/arm/mach-rockchip/pm.c > index bee8c8051929..0592534e0b88 100644 > --- a/arch/arm/mach-rockchip/pm.c > +++ b/arch/arm/mach-rockchip/pm.c > @@ -62,7 +62,7 @@ static inline u32 rk3288_l2_config(void) > static void rk3288_config_bootdata(void) > { > rkpm_bootdata_cpusp = rk3288_bootram_phy + (SZ_4K - 8); > - rkpm_bootdata_cpu_code = virt_to_phys(cpu_resume); > + rkpm_bootdata_cpu_code = __pa_symbol(cpu_resume); > > rkpm_bootdata_l2ctlr_f = 1; > rkpm_bootdata_l2ctlr = rk3288_l2_config(); > diff --git a/arch/arm/mach-s3c24xx/mach-jive.c b/arch/arm/mach-s3c24xx/mach-jive.c > index 895aca225952..f5b5c49b56ac 100644 > --- a/arch/arm/mach-s3c24xx/mach-jive.c > +++ b/arch/arm/mach-s3c24xx/mach-jive.c > @@ -484,7 +484,7 @@ static int jive_pm_suspend(void) > * correct address to resume from. */ > > __raw_writel(0x2BED, S3C2412_INFORM0); > - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2412_INFORM1); > + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2412_INFORM1); > > return 0; > } > diff --git a/arch/arm/mach-s3c24xx/pm-s3c2410.c b/arch/arm/mach-s3c24xx/pm-s3c2410.c > index 20e481d8a33a..a4588daeddb0 100644 > --- a/arch/arm/mach-s3c24xx/pm-s3c2410.c > +++ b/arch/arm/mach-s3c24xx/pm-s3c2410.c > @@ -45,7 +45,7 @@ static void s3c2410_pm_prepare(void) > { > /* ensure at least GSTATUS3 has the resume address */ > > - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2410_GSTATUS3); > + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2410_GSTATUS3); > > S3C_PMDBG("GSTATUS3 0x%08x\n", __raw_readl(S3C2410_GSTATUS3)); > S3C_PMDBG("GSTATUS4 0x%08x\n", __raw_readl(S3C2410_GSTATUS4)); > diff --git a/arch/arm/mach-s3c24xx/pm-s3c2416.c b/arch/arm/mach-s3c24xx/pm-s3c2416.c > index c0e328e37bd6..b5bbf0d5985c 100644 > --- a/arch/arm/mach-s3c24xx/pm-s3c2416.c > +++ b/arch/arm/mach-s3c24xx/pm-s3c2416.c > @@ -48,7 +48,7 @@ static void s3c2416_pm_prepare(void) > * correct address to 
resume from. > */ > __raw_writel(0x2BED, S3C2412_INFORM0); > - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2412_INFORM1); > + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2412_INFORM1); > } > > static int s3c2416_pm_add(struct device *dev, struct subsys_interface *sif) > diff --git a/arch/arm/mach-s3c64xx/pm.c b/arch/arm/mach-s3c64xx/pm.c > index 59d91b83b03d..945a9d1e1a71 100644 > --- a/arch/arm/mach-s3c64xx/pm.c > +++ b/arch/arm/mach-s3c64xx/pm.c > @@ -304,7 +304,7 @@ static void s3c64xx_pm_prepare(void) > wake_irqs, ARRAY_SIZE(wake_irqs)); > > /* store address of resume. */ > - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C64XX_INFORM0); > + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C64XX_INFORM0); > > /* ensure previous wakeup state is cleared before sleeping */ > __raw_writel(__raw_readl(S3C64XX_WAKEUP_STAT), S3C64XX_WAKEUP_STAT); > diff --git a/arch/arm/mach-s5pv210/pm.c b/arch/arm/mach-s5pv210/pm.c > index 21b4b13c5ab7..2d5f08015e34 100644 > --- a/arch/arm/mach-s5pv210/pm.c > +++ b/arch/arm/mach-s5pv210/pm.c > @@ -69,7 +69,7 @@ static void s5pv210_pm_prepare(void) > __raw_writel(s5pv210_irqwake_intmask, S5P_WAKEUP_MASK); > > /* ensure at least INFORM0 has the resume address */ > - __raw_writel(virt_to_phys(s5pv210_cpu_resume), S5P_INFORM0); > + __raw_writel(__pa_symbol(s5pv210_cpu_resume), S5P_INFORM0); > > tmp = __raw_readl(S5P_SLEEP_CFG); > tmp &= ~(S5P_SLEEP_CFG_OSC_EN | S5P_SLEEP_CFG_USBOSC_EN); > diff --git a/arch/arm/mach-sa1100/pm.c b/arch/arm/mach-sa1100/pm.c > index 34853d5dfda2..9a7079f565bd 100644 > --- a/arch/arm/mach-sa1100/pm.c > +++ b/arch/arm/mach-sa1100/pm.c > @@ -73,7 +73,7 @@ static int sa11x0_pm_enter(suspend_state_t state) > RCSR = RCSR_HWR | RCSR_SWR | RCSR_WDR | RCSR_SMR; > > /* set resume return address */ > - PSPR = virt_to_phys(cpu_resume); > + PSPR = __pa_symbol(cpu_resume); > > /* go zzz */ > cpu_suspend(0, sa1100_finish_suspend); > diff --git a/arch/arm/mach-shmobile/platsmp-apmu.c b/arch/arm/mach-shmobile/platsmp-apmu.c > 
index 0c6bb458b7a4..71729b8d1900 100644 > --- a/arch/arm/mach-shmobile/platsmp-apmu.c > +++ b/arch/arm/mach-shmobile/platsmp-apmu.c > @@ -171,7 +171,7 @@ static void apmu_parse_dt(void (*fn)(struct resource *res, int cpu, int bit)) > static void __init shmobile_smp_apmu_setup_boot(void) > { > /* install boot code shared by all CPUs */ > - shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); > + shmobile_boot_fn = __pa_symbol(shmobile_smp_boot); > } > > void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, > @@ -185,7 +185,7 @@ void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, > int shmobile_smp_apmu_boot_secondary(unsigned int cpu, struct task_struct *idle) > { > /* For this particular CPU register boot vector */ > - shmobile_smp_hook(cpu, virt_to_phys(secondary_startup), 0); > + shmobile_smp_hook(cpu, __pa_symbol(secondary_startup), 0); > > return apmu_wrap(cpu, apmu_power_on); > } > @@ -301,7 +301,7 @@ int shmobile_smp_apmu_cpu_kill(unsigned int cpu) > #if defined(CONFIG_SUSPEND) > static int shmobile_smp_apmu_do_suspend(unsigned long cpu) > { > - shmobile_smp_hook(cpu, virt_to_phys(cpu_resume), 0); > + shmobile_smp_hook(cpu, __pa_symbol(cpu_resume), 0); > shmobile_smp_apmu_cpu_shutdown(cpu); > cpu_do_idle(); /* WFI selects Core Standby */ > return 1; > diff --git a/arch/arm/mach-shmobile/platsmp-scu.c b/arch/arm/mach-shmobile/platsmp-scu.c > index d1ecaf37d142..f1a1efde4beb 100644 > --- a/arch/arm/mach-shmobile/platsmp-scu.c > +++ b/arch/arm/mach-shmobile/platsmp-scu.c > @@ -24,7 +24,7 @@ static void __iomem *shmobile_scu_base; > static int shmobile_scu_cpu_prepare(unsigned int cpu) > { > /* For this particular CPU register SCU SMP boot vector */ > - shmobile_smp_hook(cpu, virt_to_phys(shmobile_boot_scu), > + shmobile_smp_hook(cpu, __pa_symbol(shmobile_boot_scu), > shmobile_scu_base_phys); > return 0; > } > @@ -33,7 +33,7 @@ void __init shmobile_smp_scu_prepare_cpus(phys_addr_t scu_base_phys, > unsigned int max_cpus) > { > /* install 
boot code shared by all CPUs */ > - shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); > + shmobile_boot_fn = __pa_symbol(shmobile_smp_boot); > > /* enable SCU and cache coherency on booting CPU */ > shmobile_scu_base_phys = scu_base_phys; > diff --git a/arch/arm/mach-socfpga/platsmp.c b/arch/arm/mach-socfpga/platsmp.c > index 07945748b571..0ee76772b507 100644 > --- a/arch/arm/mach-socfpga/platsmp.c > +++ b/arch/arm/mach-socfpga/platsmp.c > @@ -40,7 +40,7 @@ static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle) > > memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); > > - writel(virt_to_phys(secondary_startup), > + writel(__pa_symbol(secondary_startup), > sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff)); > > flush_cache_all(); > @@ -63,7 +63,7 @@ static int socfpga_a10_boot_secondary(unsigned int cpu, struct task_struct *idle > SOCFPGA_A10_RSTMGR_MODMPURST); > memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); > > - writel(virt_to_phys(secondary_startup), > + writel(__pa_symbol(secondary_startup), > sys_manager_base_addr + (socfpga_cpu1start_addr & 0x00000fff)); > > flush_cache_all(); > diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c > index 8d1e2d551786..39038a03836a 100644 > --- a/arch/arm/mach-spear/platsmp.c > +++ b/arch/arm/mach-spear/platsmp.c > @@ -117,7 +117,7 @@ static void __init spear13xx_smp_prepare_cpus(unsigned int max_cpus) > * (presently it is in SRAM). The BootMonitor waits until it receives a > * soft interrupt, and then the secondary CPU branches to this address. 
> */ > - __raw_writel(virt_to_phys(spear13xx_secondary_startup), SYS_LOCATION); > + __raw_writel(__pa_symbol(spear13xx_secondary_startup), SYS_LOCATION); > } > > const struct smp_operations spear13xx_smp_ops __initconst = { > diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c > index ea5a2277ee46..231f19e17436 100644 > --- a/arch/arm/mach-sti/platsmp.c > +++ b/arch/arm/mach-sti/platsmp.c > @@ -103,7 +103,7 @@ static void __init sti_smp_prepare_cpus(unsigned int max_cpus) > u32 __iomem *cpu_strt_ptr; > u32 release_phys; > int cpu; > - unsigned long entry_pa = virt_to_phys(sti_secondary_startup); > + unsigned long entry_pa = __pa_symbol(sti_secondary_startup); > > np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-scu"); > > diff --git a/arch/arm/mach-sunxi/platsmp.c b/arch/arm/mach-sunxi/platsmp.c > index 6642267812c9..8fb5088464db 100644 > --- a/arch/arm/mach-sunxi/platsmp.c > +++ b/arch/arm/mach-sunxi/platsmp.c > @@ -80,7 +80,7 @@ static int sun6i_smp_boot_secondary(unsigned int cpu, > spin_lock(&cpu_lock); > > /* Set CPU boot address */ > - writel(virt_to_phys(secondary_startup), > + writel(__pa_symbol(secondary_startup), > cpucfg_membase + CPUCFG_PRIVATE0_REG); > > /* Assert the CPU core in reset */ > @@ -162,7 +162,7 @@ static int sun8i_smp_boot_secondary(unsigned int cpu, > spin_lock(&cpu_lock); > > /* Set CPU boot address */ > - writel(virt_to_phys(secondary_startup), > + writel(__pa_symbol(secondary_startup), > cpucfg_membase + CPUCFG_PRIVATE0_REG); > > /* Assert the CPU core in reset */ > diff --git a/arch/arm/mach-tango/platsmp.c b/arch/arm/mach-tango/platsmp.c > index 98c62a4a8623..2f0c6c050fed 100644 > --- a/arch/arm/mach-tango/platsmp.c > +++ b/arch/arm/mach-tango/platsmp.c > @@ -5,7 +5,7 @@ > > static int tango_boot_secondary(unsigned int cpu, struct task_struct *idle) > { > - tango_set_aux_boot_addr(virt_to_phys(secondary_startup)); > + tango_set_aux_boot_addr(__pa_symbol(secondary_startup)); > tango_start_aux_core(cpu); > 
return 0; > } > diff --git a/arch/arm/mach-tango/pm.c b/arch/arm/mach-tango/pm.c > index b05c6d6f99d0..406c0814eb6e 100644 > --- a/arch/arm/mach-tango/pm.c > +++ b/arch/arm/mach-tango/pm.c > @@ -5,7 +5,7 @@ > > static int tango_pm_powerdown(unsigned long arg) > { > - tango_suspend(virt_to_phys(cpu_resume)); > + tango_suspend(__pa_symbol(cpu_resume)); > > return -EIO; /* tango_suspend has failed */ > } > diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c > index 6fd9db54887e..dc558892753c 100644 > --- a/arch/arm/mach-tegra/reset.c > +++ b/arch/arm/mach-tegra/reset.c > @@ -94,14 +94,14 @@ void __init tegra_cpu_reset_handler_init(void) > __tegra_cpu_reset_handler_data[TEGRA_RESET_MASK_PRESENT] = > *((u32 *)cpu_possible_mask); > __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_SECONDARY] = > - virt_to_phys((void *)secondary_startup); > + __pa_symbol((void *)secondary_startup); > #endif > > #ifdef CONFIG_PM_SLEEP > __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_LP1] = > TEGRA_IRAM_LPx_RESUME_AREA; > __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_LP2] = > - virt_to_phys((void *)tegra_resume); > + __pa_symbol((void *)tegra_resume); > #endif > > tegra_cpu_reset_handler_enable(); > diff --git a/arch/arm/mach-ux500/platsmp.c b/arch/arm/mach-ux500/platsmp.c > index 8f2f615ff958..8c8f26389067 100644 > --- a/arch/arm/mach-ux500/platsmp.c > +++ b/arch/arm/mach-ux500/platsmp.c > @@ -54,7 +54,7 @@ static void wakeup_secondary(void) > * backup ram register at offset 0x1FF0, which is what boot rom code > * is waiting for. This will wake up the secondary core from WFE. 
> */ > - writel(virt_to_phys(secondary_startup), > + writel(__pa_symbol(secondary_startup), > backupram + UX500_CPU1_JUMPADDR_OFFSET); > writel(0xA1FEED01, > backupram + UX500_CPU1_WAKEMAGIC_OFFSET); > diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c > index 5cedcf572104..ee2a0faafaa1 100644 > --- a/arch/arm/mach-vexpress/dcscb.c > +++ b/arch/arm/mach-vexpress/dcscb.c > @@ -166,7 +166,7 @@ static int __init dcscb_init(void) > * Future entries into the kernel can now go > * through the cluster entry vectors. > */ > - vexpress_flags_set(virt_to_phys(mcpm_entry_point)); > + vexpress_flags_set(__pa_symbol(mcpm_entry_point)); > > return 0; > } > diff --git a/arch/arm/mach-vexpress/platsmp.c b/arch/arm/mach-vexpress/platsmp.c > index 98e29dee91e8..742499bac6d0 100644 > --- a/arch/arm/mach-vexpress/platsmp.c > +++ b/arch/arm/mach-vexpress/platsmp.c > @@ -79,7 +79,7 @@ static void __init vexpress_smp_dt_prepare_cpus(unsigned int max_cpus) > * until it receives a soft interrupt, and then the > * secondary CPU branches to this address. 
> */ > - vexpress_flags_set(virt_to_phys(versatile_secondary_startup)); > + vexpress_flags_set(__pa_symbol(versatile_secondary_startup)); > } > > const struct smp_operations vexpress_smp_dt_ops __initconst = { > diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c > index 1aa4ccece69f..9b5f3c427086 100644 > --- a/arch/arm/mach-vexpress/tc2_pm.c > +++ b/arch/arm/mach-vexpress/tc2_pm.c > @@ -54,7 +54,7 @@ static int tc2_pm_cpu_powerup(unsigned int cpu, unsigned int cluster) > if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) > return -EINVAL; > ve_spc_set_resume_addr(cluster, cpu, > - virt_to_phys(mcpm_entry_point)); > + __pa_symbol(mcpm_entry_point)); > ve_spc_cpu_wakeup_irq(cluster, cpu, true); > return 0; > } > @@ -159,7 +159,7 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) > > static void tc2_pm_cpu_suspend_prepare(unsigned int cpu, unsigned int cluster) > { > - ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point)); > + ve_spc_set_resume_addr(cluster, cpu, __pa_symbol(mcpm_entry_point)); > } > > static void tc2_pm_cpu_is_up(unsigned int cpu, unsigned int cluster) > diff --git a/arch/arm/mach-zx/platsmp.c b/arch/arm/mach-zx/platsmp.c > index 0297f92084e0..afb9a82dedc3 100644 > --- a/arch/arm/mach-zx/platsmp.c > +++ b/arch/arm/mach-zx/platsmp.c > @@ -76,7 +76,7 @@ void __init zx_smp_prepare_cpus(unsigned int max_cpus) > * until it receives a soft interrupt, and then the > * secondary CPU branches to this address. 
> */ > - __raw_writel(virt_to_phys(zx_secondary_startup), > + __raw_writel(__pa_symbol(zx_secondary_startup), > aonsysctrl_base + AON_SYS_CTRL_RESERVED1); > > iounmap(aonsysctrl_base); > @@ -94,7 +94,7 @@ void __init zx_smp_prepare_cpus(unsigned int max_cpus) > > /* Map the first 4 KB IRAM for suspend usage */ > sys_iram = __arm_ioremap_exec(ZX_IRAM_BASE, PAGE_SIZE, false); > - zx_secondary_startup_pa = virt_to_phys(zx_secondary_startup); > + zx_secondary_startup_pa = __pa_symbol(zx_secondary_startup); > fncpy(sys_iram, &zx_resume_jump, zx_suspend_iram_sz); > } > > diff --git a/arch/arm/mach-zynq/platsmp.c b/arch/arm/mach-zynq/platsmp.c > index 7cd9865bdeb7..caa6d5fe9078 100644 > --- a/arch/arm/mach-zynq/platsmp.c > +++ b/arch/arm/mach-zynq/platsmp.c > @@ -89,7 +89,7 @@ EXPORT_SYMBOL(zynq_cpun_start); > > static int zynq_boot_secondary(unsigned int cpu, struct task_struct *idle) > { > - return zynq_cpun_start(virt_to_phys(secondary_startup), cpu); > + return zynq_cpun_start(__pa_symbol(secondary_startup), cpu); > } > > /* > ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL 2017-01-04 1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli ` (3 preceding siblings ...) 2017-01-04 1:14 ` [PATCH v5 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli @ 2017-01-04 22:39 ` Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli ` (4 more replies) 4 siblings, 5 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 22:39 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas, will.deacon Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2 This patch series builds on top of Laura's [PATCHv6 00/10] CONFIG_DEBUG_VIRTUAL for arm64 to add support for CONFIG_DEBUG_VIRTUAL for ARM. This was tested on a Brahma B15 platform (ARMv7 + HIGHMEM + LPAE). Note that the treewide changes would involve a huge CC list, which is why it has been purposely trimmed to focus just on the DEBUG_VIRTUAL aspect. Catalin, provided that you take Laura's series, I suppose I would submit this one through Russell's patch system if that's okay with everyone? Thanks! Changes in v6: - utilize KERNEL_END in lieu of _end in arm_memblock_init, thanks Hartley!
- utilize Laura's comment suggestion against MAX_DMA_ADDRESS - added Laura's Acked-by to patch 3 and Reviewed-by to patch 4 Changes in v5: - rebased against Laura's [PATCHv6 00/10] CONFIG_DEBUG_VIRTUAL for arm64 and v4.10-rc2 - added Russell's acked-by for patches 2 through 4 Changes in v4: - added Boris' ack for the first patch - reworked the virtual address check based on Laura's suggestion to make the code more readable Changes in v3: - fix build failures reported by Kbuild test robot Changes in v2: - Modified MTD LART driver not to create symbol conflicts with KERNEL_START - Fixed patch that defines and uses KERNEL_START/END - Fixed __pa_symbol()'s definition - Inline __pa_symbol() check within the VIRTUAL_BUG_ON statement - Simplified check for virtual addresses - Added a tree-wide patch changing SMP/PM implementations to use __pa_symbol(), build tested against multi_v{5,7}_defconfig Florian Fainelli (4): mtd: lart: Rename partition defines to be prefixed with PART_ ARM: Define KERNEL_START and KERNEL_END ARM: Add support for CONFIG_DEBUG_VIRTUAL ARM: treewide: Replace uses of virt_to_phys with __pa_symbol arch/arm/Kconfig | 1 + arch/arm/common/mcpm_entry.c | 12 +++---- arch/arm/include/asm/memory.h | 23 +++++++++++-- arch/arm/mach-alpine/platsmp.c | 2 +- arch/arm/mach-axxia/platsmp.c | 2 +- arch/arm/mach-bcm/bcm63xx_smp.c | 2 +- arch/arm/mach-bcm/platsmp-brcmstb.c | 2 +- arch/arm/mach-bcm/platsmp.c | 4 +-- arch/arm/mach-berlin/platsmp.c | 2 +- arch/arm/mach-exynos/firmware.c | 4 +-- arch/arm/mach-exynos/mcpm-exynos.c | 2 +- arch/arm/mach-exynos/platsmp.c | 4 +-- arch/arm/mach-exynos/pm.c | 6 ++-- arch/arm/mach-exynos/suspend.c | 6 ++-- arch/arm/mach-hisi/platmcpm.c | 2 +- arch/arm/mach-hisi/platsmp.c | 6 ++-- arch/arm/mach-imx/platsmp.c | 2 +- arch/arm/mach-imx/pm-imx6.c | 2 +- arch/arm/mach-imx/src.c | 2 +- arch/arm/mach-mediatek/platsmp.c | 2 +- arch/arm/mach-mvebu/pm.c | 2 +- arch/arm/mach-mvebu/pmsu.c | 2 +- arch/arm/mach-mvebu/system-controller.c | 2 +-
arch/arm/mach-omap2/control.c | 8 ++--- arch/arm/mach-omap2/omap-mpuss-lowpower.c | 12 +++---- arch/arm/mach-omap2/omap-smp.c | 4 +-- arch/arm/mach-prima2/platsmp.c | 2 +- arch/arm/mach-prima2/pm.c | 2 +- arch/arm/mach-pxa/palmz72.c | 2 +- arch/arm/mach-pxa/pxa25x.c | 2 +- arch/arm/mach-pxa/pxa27x.c | 2 +- arch/arm/mach-pxa/pxa3xx.c | 2 +- arch/arm/mach-realview/platsmp-dt.c | 2 +- arch/arm/mach-rockchip/platsmp.c | 4 +-- arch/arm/mach-rockchip/pm.c | 2 +- arch/arm/mach-s3c24xx/mach-jive.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2410.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2416.c | 2 +- arch/arm/mach-s3c64xx/pm.c | 2 +- arch/arm/mach-s5pv210/pm.c | 2 +- arch/arm/mach-sa1100/pm.c | 2 +- arch/arm/mach-shmobile/platsmp-apmu.c | 6 ++-- arch/arm/mach-shmobile/platsmp-scu.c | 4 +-- arch/arm/mach-socfpga/platsmp.c | 4 +-- arch/arm/mach-spear/platsmp.c | 2 +- arch/arm/mach-sti/platsmp.c | 2 +- arch/arm/mach-sunxi/platsmp.c | 4 +-- arch/arm/mach-tango/platsmp.c | 2 +- arch/arm/mach-tango/pm.c | 2 +- arch/arm/mach-tegra/reset.c | 4 +-- arch/arm/mach-ux500/platsmp.c | 2 +- arch/arm/mach-vexpress/dcscb.c | 2 +- arch/arm/mach-vexpress/platsmp.c | 2 +- arch/arm/mach-vexpress/tc2_pm.c | 4 +-- arch/arm/mach-zx/platsmp.c | 4 +-- arch/arm/mach-zynq/platsmp.c | 2 +- arch/arm/mm/Makefile | 1 + arch/arm/mm/init.c | 7 ++-- arch/arm/mm/mmu.c | 6 +--- arch/arm/mm/physaddr.c | 55 +++++++++++++++++++++++++++++++ drivers/mtd/devices/lart.c | 24 +++++++------- 61 files changed, 179 insertions(+), 110 deletions(-) create mode 100644 arch/arm/mm/physaddr.c -- 2.9.3 ^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v6 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ 2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli @ 2017-01-04 22:39 ` Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli ` (3 subsequent siblings) 4 siblings, 0 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 22:39 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas, will.deacon Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2 In preparation for defining KERNEL_START on ARM, rename KERNEL_START to PART_KERNEL_START, and to be consistent, do this for all partition-related constants. Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- drivers/mtd/devices/lart.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/mtd/devices/lart.c b/drivers/mtd/devices/lart.c index 82bd00af5cc3..268aae45b514 100644 --- a/drivers/mtd/devices/lart.c +++ b/drivers/mtd/devices/lart.c @@ -75,18 +75,18 @@ static char module_name[] = "lart"; /* blob */ #define NUM_BLOB_BLOCKS FLASH_NUMBLOCKS_16m_PARAM -#define BLOB_START 0x00000000 -#define BLOB_LEN (NUM_BLOB_BLOCKS * FLASH_BLOCKSIZE_PARAM) +#define PART_BLOB_START 0x00000000 +#define PART_BLOB_LEN (NUM_BLOB_BLOCKS * FLASH_BLOCKSIZE_PARAM) /* kernel */ #define NUM_KERNEL_BLOCKS 7 -#define KERNEL_START (BLOB_START + BLOB_LEN) -#define KERNEL_LEN (NUM_KERNEL_BLOCKS * FLASH_BLOCKSIZE_MAIN) +#define PART_KERNEL_START (PART_BLOB_START + PART_BLOB_LEN) +#define PART_KERNEL_LEN (NUM_KERNEL_BLOCKS * FLASH_BLOCKSIZE_MAIN) /* initial ramdisk */ #define NUM_INITRD_BLOCKS 24 
-#define INITRD_START (KERNEL_START + KERNEL_LEN) -#define INITRD_LEN (NUM_INITRD_BLOCKS * FLASH_BLOCKSIZE_MAIN) +#define PART_INITRD_START (PART_KERNEL_START + PART_KERNEL_LEN) +#define PART_INITRD_LEN (NUM_INITRD_BLOCKS * FLASH_BLOCKSIZE_MAIN) /* * See section 4.0 in "3 Volt Fast Boot Block Flash Memory" Intel Datasheet @@ -587,20 +587,20 @@ static struct mtd_partition lart_partitions[] = { /* blob */ { .name = "blob", - .offset = BLOB_START, - .size = BLOB_LEN, + .offset = PART_BLOB_START, + .size = PART_BLOB_LEN, }, /* kernel */ { .name = "kernel", - .offset = KERNEL_START, /* MTDPART_OFS_APPEND */ - .size = KERNEL_LEN, + .offset = PART_KERNEL_START, /* MTDPART_OFS_APPEND */ + .size = PART_KERNEL_LEN, }, /* initial ramdisk / file system */ { .name = "file system", - .offset = INITRD_START, /* MTDPART_OFS_APPEND */ - .size = INITRD_LEN, /* MTDPART_SIZ_FULL */ + .offset = PART_INITRD_START, /* MTDPART_OFS_APPEND */ + .size = PART_INITRD_LEN, /* MTDPART_SIZ_FULL */ } }; #define NUM_PARTITIONS ARRAY_SIZE(lart_partitions) -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 2/4] ARM: Define KERNEL_START and KERNEL_END 2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli @ 2017-01-04 22:39 ` Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli ` (2 subsequent siblings) 4 siblings, 0 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 22:39 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas, will.deacon Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2 In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of common constants: KERNEL_START and KERNEL_END which abstract CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where relevant. 
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- arch/arm/include/asm/memory.h | 7 +++++++ arch/arm/mm/init.c | 7 ++----- arch/arm/mm/mmu.c | 6 +----- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index 76cbd9c674df..bee7511c5098 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -111,6 +111,13 @@ #endif /* !CONFIG_MMU */ +#ifdef CONFIG_XIP_KERNEL +#define KERNEL_START _sdata +#else +#define KERNEL_START _stext +#endif +#define KERNEL_END _end + /* * We fix the TCM memories max 32 KiB ITCM resp DTCM at these * locations diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c index 370581aeb871..4127f578086c 100644 --- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -230,11 +230,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align) void __init arm_memblock_init(const struct machine_desc *mdesc) { /* Register the kernel text, kernel data and initrd with memblock. 
*/ -#ifdef CONFIG_XIP_KERNEL - memblock_reserve(__pa(_sdata), _end - _sdata); -#else - memblock_reserve(__pa(_stext), _end - _stext); -#endif + memblock_reserve(__pa(KERNEL_START), KERNEL_END - KERNEL_START); + #ifdef CONFIG_BLK_DEV_INITRD /* FDT scan will populate initrd_start */ if (initrd_start && !phys_initrd_size) { diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 4001dd15818d..f0fd1a2db036 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1437,11 +1437,7 @@ static void __init kmap_init(void) static void __init map_lowmem(void) { struct memblock_region *reg; -#ifdef CONFIG_XIP_KERNEL - phys_addr_t kernel_x_start = round_down(__pa(_sdata), SECTION_SIZE); -#else - phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE); -#endif + phys_addr_t kernel_x_start = round_down(__pa(KERNEL_START), SECTION_SIZE); phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE); /* Map all the lowmem memory banks. */ -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL 2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli @ 2017-01-04 22:39 ` Florian Fainelli 2017-01-04 22:39 ` [PATCH v6 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli 2017-01-15 3:01 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 4 siblings, 0 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 22:39 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas, will.deacon Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2 x86 has an option: CONFIG_DEBUG_VIRTUAL to do additional checks on virt_to_phys calls. The goal is to catch users who are calling virt_to_phys on non-linear addresses immediately. This includes callers using __virt_to_phys() on image addresses instead of __pa_symbol(). This is a generally useful debug feature to spot bad code (particularly in drivers).
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- arch/arm/Kconfig | 1 + arch/arm/include/asm/memory.h | 15 ++++++++++-- arch/arm/mm/Makefile | 1 + arch/arm/mm/physaddr.c | 57 +++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 72 insertions(+), 2 deletions(-) create mode 100644 arch/arm/mm/physaddr.c diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 5fab553fd03a..4700294f4e09 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -2,6 +2,7 @@ config ARM bool default y select ARCH_CLOCKSOURCE_DATA + select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index bee7511c5098..c30d0d82a105 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -213,7 +213,7 @@ extern const void *__pv_table_begin, *__pv_table_end; : "r" (x), "I" (__PV_BITS_31_24) \ : "cc") -static inline phys_addr_t __virt_to_phys(unsigned long x) +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x) { phys_addr_t t; @@ -245,7 +245,7 @@ static inline unsigned long __phys_to_virt(phys_addr_t x) #define PHYS_OFFSET PLAT_PHYS_OFFSET #define PHYS_PFN_OFFSET ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT)) -static inline phys_addr_t __virt_to_phys(unsigned long x) +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x) { return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET; } @@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x) ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \ PHYS_PFN_OFFSET) +#define __pa_symbol_nodebug(x) __virt_to_phys_nodebug((x)) + +#ifdef CONFIG_DEBUG_VIRTUAL +extern phys_addr_t __virt_to_phys(unsigned long x); +extern phys_addr_t __phys_addr_symbol(unsigned long x); +#else +#define __virt_to_phys(x) 
__virt_to_phys_nodebug(x) +#define __phys_addr_symbol(x) __pa_symbol_nodebug(x) +#endif + /* * These are *only* valid on the kernel direct mapped RAM memory. * Note: Drivers should NOT use these. They are the wrong @@ -283,6 +293,7 @@ static inline void *phys_to_virt(phys_addr_t x) * Drivers should NOT use these either. */ #define __pa(x) __virt_to_phys((unsigned long)(x)) +#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0)) #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) #define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT) diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile index e8698241ece9..b3dea80715b4 100644 --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -14,6 +14,7 @@ endif obj-$(CONFIG_ARM_PTDUMP) += dump.o obj-$(CONFIG_MODULES) += proc-syms.o +obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o obj-$(CONFIG_HIGHMEM) += highmem.o diff --git a/arch/arm/mm/physaddr.c b/arch/arm/mm/physaddr.c new file mode 100644 index 000000000000..02e60f495608 --- /dev/null +++ b/arch/arm/mm/physaddr.c @@ -0,0 +1,57 @@ +#include <linux/bug.h> +#include <linux/export.h> +#include <linux/types.h> +#include <linux/mmdebug.h> +#include <linux/mm.h> + +#include <asm/sections.h> +#include <asm/memory.h> +#include <asm/fixmap.h> +#include <asm/dma.h> + +#include "mm.h" + +static inline bool __virt_addr_valid(unsigned long x) +{ + /* + * high_memory does not get immediately defined, and there + * are early callers of __pa() against PAGE_OFFSET + */ + if (!high_memory && x >= PAGE_OFFSET) + return true; + + if (high_memory && x >= PAGE_OFFSET && x < (unsigned long)high_memory) + return true; + + /* + * MAX_DMA_ADDRESS is a virtual address that may not correspond to an + * actual physical address. Enough code relies on __pa(MAX_DMA_ADDRESS) + * that we just need to work around it and always return true. 
+ */ + if (x == MAX_DMA_ADDRESS) + return true; + + return false; +} + +phys_addr_t __virt_to_phys(unsigned long x) +{ + WARN(!__virt_addr_valid(x), + "virt_to_phys used for non-linear address: %pK (%pS)\n", + (void *)x, (void *)x); + + return __virt_to_phys_nodebug(x); +} +EXPORT_SYMBOL(__virt_to_phys); + +phys_addr_t __phys_addr_symbol(unsigned long x) +{ + /* This is bounds checking against the kernel image only. + * __pa_symbol should only be used on kernel symbol addresses. + */ + VIRTUAL_BUG_ON(x < (unsigned long)KERNEL_START || + x > (unsigned long)KERNEL_END); + + return __pa_symbol_nodebug(x); +} +EXPORT_SYMBOL(__phys_addr_symbol); -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol 2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli ` (2 preceding siblings ...) 2017-01-04 22:39 ` [PATCH v6 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli @ 2017-01-04 22:39 ` Florian Fainelli 2017-01-15 3:01 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli 4 siblings, 0 replies; 32+ messages in thread From: Florian Fainelli @ 2017-01-04 22:39 UTC (permalink / raw) To: linux-arm-kernel, catalin.marinas, will.deacon Cc: Florian Fainelli, linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin, pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook, matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel, linux-mtd, cyrille.pitchen, richard, boris.brezillon, computersforpeace, dwmw2 All low-level PM/SMP code using virt_to_phys() should actually use __pa_symbol() against kernel symbols. Update code where relevant to move away from virt_to_phys(). 
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> --- arch/arm/common/mcpm_entry.c | 12 ++++++------ arch/arm/mach-alpine/platsmp.c | 2 +- arch/arm/mach-axxia/platsmp.c | 2 +- arch/arm/mach-bcm/bcm63xx_smp.c | 2 +- arch/arm/mach-bcm/platsmp-brcmstb.c | 2 +- arch/arm/mach-bcm/platsmp.c | 4 ++-- arch/arm/mach-berlin/platsmp.c | 2 +- arch/arm/mach-exynos/firmware.c | 4 ++-- arch/arm/mach-exynos/mcpm-exynos.c | 2 +- arch/arm/mach-exynos/platsmp.c | 4 ++-- arch/arm/mach-exynos/pm.c | 6 +++--- arch/arm/mach-exynos/suspend.c | 6 +++--- arch/arm/mach-hisi/platmcpm.c | 2 +- arch/arm/mach-hisi/platsmp.c | 6 +++--- arch/arm/mach-imx/platsmp.c | 2 +- arch/arm/mach-imx/pm-imx6.c | 2 +- arch/arm/mach-imx/src.c | 2 +- arch/arm/mach-mediatek/platsmp.c | 2 +- arch/arm/mach-mvebu/pm.c | 2 +- arch/arm/mach-mvebu/pmsu.c | 2 +- arch/arm/mach-mvebu/system-controller.c | 2 +- arch/arm/mach-omap2/control.c | 8 ++++---- arch/arm/mach-omap2/omap-mpuss-lowpower.c | 12 ++++++------ arch/arm/mach-omap2/omap-smp.c | 4 ++-- arch/arm/mach-prima2/platsmp.c | 2 +- arch/arm/mach-prima2/pm.c | 2 +- arch/arm/mach-pxa/palmz72.c | 2 +- arch/arm/mach-pxa/pxa25x.c | 2 +- arch/arm/mach-pxa/pxa27x.c | 2 +- arch/arm/mach-pxa/pxa3xx.c | 2 +- arch/arm/mach-realview/platsmp-dt.c | 2 +- arch/arm/mach-rockchip/platsmp.c | 4 ++-- arch/arm/mach-rockchip/pm.c | 2 +- arch/arm/mach-s3c24xx/mach-jive.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2410.c | 2 +- arch/arm/mach-s3c24xx/pm-s3c2416.c | 2 +- arch/arm/mach-s3c64xx/pm.c | 2 +- arch/arm/mach-s5pv210/pm.c | 2 +- arch/arm/mach-sa1100/pm.c | 2 +- arch/arm/mach-shmobile/platsmp-apmu.c | 6 +++--- arch/arm/mach-shmobile/platsmp-scu.c | 4 ++-- arch/arm/mach-socfpga/platsmp.c | 4 ++-- arch/arm/mach-spear/platsmp.c | 2 +- arch/arm/mach-sti/platsmp.c | 2 +- arch/arm/mach-sunxi/platsmp.c | 4 ++-- arch/arm/mach-tango/platsmp.c | 2 +- arch/arm/mach-tango/pm.c | 2 +- 
arch/arm/mach-tegra/reset.c | 4 ++-- arch/arm/mach-ux500/platsmp.c | 2 +- arch/arm/mach-vexpress/dcscb.c | 2 +- arch/arm/mach-vexpress/platsmp.c | 2 +- arch/arm/mach-vexpress/tc2_pm.c | 4 ++-- arch/arm/mach-zx/platsmp.c | 4 ++-- arch/arm/mach-zynq/platsmp.c | 2 +- 54 files changed, 86 insertions(+), 86 deletions(-) diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c index a923524d1040..cf062472e07b 100644 --- a/arch/arm/common/mcpm_entry.c +++ b/arch/arm/common/mcpm_entry.c @@ -144,7 +144,7 @@ extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER]; void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr) { - unsigned long val = ptr ? virt_to_phys(ptr) : 0; + unsigned long val = ptr ? __pa_symbol(ptr) : 0; mcpm_entry_vectors[cluster][cpu] = val; sync_cache_w(&mcpm_entry_vectors[cluster][cpu]); } @@ -299,8 +299,8 @@ void mcpm_cpu_power_down(void) * the kernel as if the power_up method just had deasserted reset * on the CPU. */ - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); - phys_reset(virt_to_phys(mcpm_entry_point)); + phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset); + phys_reset(__pa_symbol(mcpm_entry_point)); /* should never get here */ BUG(); @@ -388,8 +388,8 @@ static int __init nocache_trampoline(unsigned long _arg) __mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN); __mcpm_cpu_down(cpu, cluster); - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); - phys_reset(virt_to_phys(mcpm_entry_point)); + phys_reset = (phys_reset_t)(unsigned long)__pa_symbol(cpu_reset); + phys_reset(__pa_symbol(mcpm_entry_point)); BUG(); } @@ -449,7 +449,7 @@ int __init mcpm_sync_init( sync_cache_w(&mcpm_sync); if (power_up_setup) { - mcpm_power_up_setup_phys = virt_to_phys(power_up_setup); + mcpm_power_up_setup_phys = __pa_symbol(power_up_setup); sync_cache_w(&mcpm_power_up_setup_phys); } diff --git a/arch/arm/mach-alpine/platsmp.c b/arch/arm/mach-alpine/platsmp.c index 
dd77ea25e7ca..6dc6d491f88a 100644 --- a/arch/arm/mach-alpine/platsmp.c +++ b/arch/arm/mach-alpine/platsmp.c @@ -27,7 +27,7 @@ static int alpine_boot_secondary(unsigned int cpu, struct task_struct *idle) { phys_addr_t addr; - addr = virt_to_phys(secondary_startup); + addr = __pa_symbol(secondary_startup); if (addr > (phys_addr_t)(uint32_t)(-1)) { pr_err("FAIL: resume address over 32bit (%pa)", &addr); diff --git a/arch/arm/mach-axxia/platsmp.c b/arch/arm/mach-axxia/platsmp.c index ffbd71d45008..502e3df69f69 100644 --- a/arch/arm/mach-axxia/platsmp.c +++ b/arch/arm/mach-axxia/platsmp.c @@ -25,7 +25,7 @@ static void write_release_addr(u32 release_phys) { u32 *virt = (u32 *) phys_to_virt(release_phys); - writel_relaxed(virt_to_phys(secondary_startup), virt); + writel_relaxed(__pa_symbol(secondary_startup), virt); /* Make sure this store is visible to other CPUs */ smp_wmb(); __cpuc_flush_dcache_area(virt, sizeof(u32)); diff --git a/arch/arm/mach-bcm/bcm63xx_smp.c b/arch/arm/mach-bcm/bcm63xx_smp.c index 9b6727ed68cd..f5fb10b4376f 100644 --- a/arch/arm/mach-bcm/bcm63xx_smp.c +++ b/arch/arm/mach-bcm/bcm63xx_smp.c @@ -135,7 +135,7 @@ static int bcm63138_smp_boot_secondary(unsigned int cpu, } /* Write the secondary init routine to the BootLUT reset vector */ - val = virt_to_phys(secondary_startup); + val = __pa_symbol(secondary_startup); writel_relaxed(val, bootlut_base + BOOTLUT_RESET_VECT); /* Power up the core, will jump straight to its reset vector when we diff --git a/arch/arm/mach-bcm/platsmp-brcmstb.c b/arch/arm/mach-bcm/platsmp-brcmstb.c index 40dc8448445e..12379960e982 100644 --- a/arch/arm/mach-bcm/platsmp-brcmstb.c +++ b/arch/arm/mach-bcm/platsmp-brcmstb.c @@ -151,7 +151,7 @@ static void brcmstb_cpu_boot(u32 cpu) * Set the reset vector to point to the secondary_startup * routine */ - cpu_set_boot_addr(cpu, virt_to_phys(secondary_startup)); + cpu_set_boot_addr(cpu, __pa_symbol(secondary_startup)); /* Unhalt the cpu */ cpu_rst_cfg_set(cpu, 0); diff --git 
a/arch/arm/mach-bcm/platsmp.c b/arch/arm/mach-bcm/platsmp.c index 3ac3a9bc663c..582886d0d02f 100644 --- a/arch/arm/mach-bcm/platsmp.c +++ b/arch/arm/mach-bcm/platsmp.c @@ -116,7 +116,7 @@ static int nsp_write_lut(unsigned int cpu) return -ENOMEM; } - secondary_startup_phy = virt_to_phys(secondary_startup); + secondary_startup_phy = __pa_symbol(secondary_startup); BUG_ON(secondary_startup_phy > (phys_addr_t)U32_MAX); writel_relaxed(secondary_startup_phy, sku_rom_lut); @@ -189,7 +189,7 @@ static int kona_boot_secondary(unsigned int cpu, struct task_struct *idle) * Secondary cores will start in secondary_startup(), * defined in "arch/arm/kernel/head.S" */ - boot_func = virt_to_phys(secondary_startup); + boot_func = __pa_symbol(secondary_startup); BUG_ON(boot_func & BOOT_ADDR_CPUID_MASK); BUG_ON(boot_func > (phys_addr_t)U32_MAX); diff --git a/arch/arm/mach-berlin/platsmp.c b/arch/arm/mach-berlin/platsmp.c index 93f90688db18..1167b0ed92c8 100644 --- a/arch/arm/mach-berlin/platsmp.c +++ b/arch/arm/mach-berlin/platsmp.c @@ -92,7 +92,7 @@ static void __init berlin_smp_prepare_cpus(unsigned int max_cpus) * Write the secondary startup address into the SW reset address * vector. This is used by boot_inst. 
*/ - writel(virt_to_phys(secondary_startup), vectors_base + SW_RESET_ADDR); + writel(__pa_symbol(secondary_startup), vectors_base + SW_RESET_ADDR); iounmap(vectors_base); unmap_scu: diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c index fd6da5419b51..e81a78b125d9 100644 --- a/arch/arm/mach-exynos/firmware.c +++ b/arch/arm/mach-exynos/firmware.c @@ -41,7 +41,7 @@ static int exynos_do_idle(unsigned long mode) case FW_DO_IDLE_AFTR: if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) exynos_save_cp15(); - writel_relaxed(virt_to_phys(exynos_cpu_resume_ns), + writel_relaxed(__pa_symbol(exynos_cpu_resume_ns), sysram_ns_base_addr + 0x24); writel_relaxed(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20); if (soc_is_exynos3250()) { @@ -135,7 +135,7 @@ static int exynos_suspend(void) exynos_save_cp15(); writel(EXYNOS_SLEEP_MAGIC, sysram_ns_base_addr + EXYNOS_BOOT_FLAG); - writel(virt_to_phys(exynos_cpu_resume_ns), + writel(__pa_symbol(exynos_cpu_resume_ns), sysram_ns_base_addr + EXYNOS_BOOT_ADDR); return cpu_suspend(0, exynos_cpu_suspend); diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c index f086bf615b29..214a9cfa92e9 100644 --- a/arch/arm/mach-exynos/mcpm-exynos.c +++ b/arch/arm/mach-exynos/mcpm-exynos.c @@ -221,7 +221,7 @@ static void exynos_mcpm_setup_entry_point(void) */ __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */ __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */ - __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8); + __raw_writel(__pa_symbol(mcpm_entry_point), ns_sram_base_addr + 8); } static struct syscore_ops exynos_mcpm_syscore_ops = { diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c index 98ffe1e62ad5..9f4949f7ed88 100644 --- a/arch/arm/mach-exynos/platsmp.c +++ b/arch/arm/mach-exynos/platsmp.c @@ -353,7 +353,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) smp_rmb(); - boot_addr = 
virt_to_phys(exynos4_secondary_startup); + boot_addr = __pa_symbol(exynos4_secondary_startup); ret = exynos_set_boot_addr(core_id, boot_addr); if (ret) @@ -443,7 +443,7 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus) mpidr = cpu_logical_map(i); core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); - boot_addr = virt_to_phys(exynos4_secondary_startup); + boot_addr = __pa_symbol(exynos4_secondary_startup); ret = exynos_set_boot_addr(core_id, boot_addr); if (ret) diff --git a/arch/arm/mach-exynos/pm.c b/arch/arm/mach-exynos/pm.c index 487295f4a56b..1a7e5b5d08d8 100644 --- a/arch/arm/mach-exynos/pm.c +++ b/arch/arm/mach-exynos/pm.c @@ -132,7 +132,7 @@ static void exynos_set_wakeupmask(long mask) static void exynos_cpu_set_boot_vector(long flags) { - writel_relaxed(virt_to_phys(exynos_cpu_resume), + writel_relaxed(__pa_symbol(exynos_cpu_resume), exynos_boot_vector_addr()); writel_relaxed(flags, exynos_boot_vector_flag()); } @@ -238,7 +238,7 @@ static int exynos_cpu0_enter_aftr(void) abort: if (cpu_online(1)) { - unsigned long boot_addr = virt_to_phys(exynos_cpu_resume); + unsigned long boot_addr = __pa_symbol(exynos_cpu_resume); /* * Set the boot vector to something non-zero @@ -330,7 +330,7 @@ static int exynos_cpu1_powerdown(void) static void exynos_pre_enter_aftr(void) { - unsigned long boot_addr = virt_to_phys(exynos_cpu_resume); + unsigned long boot_addr = __pa_symbol(exynos_cpu_resume); (void)exynos_set_boot_addr(1, boot_addr); } diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c index 06332f626565..97765be2cc12 100644 --- a/arch/arm/mach-exynos/suspend.c +++ b/arch/arm/mach-exynos/suspend.c @@ -344,7 +344,7 @@ static void exynos_pm_prepare(void) exynos_pm_enter_sleep_mode(); /* ensure at least INFORM0 has the resume address */ - pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); + pmu_raw_writel(__pa_symbol(exynos_cpu_resume), S5P_INFORM0); } static void exynos3250_pm_prepare(void) @@ -361,7 +361,7 @@ static void 
exynos3250_pm_prepare(void) exynos_pm_enter_sleep_mode(); /* ensure at least INFORM0 has the resume address */ - pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); + pmu_raw_writel(__pa_symbol(exynos_cpu_resume), S5P_INFORM0); } static void exynos5420_pm_prepare(void) @@ -386,7 +386,7 @@ static void exynos5420_pm_prepare(void) /* ensure at least INFORM0 has the resume address */ if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) - pmu_raw_writel(virt_to_phys(mcpm_entry_point), S5P_INFORM0); + pmu_raw_writel(__pa_symbol(mcpm_entry_point), S5P_INFORM0); tmp = pmu_raw_readl(EXYNOS5_ARM_L2_OPTION); tmp &= ~EXYNOS5_USE_RETENTION; diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c index 4b653a8cb75c..a6c117622d67 100644 --- a/arch/arm/mach-hisi/platmcpm.c +++ b/arch/arm/mach-hisi/platmcpm.c @@ -327,7 +327,7 @@ static int __init hip04_smp_init(void) */ writel_relaxed(hip04_boot_method[0], relocation); writel_relaxed(0xa5a5a5a5, relocation + 4); /* magic number */ - writel_relaxed(virt_to_phys(secondary_startup), relocation + 8); + writel_relaxed(__pa_symbol(secondary_startup), relocation + 8); writel_relaxed(0, relocation + 12); iounmap(relocation); diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c index e1d67648d5d0..91bb02dec20f 100644 --- a/arch/arm/mach-hisi/platsmp.c +++ b/arch/arm/mach-hisi/platsmp.c @@ -28,7 +28,7 @@ void hi3xxx_set_cpu_jump(int cpu, void *jump_addr) cpu = cpu_logical_map(cpu); if (!cpu || !ctrl_base) return; - writel_relaxed(virt_to_phys(jump_addr), ctrl_base + ((cpu - 1) << 2)); + writel_relaxed(__pa_symbol(jump_addr), ctrl_base + ((cpu - 1) << 2)); } int hi3xxx_get_cpu_jump(int cpu) @@ -118,7 +118,7 @@ static int hix5hd2_boot_secondary(unsigned int cpu, struct task_struct *idle) { phys_addr_t jumpaddr; - jumpaddr = virt_to_phys(secondary_startup); + jumpaddr = __pa_symbol(secondary_startup); hix5hd2_set_scu_boot_addr(HIX5HD2_BOOT_ADDRESS, jumpaddr); hix5hd2_set_cpu(cpu, true); 
arch_send_wakeup_ipi_mask(cpumask_of(cpu)); @@ -156,7 +156,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle) struct device_node *node; - jumpaddr = virt_to_phys(secondary_startup); + jumpaddr = __pa_symbol(secondary_startup); hip01_set_boot_addr(HIP01_BOOT_ADDRESS, jumpaddr); node = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl"); diff --git a/arch/arm/mach-imx/platsmp.c b/arch/arm/mach-imx/platsmp.c index 711dbbd5badd..c2d1b329fba1 100644 --- a/arch/arm/mach-imx/platsmp.c +++ b/arch/arm/mach-imx/platsmp.c @@ -117,7 +117,7 @@ static void __init ls1021a_smp_prepare_cpus(unsigned int max_cpus) dcfg_base = of_iomap(np, 0); BUG_ON(!dcfg_base); - paddr = virt_to_phys(secondary_startup); + paddr = __pa_symbol(secondary_startup); writel_relaxed(cpu_to_be32(paddr), dcfg_base + DCFG_CCSR_SCRATCHRW1); iounmap(dcfg_base); diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c index 1515e498d348..e61b1d1027e1 100644 --- a/arch/arm/mach-imx/pm-imx6.c +++ b/arch/arm/mach-imx/pm-imx6.c @@ -499,7 +499,7 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata) memset(suspend_ocram_base, 0, sizeof(*pm_info)); pm_info = suspend_ocram_base; pm_info->pbase = ocram_pbase; - pm_info->resume_addr = virt_to_phys(v7_cpu_resume); + pm_info->resume_addr = __pa_symbol(v7_cpu_resume); pm_info->pm_info_size = sizeof(*pm_info); /* diff --git a/arch/arm/mach-imx/src.c b/arch/arm/mach-imx/src.c index 70b083fe934a..495d85d0fe7e 100644 --- a/arch/arm/mach-imx/src.c +++ b/arch/arm/mach-imx/src.c @@ -99,7 +99,7 @@ void imx_enable_cpu(int cpu, bool enable) void imx_set_cpu_jump(int cpu, void *jump_addr) { cpu = cpu_logical_map(cpu); - writel_relaxed(virt_to_phys(jump_addr), + writel_relaxed(__pa_symbol(jump_addr), src_base + SRC_GPR1 + cpu * 8); } diff --git a/arch/arm/mach-mediatek/platsmp.c b/arch/arm/mach-mediatek/platsmp.c index b821e34474b6..726eb69bb655 100644 --- a/arch/arm/mach-mediatek/platsmp.c +++ 
b/arch/arm/mach-mediatek/platsmp.c @@ -122,7 +122,7 @@ static void __init __mtk_smp_prepare_cpus(unsigned int max_cpus, int trustzone) * write the address of slave startup address into the system-wide * jump register */ - writel_relaxed(virt_to_phys(secondary_startup_arm), + writel_relaxed(__pa_symbol(secondary_startup_arm), mtk_smp_base + mtk_smp_info->jump_reg); } diff --git a/arch/arm/mach-mvebu/pm.c b/arch/arm/mach-mvebu/pm.c index 2990c5269b18..c487be61d6d8 100644 --- a/arch/arm/mach-mvebu/pm.c +++ b/arch/arm/mach-mvebu/pm.c @@ -110,7 +110,7 @@ static void mvebu_pm_store_armadaxp_bootinfo(u32 *store_addr) { phys_addr_t resume_pc; - resume_pc = virt_to_phys(armada_370_xp_cpu_resume); + resume_pc = __pa_symbol(armada_370_xp_cpu_resume); /* * The bootloader expects the first two words to be a magic diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c index f39bd51bce18..27a78c80e5b1 100644 --- a/arch/arm/mach-mvebu/pmsu.c +++ b/arch/arm/mach-mvebu/pmsu.c @@ -112,7 +112,7 @@ static const struct of_device_id of_pmsu_table[] = { void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr) { - writel(virt_to_phys(boot_addr), pmsu_mp_base + + writel(__pa_symbol(boot_addr), pmsu_mp_base + PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu)); } diff --git a/arch/arm/mach-mvebu/system-controller.c b/arch/arm/mach-mvebu/system-controller.c index 76cbc82a7407..04d9ebe6a90a 100644 --- a/arch/arm/mach-mvebu/system-controller.c +++ b/arch/arm/mach-mvebu/system-controller.c @@ -153,7 +153,7 @@ void mvebu_system_controller_set_cpu_boot_addr(void *boot_addr) if (of_machine_is_compatible("marvell,armada375")) mvebu_armada375_smp_wa_init(); - writel(virt_to_phys(boot_addr), system_controller_base + + writel(__pa_symbol(boot_addr), system_controller_base + mvebu_sc->resume_boot_addr); } #endif diff --git a/arch/arm/mach-omap2/control.c b/arch/arm/mach-omap2/control.c index 1662071bb2cc..bd8089ff929f 100644 --- a/arch/arm/mach-omap2/control.c +++ b/arch/arm/mach-omap2/control.c 
@@ -315,15 +315,15 @@ void omap3_save_scratchpad_contents(void) scratchpad_contents.boot_config_ptr = 0x0; if (cpu_is_omap3630()) scratchpad_contents.public_restore_ptr = - virt_to_phys(omap3_restore_3630); + __pa_symbol(omap3_restore_3630); else if (omap_rev() != OMAP3430_REV_ES3_0 && omap_rev() != OMAP3430_REV_ES3_1 && omap_rev() != OMAP3430_REV_ES3_1_2) scratchpad_contents.public_restore_ptr = - virt_to_phys(omap3_restore); + __pa_symbol(omap3_restore); else scratchpad_contents.public_restore_ptr = - virt_to_phys(omap3_restore_es3); + __pa_symbol(omap3_restore_es3); if (omap_type() == OMAP2_DEVICE_TYPE_GP) scratchpad_contents.secure_ram_restore_ptr = 0x0; @@ -395,7 +395,7 @@ void omap3_save_scratchpad_contents(void) sdrc_block_contents.flags = 0x0; sdrc_block_contents.block_size = 0x0; - arm_context_addr = virt_to_phys(omap3_arm_context); + arm_context_addr = __pa_symbol(omap3_arm_context); /* Copy all the contents to the scratchpad location */ scratchpad_address = OMAP2_L4_IO_ADDRESS(OMAP343X_SCRATCHPAD); diff --git a/arch/arm/mach-omap2/omap-mpuss-lowpower.c b/arch/arm/mach-omap2/omap-mpuss-lowpower.c index 7d62ad48c7c9..113ab2dd2ee9 100644 --- a/arch/arm/mach-omap2/omap-mpuss-lowpower.c +++ b/arch/arm/mach-omap2/omap-mpuss-lowpower.c @@ -273,7 +273,7 @@ int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state) cpu_clear_prev_logic_pwrst(cpu); pwrdm_set_next_pwrst(pm_info->pwrdm, power_state); pwrdm_set_logic_retst(pm_info->pwrdm, cpu_logic_state); - set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.resume)); + set_cpu_wakeup_addr(cpu, __pa_symbol(omap_pm_ops.resume)); omap_pm_ops.scu_prepare(cpu, power_state); l2x0_pwrst_prepare(cpu, save_state); @@ -325,7 +325,7 @@ int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state) pwrdm_clear_all_prev_pwrst(pm_info->pwrdm); pwrdm_set_next_pwrst(pm_info->pwrdm, power_state); - set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.hotplug_restart)); + set_cpu_wakeup_addr(cpu, 
__pa_symbol(omap_pm_ops.hotplug_restart)); omap_pm_ops.scu_prepare(cpu, power_state); /* @@ -467,13 +467,13 @@ void __init omap4_mpuss_early_init(void) sar_base = omap4_get_sar_ram_base(); if (cpu_is_omap443x()) - startup_pa = virt_to_phys(omap4_secondary_startup); + startup_pa = __pa_symbol(omap4_secondary_startup); else if (cpu_is_omap446x()) - startup_pa = virt_to_phys(omap4460_secondary_startup); + startup_pa = __pa_symbol(omap4460_secondary_startup); else if ((__boot_cpu_mode & MODE_MASK) == HYP_MODE) - startup_pa = virt_to_phys(omap5_secondary_hyp_startup); + startup_pa = __pa_symbol(omap5_secondary_hyp_startup); else - startup_pa = virt_to_phys(omap5_secondary_startup); + startup_pa = __pa_symbol(omap5_secondary_startup); if (cpu_is_omap44xx()) writel_relaxed(startup_pa, sar_base + diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c index b4de3da6dffa..003353b0b794 100644 --- a/arch/arm/mach-omap2/omap-smp.c +++ b/arch/arm/mach-omap2/omap-smp.c @@ -316,9 +316,9 @@ static void __init omap4_smp_prepare_cpus(unsigned int max_cpus) * A barrier is added to ensure that write buffer is drained */ if (omap_secure_apis_support()) - omap_auxcoreboot_addr(virt_to_phys(cfg.startup_addr)); + omap_auxcoreboot_addr(__pa_symbol(cfg.startup_addr)); else - writel_relaxed(virt_to_phys(cfg.startup_addr), + writel_relaxed(__pa_symbol(cfg.startup_addr), base + OMAP_AUX_CORE_BOOT_1); } diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c index 0875b99add18..75ef5d4be554 100644 --- a/arch/arm/mach-prima2/platsmp.c +++ b/arch/arm/mach-prima2/platsmp.c @@ -65,7 +65,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle) * waiting for. 
This would wake up the secondary core from WFE */ #define SIRFSOC_CPU1_JUMPADDR_OFFSET 0x2bc - __raw_writel(virt_to_phys(sirfsoc_secondary_startup), + __raw_writel(__pa_symbol(sirfsoc_secondary_startup), clk_base + SIRFSOC_CPU1_JUMPADDR_OFFSET); #define SIRFSOC_CPU1_WAKEMAGIC_OFFSET 0x2b8 diff --git a/arch/arm/mach-prima2/pm.c b/arch/arm/mach-prima2/pm.c index 83e94c95e314..b0bcf1ff02dd 100644 --- a/arch/arm/mach-prima2/pm.c +++ b/arch/arm/mach-prima2/pm.c @@ -54,7 +54,7 @@ static void sirfsoc_set_sleep_mode(u32 mode) static int sirfsoc_pre_suspend_power_off(void) { - u32 wakeup_entry = virt_to_phys(cpu_resume); + u32 wakeup_entry = __pa_symbol(cpu_resume); sirfsoc_rtc_iobrg_writel(wakeup_entry, sirfsoc_pwrc_base + SIRFSOC_PWRC_SCRATCH_PAD1); diff --git a/arch/arm/mach-pxa/palmz72.c b/arch/arm/mach-pxa/palmz72.c index 9c308de158c6..29630061e700 100644 --- a/arch/arm/mach-pxa/palmz72.c +++ b/arch/arm/mach-pxa/palmz72.c @@ -249,7 +249,7 @@ static int palmz72_pm_suspend(void) store_ptr = *PALMZ72_SAVE_DWORD; /* Setting PSPR to a proper value */ - PSPR = virt_to_phys(&palmz72_resume_info); + PSPR = __pa_symbol(&palmz72_resume_info); return 0; } diff --git a/arch/arm/mach-pxa/pxa25x.c b/arch/arm/mach-pxa/pxa25x.c index c725baf119e1..ba431fad5c47 100644 --- a/arch/arm/mach-pxa/pxa25x.c +++ b/arch/arm/mach-pxa/pxa25x.c @@ -85,7 +85,7 @@ static void pxa25x_cpu_pm_enter(suspend_state_t state) static int pxa25x_cpu_pm_prepare(void) { /* set resume return address */ - PSPR = virt_to_phys(cpu_resume); + PSPR = __pa_symbol(cpu_resume); return 0; } diff --git a/arch/arm/mach-pxa/pxa27x.c b/arch/arm/mach-pxa/pxa27x.c index c0185c5c5a08..9b69be4e9fe3 100644 --- a/arch/arm/mach-pxa/pxa27x.c +++ b/arch/arm/mach-pxa/pxa27x.c @@ -168,7 +168,7 @@ static int pxa27x_cpu_pm_valid(suspend_state_t state) static int pxa27x_cpu_pm_prepare(void) { /* set resume return address */ - PSPR = virt_to_phys(cpu_resume); + PSPR = __pa_symbol(cpu_resume); return 0; } diff --git 
a/arch/arm/mach-pxa/pxa3xx.c b/arch/arm/mach-pxa/pxa3xx.c index 87acc96388c7..0cc9f124c9ac 100644 --- a/arch/arm/mach-pxa/pxa3xx.c +++ b/arch/arm/mach-pxa/pxa3xx.c @@ -123,7 +123,7 @@ static void pxa3xx_cpu_pm_suspend(void) PSPR = 0x5c014000; /* overwrite with the resume address */ - *p = virt_to_phys(cpu_resume); + *p = __pa_symbol(cpu_resume); cpu_suspend(0, pxa3xx_finish_suspend); diff --git a/arch/arm/mach-realview/platsmp-dt.c b/arch/arm/mach-realview/platsmp-dt.c index 70ca99eb52c6..c242423bf8db 100644 --- a/arch/arm/mach-realview/platsmp-dt.c +++ b/arch/arm/mach-realview/platsmp-dt.c @@ -76,7 +76,7 @@ static void __init realview_smp_prepare_cpus(unsigned int max_cpus) } /* Put the boot address in this magic register */ regmap_write(map, REALVIEW_SYS_FLAGSSET_OFFSET, - virt_to_phys(versatile_secondary_startup)); + __pa_symbol(versatile_secondary_startup)); } static const struct smp_operations realview_dt_smp_ops __initconst = { diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c index 4d827a069d49..3abafdbdd7f4 100644 --- a/arch/arm/mach-rockchip/platsmp.c +++ b/arch/arm/mach-rockchip/platsmp.c @@ -156,7 +156,7 @@ static int rockchip_boot_secondary(unsigned int cpu, struct task_struct *idle) */ mdelay(1); /* ensure the cpus other than cpu0 to startup */ - writel(virt_to_phys(secondary_startup), sram_base_addr + 8); + writel(__pa_symbol(secondary_startup), sram_base_addr + 8); writel(0xDEADBEAF, sram_base_addr + 4); dsb_sev(); } @@ -195,7 +195,7 @@ static int __init rockchip_smp_prepare_sram(struct device_node *node) } /* set the boot function for the sram code */ - rockchip_boot_fn = virt_to_phys(secondary_startup); + rockchip_boot_fn = __pa_symbol(secondary_startup); /* copy the trampoline to sram, that runs during startup of the core */ memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz); diff --git a/arch/arm/mach-rockchip/pm.c b/arch/arm/mach-rockchip/pm.c index bee8c8051929..0592534e0b88 100644 --- 
a/arch/arm/mach-rockchip/pm.c +++ b/arch/arm/mach-rockchip/pm.c @@ -62,7 +62,7 @@ static inline u32 rk3288_l2_config(void) static void rk3288_config_bootdata(void) { rkpm_bootdata_cpusp = rk3288_bootram_phy + (SZ_4K - 8); - rkpm_bootdata_cpu_code = virt_to_phys(cpu_resume); + rkpm_bootdata_cpu_code = __pa_symbol(cpu_resume); rkpm_bootdata_l2ctlr_f = 1; rkpm_bootdata_l2ctlr = rk3288_l2_config(); diff --git a/arch/arm/mach-s3c24xx/mach-jive.c b/arch/arm/mach-s3c24xx/mach-jive.c index 895aca225952..f5b5c49b56ac 100644 --- a/arch/arm/mach-s3c24xx/mach-jive.c +++ b/arch/arm/mach-s3c24xx/mach-jive.c @@ -484,7 +484,7 @@ static int jive_pm_suspend(void) * correct address to resume from. */ __raw_writel(0x2BED, S3C2412_INFORM0); - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2412_INFORM1); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2412_INFORM1); return 0; } diff --git a/arch/arm/mach-s3c24xx/pm-s3c2410.c b/arch/arm/mach-s3c24xx/pm-s3c2410.c index 20e481d8a33a..a4588daeddb0 100644 --- a/arch/arm/mach-s3c24xx/pm-s3c2410.c +++ b/arch/arm/mach-s3c24xx/pm-s3c2410.c @@ -45,7 +45,7 @@ static void s3c2410_pm_prepare(void) { /* ensure at least GSTATUS3 has the resume address */ - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2410_GSTATUS3); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2410_GSTATUS3); S3C_PMDBG("GSTATUS3 0x%08x\n", __raw_readl(S3C2410_GSTATUS3)); S3C_PMDBG("GSTATUS4 0x%08x\n", __raw_readl(S3C2410_GSTATUS4)); diff --git a/arch/arm/mach-s3c24xx/pm-s3c2416.c b/arch/arm/mach-s3c24xx/pm-s3c2416.c index c0e328e37bd6..b5bbf0d5985c 100644 --- a/arch/arm/mach-s3c24xx/pm-s3c2416.c +++ b/arch/arm/mach-s3c24xx/pm-s3c2416.c @@ -48,7 +48,7 @@ static void s3c2416_pm_prepare(void) * correct address to resume from. 
*/ __raw_writel(0x2BED, S3C2412_INFORM0); - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C2412_INFORM1); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C2412_INFORM1); } static int s3c2416_pm_add(struct device *dev, struct subsys_interface *sif) diff --git a/arch/arm/mach-s3c64xx/pm.c b/arch/arm/mach-s3c64xx/pm.c index 59d91b83b03d..945a9d1e1a71 100644 --- a/arch/arm/mach-s3c64xx/pm.c +++ b/arch/arm/mach-s3c64xx/pm.c @@ -304,7 +304,7 @@ static void s3c64xx_pm_prepare(void) wake_irqs, ARRAY_SIZE(wake_irqs)); /* store address of resume. */ - __raw_writel(virt_to_phys(s3c_cpu_resume), S3C64XX_INFORM0); + __raw_writel(__pa_symbol(s3c_cpu_resume), S3C64XX_INFORM0); /* ensure previous wakeup state is cleared before sleeping */ __raw_writel(__raw_readl(S3C64XX_WAKEUP_STAT), S3C64XX_WAKEUP_STAT); diff --git a/arch/arm/mach-s5pv210/pm.c b/arch/arm/mach-s5pv210/pm.c index 21b4b13c5ab7..2d5f08015e34 100644 --- a/arch/arm/mach-s5pv210/pm.c +++ b/arch/arm/mach-s5pv210/pm.c @@ -69,7 +69,7 @@ static void s5pv210_pm_prepare(void) __raw_writel(s5pv210_irqwake_intmask, S5P_WAKEUP_MASK); /* ensure at least INFORM0 has the resume address */ - __raw_writel(virt_to_phys(s5pv210_cpu_resume), S5P_INFORM0); + __raw_writel(__pa_symbol(s5pv210_cpu_resume), S5P_INFORM0); tmp = __raw_readl(S5P_SLEEP_CFG); tmp &= ~(S5P_SLEEP_CFG_OSC_EN | S5P_SLEEP_CFG_USBOSC_EN); diff --git a/arch/arm/mach-sa1100/pm.c b/arch/arm/mach-sa1100/pm.c index 34853d5dfda2..9a7079f565bd 100644 --- a/arch/arm/mach-sa1100/pm.c +++ b/arch/arm/mach-sa1100/pm.c @@ -73,7 +73,7 @@ static int sa11x0_pm_enter(suspend_state_t state) RCSR = RCSR_HWR | RCSR_SWR | RCSR_WDR | RCSR_SMR; /* set resume return address */ - PSPR = virt_to_phys(cpu_resume); + PSPR = __pa_symbol(cpu_resume); /* go zzz */ cpu_suspend(0, sa1100_finish_suspend); diff --git a/arch/arm/mach-shmobile/platsmp-apmu.c b/arch/arm/mach-shmobile/platsmp-apmu.c index 0c6bb458b7a4..71729b8d1900 100644 --- a/arch/arm/mach-shmobile/platsmp-apmu.c +++ 
b/arch/arm/mach-shmobile/platsmp-apmu.c @@ -171,7 +171,7 @@ static void apmu_parse_dt(void (*fn)(struct resource *res, int cpu, int bit)) static void __init shmobile_smp_apmu_setup_boot(void) { /* install boot code shared by all CPUs */ - shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); + shmobile_boot_fn = __pa_symbol(shmobile_smp_boot); } void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, @@ -185,7 +185,7 @@ void __init shmobile_smp_apmu_prepare_cpus(unsigned int max_cpus, int shmobile_smp_apmu_boot_secondary(unsigned int cpu, struct task_struct *idle) { /* For this particular CPU register boot vector */ - shmobile_smp_hook(cpu, virt_to_phys(secondary_startup), 0); + shmobile_smp_hook(cpu, __pa_symbol(secondary_startup), 0); return apmu_wrap(cpu, apmu_power_on); } @@ -301,7 +301,7 @@ int shmobile_smp_apmu_cpu_kill(unsigned int cpu) #if defined(CONFIG_SUSPEND) static int shmobile_smp_apmu_do_suspend(unsigned long cpu) { - shmobile_smp_hook(cpu, virt_to_phys(cpu_resume), 0); + shmobile_smp_hook(cpu, __pa_symbol(cpu_resume), 0); shmobile_smp_apmu_cpu_shutdown(cpu); cpu_do_idle(); /* WFI selects Core Standby */ return 1; diff --git a/arch/arm/mach-shmobile/platsmp-scu.c b/arch/arm/mach-shmobile/platsmp-scu.c index d1ecaf37d142..f1a1efde4beb 100644 --- a/arch/arm/mach-shmobile/platsmp-scu.c +++ b/arch/arm/mach-shmobile/platsmp-scu.c @@ -24,7 +24,7 @@ static void __iomem *shmobile_scu_base; static int shmobile_scu_cpu_prepare(unsigned int cpu) { /* For this particular CPU register SCU SMP boot vector */ - shmobile_smp_hook(cpu, virt_to_phys(shmobile_boot_scu), + shmobile_smp_hook(cpu, __pa_symbol(shmobile_boot_scu), shmobile_scu_base_phys); return 0; } @@ -33,7 +33,7 @@ void __init shmobile_smp_scu_prepare_cpus(phys_addr_t scu_base_phys, unsigned int max_cpus) { /* install boot code shared by all CPUs */ - shmobile_boot_fn = virt_to_phys(shmobile_smp_boot); + shmobile_boot_fn = __pa_symbol(shmobile_smp_boot); /* enable SCU and cache coherency on 
booting CPU */ shmobile_scu_base_phys = scu_base_phys; diff --git a/arch/arm/mach-socfpga/platsmp.c b/arch/arm/mach-socfpga/platsmp.c index 07945748b571..0ee76772b507 100644 --- a/arch/arm/mach-socfpga/platsmp.c +++ b/arch/arm/mach-socfpga/platsmp.c @@ -40,7 +40,7 @@ static int socfpga_boot_secondary(unsigned int cpu, struct task_struct *idle) memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), sys_manager_base_addr + (socfpga_cpu1start_addr & 0x000000ff)); flush_cache_all(); @@ -63,7 +63,7 @@ static int socfpga_a10_boot_secondary(unsigned int cpu, struct task_struct *idle SOCFPGA_A10_RSTMGR_MODMPURST); memcpy(phys_to_virt(0), &secondary_trampoline, trampoline_size); - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), sys_manager_base_addr + (socfpga_cpu1start_addr & 0x00000fff)); flush_cache_all(); diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c index 8d1e2d551786..39038a03836a 100644 --- a/arch/arm/mach-spear/platsmp.c +++ b/arch/arm/mach-spear/platsmp.c @@ -117,7 +117,7 @@ static void __init spear13xx_smp_prepare_cpus(unsigned int max_cpus) * (presently it is in SRAM). The BootMonitor waits until it receives a * soft interrupt, and then the secondary CPU branches to this address. 
*/ - __raw_writel(virt_to_phys(spear13xx_secondary_startup), SYS_LOCATION); + __raw_writel(__pa_symbol(spear13xx_secondary_startup), SYS_LOCATION); } const struct smp_operations spear13xx_smp_ops __initconst = { diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c index ea5a2277ee46..231f19e17436 100644 --- a/arch/arm/mach-sti/platsmp.c +++ b/arch/arm/mach-sti/platsmp.c @@ -103,7 +103,7 @@ static void __init sti_smp_prepare_cpus(unsigned int max_cpus) u32 __iomem *cpu_strt_ptr; u32 release_phys; int cpu; - unsigned long entry_pa = virt_to_phys(sti_secondary_startup); + unsigned long entry_pa = __pa_symbol(sti_secondary_startup); np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-scu"); diff --git a/arch/arm/mach-sunxi/platsmp.c b/arch/arm/mach-sunxi/platsmp.c index 6642267812c9..8fb5088464db 100644 --- a/arch/arm/mach-sunxi/platsmp.c +++ b/arch/arm/mach-sunxi/platsmp.c @@ -80,7 +80,7 @@ static int sun6i_smp_boot_secondary(unsigned int cpu, spin_lock(&cpu_lock); /* Set CPU boot address */ - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), cpucfg_membase + CPUCFG_PRIVATE0_REG); /* Assert the CPU core in reset */ @@ -162,7 +162,7 @@ static int sun8i_smp_boot_secondary(unsigned int cpu, spin_lock(&cpu_lock); /* Set CPU boot address */ - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), cpucfg_membase + CPUCFG_PRIVATE0_REG); /* Assert the CPU core in reset */ diff --git a/arch/arm/mach-tango/platsmp.c b/arch/arm/mach-tango/platsmp.c index 98c62a4a8623..2f0c6c050fed 100644 --- a/arch/arm/mach-tango/platsmp.c +++ b/arch/arm/mach-tango/platsmp.c @@ -5,7 +5,7 @@ static int tango_boot_secondary(unsigned int cpu, struct task_struct *idle) { - tango_set_aux_boot_addr(virt_to_phys(secondary_startup)); + tango_set_aux_boot_addr(__pa_symbol(secondary_startup)); tango_start_aux_core(cpu); return 0; } diff --git a/arch/arm/mach-tango/pm.c b/arch/arm/mach-tango/pm.c index b05c6d6f99d0..406c0814eb6e 
100644 --- a/arch/arm/mach-tango/pm.c +++ b/arch/arm/mach-tango/pm.c @@ -5,7 +5,7 @@ static int tango_pm_powerdown(unsigned long arg) { - tango_suspend(virt_to_phys(cpu_resume)); + tango_suspend(__pa_symbol(cpu_resume)); return -EIO; /* tango_suspend has failed */ } diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c index 6fd9db54887e..dc558892753c 100644 --- a/arch/arm/mach-tegra/reset.c +++ b/arch/arm/mach-tegra/reset.c @@ -94,14 +94,14 @@ void __init tegra_cpu_reset_handler_init(void) __tegra_cpu_reset_handler_data[TEGRA_RESET_MASK_PRESENT] = *((u32 *)cpu_possible_mask); __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_SECONDARY] = - virt_to_phys((void *)secondary_startup); + __pa_symbol((void *)secondary_startup); #endif #ifdef CONFIG_PM_SLEEP __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_LP1] = TEGRA_IRAM_LPx_RESUME_AREA; __tegra_cpu_reset_handler_data[TEGRA_RESET_STARTUP_LP2] = - virt_to_phys((void *)tegra_resume); + __pa_symbol((void *)tegra_resume); #endif tegra_cpu_reset_handler_enable(); diff --git a/arch/arm/mach-ux500/platsmp.c b/arch/arm/mach-ux500/platsmp.c index 8f2f615ff958..8c8f26389067 100644 --- a/arch/arm/mach-ux500/platsmp.c +++ b/arch/arm/mach-ux500/platsmp.c @@ -54,7 +54,7 @@ static void wakeup_secondary(void) * backup ram register at offset 0x1FF0, which is what boot rom code * is waiting for. This will wake up the secondary core from WFE. */ - writel(virt_to_phys(secondary_startup), + writel(__pa_symbol(secondary_startup), backupram + UX500_CPU1_JUMPADDR_OFFSET); writel(0xA1FEED01, backupram + UX500_CPU1_WAKEMAGIC_OFFSET); diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c index 5cedcf572104..ee2a0faafaa1 100644 --- a/arch/arm/mach-vexpress/dcscb.c +++ b/arch/arm/mach-vexpress/dcscb.c @@ -166,7 +166,7 @@ static int __init dcscb_init(void) * Future entries into the kernel can now go * through the cluster entry vectors. 
*/ - vexpress_flags_set(virt_to_phys(mcpm_entry_point)); + vexpress_flags_set(__pa_symbol(mcpm_entry_point)); return 0; } diff --git a/arch/arm/mach-vexpress/platsmp.c b/arch/arm/mach-vexpress/platsmp.c index 98e29dee91e8..742499bac6d0 100644 --- a/arch/arm/mach-vexpress/platsmp.c +++ b/arch/arm/mach-vexpress/platsmp.c @@ -79,7 +79,7 @@ static void __init vexpress_smp_dt_prepare_cpus(unsigned int max_cpus) * until it receives a soft interrupt, and then the * secondary CPU branches to this address. */ - vexpress_flags_set(virt_to_phys(versatile_secondary_startup)); + vexpress_flags_set(__pa_symbol(versatile_secondary_startup)); } const struct smp_operations vexpress_smp_dt_ops __initconst = { diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c index 1aa4ccece69f..9b5f3c427086 100644 --- a/arch/arm/mach-vexpress/tc2_pm.c +++ b/arch/arm/mach-vexpress/tc2_pm.c @@ -54,7 +54,7 @@ static int tc2_pm_cpu_powerup(unsigned int cpu, unsigned int cluster) if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) return -EINVAL; ve_spc_set_resume_addr(cluster, cpu, - virt_to_phys(mcpm_entry_point)); + __pa_symbol(mcpm_entry_point)); ve_spc_cpu_wakeup_irq(cluster, cpu, true); return 0; } @@ -159,7 +159,7 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) static void tc2_pm_cpu_suspend_prepare(unsigned int cpu, unsigned int cluster) { - ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point)); + ve_spc_set_resume_addr(cluster, cpu, __pa_symbol(mcpm_entry_point)); } static void tc2_pm_cpu_is_up(unsigned int cpu, unsigned int cluster) diff --git a/arch/arm/mach-zx/platsmp.c b/arch/arm/mach-zx/platsmp.c index 0297f92084e0..afb9a82dedc3 100644 --- a/arch/arm/mach-zx/platsmp.c +++ b/arch/arm/mach-zx/platsmp.c @@ -76,7 +76,7 @@ void __init zx_smp_prepare_cpus(unsigned int max_cpus) * until it receives a soft interrupt, and then the * secondary CPU branches to this address. 
*/ - __raw_writel(virt_to_phys(zx_secondary_startup), + __raw_writel(__pa_symbol(zx_secondary_startup), aonsysctrl_base + AON_SYS_CTRL_RESERVED1); iounmap(aonsysctrl_base); @@ -94,7 +94,7 @@ void __init zx_smp_prepare_cpus(unsigned int max_cpus) /* Map the first 4 KB IRAM for suspend usage */ sys_iram = __arm_ioremap_exec(ZX_IRAM_BASE, PAGE_SIZE, false); - zx_secondary_startup_pa = virt_to_phys(zx_secondary_startup); + zx_secondary_startup_pa = __pa_symbol(zx_secondary_startup); fncpy(sys_iram, &zx_resume_jump, zx_suspend_iram_sz); } diff --git a/arch/arm/mach-zynq/platsmp.c b/arch/arm/mach-zynq/platsmp.c index 7cd9865bdeb7..caa6d5fe9078 100644 --- a/arch/arm/mach-zynq/platsmp.c +++ b/arch/arm/mach-zynq/platsmp.c @@ -89,7 +89,7 @@ EXPORT_SYMBOL(zynq_cpun_start); static int zynq_boot_secondary(unsigned int cpu, struct task_struct *idle) { - return zynq_cpun_start(virt_to_phys(secondary_startup), cpu); + return zynq_cpun_start(__pa_symbol(secondary_startup), cpu); } /* -- 2.9.3 ^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL
  2017-01-04 22:39 ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli
                      ` (3 preceding siblings ...)
  2017-01-04 22:39 ` [PATCH v6 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli
@ 2017-01-15  3:01 ` Florian Fainelli
  4 siblings, 0 replies; 32+ messages in thread
From: Florian Fainelli @ 2017-01-15  3:01 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas, will.deacon
  Cc: linux, nicolas.pitre, panand, chris.brandt, arnd, jonathan.austin,
	pawel.moll, vladimir.murzin, mark.rutland, ard.biesheuvel, keescook,
	matt, labbott, kirill.shutemov, ben, js07.lee, stefan, linux-kernel,
	linux-mtd, cyrille.pitchen, richard, boris.brezillon,
	computersforpeace, dwmw2

On 01/04/17 at 14:39, Florian Fainelli wrote:
> This patch series builds on top of Laura's [PATCHv6 00/10] CONFIG_DEBUG_VIRTUAL
> for arm64 to add support for CONFIG_DEBUG_VIRTUAL for ARM.
>
> This was tested on a Brahma B15 platform (ARMv7 + HIGHMEM + LPAE).
>
> Note that the treewide changes would involve a huge CC list, which
> is why it has been purposely trimmed to just focusing on the DEBUG_VIRTUAL
> aspect.
>
> Catalin, provided that you take Laura's series, I suppose I would submit
> this one through Russell's patch system if that's okay with everyone?

Submitted through Russell's patch tracking system as #8638-8641
-- 
Florian

^ permalink raw reply	[flat|nested] 32+ messages in thread
end of thread, other threads:[~2017-01-15  3:01 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-03 17:21 [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
2017-01-03 17:21 ` [PATCHv6 01/11] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
2017-01-03 17:21 ` [PATCHv6 02/11] mm/cma: Cleanup highmem check Laura Abbott
2017-01-03 17:21 ` [PATCHv6 03/11] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
2017-01-03 17:21 ` [PATCHv6 04/11] arm64: Add cast for virt_to_pfn Laura Abbott
2017-01-03 17:21 ` [PATCHv6 05/11] mm: Introduce lm_alias Laura Abbott
2017-01-03 17:21 ` [PATCHv6 06/11] arm64: Use __pa_symbol for kernel symbols Laura Abbott
2017-01-03 17:21 ` [PATCHv6 07/11] drivers: firmware: psci: Use __pa_symbol for kernel symbol Laura Abbott
2017-01-03 17:21 ` [PATCHv6 08/11] kexec: Switch to __pa_symbol Laura Abbott
2017-01-03 17:21 ` [PATCHv6 09/11] mm/kasan: Switch to using __pa_symbol and lm_alias Laura Abbott
2017-01-03 17:21 ` [PATCHv6 10/11] mm/usercopy: Switch to using lm_alias Laura Abbott
2017-01-03 17:21 ` [PATCHv6 11/11] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
2017-01-03 22:56 ` [PATCHv6 00/11] CONFIG_DEBUG_VIRTUAL for arm64 Florian Fainelli
2017-01-03 23:25   ` Laura Abbott
2017-01-04 11:44     ` Will Deacon
2017-01-04 22:30       ` Florian Fainelli
2017-01-10 12:41         ` Will Deacon
2017-01-04  1:14 ` [PATCH v5 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli
2017-01-04  1:14   ` [PATCH v5 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli
2017-01-04  1:14   ` [PATCH v5 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli
2017-01-04 15:58     ` Hartley Sweeten
2017-01-04 17:36       ` Florian Fainelli
2017-01-04  1:14   ` [PATCH v5 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli
2017-01-04 17:20     ` Laura Abbott
2017-01-04  1:14   ` [PATCH v5 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli
2017-01-04 17:31     ` Laura Abbott
2017-01-04 22:39   ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli
2017-01-04 22:39     ` [PATCH v6 1/4] mtd: lart: Rename partition defines to be prefixed with PART_ Florian Fainelli
2017-01-04 22:39     ` [PATCH v6 2/4] ARM: Define KERNEL_START and KERNEL_END Florian Fainelli
2017-01-04 22:39     ` [PATCH v6 3/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli
2017-01-04 22:39     ` [PATCH v6 4/4] ARM: treewide: Replace uses of virt_to_phys with __pa_symbol Florian Fainelli
2017-01-15  3:01     ` [PATCH v6 0/4] ARM: Add support for CONFIG_DEBUG_VIRTUAL Florian Fainelli