* [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64
@ 2016-11-18  1:16 Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
                   ` (6 more replies)
  0 siblings, 7 replies; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel

Hi,

This is v3 of the series to add CONFIG_DEBUG_VIRTUAL for arm64.
The biggest change from v2 is the conversion of more __pa sites
to __pa_symbol for stricter checks.

With that expansion, having this go through the arm64 tree will be
easiest, so I'd like to start collecting Acks from the x86 and mm
maintainers.

Thanks,
Laura

Laura Abbott (6):
  lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
  mm/cma: Cleanup highmem check
  arm64: Move some macros under #ifndef __ASSEMBLY__
  arm64: Add cast for virt_to_pfn
  arm64: Use __pa_symbol for kernel symbols
  arm64: Add support for CONFIG_DEBUG_VIRTUAL

 arch/arm64/Kconfig                        |  1 +
 arch/arm64/include/asm/kvm_mmu.h          |  4 +-
 arch/arm64/include/asm/memory.h           | 70 ++++++++++++++++++++++---------
 arch/arm64/include/asm/mmu_context.h      |  6 +--
 arch/arm64/include/asm/pgtable.h          |  2 +-
 arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
 arch/arm64/kernel/cpufeature.c            |  2 +-
 arch/arm64/kernel/hibernate.c             |  9 ++--
 arch/arm64/kernel/insn.c                  |  2 +-
 arch/arm64/kernel/psci.c                  |  2 +-
 arch/arm64/kernel/setup.c                 |  8 ++--
 arch/arm64/kernel/smp_spin_table.c        |  2 +-
 arch/arm64/kernel/vdso.c                  |  4 +-
 arch/arm64/mm/Makefile                    |  1 +
 arch/arm64/mm/init.c                      | 11 ++---
 arch/arm64/mm/mmu.c                       | 24 +++++------
 arch/arm64/mm/physaddr.c                  | 39 +++++++++++++++++
 arch/x86/Kconfig                          |  1 +
 drivers/firmware/psci.c                   |  2 +-
 lib/Kconfig.debug                         |  5 ++-
 mm/cma.c                                  | 15 +++----
 21 files changed, 140 insertions(+), 72 deletions(-)
 create mode 100644 arch/arm64/mm/physaddr.c

-- 
2.7.4

* [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
@ 2016-11-18  1:16 ` Laura Abbott
  2016-11-18  8:25   ` Ingo Molnar
  2016-11-18  1:16 ` [PATCHv3 2/6] mm/cma: Cleanup highmem check Laura Abbott
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Mark Rutland,
	Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, x86, linux-kernel, linux-mm, Andrew Morton,
	Marek Szyprowski, Joonsoo Kim, linux-arm-kernel


DEBUG_VIRTUAL currently depends on DEBUG_KERNEL && X86. arm64 is getting
the same support. Rather than add a list of architectures, switch this
to ARCH_HAS_DEBUG_VIRTUAL and let architectures select it as
appropriate.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: No change; x86 maintainers, please ack if you are okay with this.
---
 arch/x86/Kconfig  | 1 +
 lib/Kconfig.debug | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bada636..f533321 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -23,6 +23,7 @@ config X86
 	select ARCH_CLOCKSOURCE_DATA
 	select ARCH_DISCARD_MEMBLOCK
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
+	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b01e547..5050530 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -603,9 +603,12 @@ config DEBUG_VM_PGFLAGS
 
 	  If unsure, say N.
 
+config ARCH_HAS_DEBUG_VIRTUAL
+	bool
+
 config DEBUG_VIRTUAL
 	bool "Debug VM translations"
-	depends on DEBUG_KERNEL && X86
+	depends on DEBUG_KERNEL && ARCH_HAS_DEBUG_VIRTUAL
 	help
 	  Enable some costly sanity checks in virtual to page code. This can
 	  catch mistakes with virt_to_page() and friends.
-- 
2.7.4

* [PATCHv3 2/6] mm/cma: Cleanup highmem check
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-18  1:16 ` Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Marek Szyprowski, Joonsoo Kim, Mark Rutland, Ard Biesheuvel,
	Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, linux-arm-kernel


6b101e2a3ce4 ("mm/CMA: fix boot regression due to physical address of
high_memory") added checks to use __pa_nodebug on x86 since
CONFIG_DEBUG_VIRTUAL complains about high_memory not being linearly
mapped. arm64 is now getting support for CONFIG_DEBUG_VIRTUAL as well.
Rather than add an explosion of arches to the #ifdef, switch to an
alternate method of calculating the physical start of highmem using
the page before highmem starts. This avoids the need for the #ifdef and
extra __pa_nodebug calls.
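
A hedged sketch of the arithmetic (hypothetical layout, not from this
patch): if the last direct-mapped byte sits at PA 0x7fffffff, then
high_memory points one byte past that byte's VA, so

	highmem_start = __pa(high_memory - 1) + 1;	/* == 0x80000000 */

gives the boundary, while __pa(high_memory) itself would trip the
DEBUG_VIRTUAL check.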

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: No change; mm maintainers, please ack if you are okay with this.
---
 mm/cma.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index c960459..94b3460 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -235,18 +235,13 @@ int __init cma_declare_contiguous(phys_addr_t base,
 	phys_addr_t highmem_start;
 	int ret = 0;
 
-#ifdef CONFIG_X86
 	/*
-	 * high_memory isn't direct mapped memory so retrieving its physical
-	 * address isn't appropriate.  But it would be useful to check the
-	 * physical address of the highmem boundary so it's justifiable to get
-	 * the physical address from it.  On x86 there is a validation check for
-	 * this case, so the following workaround is needed to avoid it.
+	 * We can't use __pa(high_memory) directly, since high_memory
+	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+	 * complain. Find the boundary by adding one to the last valid
+	 * address.
 	 */
-	highmem_start = __pa_nodebug(high_memory);
-#else
-	highmem_start = __pa(high_memory);
-#endif
+	highmem_start = __pa(high_memory - 1) + 1;
 	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
 		__func__, &size, &base, &limit, &alignment);
 
-- 
2.7.4

* [PATCHv3 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 2/6] mm/cma: Cleanup highmem check Laura Abbott
@ 2016-11-18  1:16 ` Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 4/6] arm64: Add cast for virt_to_pfn Laura Abbott
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


Several x_to_y conversion macros exist outside the bounds of the
__ASSEMBLY__ guard. Move them inside it in preparation for
CONFIG_DEBUG_VIRTUAL support.
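
To sketch the motivation (based on the shape of the final patch in this
series): with DEBUG_VIRTUAL enabled, __virt_to_phys becomes an
out-of-line C function, declared roughly as

	extern unsigned long __virt_to_phys(unsigned long x);

which cannot be parsed when memory.h is included from assembly, so the
conversion macros need to live inside the #ifndef __ASSEMBLY__ block.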

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: No change
---
 arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b71086d..b4d2b32 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -102,25 +102,6 @@
 #endif
 
 /*
- * Physical vs virtual RAM address space conversion.  These are
- * private definitions which should NOT be used outside memory.h
- * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
- */
-#define __virt_to_phys(x) ({						\
-	phys_addr_t __x = (phys_addr_t)(x);				\
-	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
-				 (__x - kimage_voffset); })
-
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
-#define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
-
-/*
- * Convert a page to/from a physical address
- */
-#define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
-#define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))
-
-/*
  * Memory types available.
  */
 #define MT_DEVICE_nGnRnE	0
@@ -182,6 +163,25 @@ extern u64			kimage_voffset;
 #define PHYS_PFN_OFFSET	(PHYS_OFFSET >> PAGE_SHIFT)
 
 /*
+ * Physical vs virtual RAM address space conversion.  These are
+ * private definitions which should NOT be used outside memory.h
+ * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
+ */
+#define __virt_to_phys(x) ({						\
+	phys_addr_t __x = (phys_addr_t)(x);				\
+	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
+				 (__x - kimage_voffset); })
+
+#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
+
+/*
+ * Convert a page to/from a physical address
+ */
+#define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
+#define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))
+
+/*
  * Note: Drivers should NOT use these.  They are the wrong
  * translation for translating DMA addresses.  Use the driver
  * DMA support - see dma-mapping.h.
-- 
2.7.4

* [PATCHv3 4/6] arm64: Add cast for virt_to_pfn
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (2 preceding siblings ...)
  2016-11-18  1:16 ` [PATCHv3 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
@ 2016-11-18  1:16 ` Laura Abbott
  2016-11-18  1:16 ` [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols Laura Abbott
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


virt_to_pfn lacks a cast at the top level, relying instead on the cast
inside __virt_to_phys. Cast explicitly to unsigned long.
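
A hedged illustration of why the cast matters: once __virt_to_phys is an
out-of-line function taking an unsigned long (see the final patch), an
uncast pointer argument would warn about an implicit pointer-to-integer
conversion:

	void *va = page_address(page);
	unsigned long pfn = virt_to_pfn(va);	/* fine with the cast in place */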

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: No change
---
 arch/arm64/include/asm/memory.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b4d2b32..d773e2c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -204,7 +204,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
-#define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys(x))
+#define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
 
 /*
  *  virt_to_page(k)	convert a _valid_ virtual address to struct page *
-- 
2.7.4

* [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (3 preceding siblings ...)
  2016-11-18  1:16 ` [PATCHv3 4/6] arm64: Add cast for virt_to_pfn Laura Abbott
@ 2016-11-18  1:16 ` Laura Abbott
  2016-11-18 14:35   ` Mark Rutland
  2016-11-18  1:16 ` [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
  2016-11-18 17:57 ` [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Mark Rutland
  6 siblings, 1 reply; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


__pa_symbol is technically the macro that should be used for kernel
symbols. Switch to it as a prerequisite for DEBUG_VIRTUAL, which
will do bounds checking.
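
Two representative conversions, for illustration (both taken from this
patch): __pa_symbol translates an image symbol directly, and the new
__lm_sym_addr helper is for code that needs a symbol's linear map alias:

	unsigned long ttbr = __pa_symbol(empty_zero_page);
	cpu_replace_ttbr1(__lm_sym_addr(swapper_pg_dir));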

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: Conversion of more sites besides just _end. Addition of the
__lm_sym_addr macro to take care of the __va(__pa_symbol(..)) idiom.

Note that a copy of __pa_symbol was added to avoid a mess of headers,
since the #ifndef __pa_symbol case is defined in linux/mm.h
---
 arch/arm64/include/asm/kvm_mmu.h          |  4 ++--
 arch/arm64/include/asm/memory.h           |  6 ++++++
 arch/arm64/include/asm/mmu_context.h      |  6 +++---
 arch/arm64/include/asm/pgtable.h          |  2 +-
 arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
 arch/arm64/kernel/cpufeature.c            |  2 +-
 arch/arm64/kernel/hibernate.c             |  9 +++------
 arch/arm64/kernel/insn.c                  |  2 +-
 arch/arm64/kernel/psci.c                  |  2 +-
 arch/arm64/kernel/setup.c                 |  8 ++++----
 arch/arm64/kernel/smp_spin_table.c        |  2 +-
 arch/arm64/kernel/vdso.c                  |  4 ++--
 arch/arm64/mm/init.c                      | 11 ++++++-----
 arch/arm64/mm/mmu.c                       | 24 ++++++++++++------------
 drivers/firmware/psci.c                   |  2 +-
 15 files changed, 45 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6f72fe8..79a472c 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -47,7 +47,7 @@
  * If the page is in the bottom half, we have to use the top half. If
  * the page is in the top half, we have to use the bottom half:
  *
- * T = __virt_to_phys(__hyp_idmap_text_start)
+ * T = __pa_symbol(__hyp_idmap_text_start)
  * if (T & BIT(VA_BITS - 1))
  *	HYP_VA_MIN = 0  //idmap in upper half
  * else
@@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
 }
 
-#define kvm_virt_to_phys(x)		__virt_to_phys((unsigned long)(x))
+#define kvm_virt_to_phys(x)		__pa_symbol((unsigned long)(x))
 
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d773e2c..1e65299 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -202,11 +202,17 @@ static inline void *phys_to_virt(phys_addr_t x)
  * Drivers should NOT use these either.
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
+#define __pa_symbol(x)		__pa(RELOC_HIDE((unsigned long)(x), 0))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
 
 /*
+ * translates a kernel image symbol address into its linear alias.
+ */
+#define __lm_sym_addr(x)	__va(__pa_symbol(x))
+
+/*
  *  virt_to_page(k)	convert a _valid_ virtual address to struct page *
  *  virt_addr_valid(k)	indicates whether a virtual address is valid
  */
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a501853..0322322 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -44,7 +44,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
  */
 static inline void cpu_set_reserved_ttbr0(void)
 {
-	unsigned long ttbr = virt_to_phys(empty_zero_page);
+	unsigned long ttbr = __pa_symbol(empty_zero_page);
 
 	write_sysreg(ttbr, ttbr0_el1);
 	isb();
@@ -113,7 +113,7 @@ static inline void cpu_install_idmap(void)
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
 
-	cpu_switch_mm(idmap_pg_dir, &init_mm);
+	cpu_switch_mm(__lm_sym_addr(idmap_pg_dir), &init_mm);
 }
 
 /*
@@ -128,7 +128,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
 
 	phys_addr_t pgd_phys = virt_to_phys(pgd);
 
-	replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);
+	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
 	replace_phys(pgd_phys);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ffbb9a5..c2041a3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
  * for zero-mapped memory areas etc..
  */
 extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
+#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))
 
 #define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte_val(pte))
 
diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index a32b401..df58310 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -109,7 +109,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * that read this address need to convert this address to the
 	 * Boot-Loader's endianness before jumping.
 	 */
-	writeq_relaxed(__pa(secondary_entry), &mailbox->entry_point);
+	writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point);
 	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
 
 	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c02504e..6ccadf2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -736,7 +736,7 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
 			   int __unused)
 {
-	phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start);
+	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
 
 	/*
 	 * Activate the lower HYP offset only if:
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index d55a7b0..7103b8b 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -50,9 +50,6 @@
  */
 extern int in_suspend;
 
-/* Find a symbols alias in the linear map */
-#define LMADDR(x)	phys_to_virt(virt_to_phys(x))
-
 /* Do we need to reset el2? */
 #define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
 
@@ -125,12 +122,12 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 		return -EOVERFLOW;
 
 	arch_hdr_invariants(&hdr->invariants);
-	hdr->ttbr1_el1		= virt_to_phys(swapper_pg_dir);
+	hdr->ttbr1_el1		= __pa_symbol(swapper_pg_dir);
 	hdr->reenter_kernel	= _cpu_resume;
 
 	/* We can't use __hyp_get_vectors() because kvm may still be loaded */
 	if (el2_reset_needed())
-		hdr->__hyp_stub_vectors = virt_to_phys(__hyp_stub_vectors);
+		hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors);
 	else
 		hdr->__hyp_stub_vectors = 0;
 
@@ -484,7 +481,7 @@ int swsusp_arch_resume(void)
 	 * Since we only copied the linear map, we need to find restore_pblist's
 	 * linear map address.
 	 */
-	lm_restore_pblist = LMADDR(restore_pblist);
+	lm_restore_pblist = __lm_sym_addr(restore_pblist);
 
 	/*
 	 * We need a zero page that is zero before & after resume in order to
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 6f2ac4f..af8967a 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
 	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
 		page = vmalloc_to_page(addr);
 	else if (!module)
-		page = pfn_to_page(PHYS_PFN(__pa(addr)));
+		page = pfn_to_page(PHYS_PFN(__pa_symbol(addr)));
 	else
 		return addr;
 
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index 42816be..f0f2abb 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -45,7 +45,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)
 
 static int cpu_psci_cpu_boot(unsigned int cpu)
 {
-	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
+	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
 	if (err)
 		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
 
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index f534f49..e2dbc02 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -199,10 +199,10 @@ static void __init request_standard_resources(void)
 	struct memblock_region *region;
 	struct resource *res;
 
-	kernel_code.start   = virt_to_phys(_text);
-	kernel_code.end     = virt_to_phys(__init_begin - 1);
-	kernel_data.start   = virt_to_phys(_sdata);
-	kernel_data.end     = virt_to_phys(_end - 1);
+	kernel_code.start   = __pa_symbol(_text);
+	kernel_code.end     = __pa_symbol(__init_begin - 1);
+	kernel_data.start   = __pa_symbol(_sdata);
+	kernel_data.end     = __pa_symbol(_end - 1);
 
 	for_each_memblock(memory, region) {
 		res = alloc_bootmem_low(sizeof(*res));
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 9a00eee..25fccca 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -98,7 +98,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * boot-loader's endianess before jumping. This is mandated by
 	 * the boot protocol.
 	 */
-	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
+	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
 	__flush_dcache_area((__force void *)release_addr,
 			    sizeof(*release_addr));
 
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index a2c2478..791e87a 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -140,11 +140,11 @@ static int __init vdso_init(void)
 		return -ENOMEM;
 
 	/* Grab the vDSO data page. */
-	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
+	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa_symbol(vdso_data)));
 
 	/* Grab the vDSO code pages. */
 	for (i = 0; i < vdso_pages; i++)
-		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
+		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
 
 	vdso_spec[0].pages = &vdso_pagelist[0];
 	vdso_spec[1].pages = &vdso_pagelist[1];
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 212c4d1..a2981a7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -209,8 +209,8 @@ void __init arm64_memblock_init(void)
 	 * linear mapping. Take care not to clip the kernel which may be
 	 * high in memory.
 	 */
-	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
-			ULLONG_MAX);
+	memblock_remove(max_t(u64, memstart_addr + linear_region_size,
+			__pa_symbol(_end)), ULLONG_MAX);
 	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
 		/* ensure that memstart_addr remains sufficiently aligned */
 		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
@@ -225,7 +225,7 @@ void __init arm64_memblock_init(void)
 	 */
 	if (memory_limit != (phys_addr_t)ULLONG_MAX) {
 		memblock_mem_limit_remove_map(memory_limit);
-		memblock_add(__pa(_text), (u64)(_end - _text));
+		memblock_add(__pa_symbol(_text), (u64)(_end - _text));
 	}
 
 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
@@ -278,7 +278,7 @@ void __init arm64_memblock_init(void)
 	 * Register the kernel text, kernel data, initrd, and initial
 	 * pagetables with memblock.
 	 */
-	memblock_reserve(__pa(_text), _end - _text);
+	memblock_reserve(__pa_symbol(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start) {
 		memblock_reserve(initrd_start, initrd_end - initrd_start);
@@ -483,7 +483,8 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
+	free_reserved_area(__lm_sym_addr(__init_begin),
+			   __lm_sym_addr(__init_end),
 			   0, "unused kernel");
 	/*
 	 * Unmap the __init region but leave the VM area in place. This
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 05615a3..cf04157 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -319,8 +319,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
-	unsigned long kernel_start = __pa(_text);
-	unsigned long kernel_end = __pa(__init_begin);
+	unsigned long kernel_start = __pa_symbol(_text);
+	unsigned long kernel_end = __pa_symbol(__init_begin);
 
 	/*
 	 * Take care not to create a writable alias for the
@@ -387,21 +387,21 @@ void mark_rodata_ro(void)
 	unsigned long section_size;
 
 	section_size = (unsigned long)_etext - (unsigned long)_text;
-	create_mapping_late(__pa(_text), (unsigned long)_text,
+	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
 			    section_size, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
-	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
+	create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 }
 
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 				      pgprot_t prot, struct vm_struct *vma)
 {
-	phys_addr_t pa_start = __pa(va_start);
+	phys_addr_t pa_start = __pa_symbol(va_start);
 	unsigned long size = va_end - va_start;
 
 	BUG_ON(!PAGE_ALIGNED(pa_start));
@@ -449,7 +449,7 @@ static void __init map_kernel(pgd_t *pgd)
 		 */
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
-			__pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
+			__pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE));
 		pud_clear_fixmap();
 	} else {
 		BUG();
@@ -480,7 +480,7 @@ void __init paging_init(void)
 	 */
 	cpu_replace_ttbr1(__va(pgd_phys));
 	memcpy(swapper_pg_dir, pgd, PAGE_SIZE);
-	cpu_replace_ttbr1(swapper_pg_dir);
+	cpu_replace_ttbr1(__lm_sym_addr(swapper_pg_dir));
 
 	pgd_clear_fixmap();
 	memblock_free(pgd_phys, PAGE_SIZE);
@@ -489,7 +489,7 @@ void __init paging_init(void)
 	 * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
 	 * allocated with it.
 	 */
-	memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
+	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
 		      SWAPPER_DIR_SIZE - PAGE_SIZE);
 }
 
@@ -609,7 +609,7 @@ void __init early_fixmap_init(void)
 
 	pgd = pgd_offset_k(addr);
 	if (CONFIG_PGTABLE_LEVELS > 3 &&
-	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
+	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
 		/*
 		 * We only end up here if the kernel mapping and the fixmap
 		 * share the top level pgd entry, which should only happen on
@@ -618,12 +618,12 @@ void __init early_fixmap_init(void)
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		pud = pud_offset_kimg(pgd, addr);
 	} else {
-		pgd_populate(&init_mm, pgd, bm_pud);
+		pgd_populate(&init_mm, pgd, __lm_sym_addr(bm_pud));
 		pud = fixmap_pud(addr);
 	}
-	pud_populate(&init_mm, pud, bm_pmd);
+	pud_populate(&init_mm, pud, __lm_sym_addr(bm_pmd));
 	pmd = fixmap_pmd(addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	pmd_populate_kernel(&init_mm, pmd, __lm_sym_addr(bm_pte));
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 8263429..9defbe2 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
 	u32 *state = __this_cpu_read(psci_power_state);
 
 	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+				    __pa_symbol(cpu_resume));
 }
 
 int psci_cpu_suspend_enter(unsigned long index)
-- 
2.7.4

* [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (4 preceding siblings ...)
  2016-11-18  1:16 ` [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols Laura Abbott
@ 2016-11-18  1:16 ` Laura Abbott
  2016-11-18 17:53   ` Mark Rutland
  2016-11-18 18:25   ` Mark Rutland
  2016-11-18 17:57 ` [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Mark Rutland
  6 siblings, 2 replies; 18+ messages in thread
From: Laura Abbott @ 2016-11-18  1:16 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks
on virt_to_phys calls. The goal is to immediately catch users who are
calling virt_to_phys on non-linear addresses. This includes callers
using virt_to_phys on image addresses instead of __pa_symbol. As features
such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
important. Add checks to catch bad virt_to_phys usage.
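
As a hedged illustration (usage sketch, not code from this patch):

	phys_addr_t p = virt_to_phys(_text);	/* image address: now WARNs */
	phys_addr_t q = __pa_symbol(_text);	/* correct form for symbols */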

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: Make use of __pa_symbol required via debug checks. It's a WARN for now but
it can become a BUG after wider testing. __virt_to_phys is
now for linear addresses only. Dropped the VM_BUG_ON from the nodebug
version of Catalin's suggested code, since having it there essentially
made the nodebug version the debug version. Changed to KERNEL_START/KERNEL_END
for bounds checking. More comments.
---
 arch/arm64/Kconfig              |  1 +
 arch/arm64/include/asm/memory.h | 32 ++++++++++++++++++++++++++++----
 arch/arm64/mm/Makefile          |  1 +
 arch/arm64/mm/physaddr.c        | 39 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/mm/physaddr.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 969ef88..83b95bc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -6,6 +6,7 @@ config ARM64
 	select ACPI_MCFG if ACPI
 	select ACPI_SPCR_TABLE if ACPI
 	select ARCH_CLOCKSOURCE_DATA
+	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
 	select ARCH_HAS_ELF_RANDOMIZE
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 1e65299..2ed712e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -167,10 +167,33 @@ extern u64			kimage_voffset;
  * private definitions which should NOT be used outside memory.h
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x) ({						\
+
+
+/*
+ * This is for translation from the standard linear map to physical addresses.
+ * It is not to be used for kernel symbols.
+ */
+#define __virt_to_phys_nodebug(x) ({					\
 	phys_addr_t __x = (phys_addr_t)(x);				\
-	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
-				 (__x - kimage_voffset); })
+	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
+})
+
+/*
+ * This is for translation from a kernel image/symbol address to a
+ * physical address.
+ */
+#define __pa_symbol_nodebug(x) ({					\
+	phys_addr_t __x = (phys_addr_t)(x);				\
+	(__x - kimage_voffset);						\
+})
+
+#ifdef CONFIG_DEBUG_VIRTUAL
+extern unsigned long __virt_to_phys(unsigned long x);
+extern unsigned long __phys_addr_symbol(unsigned long x);
+#else
+#define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
+#define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
+#endif
 
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
@@ -202,7 +225,8 @@ static inline void *phys_to_virt(phys_addr_t x)
  * Drivers should NOT use these either.
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
-#define __pa_symbol(x)		__pa(RELOC_HIDE((unsigned long)(x), 0))
+#define __pa_symbol(x)		__phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
+#define __pa_nodebug(x)		__virt_to_phys_nodebug((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 54bb209..0d37c19 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -5,6 +5,7 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
 obj-$(CONFIG_NUMA)		+= numa.o
+obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
new file mode 100644
index 0000000..f8eb781
--- /dev/null
+++ b/arch/arm64/mm/physaddr.c
@@ -0,0 +1,39 @@
+#include <linux/mm.h>
+
+#include <asm/memory.h>
+
+unsigned long __virt_to_phys(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	if (__x & BIT(VA_BITS - 1)) {
+		/*
+		 * The linear kernel range starts in the middle of the virtual
+		 * address space. Testing the top bit for the start of the
+		 * region is a sufficient check.
+		 */
+		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
+	} else {
+		/*
+		 * __virt_to_phys should not be used on symbol addresses.
+		 * This should be changed to a BUG once all basic bad uses have
+		 * been cleaned up.
+		 */
+		WARN(1, "Do not use virt_to_phys on symbol addresses");
+		return __phys_addr_symbol(x);
+	}
+}
+EXPORT_SYMBOL(__virt_to_phys);
+
+unsigned long __phys_addr_symbol(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	/*
+	 * This is bounds checking against the kernel image only.
+	 * __pa_symbol should only be used on kernel symbol addresses.
+	 */
+	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START || x > (unsigned long) KERNEL_END);
+	return (__x - kimage_voffset);
+}
+EXPORT_SYMBOL(__phys_addr_symbol);
-- 
2.7.4

* Re: [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
  2016-11-18  1:16 ` [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-18  8:25   ` Ingo Molnar
  0 siblings, 0 replies; 18+ messages in thread
From: Ingo Molnar @ 2016-11-18  8:25 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Mark Rutland,
	Ard Biesheuvel, Will Deacon, Catalin Marinas, x86, linux-kernel,
	linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim,
	linux-arm-kernel


* Laura Abbott <labbott@redhat.com> wrote:

> 
> DEBUG_VIRTUAL currently depends on DEBUG_KERNEL && X86. arm64 is getting
> the same support. Rather than add a list of architectures, switch this
> to ARCH_HAS_DEBUG_VIRTUAL and let architectures select it as
> appropriate.
> 
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
> v3: No change; x86 maintainers, please ack if you are okay with this.
> ---
>  arch/x86/Kconfig  | 1 +
>  lib/Kconfig.debug | 5 ++++-
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index bada636..f533321 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -23,6 +23,7 @@ config X86
>  	select ARCH_CLOCKSOURCE_DATA
>  	select ARCH_DISCARD_MEMBLOCK
>  	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
> +	select ARCH_HAS_DEBUG_VIRTUAL
>  	select ARCH_HAS_DEVMEM_IS_ALLOWED
>  	select ARCH_HAS_ELF_RANDOMIZE
>  	select ARCH_HAS_FAST_MULTIPLIER

Acked-by: Ingo Molnar <mingo@kernel.org>

Thanks,

	Ingo

* Re: [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols
  2016-11-18  1:16 ` [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols Laura Abbott
@ 2016-11-18 14:35   ` Mark Rutland
  2016-11-18 16:46     ` Mark Rutland
  2016-11-21 17:40     ` Laura Abbott
  0 siblings, 2 replies; 18+ messages in thread
From: Mark Rutland @ 2016-11-18 14:35 UTC (permalink / raw)
  To: Laura Abbott, lorenzo.pieralisi
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

Hi Laura,

On Thu, Nov 17, 2016 at 05:16:55PM -0800, Laura Abbott wrote:
> 
> __pa_symbol is technically the macro that should be used for kernel
> symbols. Switch to it as a prerequisite for DEBUG_VIRTUAL, which
> will do bounds checking.
> 
> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
> v3: Conversion of more sites besides just _end. Addition of the
> __lm_sym_addr macro to take care of the __va(__pa_symbol(..)) idiom.
> 
> Note that a copy of __pa_symbol was added to avoid a mess of headers,
> since the #ifndef __pa_symbol case is defined in linux/mm.h

I think we also need to fix up virt_to_phys(__cpu_soft_restart) in
arch/arm64/kernel/cpu-reset.h. Otherwise, this looks complete for uses
falling under arch/arm64/.
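
(A hedged sketch of that fixup, assuming it follows the same pattern as
the rest of this patch:

	restart = (void *)__pa_symbol(__cpu_soft_restart);

in place of the existing virt_to_phys() call.)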

I also think it's worth mentioning in the commit message that this patch
adds __lm_sym_addr() and uses it in some places so that low-level
helpers can use virt_to_phys() or __pa() consistently.

The PSCI change doesn't conflict with patches [1] that'll go via
arm-soc, so I'm happy for that PSCI change to go via the arm64 tree,
though it may be worth splitting into its own patch just in case
something unexpected crops up.

With those fixed up:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/466522.html

Otherwise, I just have a few nits below.

> @@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
>  	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
>  }
>  
> -#define kvm_virt_to_phys(x)		__virt_to_phys((unsigned long)(x))
> +#define kvm_virt_to_phys(x)		__pa_symbol((unsigned long)(x))

Nit: we can drop the unsigned long cast given __pa_symbol() contains
one.

> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ffbb9a5..c2041a3 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
>   * for zero-mapped memory areas etc..
>   */
>  extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
> -#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
> +#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))

Nit: I think we can also simplify this to:

	phys_to_page(__pa_symbol(empty_zero_page))

... since phys_to_page(p) is (pfn_to_page(__phys_to_pfn(p)))
... and __phys_to_pfn(p) is PHYS_PFN(p)

> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 6f2ac4f..af8967a 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
>  	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
>  		page = vmalloc_to_page(addr);
>  	else if (!module)
> -		page = pfn_to_page(PHYS_PFN(__pa(addr)));
> +		page = pfn_to_page(PHYS_PFN(__pa_symbol(addr)));

Nit: likewise, we can use phys_to_page() here.

> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index a2c2478..791e87a 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>  		return -ENOMEM;
>  
>  	/* Grab the vDSO data page. */
> -	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
> +	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa_symbol(vdso_data)));

Nit: phys_to_page() again.

>  
>  	/* Grab the vDSO code pages. */
>  	for (i = 0; i < vdso_pages; i++)
> -		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
> +		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);

Nit: phys_to_page() again.

Thanks,
Mark.

* Re: [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols
  2016-11-18 14:35   ` Mark Rutland
@ 2016-11-18 16:46     ` Mark Rutland
  2016-11-21 17:40     ` Laura Abbott
  1 sibling, 0 replies; 18+ messages in thread
From: Mark Rutland @ 2016-11-18 16:46 UTC (permalink / raw)
  To: Laura Abbott, lorenzo.pieralisi
  Cc: Ard Biesheuvel, Catalin Marinas, x86, Will Deacon, linux-kernel,
	linux-mm, Ingo Molnar, H. Peter Anvin, Joonsoo Kim,
	Thomas Gleixner, Andrew Morton, linux-arm-kernel,
	Marek Szyprowski

On Fri, Nov 18, 2016 at 02:35:44PM +0000, Mark Rutland wrote:
> Hi Laura,
> 
> On Thu, Nov 17, 2016 at 05:16:55PM -0800, Laura Abbott wrote:
> > 
> > __pa_symbol is technically the macro that should be used for kernel
> > symbols. Switch to it as a prerequisite for DEBUG_VIRTUAL, which
> > will do bounds checking.
> > 
> > Signed-off-by: Laura Abbott <labbott@redhat.com>
> > ---
> > v3: Conversion of more sites besides just _end. Addition of the
> > __lm_sym_addr macro to take care of the __va(__pa_symbol(..)) idiom.
> > 
> > Note that a copy of __pa_symbol was added to avoid a mess of headers,
> > since the #ifndef __pa_symbol case is defined in linux/mm.h
> 
> I think we also need to fix up virt_to_phys(__cpu_soft_restart) in
> arch/arm64/kernel/cpu-reset.h. Otherwise, this looks complete for uses
> falling under arch/arm64/.

I think I spoke too soon. :(

In the kasan code, the uses of tmp_pg_dir and kasan_zero_{page,pte,pmd,pud}
all need to be vetted, as those are in the image, but get passed directly to
functions which will end up doing a virt_to_phys behind the scenes (e.g.
cpu_replace_ttbr1(), pmd_populate_kernel()).
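
(Presumably those get the same treatment as the rest of this series,
e.g. something like

	cpu_replace_ttbr1(__lm_sym_addr(tmp_pg_dir));

though the kasan early-init code may need extra care.)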

There's also some virt_to_pfn(<symbol>) usage that needs to be fixed up
in arch/arm64/kernel/hibernate.c.

... there's also more of that in common kernel code. :(

Thanks,
Mark.

* Re: [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-18  1:16 ` [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-18 17:53   ` Mark Rutland
  2016-11-18 18:42     ` Laura Abbott
  2016-11-18 18:25   ` Mark Rutland
  1 sibling, 1 reply; 18+ messages in thread
From: Mark Rutland @ 2016-11-18 17:53 UTC (permalink / raw)
  To: Laura Abbott, Catalin Marinas
  Cc: Ard Biesheuvel, Will Deacon, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton,
	Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

Hi,

On Thu, Nov 17, 2016 at 05:16:56PM -0800, Laura Abbott wrote:
> 
> x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks
> on virt_to_phys calls. The goal is to immediately catch users who are
> calling virt_to_phys on non-linear addresses. This includes callers
> using virt_to_phys on image addresses instead of __pa_symbol. As features
> such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
> important. Add checks to catch bad virt_to_phys usage.
> 
> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
> v3: Make use of __pa_symbol required via debug checks. It's a WARN for now but
> it can become a BUG after wider testing. __virt_to_phys is
> now for linear addresses only. Dropped the VM_BUG_ON from the nodebug
> version of Catalin's suggested code, since having it there essentially
> made the nodebug version the debug version. Changed to KERNEL_START/KERNEL_END
> for bounds checking. More comments.

I gave this a go with DEBUG_VIRTUAL && KASAN_INLINE selected, and the
kernel dies somewhere before bringing up earlycon. :(

I mentioned some possible reasons in a reply to patch 5, and I have
some more comments below.

[...]

> -#define __virt_to_phys(x) ({						\
> +
> +
> +/*
> + * This is for translation from the standard linear map to physical addresses.
> + * It is not to be used for kernel symbols.
> + */
> +#define __virt_to_phys_nodebug(x) ({					\
>  	phys_addr_t __x = (phys_addr_t)(x);				\
> -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
> -				 (__x - kimage_voffset); })
> +	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
> +})

Given the KASAN failure, and the strong possibility that there's even
more stuff lurking in common code, I think we should retain the logic to
handle kernel image addresses for the time being (as x86 does). Once
we've merged DEBUG_VIRTUAL, it will be easier to track those down.

Catalin, I think you suggested removing that logic; are you happy for it
to be restored?

See below for a refactoring that retains this logic.

[...]

> +/*
> + * This is for translation from a kernel image/symbol address to a
> + * physical address.
> + */
> +#define __pa_symbol_nodebug(x) ({					\
> +	phys_addr_t __x = (phys_addr_t)(x);				\
> +	(__x - kimage_voffset);						\
> +})

We can avoid duplication here (and in physaddr.c) if we factor the logic
into helpers, e.g.

/*
 * The linear kernel range starts in the middle of the virtual address
 * space. Testing the top bit for the start of the region is a
 * sufficient check.
 */
#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))

#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
#define __kimg_to_phys(addr)	((addr) - kimage_voffset)

#define __virt_to_phys_nodebug(x) ({					\
	phys_addr_t __x = (phys_addr_t)(x);				\
	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
			       __kimg_to_phys(__x);			\
})

#define __pa_symbol_nodebug(x)	__kimg_to_phys((phys_addr_t)(x))

> +#ifdef CONFIG_DEBUG_VIRTUAL
> +extern unsigned long __virt_to_phys(unsigned long x);
> +extern unsigned long __phys_addr_symbol(unsigned long x);

It would be better for both of these to return phys_addr_t.

[...]

> diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
> new file mode 100644
> index 0000000..f8eb781
> --- /dev/null
> +++ b/arch/arm64/mm/physaddr.c
> @@ -0,0 +1,39 @@
> +#include <linux/mm.h>
> +
> +#include <asm/memory.h>

We also need:

#include <linux/bug.h>
#include <linux/export.h>
#include <linux/types.h>
#include <linux/mmdebug.h>

> +unsigned long __virt_to_phys(unsigned long x)
> +{
> +	phys_addr_t __x = (phys_addr_t)x;
> +
> +	if (__x & BIT(VA_BITS - 1)) {
> +		/*
> +		 * The linear kernel range starts in the middle of the virtual
> +		 * address space. Testing the top bit for the start of the
> +		 * region is a sufficient check.
> +		 */
> +		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
> +	} else {
> +		/*
> +		 * __virt_to_phys should not be used on symbol addresses.
> +		 * This should be changed to a BUG once all basic bad uses have
> +		 * been cleaned up.
> +		 */
> +		WARN(1, "Do not use virt_to_phys on symbol addresses");
> +		return __phys_addr_symbol(x);
> +	}
> +}
> +EXPORT_SYMBOL(__virt_to_phys);

I think this would be better as something like:

phys_addr_t __virt_to_phys(unsigned long x)
{
	WARN(!__is_lm_address(x),
	     "virt_to_phys() used for non-linear address: %pK\n",
	     (void*)x);
	
	return __virt_to_phys_nodebug(x);
}
EXPORT_SYMBOL(__virt_to_phys);

> +
> +unsigned long __phys_addr_symbol(unsigned long x)
> +{
> +	phys_addr_t __x = (phys_addr_t)x;
> +
> +	/*
> +	 * This is bounds checking against the kernel image only.
> +	 * __pa_symbol should only be used on kernel symbol addresses.
> +	 */
> +	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START || x > (unsigned long) KERNEL_END);
> +	return (__x - kimage_voffset);
> +}
> +EXPORT_SYMBOL(__phys_addr_symbol);

Similarly:

phys_addr_t __phys_addr_symbol(unsigned long x)
{
	/*
	 * This is bounds checking against the kernel image only.
	 * __pa_symbol should only be used on kernel symbol addresses.
	 */
	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
		       x > (unsigned long) KERNEL_END);

	return __pa_symbol_nodebug(x);
}
EXPORT_SYMBOL(__phys_addr_symbol);

Thanks,
Mark.

* Re: [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64
  2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (5 preceding siblings ...)
  2016-11-18  1:16 ` [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-18 17:57 ` Mark Rutland
  6 siblings, 0 replies; 18+ messages in thread
From: Mark Rutland @ 2016-11-18 17:57 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Thu, Nov 17, 2016 at 05:16:50PM -0800, Laura Abbott wrote:
> Hi,

Hi,

Thanks again for putting this together.

> This is v3 of the series to add CONFIG_DEBUG_VIRTUAL for arm64.
> The biggest change from v2 is the conversion of more __pa sites
> to __pa_symbol for stricter checks.

Patches 1-4 look good to me, and I've given them a spin in a few
configurations on arm64. For those:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>

Patches 5 and 6 look like they're mostly there, but there are still a
few issues which I've commented on. Hopefully those aren't too painful
to sort out; it would be great to have this available.

Thanks,
Mark.

* Re: [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-18  1:16 ` [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
  2016-11-18 17:53   ` Mark Rutland
@ 2016-11-18 18:25   ` Mark Rutland
  1 sibling, 0 replies; 18+ messages in thread
From: Mark Rutland @ 2016-11-18 18:25 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Thu, Nov 17, 2016 at 05:16:56PM -0800, Laura Abbott wrote:
> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> index 54bb209..0d37c19 100644
> --- a/arch/arm64/mm/Makefile
> +++ b/arch/arm64/mm/Makefile
> @@ -5,6 +5,7 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
>  obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
>  obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
>  obj-$(CONFIG_NUMA)		+= numa.o
> +obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o

We'll also need:

KASAN_SANITIZE_physaddr.o	:= n

... or code prior to KASAN init will cause the kernel to die if
__virt_to_phys() or __phys_addr_symbol() are called.

>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>  KASAN_SANITIZE_kasan_init.o	:= n

Thanks,
Mark.

* Re: [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-18 17:53   ` Mark Rutland
@ 2016-11-18 18:42     ` Laura Abbott
  2016-11-18 19:05       ` Mark Rutland
  0 siblings, 1 reply; 18+ messages in thread
From: Laura Abbott @ 2016-11-18 18:42 UTC (permalink / raw)
  To: Mark Rutland, Catalin Marinas
  Cc: Ard Biesheuvel, Will Deacon, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86, linux-kernel, linux-mm, Andrew Morton,
	Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On 11/18/2016 09:53 AM, Mark Rutland wrote:
> Hi,
> 
> On Thu, Nov 17, 2016 at 05:16:56PM -0800, Laura Abbott wrote:
>>
>> x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks
>> on virt_to_phys calls. The goal is to immediately catch users who are
>> calling virt_to_phys on non-linear addresses. This includes callers
>> using virt_to_phys on image addresses instead of __pa_symbol. As features
>> such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
>> important. Add checks to catch bad virt_to_phys usage.
>>
>> Signed-off-by: Laura Abbott <labbott@redhat.com>
>> ---
>> v3: Make use of __pa_symbol required via debug checks. It's a WARN for now but
>> it can become a BUG after wider testing. __virt_to_phys is
>> now for linear addresses only. Dropped the VM_BUG_ON from the nodebug
>> version of Catalin's suggested code, since having it there essentially
>> made the nodebug version the debug version. Changed to KERNEL_START/KERNEL_END
>> for bounds checking. More comments.
> 
> I gave this a go with DEBUG_VIRTUAL && KASAN_INLINE selected, and the
> kernel dies somewhere before bringing up earlycon. :(
> 
> I mentioned some possible reasons in a reply to pastch 5, and I have
> some more comments below.
> 
> [...]
> 
>> -#define __virt_to_phys(x) ({						\
>> +
>> +
>> +/*
>> + * This is for translation from the standard linear map to physical addresses.
>> + * It is not to be used for kernel symbols.
>> + */
>> +#define __virt_to_phys_nodebug(x) ({					\
>>  	phys_addr_t __x = (phys_addr_t)(x);				\
>> -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
>> -				 (__x - kimage_voffset); })
>> +	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
>> +})
> 
> Given the KASAN failure, and the strong possibility that there's even
> more stuff lurking in common code, I think we should retain the logic to
> handle kernel image addresses for the time being (as x86 does). Once
> we've merged DEBUG_VIRTUAL, it will be easier to track those down.
> 

Agreed. I might see about adding another option, DEBUG_STRICT_VIRTUAL,
for catching bad __pa vs __pa_symbol usage, and keeping DEBUG_VIRTUAL for
catching addresses that will work in neither case.

> Catalin, I think you suggested removing that logic; are you happy for it
> to be restored?
> 
> See below for a refactoring that retains this logic.
> 
> [...]
> 
>> +/*
>> + * This is for translation from a kernel image/symbol address to a
>> + * physical address.
>> + */
>> +#define __pa_symbol_nodebug(x) ({					\
>> +	phys_addr_t __x = (phys_addr_t)(x);				\
>> +	(__x - kimage_voffset);						\
>> +})
> 
> We can avoid duplication here (and in physaddr.c) if we factor the logic
> into helpers, e.g.
> 
> /*
>  * The linear kernel range starts in the middle of the virtual address
>  * space. Testing the top bit for the start of the region is a
>  * sufficient check.
>  */
> #define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
> 
> #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
> #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> 
> #define __virt_to_phys_nodebug(x) ({					\
> 	phys_addr_t __x = (phys_addr_t)(x);				\
> 	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
> 			       __kimg_to_phys(__x);			\
> })
> 
> #define __pa_symbol_nodebug(x)	__kimg_to_phys((phys_addr_t)(x))
> 

Yes, this is much cleaner

>> +#ifdef CONFIG_DEBUG_VIRTUAL
>> +extern unsigned long __virt_to_phys(unsigned long x);
>> +extern unsigned long __phys_addr_symbol(unsigned long x);
> 
> It would be better for both of these to return phys_addr_t.
> 
> [...]
> 

I was worried this would turn into another warning project, but at
first pass this works fine, and the type will hopefully catch
some bad uses elsewhere.

>> diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
>> new file mode 100644
>> index 0000000..f8eb781
>> --- /dev/null
>> +++ b/arch/arm64/mm/physaddr.c
>> @@ -0,0 +1,39 @@
>> +#include <linux/mm.h>
>> +
>> +#include <asm/memory.h>
> 
> We also need:
> 
> #include <linux/bug.h>
> #include <linux/export.h>
> #include <linux/types.h>
> #include <linux/mmdebug.h>
> 
>> +unsigned long __virt_to_phys(unsigned long x)
>> +{
>> +	phys_addr_t __x = (phys_addr_t)x;
>> +
>> +	if (__x & BIT(VA_BITS - 1)) {
>> +		/*
>> +		 * The linear kernel range starts in the middle of the virtual
>> +		 * adddress space. Testing the top bit for the start of the
>> +		 * address space. Testing the top bit for the start of the
>> +		 */
>> +		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
>> +	} else {
>> +		/*
>> +		 * __virt_to_phys should not be used on symbol addresses.
>> +		 * This should be changed to a BUG once all basic bad uses have
>> +		 * been cleaned up.
>> +		 */
>> +		WARN(1, "Do not use virt_to_phys on symbol addresses");
>> +		return __phys_addr_symbol(x);
>> +	}
>> +}
>> +EXPORT_SYMBOL(__virt_to_phys);
> 
> I think this would be better something like:
> 
> phys_addr_t __virt_to_phys(unsigned long x)
> {
> 	WARN(!__is_lm_address(x),
> 	     "virt_to_phys() used for non-linear address: %pK\n",
> 	     (void*)x);
> 	
> 	return __virt_to_phys_nodebug(x);
> }
> EXPORT_SYMBOL(__virt_to_phys);
> 
>> +
>> +unsigned long __phys_addr_symbol(unsigned long x)
>> +{
>> +	phys_addr_t __x = (phys_addr_t)x;
>> +
>> +	/*
>> +	 * This is bounds checking against the kernel image only.
>> +	 * __pa_symbol should only be used on kernel symbol addresses.
>> +	 */
>> +	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START || x > (unsigned long) KERNEL_END);
>> +	return (__x - kimage_voffset);
>> +}
>> +EXPORT_SYMBOL(__phys_addr_symbol);
> 
> Similarly:
> 
> phys_addr_t __phys_addr_symbol(unsigned long x)
> {
> 	/*
> 	 * This is bounds checking against the kernel image only.
> 	 * __pa_symbol should only be used on kernel symbol addresses.
> 	 */
> 	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
> 		       x > (unsigned long) KERNEL_END);
> 
> 	return __pa_symbol_nodebug(x);
> }
> EXPORT_SYMBOL(__phys_addr_symbol);
> 
> Thanks,
> Mark.
> 

Thanks,
Laura

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-18 18:42     ` Laura Abbott
@ 2016-11-18 19:05       ` Mark Rutland
  2016-11-18 19:17         ` Laura Abbott
  0 siblings, 1 reply; 18+ messages in thread
From: Mark Rutland @ 2016-11-18 19:05 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Catalin Marinas, Ard Biesheuvel, Will Deacon, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Fri, Nov 18, 2016 at 10:42:56AM -0800, Laura Abbott wrote:
> On 11/18/2016 09:53 AM, Mark Rutland wrote:
> > On Thu, Nov 17, 2016 at 05:16:56PM -0800, Laura Abbott wrote:

> >> +#define __virt_to_phys_nodebug(x) ({					\
> >>  	phys_addr_t __x = (phys_addr_t)(x);				\
> >> -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
> >> -				 (__x - kimage_voffset); })
> >> +	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
> >> +})
> > 
> > Given the KASAN failure, and the strong possibility that there's even
> > more stuff lurking in common code, I think we should retain the logic to
> > handle kernel image addresses for the time being (as x86 does). Once
> > we've merged DEBUG_VIRTUAL, it will be easier to track those down.
> 
> Agreed. I might see about adding another option, DEBUG_STRICT_VIRTUAL,
> for catching bad __pa vs __pa_symbol usage, and keeping DEBUG_VIRTUAL for
> catching addresses that would work in neither case.

I think it makes sense for DEBUG_VIRTUAL to do both, so long as the
default behaviour (and fallback after a WARN for virt_to_phys()) matches
what we currently do. We'll get useful diagnostics, but with a graceful
fallback.

I think the helpers I suggested below do that? Or have I misunderstood,
and you mean something stricter (e.g. checking whether an lm address is
backed by something)?
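
If you did mean the latter, here's a sketch of the sort of extra check
I'd imagine on top of the helper quoted below (pfn_valid() is just a
hedged stand-in for "backed by something"):

	/* Sketch: also warn on linear addresses with no backing memmap. */
	WARN(__is_lm_address(x) && !pfn_valid(PHYS_PFN(__lm_to_phys(x))),
	     "virt_to_phys() used for unbacked linear address: %pK\n",
	     (void *)x);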

> > phys_addr_t __virt_to_phys(unsigned long x)
> > {
> > 	WARN(!__is_lm_address(x),
> > 	     "virt_to_phys() used for non-linear address: %pK\n",
> > 	     (void*)x);
> > 	
> > 	return __virt_to_phys_nodebug(x);
> > }
> > EXPORT_SYMBOL(__virt_to_phys);

> > phys_addr_t __phys_addr_symbol(unsigned long x)
> > {
> > 	/*
> > 	 * This is bounds checking against the kernel image only.
> > 	 * __pa_symbol should only be used on kernel symbol addresses.
> > 	 */
> > 	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
> > 		       x > (unsigned long) KERNEL_END);
> > 
> > 	return __pa_symbol_nodebug(x);
> > }
> > EXPORT_SYMBOL(__phys_addr_symbol);

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-18 19:05       ` Mark Rutland
@ 2016-11-18 19:17         ` Laura Abbott
  0 siblings, 0 replies; 18+ messages in thread
From: Laura Abbott @ 2016-11-18 19:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Ard Biesheuvel, Will Deacon, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On 11/18/2016 11:05 AM, Mark Rutland wrote:
> On Fri, Nov 18, 2016 at 10:42:56AM -0800, Laura Abbott wrote:
>> On 11/18/2016 09:53 AM, Mark Rutland wrote:
>>> On Thu, Nov 17, 2016 at 05:16:56PM -0800, Laura Abbott wrote:
> 
>>>> +#define __virt_to_phys_nodebug(x) ({					\
>>>>  	phys_addr_t __x = (phys_addr_t)(x);				\
>>>> -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
>>>> -				 (__x - kimage_voffset); })
>>>> +	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
>>>> +})
>>>
>>> Given the KASAN failure, and the strong possibility that there's even
>>> more stuff lurking in common code, I think we should retain the logic to
>>> handle kernel image addresses for the time being (as x86 does). Once
>>> we've merged DEBUG_VIRTUAL, it will be easier to track those down.
>>
>> Agreed. I might see about adding another option, DEBUG_STRICT_VIRTUAL,
>> for catching bad __pa vs __pa_symbol usage, and keeping DEBUG_VIRTUAL for
>> catching addresses that would work in neither case.
> 
> I think it makes sense for DEBUG_VIRTUAL to do both, so long as the
> default behaviour (and fallback after a WARN for virt_to_phys()) matches
> what we currently do. We'll get useful diagnostics, but with a graceful
> fallback.
> 

I was suggesting making the WARN optional so that this is more useful
before all the __pa_symbol usage gets cleaned up. Maybe the WARN won't
actually be a hindrance.
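
e.g. a sketch, with CONFIG_DEBUG_VIRTUAL_WARN as a hypothetical extra
Kconfig knob gating just the diagnostic:

	if (IS_ENABLED(CONFIG_DEBUG_VIRTUAL_WARN))
		WARN(!__is_lm_address(x),
		     "virt_to_phys() used for non-linear address: %pK\n",
		     (void *)x);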

Thanks,
Laura

> I think the helpers I suggested below do that? Or have I misunderstood,
> and you mean something stricter (e.g. checking whether an lm address is
> backed by something)?
> 
>>> phys_addr_t __virt_to_phys(unsigned long x)
>>> {
>>> 	WARN(!__is_lm_address(x),
>>> 	     "virt_to_phys() used for non-linear address: %pK\n",
>>> 	     (void*)x);
>>> 	
>>> 	return __virt_to_phys_nodebug(x);
>>> }
>>> EXPORT_SYMBOL(__virt_to_phys);
> 
>>> phys_addr_t __phys_addr_symbol(unsigned long x)
>>> {
>>> 	/*
>>> 	 * This is bounds checking against the kernel image only.
>>> 	 * __pa_symbol should only be used on kernel symbol addresses.
>>> 	 */
>>> 	VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
>>> 		       x > (unsigned long) KERNEL_END);
>>>
>>> 	return __pa_symbol_nodebug(x);
>>> }
>>> EXPORT_SYMBOL(__phys_addr_symbol);
> 
> Thanks,
> Mark.
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols
  2016-11-18 14:35   ` Mark Rutland
  2016-11-18 16:46     ` Mark Rutland
@ 2016-11-21 17:40     ` Laura Abbott
  2016-11-23  9:48       ` Mark Rutland
  1 sibling, 1 reply; 18+ messages in thread
From: Laura Abbott @ 2016-11-21 17:40 UTC (permalink / raw)
  To: Mark Rutland, lorenzo.pieralisi
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On 11/18/2016 06:35 AM, Mark Rutland wrote:
> Hi Laura,
> 
> On Thu, Nov 17, 2016 at 05:16:55PM -0800, Laura Abbott wrote:
>>
>> __pa_symbol is technically the macro that should be used for kernel
>> symbols. Switch to this as a prerequisite for DEBUG_VIRTUAL, which
>> will do bounds checking.
>>
>> Signed-off-by: Laura Abbott <labbott@redhat.com>
>> ---
>> v3: Conversion of more sites besides just _end. Addition of __lm_sym_addr
>> macro to take care of the _va(__pa_symbol(..)) idiom.
>>
>> Note that a copy of __pa_symbol was added to avoid a mess of headers
>> since the #ifndef __pa_symbol case is defined in linux/mm.h
> 
> I think we also need to fix up virt_to_phys(__cpu_soft_restart) in
> arch/arm64/kernel/cpu-reset.h. Otherwise, this looks complete for uses
> falling under arch/arm64/.
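> 
> i.e. roughly (a sketch of the conversion; I haven't reproduced the
> exact context from cpu-reset.h):
> 
> -	restart = (void *)virt_to_phys(__cpu_soft_restart);
> +	restart = (void *)__pa_symbol(__cpu_soft_restart);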
> 
> I also think it's worth mentioning in the commit message that this patch
> adds __lm_sym_addr() and uses it in some places so that low-level
> helpers can use virt_to_phys() or __pa() consistently.
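> 
> (For reference, I read __lm_sym_addr() as essentially this sketch:
> 
> #define __lm_sym_addr(x)	__va(__pa_symbol(x))
> 
> i.e. the linear map alias of a kernel symbol.)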
> 
> The PSCI change doesn't conflict with patches [1] that'll go via
> arm-soc, so I'm happy for that PSCI change to go via the arm64 tree,
> though it may be worth splitting into its own patch just in case
> something unexpected crops up.
> 
> With those fixed up:
> 
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>
> 
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/466522.html
> 
> Otherwise, I just have a few nits below.
> 
>> @@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
>>  	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
>>  }
>>  
>> -#define kvm_virt_to_phys(x)		__virt_to_phys((unsigned long)(x))
>> +#define kvm_virt_to_phys(x)		__pa_symbol((unsigned long)(x))
> 
> Nit: we can drop the unsigned long cast given __pa_symbol() contains
> one.
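> 
> i.e. just (sketch):
> 
> #define kvm_virt_to_phys(x)		__pa_symbol(x)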
> 
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index ffbb9a5..c2041a3 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
>>   * for zero-mapped memory areas etc..
>>   */
>>  extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>> -#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
>> +#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))
> 
> Nit: I think we can also simplify this to:
> 
> 	phys_to_page(__pa_symbol(empty_zero_page))
> 
> ... since phys_to_page(p) is (pfn_to_page(__phys_to_pfn(p)))
> ... and __phys_to_pfn(p) is PHYS_PFN(p)
> 
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 6f2ac4f..af8967a 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
>>  	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
>>  		page = vmalloc_to_page(addr);
>>  	else if (!module)
>> -		page = pfn_to_page(PHYS_PFN(__pa(addr)));
>> +		page = pfn_to_page(PHYS_PFN(__pa_symbol(addr)));
> 
> Nit: likewise, we can use phys_to_page() here.
> 
>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>> index a2c2478..791e87a 100644
>> --- a/arch/arm64/kernel/vdso.c
>> +++ b/arch/arm64/kernel/vdso.c
>> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>>  		return -ENOMEM;
>>  
>>  	/* Grab the vDSO data page. */
>> -	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
>> +	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa_symbol(vdso_data)));
> 
> Nit: phys_to_page() again.
> 
>>  
>>  	/* Grab the vDSO code pages. */
>>  	for (i = 0; i < vdso_pages; i++)
>> -		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
>> +		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
> 
> Nit: phys_to_page() again.

I think it makes sense to keep this one as is. It's offsetting by pfn,
and trying to force phys_to_page would make it more difficult to read.
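
For comparison, keeping phys_to_page here would need a byte offset
instead, e.g. this sketch:

	/* As in the patch: offset by pfn. */
	vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
	/* Forced alternative: offset by bytes. */
	vdso_pagelist[i + 1] = phys_to_page(__pa_symbol(&vdso_start) + i * PAGE_SIZE);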

> 
> Thanks,
> Mark.
> 

Thanks,
Laura

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols
  2016-11-21 17:40     ` Laura Abbott
@ 2016-11-23  9:48       ` Mark Rutland
  0 siblings, 0 replies; 18+ messages in thread
From: Mark Rutland @ 2016-11-23  9:48 UTC (permalink / raw)
  To: Laura Abbott
  Cc: lorenzo.pieralisi, Ard Biesheuvel, Will Deacon, Catalin Marinas,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, linux-kernel,
	linux-mm, Andrew Morton, Marek Szyprowski, Joonsoo Kim,
	linux-arm-kernel

On Mon, Nov 21, 2016 at 09:40:06AM -0800, Laura Abbott wrote:
> On 11/18/2016 06:35 AM, Mark Rutland wrote:
> > On Thu, Nov 17, 2016 at 05:16:55PM -0800, Laura Abbott wrote:
> >>  	/* Grab the vDSO code pages. */
> >>  	for (i = 0; i < vdso_pages; i++)
> >> -		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
> >> +		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
> > 
> > Nit: phys_to_page() again.
> 
>> I think it makes sense to keep this one as is. It's offsetting by
>> pfn, and trying to force phys_to_page would make it more difficult
>> to read.

My bad; I failed to spot the + i.

That sounds good to me; sorry for the noise there.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2016-11-23  9:57 UTC | newest]

Thread overview: 18+ messages
2016-11-18  1:16 [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
2016-11-18  1:16 ` [PATCHv3 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
2016-11-18  8:25   ` Ingo Molnar
2016-11-18  1:16 ` [PATCHv3 2/6] mm/cma: Cleanup highmem check Laura Abbott
2016-11-18  1:16 ` [PATCHv3 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
2016-11-18  1:16 ` [PATCHv3 4/6] arm64: Add cast for virt_to_pfn Laura Abbott
2016-11-18  1:16 ` [PATCHv3 5/6] arm64: Use __pa_symbol for kernel symbols Laura Abbott
2016-11-18 14:35   ` Mark Rutland
2016-11-18 16:46     ` Mark Rutland
2016-11-21 17:40     ` Laura Abbott
2016-11-23  9:48       ` Mark Rutland
2016-11-18  1:16 ` [PATCHv3 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
2016-11-18 17:53   ` Mark Rutland
2016-11-18 18:42     ` Laura Abbott
2016-11-18 19:05       ` Mark Rutland
2016-11-18 19:17         ` Laura Abbott
2016-11-18 18:25   ` Mark Rutland
2016-11-18 17:57 ` [PATCHv3 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Mark Rutland
