* [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64
@ 2016-11-02 21:00 Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel

Hi,

This is v2 of the series to add CONFIG_DEBUG_VIRTUAL support for arm64. This
has been split out into a number of patches:

Patch #1 Adds ARCH_HAS_DEBUG_VIRTUAL to avoid the need for adding arch
dependencies for DEBUG_VIRTUAL. This touches arch/x86/Kconfig.

Patch #2 Cleans up cma so it no longer relies on __pa_nodebug and an
inline #ifdef in the function.

Patch #3 Adjusts some macros in arm64 memory.h to be under __ASSEMBLY__
protection.

Patch #4 Adds a cast for virt_to_pfn since the DEBUG_VIRTUAL version of
__virt_to_phys no longer casts its argument.

Patch #5 Switches to using __pa_symbol for _end to avoid erroneously triggering
a bounds error with the debugging.

Patch #6 is the actual implementation of DEBUG_VIRTUAL. The biggest change from
RFCv1 is the addition of __phys_addr_symbol, which handles the several
places where the physical address of _end is needed. x86 avoids this problem by
doing its bounds check against the entire possible image space, which is well
beyond where _end would end up.
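
For anyone skimming, the check reduces to something like the sketch
below (simplified from patch #6; the real version is in
arch/arm64/mm/physaddr.c there):

	unsigned long __virt_to_phys(unsigned long x)
	{
		if (x & BIT(VA_BITS - 1))	/* linear map address */
			return (x & ~PAGE_OFFSET) + PHYS_OFFSET;

		/* otherwise it must point into the kernel image */
		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
		return x - kimage_voffset;
	}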

There are a few dependencies outside of arm64, so I don't know whether it
will be easier for this to eventually go through the arm64 tree or the mm tree.

Thanks,
Laura




Laura Abbott (6):
  lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
  mm/cma: Cleanup highmem check
  arm64: Move some macros under #ifndef __ASSEMBLY__
  arm64: Add cast for virt_to_pfn
  arm64: Use __pa_symbol for _end
  arm64: Add support for CONFIG_DEBUG_VIRTUAL

 arch/arm64/Kconfig              |  1 +
 arch/arm64/include/asm/memory.h | 50 ++++++++++++++++++++++++-----------------
 arch/arm64/mm/Makefile          |  2 ++
 arch/arm64/mm/init.c            |  4 ++--
 arch/arm64/mm/physaddr.c        | 34 ++++++++++++++++++++++++++++
 arch/x86/Kconfig                |  1 +
 lib/Kconfig.debug               |  5 ++++-
 mm/cma.c                        | 15 +++++--------
 8 files changed, 79 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/mm/physaddr.c

-- 
2.10.1


* [PATCHv2 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
@ 2016-11-02 21:00 ` Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 2/6] mm/cma: Cleanup highmem check Laura Abbott
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Mark Rutland,
	Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, x86, linux-kernel, linux-mm, Andrew Morton,
	Marek Szyprowski, Joonsoo Kim, linux-arm-kernel


DEBUG_VIRTUAL currently depends on DEBUG_KERNEL && X86. arm64 is getting
the same support. Rather than add a list of architectures, switch this
to ARCH_HAS_DEBUG_VIRTUAL and let architectures select it as
appropriate.
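
With this in place, an architecture opts in with a one-line select (a
sketch; patch 6 of this series does exactly this for arm64):

	config ARM64
		select ARCH_HAS_DEBUG_VIRTUAL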

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/x86/Kconfig  | 1 +
 lib/Kconfig.debug | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bada636..f533321 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -23,6 +23,7 @@ config X86
 	select ARCH_CLOCKSOURCE_DATA
 	select ARCH_DISCARD_MEMBLOCK
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
+	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b01e547..5050530 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -603,9 +603,12 @@ config DEBUG_VM_PGFLAGS
 
 	  If unsure, say N.
 
+config ARCH_HAS_DEBUG_VIRTUAL
+	bool
+
 config DEBUG_VIRTUAL
 	bool "Debug VM translations"
-	depends on DEBUG_KERNEL && X86
+	depends on DEBUG_KERNEL && ARCH_HAS_DEBUG_VIRTUAL
 	help
 	  Enable some costly sanity checks in virtual to page code. This can
 	  catch mistakes with virt_to_page() and friends.
-- 
2.10.1


* [PATCHv2 2/6] mm/cma: Cleanup highmem check
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-02 21:00 ` Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Marek Szyprowski, Joonsoo Kim, Mark Rutland, Ard Biesheuvel,
	Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, linux-arm-kernel


6b101e2a3ce4 ("mm/CMA: fix boot regression due to physical address of
high_memory") added checks to use __pa_nodebug on x86 since
CONFIG_DEBUG_VIRTUAL complains about high_memory not being linearly
mapped. arm64 is now getting support for CONFIG_DEBUG_VIRTUAL as well.
Rather than add an explosion of arches to the #ifdef, switch to an
alternate method to calculate the physical start of highmem using
the page before highmem starts. This avoids the need for the #ifdef and
extra __pa_nodebug calls.
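
To spell out the arithmetic (a sketch, not part of the patch):
high_memory points one byte past the last direct-mapped byte, so it is
not itself a valid linear map address, but high_memory - 1 is:

	/* last byte of the direct map: always safe to hand to __pa() */
	phys_addr_t last = __pa(high_memory - 1);
	/* one past it is the physical highmem boundary */
	phys_addr_t highmem_start = last + 1;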

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 mm/cma.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 384c2cb..71a2ec1 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -235,18 +235,13 @@ int __init cma_declare_contiguous(phys_addr_t base,
 	phys_addr_t highmem_start;
 	int ret = 0;
 
-#ifdef CONFIG_X86
 	/*
-	 * high_memory isn't direct mapped memory so retrieving its physical
-	 * address isn't appropriate.  But it would be useful to check the
-	 * physical address of the highmem boundary so it's justifiable to get
-	 * the physical address from it.  On x86 there is a validation check for
-	 * this case, so the following workaround is needed to avoid it.
+	 * We can't use __pa(high_memory) directly, since high_memory
+	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+	 * complain. Find the boundary by adding one to the last valid
+	 * address.
 	 */
-	highmem_start = __pa_nodebug(high_memory);
-#else
-	highmem_start = __pa(high_memory);
-#endif
+	highmem_start = __pa(high_memory - 1) + 1;
 	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
 		__func__, &size, &base, &limit, &alignment);
 
-- 
2.10.1


* [PATCHv2 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 2/6] mm/cma: Cleanup highmem check Laura Abbott
@ 2016-11-02 21:00 ` Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 4/6] arm64: Add cast for virt_to_pfn Laura Abbott
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


Several x_to_y conversion macros currently live outside the
#ifndef __ASSEMBLY__ guard. Move them under it in preparation for
CONFIG_DEBUG_VIRTUAL support.
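
For reference, the guard pattern (a sketch): anything that is not valid
in assembly, such as the extern declarations DEBUG_VIRTUAL will add,
must live inside it:

	#ifndef __ASSEMBLY__
	/* C-only: casts, ({ ... }) expressions, extern declarations */
	extern unsigned long __virt_to_phys(unsigned long x);
	#endif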

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b71086d..b4d2b32 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -102,25 +102,6 @@
 #endif
 
 /*
- * Physical vs virtual RAM address space conversion.  These are
- * private definitions which should NOT be used outside memory.h
- * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
- */
-#define __virt_to_phys(x) ({						\
-	phys_addr_t __x = (phys_addr_t)(x);				\
-	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
-				 (__x - kimage_voffset); })
-
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
-#define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
-
-/*
- * Convert a page to/from a physical address
- */
-#define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
-#define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))
-
-/*
  * Memory types available.
  */
 #define MT_DEVICE_nGnRnE	0
@@ -182,6 +163,25 @@ extern u64			kimage_voffset;
 #define PHYS_PFN_OFFSET	(PHYS_OFFSET >> PAGE_SHIFT)
 
 /*
+ * Physical vs virtual RAM address space conversion.  These are
+ * private definitions which should NOT be used outside memory.h
+ * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
+ */
+#define __virt_to_phys(x) ({						\
+	phys_addr_t __x = (phys_addr_t)(x);				\
+	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
+				 (__x - kimage_voffset); })
+
+#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
+
+/*
+ * Convert a page to/from a physical address
+ */
+#define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
+#define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))
+
+/*
  * Note: Drivers should NOT use these.  They are the wrong
  * translation for translating DMA addresses.  Use the driver
  * DMA support - see dma-mapping.h.
-- 
2.10.1


* [PATCHv2 4/6] arm64: Add cast for virt_to_pfn
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (2 preceding siblings ...)
  2016-11-02 21:00 ` [PATCHv2 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
@ 2016-11-02 21:00 ` Laura Abbott
  2016-11-02 21:00 ` [PATCHv2 5/6] arm64: Use __pa_symbol for _end Laura Abbott
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


virt_to_pfn lacks a cast at the top level. Don't rely on __virt_to_phys
to do the cast; explicitly cast to unsigned long.
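
A sketch of why this matters: with DEBUG_VIRTUAL enabled, __virt_to_phys
becomes an out-of-line function taking an unsigned long, so a pointer
argument needs an explicit cast somewhere:

	extern unsigned long __virt_to_phys(unsigned long x);

	void *vaddr = kmalloc(64, GFP_KERNEL);
	/* without the cast inside virt_to_pfn(), gcc would warn:
	 * "makes integer from pointer without a cast" */
	unsigned long pfn = virt_to_pfn(vaddr);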

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/arm64/include/asm/memory.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b4d2b32..d773e2c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -204,7 +204,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
-#define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys(x))
+#define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
 
 /*
  *  virt_to_page(k)	convert a _valid_ virtual address to struct page *
-- 
2.10.1


* [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (3 preceding siblings ...)
  2016-11-02 21:00 ` [PATCHv2 4/6] arm64: Add cast for virt_to_pfn Laura Abbott
@ 2016-11-02 21:00 ` Laura Abbott
  2016-11-02 22:52   ` Mark Rutland
  2016-11-02 21:00 ` [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
  2016-11-02 23:07 ` [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Mark Rutland
  6 siblings, 1 reply; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


__pa_symbol is technically the marco that should be used for kernel
symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
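
The intended split, as a sketch:

	/* linear map addresses: __pa()/virt_to_phys() */
	phys_addr_t p = __pa(page_address(page));

	/* kernel image symbols: __pa_symbol() */
	phys_addr_t e = __pa_symbol(_end);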

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/arm64/mm/init.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 212c4d1..3236eb0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -209,8 +209,8 @@ void __init arm64_memblock_init(void)
 	 * linear mapping. Take care not to clip the kernel which may be
 	 * high in memory.
 	 */
-	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
-			ULLONG_MAX);
+	memblock_remove(max_t(u64, memstart_addr + linear_region_size,
+			__pa_symbol(_end)), ULLONG_MAX);
 	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
 		/* ensure that memstart_addr remains sufficiently aligned */
 		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
-- 
2.10.1


* [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (4 preceding siblings ...)
  2016-11-02 21:00 ` [PATCHv2 5/6] arm64: Use __pa_symbol for _end Laura Abbott
@ 2016-11-02 21:00 ` Laura Abbott
  2016-11-02 23:06   ` Mark Rutland
  2016-11-02 23:07 ` [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Mark Rutland
  6 siblings, 1 reply; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 21:00 UTC (permalink / raw)
  To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
  Cc: Laura Abbott, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	linux-kernel, linux-mm, Andrew Morton, Marek Szyprowski,
	Joonsoo Kim, linux-arm-kernel


x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks on
virt_to_phys calls. The goal is to immediately catch users calling
virt_to_phys on non-linear addresses. As features such as
CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
important. Add checks to catch bad virt_to_phys usage.
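
As an example of the class of bug this catches (a sketch): with
CONFIG_VMAP_STACK, the stack lives in the vmalloc area, which is neither
in the linear map nor inside the kernel image:

	char buf[64];
	/* silently returns garbage today; with DEBUG_VIRTUAL this
	 * trips VIRTUAL_BUG_ON() and pinpoints the caller */
	phys_addr_t pa = virt_to_phys(buf);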

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/arm64/Kconfig              |  1 +
 arch/arm64/include/asm/memory.h | 12 +++++++++++-
 arch/arm64/mm/Makefile          |  2 ++
 arch/arm64/mm/physaddr.c        | 34 ++++++++++++++++++++++++++++++++++
 4 files changed, 48 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/mm/physaddr.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 969ef88..83b95bc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -6,6 +6,7 @@ config ARM64
 	select ACPI_MCFG if ACPI
 	select ACPI_SPCR_TABLE if ACPI
 	select ARCH_CLOCKSOURCE_DATA
+	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
 	select ARCH_HAS_ELF_RANDOMIZE
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d773e2c..eac3dbb 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -167,11 +167,19 @@ extern u64			kimage_voffset;
  * private definitions which should NOT be used outside memory.h
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x) ({						\
+#define __virt_to_phys_nodebug(x) ({					\
 	phys_addr_t __x = (phys_addr_t)(x);				\
 	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
 				 (__x - kimage_voffset); })
 
+#ifdef CONFIG_DEBUG_VIRTUAL
+extern unsigned long __virt_to_phys(unsigned long x);
+extern unsigned long __phys_addr_symbol(unsigned long x);
+#else
+#define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
+#define __phys_addr_symbol	__pa
+#endif
+
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
@@ -202,6 +210,8 @@ static inline void *phys_to_virt(phys_addr_t x)
  * Drivers should NOT use these either.
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
+#define __pa_symbol(x)  __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
+#define __pa_nodebug(x)		__virt_to_phys_nodebug((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 54bb209..377f4ab 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -5,6 +5,8 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
 obj-$(CONFIG_NUMA)		+= numa.o
+CFLAGS_physaddr.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
+obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
new file mode 100644
index 0000000..874c782
--- /dev/null
+++ b/arch/arm64/mm/physaddr.c
@@ -0,0 +1,34 @@
+#include <linux/mm.h>
+
+#include <asm/memory.h>
+
+unsigned long __virt_to_phys(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	if (__x & BIT(VA_BITS - 1)) {
+		/*
+		 * The linear kernel range starts in the middle of the virtual
+		 * address space. Testing the top bit for the start of the
+		 * region is a sufficient check.
+		 */
+		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
+	} else {
+		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
+		return (__x - kimage_voffset);
+	}
+}
+EXPORT_SYMBOL(__virt_to_phys);
+
+unsigned long __phys_addr_symbol(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	/*
+	 * This is intentionally different than above to be a tighter check
+	 * for symbols.
+	 */
+	VIRTUAL_BUG_ON(x < kimage_vaddr + TEXT_OFFSET || x > (unsigned long) _end);
+	return (__x - kimage_voffset);
+}
+EXPORT_SYMBOL(__phys_addr_symbol);
-- 
2.10.1


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-02 21:00 ` [PATCHv2 5/6] arm64: Use __pa_symbol for _end Laura Abbott
@ 2016-11-02 22:52   ` Mark Rutland
  2016-11-02 23:56     ` Laura Abbott
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2016-11-02 22:52 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
> 
> __pa_symbol is technically the marco that should be used for kernel
> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.

Nit: s/marco/macro/

I see there are some other uses of __pa() that look like they could/should be
__pa_symbol(), e.g. in mark_rodata_ro().

I guess strictly speaking those need to be updated too? Or is there a reason
that we should not?

Thanks,
Mark.

> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
>  arch/arm64/mm/init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 212c4d1..3236eb0 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -209,8 +209,8 @@ void __init arm64_memblock_init(void)
>  	 * linear mapping. Take care not to clip the kernel which may be
>  	 * high in memory.
>  	 */
> -	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
> -			ULLONG_MAX);
> +	memblock_remove(max_t(u64, memstart_addr + linear_region_size,
> +			__pa_symbol(_end)), ULLONG_MAX);
>  	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
>  		/* ensure that memstart_addr remains sufficiently aligned */
>  		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
> -- 
> 2.10.1
> 


* Re: [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-02 21:00 ` [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-02 23:06   ` Mark Rutland
  2016-11-03  0:05     ` Laura Abbott
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2016-11-02 23:06 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Wed, Nov 02, 2016 at 03:00:54PM -0600, Laura Abbott wrote:
> +CFLAGS_physaddr.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
> +obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o

> diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
> new file mode 100644
> index 0000000..874c782
> --- /dev/null
> +++ b/arch/arm64/mm/physaddr.c
> @@ -0,0 +1,34 @@
> +#include <linux/mm.h>
> +
> +#include <asm/memory.h>
> +
> +unsigned long __virt_to_phys(unsigned long x)
> +{
> +	phys_addr_t __x = (phys_addr_t)x;
> +
> +	if (__x & BIT(VA_BITS - 1)) {
> +		/*
> +		 * The linear kernel range starts in the middle of the virtual
> +		 * adddress space. Testing the top bit for the start of the
> +		 * region is a sufficient check.
> +		 */
> +		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
> +	} else {
> +		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
> +		return (__x - kimage_voffset);
> +	}
> +}
> +EXPORT_SYMBOL(__virt_to_phys);
> +
> +unsigned long __phys_addr_symbol(unsigned long x)
> +{
> +	phys_addr_t __x = (phys_addr_t)x;
> +
> +	/*
> +	 * This is intentionally different than above to be a tighter check
> +	 * for symbols.
> +	 */
> +	VIRTUAL_BUG_ON(x < kimage_vaddr + TEXT_OFFSET || x > (unsigned long) _end);

Can't we use _text instead of kimage_vaddr + TEXT_OFFSET? That way we don't
need CFLAGS_physaddr.o.

Or KERNEL_START / KERNEL_END from <asm/memory.h>?
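
i.e. something like (a sketch, assuming KERNEL_START/KERNEL_END keep
their current _text/_end definitions):

	VIRTUAL_BUG_ON(x < (unsigned long)KERNEL_START ||
		       x > (unsigned long)KERNEL_END);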

Otherwise, this looks good to me (though I haven't grokked the need for
__pa_symbol() yet).

Thanks,
Mark.

> +	return (__x - kimage_voffset);
> +}
> +EXPORT_SYMBOL(__phys_addr_symbol);
> -- 
> 2.10.1
> 


* Re: [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64
  2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
                   ` (5 preceding siblings ...)
  2016-11-02 21:00 ` [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
@ 2016-11-02 23:07 ` Mark Rutland
  6 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2016-11-02 23:07 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

Hi Laura,

FWIW, for patches 1-4:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

I still need to figure out the __pa() stuff in the last two patches.

Thanks,
Mark.


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-02 22:52   ` Mark Rutland
@ 2016-11-02 23:56     ` Laura Abbott
  2016-11-03 15:51       ` Mark Rutland
  0 siblings, 1 reply; 20+ messages in thread
From: Laura Abbott @ 2016-11-02 23:56 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On 11/02/2016 04:52 PM, Mark Rutland wrote:
> On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
>>
>> __pa_symbol is technically the marco that should be used for kernel
>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
>
> Nit: s/marco/macro/
>
> I see there are some other uses of __pa() that look like they could/should be
> __pa_symbol(), e.g. in mark_rodata_ro().
>
> I guess strictly speaking those need to be updated too? Or is there a reason
> that we should not?
>

If the concept of __pa_symbol is okay then yes I think all uses of __pa
should eventually be converted for consistency and debugging.

> Thanks,
> Mark.
>
>> Signed-off-by: Laura Abbott <labbott@redhat.com>
>> ---
>>  arch/arm64/mm/init.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 212c4d1..3236eb0 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -209,8 +209,8 @@ void __init arm64_memblock_init(void)
>>  	 * linear mapping. Take care not to clip the kernel which may be
>>  	 * high in memory.
>>  	 */
>> -	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
>> -			ULLONG_MAX);
>> +	memblock_remove(max_t(u64, memstart_addr + linear_region_size,
>> +			__pa_symbol(_end)), ULLONG_MAX);
>>  	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
>>  		/* ensure that memstart_addr remains sufficiently aligned */
>>  		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
>> --
>> 2.10.1
>>


* Re: [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-02 23:06   ` Mark Rutland
@ 2016-11-03  0:05     ` Laura Abbott
  2016-11-03 15:57       ` Mark Rutland
  0 siblings, 1 reply; 20+ messages in thread
From: Laura Abbott @ 2016-11-03  0:05 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On 11/02/2016 05:06 PM, Mark Rutland wrote:
> On Wed, Nov 02, 2016 at 03:00:54PM -0600, Laura Abbott wrote:
>> +CFLAGS_physaddr.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
>> +obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
>
>> diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
>> new file mode 100644
>> index 0000000..874c782
>> --- /dev/null
>> +++ b/arch/arm64/mm/physaddr.c
>> @@ -0,0 +1,34 @@
>> +#include <linux/mm.h>
>> +
>> +#include <asm/memory.h>
>> +
>> +unsigned long __virt_to_phys(unsigned long x)
>> +{
>> +	phys_addr_t __x = (phys_addr_t)x;
>> +
>> +	if (__x & BIT(VA_BITS - 1)) {
>> +		/*
>> +		 * The linear kernel range starts in the middle of the virtual
>> +		 * address space. Testing the top bit for the start of the
>> +		 * region is a sufficient check.
>> +		 */
>> +		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
>> +	} else {
>> +		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
>> +		return (__x - kimage_voffset);
>> +	}
>> +}
>> +EXPORT_SYMBOL(__virt_to_phys);
>> +
>> +unsigned long __phys_addr_symbol(unsigned long x)
>> +{
>> +	phys_addr_t __x = (phys_addr_t)x;
>> +
>> +	/*
>> +	 * This is intentionally different than above to be a tighter check
>> +	 * for symbols.
>> +	 */
>> +	VIRTUAL_BUG_ON(x < kimage_vaddr + TEXT_OFFSET || x > (unsigned long) _end);
>
> Can't we use _text instead of kimage_vaddr + TEXT_OFFSET? That way we don't
> need CFLAGS_physaddr.o.
>
> Or KERNEL_START / KERNEL_END from <asm/memory.h>?
>
> Otherwise, this looks good to me (though I haven't grokked the need for
> __pa_symbol() yet).

I guess it's a question of what's clearer. I like kimage_vaddr +
TEXT_OFFSET because it clearly states we are checking from the
start of the kernel image vs. _text only shows the start of the
text region. Yes, it's technically the same but a little less
obvious. I suppose that could be solved with some more elaboration
in the comment.

Thanks,
Laura

>
> Thanks,
> Mark.
>
>> +	return (__x - kimage_voffset);
>> +}
>> +EXPORT_SYMBOL(__phys_addr_symbol);
>> --
>> 2.10.1
>>


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-02 23:56     ` Laura Abbott
@ 2016-11-03 15:51       ` Mark Rutland
  2016-11-14 18:19         ` Catalin Marinas
  0 siblings, 1 reply; 20+ messages in thread
From: Mark Rutland @ 2016-11-03 15:51 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Wed, Nov 02, 2016 at 05:56:42PM -0600, Laura Abbott wrote:
> On 11/02/2016 04:52 PM, Mark Rutland wrote:
> >On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
> >>
> >>__pa_symbol is technically the marco that should be used for kernel
> >>symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
> >
> >Nit: s/marco/macro/
> >
> >I see there are some other uses of __pa() that look like they could/should be
> >__pa_symbol(), e.g. in mark_rodata_ro().
> >
> >I guess strictly speaking those need to be updated too? Or is there a reason
> >that we should not?
> 
> If the concept of __pa_symbol is okay then yes I think all uses of __pa
> should eventually be converted for consistency and debugging.

I have no strong feelings either way about __pa_symbol(); I'm not clear on what
the purpose of __pa_symbol() is specifically, but I'm happy even if it's just
for consistency with other architectures.

However, if we use it I think that we should (attempt to) use it consistently
from the outset. 

Thanks,
Mark.


* Re: [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
  2016-11-03  0:05     ` Laura Abbott
@ 2016-11-03 15:57       ` Mark Rutland
  0 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2016-11-03 15:57 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Ard Biesheuvel, Will Deacon, Catalin Marinas, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-kernel, linux-mm,
	Andrew Morton, Marek Szyprowski, Joonsoo Kim, linux-arm-kernel

On Wed, Nov 02, 2016 at 06:05:38PM -0600, Laura Abbott wrote:
> On 11/02/2016 05:06 PM, Mark Rutland wrote:
> >On Wed, Nov 02, 2016 at 03:00:54PM -0600, Laura Abbott wrote:
> >>+CFLAGS_physaddr.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
> >>+obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o

> >>+	/*
> >>+	 * This is intentionally different than above to be a tighter check
> >>+	 * for symbols.
> >>+	 */
> >>+	VIRTUAL_BUG_ON(x < kimage_vaddr + TEXT_OFFSET || x > (unsigned long) _end);
> >
> >Can't we use _text instead of kimage_vaddr + TEXT_OFFSET? That way we don't
> >need CFLAGS_physaddr.o.
> >
> >Or KERNEL_START / KERNEL_END from <asm/memory.h>?
> >
> >Otherwise, this looks good to me (though I haven't grokked the need for
> >__pa_symbol() yet).
> 
> I guess it's a question of what's clearer. I like kimage_vaddr +
> TEXT_OFFSET because it clearly states we are checking from the
> start of the kernel image vs. _text only shows the start of the
> text region. Yes, it's technically the same but a little less
> obvious. I suppose that could be solved with some more elaboration
> in the comment.

Sure, it's arguable either way.

I do think that KERNEL_START/KERNEL_END are a better choice, with the comment
you suggest, and/or renamed to KERNEL_IMAGE_*. They already describe the bounds
of the image (though the naming doesn't make that entirely clear).

Thanks,
Mark.


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-03 15:51       ` Mark Rutland
@ 2016-11-14 18:19         ` Catalin Marinas
  2016-11-14 18:41           ` Laura Abbott
  0 siblings, 1 reply; 20+ messages in thread
From: Catalin Marinas @ 2016-11-14 18:19 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Laura Abbott, Ard Biesheuvel, x86, Will Deacon, linux-kernel,
	linux-mm, Ingo Molnar, H. Peter Anvin, Joonsoo Kim,
	Thomas Gleixner, Andrew Morton, linux-arm-kernel,
	Marek Szyprowski

On Thu, Nov 03, 2016 at 03:51:07PM +0000, Mark Rutland wrote:
> On Wed, Nov 02, 2016 at 05:56:42PM -0600, Laura Abbott wrote:
> > On 11/02/2016 04:52 PM, Mark Rutland wrote:
> > >On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
> > >>
> > >>__pa_symbol is technically the marco that should be used for kernel
> > >>symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
> > >
> > >Nit: s/marco/macro/
> > >
> > >I see there are some other uses of __pa() that look like they could/should be
> > >__pa_symbol(), e.g. in mark_rodata_ro().
> > >
> > >I guess strictly speaking those need to be updated too? Or is there a reason
> > >that we should not?
> > 
> > If the concept of __pa_symbol is okay then yes I think all uses of __pa
> > should eventually be converted for consistency and debugging.
> 
> I have no strong feelings either way about __pa_symbol(); I'm not clear on what
> the purpose of __pa_symbol() is specifically, but I'm happy even if it's just
> for consistency with other architectures.

At a quick grep, it seems to only be used by mips and x86 and a single
place in mm/memblock.c.

Since we haven't seen any issues on arm/arm64 without this macro, can we
not just continue to use __pa()?

Thanks.

-- 
Catalin


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-14 18:19         ` Catalin Marinas
@ 2016-11-14 18:41           ` Laura Abbott
  2016-11-15 18:35             ` Catalin Marinas
  0 siblings, 1 reply; 20+ messages in thread
From: Laura Abbott @ 2016-11-14 18:41 UTC (permalink / raw)
  To: Catalin Marinas, Mark Rutland
  Cc: Ard Biesheuvel, x86, Will Deacon, linux-kernel, linux-mm,
	Ingo Molnar, H. Peter Anvin, Joonsoo Kim, Thomas Gleixner,
	Andrew Morton, linux-arm-kernel, Marek Szyprowski

On 11/14/2016 10:19 AM, Catalin Marinas wrote:
> On Thu, Nov 03, 2016 at 03:51:07PM +0000, Mark Rutland wrote:
>> On Wed, Nov 02, 2016 at 05:56:42PM -0600, Laura Abbott wrote:
>>> On 11/02/2016 04:52 PM, Mark Rutland wrote:
>>>> On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
>>>>>
>>>>> __pa_symbol is technically the marco that should be used for kernel
>>>>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
>>>>
>>>> Nit: s/marco/macro/
>>>>
>>>> I see there are some other uses of __pa() that look like they could/should be
>>>> __pa_symbol(), e.g. in mark_rodata_ro().
>>>>
>>>> I guess strictly speaking those need to be updated too? Or is there a reason
>>>> that we should not?
>>>
>>> If the concept of __pa_symbol is okay then yes I think all uses of __pa
>>> should eventually be converted for consistency and debugging.
>>
>> I have no strong feelings either way about __pa_symbol(); I'm not clear on what
>> the purpose of __pa_symbol() is specifically, but I'm happy even if it's just
>> for consistency with other architectures.
> 
> At a quick grep, it seems to only be used by mips and x86 and a single
> place in mm/memblock.c.
> 
> Since we haven't seen any issues on arm/arm64 without this macro, can we
> not just continue to use __pa()?

Technically yes but if it's introduced it may be confusing why it's being
used some places but not others. Maybe the bounds in the debug virtual check
should just be adjusted so we don't need __pa_symbol along with a nice fat
comment explaining why. 

> 
> Thanks.
> 

Thanks,
Laura


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-14 18:41           ` Laura Abbott
@ 2016-11-15 18:35             ` Catalin Marinas
  2016-11-16  0:09               ` Laura Abbott
  0 siblings, 1 reply; 20+ messages in thread
From: Catalin Marinas @ 2016-11-15 18:35 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Mark Rutland, Andrew Morton, Ard Biesheuvel, x86, Will Deacon,
	linux-kernel, linux-mm, Ingo Molnar, H. Peter Anvin, Joonsoo Kim,
	Thomas Gleixner, linux-arm-kernel, Marek Szyprowski

On Mon, Nov 14, 2016 at 10:41:29AM -0800, Laura Abbott wrote:
> On 11/14/2016 10:19 AM, Catalin Marinas wrote:
> > On Thu, Nov 03, 2016 at 03:51:07PM +0000, Mark Rutland wrote:
> >> On Wed, Nov 02, 2016 at 05:56:42PM -0600, Laura Abbott wrote:
> >>> On 11/02/2016 04:52 PM, Mark Rutland wrote:
> >>>> On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
> >>>>>
> >>>>> __pa_symbol is technically the marco that should be used for kernel
> >>>>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
> >>>>
> >>>> Nit: s/marco/macro/
> >>>>
> >>>> I see there are some other uses of __pa() that look like they could/should be
> >>>> __pa_symbol(), e.g. in mark_rodata_ro().
> >>>>
> >>>> I guess strictly speaking those need to be updated too? Or is there a reason
> >>>> that we should not?
> >>>
> >>> If the concept of __pa_symbol is okay then yes I think all uses of __pa
> >>> should eventually be converted for consistency and debugging.
> >>
> >> I have no strong feelings either way about __pa_symbol(); I'm not clear on what
> >> the purpose of __pa_symbol() is specifically, but I'm happy even if it's just
> >> for consistency with other architectures.
> > 
> > At a quick grep, it seems to only be used by mips and x86 and a single
> > place in mm/memblock.c.
> > 
> > Since we haven't seen any issues on arm/arm64 without this macro, can we
> > not just continue to use __pa()?
> 
> Technically yes but if it's introduced it may be confusing why it's being
> used some places but not others.

As it currently stands, your patches introduce the first and only use of
__pa_symbol to arch/arm64. But I don't see the point, unless we replace
all of the other uses.

> Maybe the bounds in the debug virtual check should just be adjusted so
> we don't need __pa_symbol along with a nice fat comment explaining
> why. 

I'm fine with dropping __pa_symbol use entirely from under arch/arm64. But if you
want to use __pa_symbol, I tried to change most (all?) places where
necessary, together with making virt_to_phys() only deal with the kernel
linear mapping. Not sure it looks cleaner, especially the
__va(__pa_symbol()) cases (we could replace the latter with another
macro and proper comment):

-------------8<--------------------------------------
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a79b969c26fc..fa6c44ebb51f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -47,7 +47,7 @@
  * If the page is in the bottom half, we have to use the top half. If
  * the page is in the top half, we have to use the bottom half:
  *
- * T = __virt_to_phys(__hyp_idmap_text_start)
+ * T = __pa_symbol(__hyp_idmap_text_start)
  * if (T & BIT(VA_BITS - 1))
  *	HYP_VA_MIN = 0  //idmap in upper half
  * else
@@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
 }
 
-#define kvm_virt_to_phys(x)		__virt_to_phys((unsigned long)(x))
+#define kvm_virt_to_phys(x)		__pa_symbol((unsigned long)(x))
 
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index eac3dbb7e313..e02f45e5ee1b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -169,15 +169,22 @@ extern u64			kimage_voffset;
  */
 #define __virt_to_phys_nodebug(x) ({					\
 	phys_addr_t __x = (phys_addr_t)(x);				\
-	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
-				 (__x - kimage_voffset); })
+	VM_BUG_ON(!(__x & BIT(VA_BITS - 1)));				\
+	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
+})
+
+#define __pa_symbol_nodebug(x) ({					\
+	phys_addr_t __x = (phys_addr_t)(x);				\
+	VM_BUG_ON(__x & BIT(VA_BITS - 1));				\
+	(__x - kimage_voffset);						\
+})
 
 #ifdef CONFIG_DEBUG_VIRTUAL
 extern unsigned long __virt_to_phys(unsigned long x);
 extern unsigned long __phys_addr_symbol(unsigned long x);
 #else
 #define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
-#define __phys_addr_symbol	__pa
+#define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
@@ -210,7 +217,7 @@ static inline void *phys_to_virt(phys_addr_t x)
  * Drivers should NOT use these either.
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
-#define __pa_symbol(x)  __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
+#define __pa_symbol(x)		__phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
 #define __pa_nodebug(x)		__virt_to_phys_nodebug((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a50185375f09..6cf3763c6e11 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -44,7 +44,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
  */
 static inline void cpu_set_reserved_ttbr0(void)
 {
-	unsigned long ttbr = virt_to_phys(empty_zero_page);
+	unsigned long ttbr = __pa_symbol(empty_zero_page);
 
 	write_sysreg(ttbr, ttbr0_el1);
 	isb();
@@ -113,7 +113,7 @@ static inline void cpu_install_idmap(void)
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
 
-	cpu_switch_mm(idmap_pg_dir, &init_mm);
+	cpu_switch_mm(__va(__pa_symbol(idmap_pg_dir)), &init_mm);
 }
 
 /*
@@ -128,7 +128,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
 
 	phys_addr_t pgd_phys = virt_to_phys(pgd);
 
-	replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);
+	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
 	replace_phys(pgd_phys);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ffbb9a520563..c2041a39a3e3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
  * for zero-mapped memory areas etc..
  */
 extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
+#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))
 
 #define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte_val(pte))
 
diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index a32b4011d711..df58310660c6 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -109,7 +109,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * that read this address need to convert this address to the
 	 * Boot-Loader's endianness before jumping.
 	 */
-	writeq_relaxed(__pa(secondary_entry), &mailbox->entry_point);
+	writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point);
 	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
 
 	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c02504ea304b..6ccadf255fba 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -736,7 +736,7 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
 			   int __unused)
 {
-	phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start);
+	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
 
 	/*
 	 * Activate the lower HYP offset only if:
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index d55a7b09959b..81c03c74e5fe 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -51,7 +51,7 @@
 extern int in_suspend;
 
 /* Find a symbols alias in the linear map */
-#define LMADDR(x)	phys_to_virt(virt_to_phys(x))
+#define LMADDR(x)	__va(__pa_symbol(x))
 
 /* Do we need to reset el2? */
 #define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
@@ -125,12 +125,12 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 		return -EOVERFLOW;
 
 	arch_hdr_invariants(&hdr->invariants);
-	hdr->ttbr1_el1		= virt_to_phys(swapper_pg_dir);
+	hdr->ttbr1_el1		= __pa_symbol(swapper_pg_dir);
 	hdr->reenter_kernel	= _cpu_resume;
 
 	/* We can't use __hyp_get_vectors() because kvm may still be loaded */
 	if (el2_reset_needed())
-		hdr->__hyp_stub_vectors = virt_to_phys(__hyp_stub_vectors);
+		hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors);
 	else
 		hdr->__hyp_stub_vectors = 0;
 
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 6f2ac4fc66ca..af8967a0343b 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
 	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
 		page = vmalloc_to_page(addr);
 	else if (!module)
-		page = pfn_to_page(PHYS_PFN(__pa(addr)));
+		page = pfn_to_page(PHYS_PFN(__pa_symbol(addr)));
 	else
 		return addr;
 
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index 42816bebb1e0..f0f2abb72cf9 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -45,7 +45,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)
 
 static int cpu_psci_cpu_boot(unsigned int cpu)
 {
-	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
+	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
 	if (err)
 		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
 
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index f534f492a268..e2dbc02f4792 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -199,10 +199,10 @@ static void __init request_standard_resources(void)
 	struct memblock_region *region;
 	struct resource *res;
 
-	kernel_code.start   = virt_to_phys(_text);
-	kernel_code.end     = virt_to_phys(__init_begin - 1);
-	kernel_data.start   = virt_to_phys(_sdata);
-	kernel_data.end     = virt_to_phys(_end - 1);
+	kernel_code.start   = __pa_symbol(_text);
+	kernel_code.end     = __pa_symbol(__init_begin - 1);
+	kernel_data.start   = __pa_symbol(_sdata);
+	kernel_data.end     = __pa_symbol(_end - 1);
 
 	for_each_memblock(memory, region) {
 		res = alloc_bootmem_low(sizeof(*res));
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 9a00eee9acc8..25fcccaf79b8 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -98,7 +98,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * boot-loader's endianess before jumping. This is mandated by
 	 * the boot protocol.
 	 */
-	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
+	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
 	__flush_dcache_area((__force void *)release_addr,
 			    sizeof(*release_addr));
 
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index a2c2478e7d78..791e87a99148 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -140,11 +140,11 @@ static int __init vdso_init(void)
 		return -ENOMEM;
 
 	/* Grab the vDSO data page. */
-	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
+	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa_symbol(vdso_data)));
 
 	/* Grab the vDSO code pages. */
 	for (i = 0; i < vdso_pages; i++)
-		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
+		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
 
 	vdso_spec[0].pages = &vdso_pagelist[0];
 	vdso_spec[1].pages = &vdso_pagelist[1];
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3236eb062444..14f426fea61b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -225,7 +225,7 @@ void __init arm64_memblock_init(void)
 	 */
 	if (memory_limit != (phys_addr_t)ULLONG_MAX) {
 		memblock_mem_limit_remove_map(memory_limit);
-		memblock_add(__pa(_text), (u64)(_end - _text));
+		memblock_add(__pa_symbol(_text), (u64)(_end - _text));
 	}
 
 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
@@ -278,7 +278,7 @@ void __init arm64_memblock_init(void)
 	 * Register the kernel text, kernel data, initrd, and initial
 	 * pagetables with memblock.
 	 */
-	memblock_reserve(__pa(_text), _end - _text);
+	memblock_reserve(__pa_symbol(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start) {
 		memblock_reserve(initrd_start, initrd_end - initrd_start);
@@ -483,7 +483,8 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
+	free_reserved_area(__va(__pa_symbol(__init_begin)),
+			   __va(__pa_symbol(__init_end)),
 			   0, "unused kernel");
 	/*
 	 * Unmap the __init region but leave the VM area in place. This
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 17243e43184e..f7c0a47a8ebd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -359,8 +359,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
-	unsigned long kernel_start = __pa(_text);
-	unsigned long kernel_end = __pa(__init_begin);
+	unsigned long kernel_start = __pa_symbol(_text);
+	unsigned long kernel_end = __pa_symbol(__init_begin);
 
 	/*
 	 * Take care not to create a writable alias for the
@@ -427,14 +427,14 @@ void mark_rodata_ro(void)
 	unsigned long section_size;
 
 	section_size = (unsigned long)_etext - (unsigned long)_text;
-	create_mapping_late(__pa(_text), (unsigned long)_text,
+	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
 			    section_size, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
-	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
+	create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 
 	/* flush the TLBs after updating live kernel mappings */
@@ -446,7 +446,7 @@ void mark_rodata_ro(void)
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 				      pgprot_t prot, struct vm_struct *vma)
 {
-	phys_addr_t pa_start = __pa(va_start);
+	phys_addr_t pa_start = __pa_symbol(va_start);
 	unsigned long size = va_end - va_start;
 
 	BUG_ON(!PAGE_ALIGNED(pa_start));
@@ -494,7 +494,7 @@ static void __init map_kernel(pgd_t *pgd)
 		 */
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
-			__pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
+			__pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE));
 		pud_clear_fixmap();
 	} else {
 		BUG();
@@ -525,7 +525,7 @@ void __init paging_init(void)
 	 */
 	cpu_replace_ttbr1(__va(pgd_phys));
 	memcpy(swapper_pg_dir, pgd, PAGE_SIZE);
-	cpu_replace_ttbr1(swapper_pg_dir);
+	cpu_replace_ttbr1(__va(__pa_symbol(swapper_pg_dir)));
 
 	pgd_clear_fixmap();
 	memblock_free(pgd_phys, PAGE_SIZE);
@@ -534,7 +534,7 @@ void __init paging_init(void)
 	 * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
 	 * allocated with it.
 	 */
-	memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
+	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
 		      SWAPPER_DIR_SIZE - PAGE_SIZE);
 }
 
@@ -654,7 +654,7 @@ void __init early_fixmap_init(void)
 
 	pgd = pgd_offset_k(addr);
 	if (CONFIG_PGTABLE_LEVELS > 3 &&
-	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
+	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
 		/*
 		 * We only end up here if the kernel mapping and the fixmap
 		 * share the top level pgd entry, which should only happen on
@@ -663,12 +663,12 @@ void __init early_fixmap_init(void)
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		pud = pud_offset_kimg(pgd, addr);
 	} else {
-		pgd_populate(&init_mm, pgd, bm_pud);
+		pgd_populate(&init_mm, pgd, __va(__pa_symbol(bm_pud)));
 		pud = fixmap_pud(addr);
 	}
-	pud_populate(&init_mm, pud, bm_pmd);
+	pud_populate(&init_mm, pud, __va(__pa_symbol(bm_pmd)));
 	pmd = fixmap_pmd(addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	pmd_populate_kernel(&init_mm, pmd, __va(__pa_symbol(bm_pte)));
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
index 874c78201a2b..98dae943e496 100644
--- a/arch/arm64/mm/physaddr.c
+++ b/arch/arm64/mm/physaddr.c
@@ -14,8 +14,8 @@ unsigned long __virt_to_phys(unsigned long x)
 		 */
 		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
 	} else {
-		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
-		return (__x - kimage_voffset);
+		WARN_ON(1);
+		return __phys_addr_symbol(x);
 	}
 }
 EXPORT_SYMBOL(__virt_to_phys);
diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 8263429e21b8..9defbe243c2f 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
 	u32 *state = __this_cpu_read(psci_power_state);
 
 	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+				    __pa_symbol(cpu_resume));
 }
 
 int psci_cpu_suspend_enter(unsigned long index)

-- 
Catalin


* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-15 18:35             ` Catalin Marinas
@ 2016-11-16  0:09               ` Laura Abbott
  2016-11-16 17:32                 ` Catalin Marinas
  0 siblings, 1 reply; 20+ messages in thread
From: Laura Abbott @ 2016-11-16  0:09 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Andrew Morton, Ard Biesheuvel, x86, Will Deacon,
	linux-kernel, linux-mm, Ingo Molnar, H. Peter Anvin, Joonsoo Kim,
	Thomas Gleixner, linux-arm-kernel, Marek Szyprowski

On 11/15/2016 10:35 AM, Catalin Marinas wrote:
> On Mon, Nov 14, 2016 at 10:41:29AM -0800, Laura Abbott wrote:
>> On 11/14/2016 10:19 AM, Catalin Marinas wrote:
>>> On Thu, Nov 03, 2016 at 03:51:07PM +0000, Mark Rutland wrote:
>>>> On Wed, Nov 02, 2016 at 05:56:42PM -0600, Laura Abbott wrote:
>>>>> On 11/02/2016 04:52 PM, Mark Rutland wrote:
>>>>>> On Wed, Nov 02, 2016 at 03:00:53PM -0600, Laura Abbott wrote:
>>>>>>>
>>>>>>> __pa_symbol is technically the marco that should be used for kernel
>>>>>>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL.
>>>>>>
>>>>>> Nit: s/marco/macro/
>>>>>>
>>>>>> I see there are some other uses of __pa() that look like they could/should be
>>>>>> __pa_symbol(), e.g. in mark_rodata_ro().
>>>>>>
>>>>>> I guess strictly speaking those need to be updated too? Or is there a reason
>>>>>> that we should not?
>>>>>
>>>>> If the concept of __pa_symbol is okay then yes I think all uses of __pa
>>>>> should eventually be converted for consistency and debugging.
>>>>
>>>> I have no strong feelings either way about __pa_symbol(); I'm not clear on what
>>>> the purpose of __pa_symbol() is specifically, but I'm happy even if it's just
>>>> for consistency with other architectures.
>>>
>>> At a quick grep, it seems to only be used by mips and x86 and a single
>>> place in mm/memblock.c.
>>>
>>> Since we haven't seen any issues on arm/arm64 without this macro, can we
>>> not just continue to use __pa()?
>>
>> Technically yes but if it's introduced it may be confusing why it's being
>> used some places but not others.
> 
> As it currently stands, your patches introduce the first and only use of
> __pa_symbol to arch/arm64. But I don't see the point, unless we replace
> all of the other uses.
>  
>> Maybe the bounds in the debug virtual check should just be adjusted so
>> we don't need __pa_symbol along with a nice fat comment explaining
>> why. 
> 
> I'm fine with dropping __pa_symbol use entirely from under arch/arm64. But if you
> want to use __pa_symbol, I tried to change most (all?) places where
> necessary, together with making virt_to_phys() only deal with the kernel
> linear mapping. Not sure it looks cleaner, especially the
> __va(__pa_symbol()) cases (we could replace the latter with another
> macro and proper comment):
> 

I agree everything should be converted over; I was considering doing
that in a separate patch, but this covers everything nicely. Are you
okay with me folding this in? (A few comments below.)

> -------------8<--------------------------------------
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index a79b969c26fc..fa6c44ebb51f 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -47,7 +47,7 @@
>   * If the page is in the bottom half, we have to use the top half. If
>   * the page is in the top half, we have to use the bottom half:
>   *
> - * T = __virt_to_phys(__hyp_idmap_text_start)
> + * T = __pa_symbol(__hyp_idmap_text_start)
>   * if (T & BIT(VA_BITS - 1))
>   *	HYP_VA_MIN = 0  //idmap in upper half
>   * else
> @@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
>  	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
>  }
>  
> -#define kvm_virt_to_phys(x)		__virt_to_phys((unsigned long)(x))
> +#define kvm_virt_to_phys(x)		__pa_symbol((unsigned long)(x))
>  
>  void kvm_set_way_flush(struct kvm_vcpu *vcpu);
>  void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index eac3dbb7e313..e02f45e5ee1b 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -169,15 +169,22 @@ extern u64			kimage_voffset;
>   */
>  #define __virt_to_phys_nodebug(x) ({					\
>  	phys_addr_t __x = (phys_addr_t)(x);				\
> -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
> -				 (__x - kimage_voffset); })
> +	VM_BUG_ON(!(__x & BIT(VA_BITS - 1)));				\
> +	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
> +})

I do think this is easier to understand vs the ternary operator.
I'll add a comment detailing the use of __pa vs __pa_symbol somewhere
as well.
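
Something along these lines, perhaps (an illustrative sketch only, not
the final wording):

/*
 * __pa(x)        - x must be an address in the kernel linear mapping,
 *                  e.g. the result of __va()/phys_to_virt() or a
 *                  kmalloc() address.
 * __pa_symbol(x) - x must be the address of a kernel image symbol,
 *                  e.g. _text, _end, empty_zero_page.
 */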

> +
> +#define __pa_symbol_nodebug(x) ({					\
> +	phys_addr_t __x = (phys_addr_t)(x);				\
> +	VM_BUG_ON(__x & BIT(VA_BITS - 1));				\
> +	(__x - kimage_voffset);						\
> +})
>  
>  #ifdef CONFIG_DEBUG_VIRTUAL
>  extern unsigned long __virt_to_phys(unsigned long x);
>  extern unsigned long __phys_addr_symbol(unsigned long x);
>  #else
>  #define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
> -#define __phys_addr_symbol	__pa
> +#define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
>  #endif
>  
>  #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
> @@ -210,7 +217,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>   * Drivers should NOT use these either.
>   */
>  #define __pa(x)			__virt_to_phys((unsigned long)(x))
> -#define __pa_symbol(x)  __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
> +#define __pa_symbol(x)		__phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
>  #define __pa_nodebug(x)		__virt_to_phys_nodebug((unsigned long)(x))
>  #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
>  #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index a50185375f09..6cf3763c6e11 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -44,7 +44,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
>   */
>  static inline void cpu_set_reserved_ttbr0(void)
>  {
> -	unsigned long ttbr = virt_to_phys(empty_zero_page);
> +	unsigned long ttbr = __pa_symbol(empty_zero_page);
>  
>  	write_sysreg(ttbr, ttbr0_el1);
>  	isb();
> @@ -113,7 +113,7 @@ static inline void cpu_install_idmap(void)
>  	local_flush_tlb_all();
>  	cpu_set_idmap_tcr_t0sz();
>  
> -	cpu_switch_mm(idmap_pg_dir, &init_mm);
> +	cpu_switch_mm(__va(__pa_symbol(idmap_pg_dir)), &init_mm);

Yes, the __va(__pa_symbol(..)) idiom needs to be macroized and commented...

>  }
>  
>  /*
> @@ -128,7 +128,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
>  
>  	phys_addr_t pgd_phys = virt_to_phys(pgd);
>  
> -	replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);
> +	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>  
>  	cpu_install_idmap();
>  	replace_phys(pgd_phys);
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ffbb9a520563..c2041a39a3e3 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
>   * for zero-mapped memory areas etc..
>   */
>  extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
> -#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
> +#define ZERO_PAGE(vaddr)	pfn_to_page(PHYS_PFN(__pa_symbol(empty_zero_page)))
>  
>  #define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte_val(pte))
>  
> diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
> index a32b4011d711..df58310660c6 100644
> --- a/arch/arm64/kernel/acpi_parking_protocol.c
> +++ b/arch/arm64/kernel/acpi_parking_protocol.c
> @@ -109,7 +109,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
>  	 * that read this address need to convert this address to the
>  	 * Boot-Loader's endianness before jumping.
>  	 */
> -	writeq_relaxed(__pa(secondary_entry), &mailbox->entry_point);
> +	writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point);
>  	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
>  
>  	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index c02504ea304b..6ccadf255fba 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -736,7 +736,7 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
>  static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
>  			   int __unused)
>  {
> -	phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start);
> +	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
>  
>  	/*
>  	 * Activate the lower HYP offset only if:
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index d55a7b09959b..81c03c74e5fe 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -51,7 +51,7 @@
>  extern int in_suspend;
>  
>  /* Find a symbols alias in the linear map */
> -#define LMADDR(x)	phys_to_virt(virt_to_phys(x))
> +#define LMADDR(x)	__va(__pa_symbol(x))

...Perhaps just borrowing this macro?

>  
>  /* Do we need to reset el2? */
>  #define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
> @@ -125,12 +125,12 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
>  		return -EOVERFLOW;
>  
>  	arch_hdr_invariants(&hdr->invariants);
> -	hdr->ttbr1_el1		= virt_to_phys(swapper_pg_dir);
> +	hdr->ttbr1_el1		= __pa_symbol(swapper_pg_dir);
>  	hdr->reenter_kernel	= _cpu_resume;
>  
>  	/* We can't use __hyp_get_vectors() because kvm may still be loaded */
>  	if (el2_reset_needed())
> -		hdr->__hyp_stub_vectors = virt_to_phys(__hyp_stub_vectors);
> +		hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors);
>  	else
>  		hdr->__hyp_stub_vectors = 0;
>  
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 6f2ac4fc66ca..af8967a0343b 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
>  	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
>  		page = vmalloc_to_page(addr);
>  	else if (!module)
> -		page = pfn_to_page(PHYS_PFN(__pa(addr)));
> +		page = pfn_to_page(PHYS_PFN(__pa_symbol(addr)));
>  	else
>  		return addr;
>  
> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
> index 42816bebb1e0..f0f2abb72cf9 100644
> --- a/arch/arm64/kernel/psci.c
> +++ b/arch/arm64/kernel/psci.c
> @@ -45,7 +45,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)
>  
>  static int cpu_psci_cpu_boot(unsigned int cpu)
>  {
> -	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
> +	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
>  	if (err)
>  		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
>  
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index f534f492a268..e2dbc02f4792 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -199,10 +199,10 @@ static void __init request_standard_resources(void)
>  	struct memblock_region *region;
>  	struct resource *res;
>  
> -	kernel_code.start   = virt_to_phys(_text);
> -	kernel_code.end     = virt_to_phys(__init_begin - 1);
> -	kernel_data.start   = virt_to_phys(_sdata);
> -	kernel_data.end     = virt_to_phys(_end - 1);
> +	kernel_code.start   = __pa_symbol(_text);
> +	kernel_code.end     = __pa_symbol(__init_begin - 1);
> +	kernel_data.start   = __pa_symbol(_sdata);
> +	kernel_data.end     = __pa_symbol(_end - 1);
>  
>  	for_each_memblock(memory, region) {
>  		res = alloc_bootmem_low(sizeof(*res));
> diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> index 9a00eee9acc8..25fcccaf79b8 100644
> --- a/arch/arm64/kernel/smp_spin_table.c
> +++ b/arch/arm64/kernel/smp_spin_table.c
> @@ -98,7 +98,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
>  	 * boot-loader's endianess before jumping. This is mandated by
>  	 * the boot protocol.
>  	 */
> -	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
> +	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
>  	__flush_dcache_area((__force void *)release_addr,
>  			    sizeof(*release_addr));
>  
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index a2c2478e7d78..791e87a99148 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>  		return -ENOMEM;
>  
>  	/* Grab the vDSO data page. */
> -	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
> +	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa_symbol(vdso_data)));
>  
>  	/* Grab the vDSO code pages. */
>  	for (i = 0; i < vdso_pages; i++)
> -		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
> +		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
>  
>  	vdso_spec[0].pages = &vdso_pagelist[0];
>  	vdso_spec[1].pages = &vdso_pagelist[1];
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 3236eb062444..14f426fea61b 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -225,7 +225,7 @@ void __init arm64_memblock_init(void)
>  	 */
>  	if (memory_limit != (phys_addr_t)ULLONG_MAX) {
>  		memblock_mem_limit_remove_map(memory_limit);
> -		memblock_add(__pa(_text), (u64)(_end - _text));
> +		memblock_add(__pa_symbol(_text), (u64)(_end - _text));
>  	}
>  
>  	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
> @@ -278,7 +278,7 @@ void __init arm64_memblock_init(void)
>  	 * Register the kernel text, kernel data, initrd, and initial
>  	 * pagetables with memblock.
>  	 */
> -	memblock_reserve(__pa(_text), _end - _text);
> +	memblock_reserve(__pa_symbol(_text), _end - _text);
>  #ifdef CONFIG_BLK_DEV_INITRD
>  	if (initrd_start) {
>  		memblock_reserve(initrd_start, initrd_end - initrd_start);
> @@ -483,7 +483,8 @@ void __init mem_init(void)
>  
>  void free_initmem(void)
>  {
> -	free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
> +	free_reserved_area(__va(__pa_symbol(__init_begin)),
> +			   __va(__pa_symbol(__init_end)),
>  			   0, "unused kernel");
>  	/*
>  	 * Unmap the __init region but leave the VM area in place. This
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 17243e43184e..f7c0a47a8ebd 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -359,8 +359,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
>  
>  static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
>  {
> -	unsigned long kernel_start = __pa(_text);
> -	unsigned long kernel_end = __pa(__init_begin);
> +	unsigned long kernel_start = __pa_symbol(_text);
> +	unsigned long kernel_end = __pa_symbol(__init_begin);
>  
>  	/*
>  	 * Take care not to create a writable alias for the
> @@ -427,14 +427,14 @@ void mark_rodata_ro(void)
>  	unsigned long section_size;
>  
>  	section_size = (unsigned long)_etext - (unsigned long)_text;
> -	create_mapping_late(__pa(_text), (unsigned long)_text,
> +	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
>  			    section_size, PAGE_KERNEL_ROX);
>  	/*
>  	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
>  	 * to cover NOTES and EXCEPTION_TABLE.
>  	 */
>  	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
> -	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
> +	create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
>  			    section_size, PAGE_KERNEL_RO);
>  
>  	/* flush the TLBs after updating live kernel mappings */
> @@ -446,7 +446,7 @@ void mark_rodata_ro(void)
>  static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
>  				      pgprot_t prot, struct vm_struct *vma)
>  {
> -	phys_addr_t pa_start = __pa(va_start);
> +	phys_addr_t pa_start = __pa_symbol(va_start);
>  	unsigned long size = va_end - va_start;
>  
>  	BUG_ON(!PAGE_ALIGNED(pa_start));
> @@ -494,7 +494,7 @@ static void __init map_kernel(pgd_t *pgd)
>  		 */
>  		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
>  		set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
> -			__pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
> +			__pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE));
>  		pud_clear_fixmap();
>  	} else {
>  		BUG();
> @@ -525,7 +525,7 @@ void __init paging_init(void)
>  	 */
>  	cpu_replace_ttbr1(__va(pgd_phys));
>  	memcpy(swapper_pg_dir, pgd, PAGE_SIZE);
> -	cpu_replace_ttbr1(swapper_pg_dir);
> +	cpu_replace_ttbr1(__va(__pa_symbol(swapper_pg_dir)));
>  
>  	pgd_clear_fixmap();
>  	memblock_free(pgd_phys, PAGE_SIZE);
> @@ -534,7 +534,7 @@ void __init paging_init(void)
>  	 * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
>  	 * allocated with it.
>  	 */
> -	memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
> +	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
>  		      SWAPPER_DIR_SIZE - PAGE_SIZE);
>  }
>  
> @@ -654,7 +654,7 @@ void __init early_fixmap_init(void)
>  
>  	pgd = pgd_offset_k(addr);
>  	if (CONFIG_PGTABLE_LEVELS > 3 &&
> -	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
> +	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
>  		/*
>  		 * We only end up here if the kernel mapping and the fixmap
>  		 * share the top level pgd entry, which should only happen on
> @@ -663,12 +663,12 @@ void __init early_fixmap_init(void)
>  		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
>  		pud = pud_offset_kimg(pgd, addr);
>  	} else {
> -		pgd_populate(&init_mm, pgd, bm_pud);
> +		pgd_populate(&init_mm, pgd, __va(__pa_symbol(bm_pud)));
>  		pud = fixmap_pud(addr);
>  	}
> -	pud_populate(&init_mm, pud, bm_pmd);
> +	pud_populate(&init_mm, pud, __va(__pa_symbol(bm_pmd)));
>  	pmd = fixmap_pmd(addr);
> -	pmd_populate_kernel(&init_mm, pmd, bm_pte);
> +	pmd_populate_kernel(&init_mm, pmd, __va(__pa_symbol(bm_pte)));
>  
>  	/*
>  	 * The boot-ioremap range spans multiple pmds, for which
> diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
> index 874c78201a2b..98dae943e496 100644
> --- a/arch/arm64/mm/physaddr.c
> +++ b/arch/arm64/mm/physaddr.c
> @@ -14,8 +14,8 @@ unsigned long __virt_to_phys(unsigned long x)
>  		 */
>  		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
>  	} else {
> -		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
> -		return (__x - kimage_voffset);
> +		WARN_ON(1);

Was the deletion of the BUG_ON here intentional? VIRTUAL_BUG_ON
is the check enabled by CONFIG_DEBUG_VIRTUAL vs just CONFIG_DEBUG_VM.
I intentionally kept CONFIG_DEBUG_VIRTUAL separate since the checks
are expensive.
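
For reference, VIRTUAL_BUG_ON is defined in include/linux/mmdebug.h
roughly as follows, so it compiles away entirely unless
CONFIG_DEBUG_VIRTUAL is set:

#ifdef CONFIG_DEBUG_VIRTUAL
#define VIRTUAL_BUG_ON(cond)	BUG_ON(cond)
#else
#define VIRTUAL_BUG_ON(cond)	do { } while (0)
#endif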

> +		return __phys_addr_symbol(x);
>  	}
>  }
>  EXPORT_SYMBOL(__virt_to_phys);
> diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
> index 8263429e21b8..9defbe243c2f 100644
> --- a/drivers/firmware/psci.c
> +++ b/drivers/firmware/psci.c
> @@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
>  	u32 *state = __this_cpu_read(psci_power_state);
>  
>  	return psci_ops.cpu_suspend(state[index - 1],
> -				    virt_to_phys(cpu_resume));
> +				    __pa_symbol(cpu_resume));
>  }
>  
>  int psci_cpu_suspend_enter(unsigned long index)
> 

Thanks,
Laura

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-16  0:09               ` Laura Abbott
@ 2016-11-16 17:32                 ` Catalin Marinas
  2016-11-18 10:23                   ` Ard Biesheuvel
  0 siblings, 1 reply; 20+ messages in thread
From: Catalin Marinas @ 2016-11-16 17:32 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Mark Rutland, Ard Biesheuvel, x86, Will Deacon, linux-kernel,
	linux-mm, Ingo Molnar, Thomas Gleixner, H. Peter Anvin,
	Andrew Morton, Joonsoo Kim, linux-arm-kernel, Marek Szyprowski

On Tue, Nov 15, 2016 at 04:09:07PM -0800, Laura Abbott wrote:
> On 11/15/2016 10:35 AM, Catalin Marinas wrote:
> > I'm fine with removing __pa_symbol use entirely from under arch/arm64. But if you
> > want to use __pa_symbol, I tried to change most (all?) places where
> > necessary, together with making virt_to_phys() only deal with the kernel
> > linear mapping. Not sure it looks cleaner, especially the
> > __va(__pa_symbol()) cases (we could replace the latter with another
> > macro and proper comment):
> 
> I agree everything should be converted over; I was considering doing
> that in a separate patch, but this covers everything nicely. Are you
> okay with me folding this in? (A few comments below.)

Yes. I would also like Ard to review it since he introduced the current
__virt_to_phys() macro.

> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index eac3dbb7e313..e02f45e5ee1b 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -169,15 +169,22 @@ extern u64			kimage_voffset;
> >   */
> >  #define __virt_to_phys_nodebug(x) ({					\
> >  	phys_addr_t __x = (phys_addr_t)(x);				\
> > -	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
> > -				 (__x - kimage_voffset); })
> > +	VM_BUG_ON(!(__x & BIT(VA_BITS - 1)));				\
> > +	((__x & ~PAGE_OFFSET) + PHYS_OFFSET);				\
> > +})
> 
> I do think this is easier to understand vs the ternary operator.
> I'll add a comment detailing the use of __pa vs __pa_symbol somewhere
> as well.

Of course, a comment is welcome (I just did a quick hack to check that
it works).

> > --- a/arch/arm64/include/asm/mmu_context.h
> > +++ b/arch/arm64/include/asm/mmu_context.h
> > @@ -44,7 +44,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
> >   */
> >  static inline void cpu_set_reserved_ttbr0(void)
> >  {
> > -	unsigned long ttbr = virt_to_phys(empty_zero_page);
> > +	unsigned long ttbr = __pa_symbol(empty_zero_page);
> >  
> >  	write_sysreg(ttbr, ttbr0_el1);
> >  	isb();
> > @@ -113,7 +113,7 @@ static inline void cpu_install_idmap(void)
> >  	local_flush_tlb_all();
> >  	cpu_set_idmap_tcr_t0sz();
> >  
> > -	cpu_switch_mm(idmap_pg_dir, &init_mm);
> > +	cpu_switch_mm(__va(__pa_symbol(idmap_pg_dir)), &init_mm);
> 
> Yes, the __va(__pa_symbol(..)) idiom needs to be macroized and commented...

Indeed. At the same time we should also replace the LMADDR macro in
hibernate.c with whatever you come up with.

> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index d55a7b09959b..81c03c74e5fe 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -51,7 +51,7 @@
> >  extern int in_suspend;
> >  
> >  /* Find a symbols alias in the linear map */
> > -#define LMADDR(x)	phys_to_virt(virt_to_phys(x))
> > +#define LMADDR(x)	__va(__pa_symbol(x))
> 
> ...Perhaps just borrowing this macro?

Yes, but I don't particularly like the name, especially since it goes
into a .h file. Maybe __lm_sym_addr() or something else if you have a
better idea.
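
For illustration, such a helper could be as simple as this (a sketch
only, using the __lm_sym_addr() name suggested above):

/* Address of a kernel image symbol's alias in the linear mapping */
#define __lm_sym_addr(x)	__va(__pa_symbol(x))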

> > diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
> > index 874c78201a2b..98dae943e496 100644
> > --- a/arch/arm64/mm/physaddr.c
> > +++ b/arch/arm64/mm/physaddr.c
> > @@ -14,8 +14,8 @@ unsigned long __virt_to_phys(unsigned long x)
> >  		 */
> >  		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
> >  	} else {
> > -		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
> > -		return (__x - kimage_voffset);
> > +		WARN_ON(1);
> 
> Was the deletion of the BUG_ON here intentional? VIRTUAL_BUG_ON
> is the check enabled by CONFIG_DEBUG_VIRTUAL vs just CONFIG_DEBUG_VM.
> I intentionally kept CONFIG_DEBUG_VIRTUAL separate since the checks
> are expensive.

I wanted to always get a warning but fall back to __phys_addr_symbol()
so that I can track down other uses of __virt_to_phys() on kernel
symbols without killing the kernel. A better option would have been
VIRTUAL_WARN_ON (or *_ONCE) but we don't have it. VM_WARN_ON, as you
said, is independent of CONFIG_DEBUG_VIRTUAL.

We could as well kill the system with VIRTUAL_BUG_ON in this case but I
thought we should be more gentle until all the __virt_to_phys use-cases
are sorted out.
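
For illustration, a VIRTUAL_WARN_ON mirroring the existing
VIRTUAL_BUG_ON might look like this (a sketch only; no such macro
exists at this point):

#ifdef CONFIG_DEBUG_VIRTUAL
#define VIRTUAL_WARN_ON(cond)	WARN_ON(cond)
#else
#define VIRTUAL_WARN_ON(cond)	do { } while (0)
#endif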

-- 
Catalin

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCHv2 5/6] arm64: Use __pa_symbol for _end
  2016-11-16 17:32                 ` Catalin Marinas
@ 2016-11-18 10:23                   ` Ard Biesheuvel
  0 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2016-11-18 10:23 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Laura Abbott, Mark Rutland, x86, Will Deacon, linux-kernel,
	linux-mm, Ingo Molnar, Thomas Gleixner, H. Peter Anvin,
	Andrew Morton, Joonsoo Kim, linux-arm-kernel, Marek Szyprowski

On 16 November 2016 at 17:32, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Nov 15, 2016 at 04:09:07PM -0800, Laura Abbott wrote:
>> On 11/15/2016 10:35 AM, Catalin Marinas wrote:
>> > I'm fine with removing __pa_symbol use entirely from under arch/arm64. But if you
>> > want to use __pa_symbol, I tried to change most (all?) places where
>> > necessary, together with making virt_to_phys() only deal with the kernel
>> > linear mapping. Not sure it looks cleaner, especially the
>> > __va(__pa_symbol()) cases (we could replace the latter with another
>> > macro and proper comment):
>>
>> I agree everything should be converted over; I was considering doing
>> that in a separate patch, but this covers everything nicely. Are you
>> okay with me folding this in? (A few comments below.)
>
> Yes. I would also like Ard to review it since he introduced the current
> __virt_to_phys() macro.
>

I think this is a clear improvement. I didn't dare to propose it at
the time, due to the fallout, but it is obviously much better to have
separate accessors than to have runtime tests to decide something that
is already known at compile time. My only concern is potential uses in
generic code: I think there may be something in the handling of
initramfs, or freeing the __init segment (I know it had 'init' in the
name :-)) that refers to the physical address of symbols, but I don't
remember exactly what it is.

Did you test it with an initramfs?
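
(A purely hypothetical illustration of the pattern in question, not a
real call site, would be generic code doing something like

	unsigned long phys = __pa(_sinittext);	/* image symbol, not linear map */

which would now hit the WARN_ON() fallback in __virt_to_phys() rather
than being translated silently.)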

>> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> > index eac3dbb7e313..e02f45e5ee1b 100644
>> > --- a/arch/arm64/include/asm/memory.h
>> > +++ b/arch/arm64/include/asm/memory.h
>> > @@ -169,15 +169,22 @@ extern u64                    kimage_voffset;
>> >   */
>> >  #define __virt_to_phys_nodebug(x) ({                                       \
>> >     phys_addr_t __x = (phys_addr_t)(x);                             \
>> > -   __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :   \
>> > -                            (__x - kimage_voffset); })
>> > +   VM_BUG_ON(!(__x & BIT(VA_BITS - 1)));                           \
>> > +   ((__x & ~PAGE_OFFSET) + PHYS_OFFSET);                           \
>> > +})
>>
>> I do think this is easier to understand vs the ternary operator.
>> I'll add a comment detailing the use of __pa vs __pa_symbol somewhere
>> as well.
>
> Of course, a comment is welcome (I just did a quick hack to check that
> it works).
>
>> > --- a/arch/arm64/include/asm/mmu_context.h
>> > +++ b/arch/arm64/include/asm/mmu_context.h
>> > @@ -44,7 +44,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
>> >   */
>> >  static inline void cpu_set_reserved_ttbr0(void)
>> >  {
>> > -   unsigned long ttbr = virt_to_phys(empty_zero_page);
>> > +   unsigned long ttbr = __pa_symbol(empty_zero_page);
>> >
>> >     write_sysreg(ttbr, ttbr0_el1);
>> >     isb();
>> > @@ -113,7 +113,7 @@ static inline void cpu_install_idmap(void)
>> >     local_flush_tlb_all();
>> >     cpu_set_idmap_tcr_t0sz();
>> >
>> > -   cpu_switch_mm(idmap_pg_dir, &init_mm);
>> > +   cpu_switch_mm(__va(__pa_symbol(idmap_pg_dir)), &init_mm);
>>
>> Yes, the __va(__pa_symbol(..)) idiom needs to be macroized and commented...
>
> Indeed. At the same time we should also replace the LMADDR macro in
> hibernate.c with whatever you come up with.
>
>> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
>> > index d55a7b09959b..81c03c74e5fe 100644
>> > --- a/arch/arm64/kernel/hibernate.c
>> > +++ b/arch/arm64/kernel/hibernate.c
>> > @@ -51,7 +51,7 @@
>> >  extern int in_suspend;
>> >
>> >  /* Find a symbols alias in the linear map */
>> > -#define LMADDR(x)  phys_to_virt(virt_to_phys(x))
>> > +#define LMADDR(x)  __va(__pa_symbol(x))
>>
>> ...Perhaps just borrowing this macro?
>
> Yes but I don't particularly like the name, especially since it goes
> into a .h file. Maybe __lm_sym_addr() or something else if you have a
> better idea.
>
>> > diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
>> > index 874c78201a2b..98dae943e496 100644
>> > --- a/arch/arm64/mm/physaddr.c
>> > +++ b/arch/arm64/mm/physaddr.c
>> > @@ -14,8 +14,8 @@ unsigned long __virt_to_phys(unsigned long x)
>> >              */
>> >             return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
>> >     } else {
>> > -           VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
>> > -           return (__x - kimage_voffset);
>> > +           WARN_ON(1);
>>
>> Was the deletion of the BUG_ON here intentional? VIRTUAL_BUG_ON
>> is the check enabled by CONFIG_DEBUG_VIRTUAL vs just CONFIG_DEBUG_VM.
>> I intentionally kept CONFIG_DEBUG_VIRTUAL separate since the checks
>> are expensive.
>
> I wanted to always get a warning but fall back to __phys_addr_symbol()
> so that I can track down other uses of __virt_to_phys() on kernel
> symbols without killing the kernel. A better option would have been
> VIRTUAL_WARN_ON (or *_ONCE) but we don't have it. VM_WARN_ON, as you
> said, is independent of CONFIG_DEBUG_VIRTUAL.
>
> We could as well kill the system with VIRTUAL_BUG_ON in this case but I
> thought we should be more gentle until all the __virt_to_phys use-cases
> are sorted out.
>
> --
> Catalin

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2016-11-18 10:23 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-02 21:00 [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Laura Abbott
2016-11-02 21:00 ` [PATCHv2 1/6] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL Laura Abbott
2016-11-02 21:00 ` [PATCHv2 2/6] mm/cma: Cleanup highmem check Laura Abbott
2016-11-02 21:00 ` [PATCHv2 3/6] arm64: Move some macros under #ifndef __ASSEMBLY__ Laura Abbott
2016-11-02 21:00 ` [PATCHv2 4/6] arm64: Add cast for virt_to_pfn Laura Abbott
2016-11-02 21:00 ` [PATCHv2 5/6] arm64: Use __pa_symbol for _end Laura Abbott
2016-11-02 22:52   ` Mark Rutland
2016-11-02 23:56     ` Laura Abbott
2016-11-03 15:51       ` Mark Rutland
2016-11-14 18:19         ` Catalin Marinas
2016-11-14 18:41           ` Laura Abbott
2016-11-15 18:35             ` Catalin Marinas
2016-11-16  0:09               ` Laura Abbott
2016-11-16 17:32                 ` Catalin Marinas
2016-11-18 10:23                   ` Ard Biesheuvel
2016-11-02 21:00 ` [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL Laura Abbott
2016-11-02 23:06   ` Mark Rutland
2016-11-03  0:05     ` Laura Abbott
2016-11-03 15:57       ` Mark Rutland
2016-11-02 23:07 ` [PATCHv2 0/6] CONFIG_DEBUG_VIRTUAL for arm64 Mark Rutland
