* [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing
@ 2019-08-13 17:01 Will Deacon
  2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
                   ` (10 more replies)
  0 siblings, 11 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

Hi all,

This patch series addresses some issues with 52-bit kernel VAs reported
by Qian Cai and Geert. It's all confined to asm/memory.h and I got a bit
carried away cleaning that thing up so the patches get more worthless
as you go through the series. Still, I'd like to queue this on top of
the 52-bit VA stuff currently sitting in -next.

Although Geert and Steve tested my initial hacks, I dropped the tags
because I've split things up and could've easily broken things again.

Cheers,

Will

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>

--->8

Will Deacon (8):
  arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  arm64: memory: Ensure address tag is masked in conversion macros
  arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  arm64: memory: Simplify virt_to_page() implementation
  arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  arm64: memory: Implement __tag_set() as common function
  arm64: memory: Add comments to end of non-trivial #ifdef blocks
  arm64: memory: Cosmetic cleanups

 arch/arm64/include/asm/memory.h | 89 ++++++++++++++++++++---------------------
 1 file changed, 44 insertions(+), 45 deletions(-)

-- 
2.11.0



* [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:09   ` Ard Biesheuvel
                     ` (2 more replies)
  2019-08-13 17:01 ` [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros Will Deacon
                   ` (9 subsequent siblings)
  10 siblings, 3 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

virt_addr_valid() is intended to test whether or not the passed address
is a valid linear map address. Unfortunately, it relies on
_virt_addr_is_linear() which is broken because it assumes the linear
map is at the top of the address space, which it no longer is.

Reimplement virt_addr_valid() using __is_lm_address() and remove
_virt_addr_is_linear() entirely. At the same time, ensure we evaluate
the macro parameter only once and move it within the __ASSEMBLY__ block.
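
As an aside on the "avoids having to worry about the tag" remark in the
comment below: the KASAN tag lives in bits 63:56, well above bit
(vabits_actual - 1), so it cannot influence the test. A minimal
userspace sketch (illustrative only, not part of the patch; the
addresses assume a 48-bit kernel VA space):

  #include <stdint.h>
  #include <stdio.h>

  #define BIT(n)  (1ULL << (n))

  static unsigned int vabits_actual = 48;

  /* mirrors the new __is_lm_address() */
  static int is_lm_address(uint64_t addr)
  {
          return !(addr & BIT(vabits_actual - 1));
  }

  int main(void)
  {
          uint64_t lm     = 0xffff000012345000ULL; /* linear map */
          uint64_t kimg   = 0xffff800010080000ULL; /* kernel image side */
          /* the same linear address carrying KASAN tag 0x37 */
          uint64_t tagged = (lm & ~(0xffULL << 56)) | (0x37ULL << 56);

          /* prints "1 0 1": the tag does not disturb the result */
          printf("%d %d %d\n", is_lm_address(lm), is_lm_address(kimg),
                 is_lm_address(tagged));
          return 0;
  }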

Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index afaf512c0e1b..442ab861cab8 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 /*
  * The linear kernel range starts in the middle of the virtual adddress
  * space. Testing the top bit for the start of the region is a
- * sufficient check.
+ * sufficient check and avoids having to worry about the tag.
  */
-#define __is_lm_address(addr)	(!((addr) & BIT(vabits_actual - 1)))
+#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
 
 #define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
@@ -326,13 +326,13 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
 #endif
-#endif
 
-#define _virt_addr_is_linear(kaddr)	\
-	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
+#define virt_addr_valid(addr)	({					\
+	__typeof__(addr) __addr = addr;					\
+	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+})
 
-#define virt_addr_valid(kaddr)		\
-	(_virt_addr_is_linear(kaddr) && pfn_valid(virt_to_pfn(kaddr)))
+#endif
 
 /*
  * Given that the GIC architecture permits ITS implementations that can only be
-- 
2.11.0



* [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
  2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:54   ` Steve Capper
  2019-08-14  9:23   ` Catalin Marinas
  2019-08-13 17:01 ` [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page() Will Deacon
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

When converting a linear virtual address to a physical address, pfn or
struct page *, we must make sure that the tag bits are masked before the
calculation otherwise we end up with corrupt pointers when running with
CONFIG_KASAN_SW_TAGS=y:

  | Unable to handle kernel paging request at virtual address 0037fe0007580d08
  | [0037fe0007580d08] address between user and kernel address ranges

Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
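
To see how the corruption arises, note that __lm_to_phys() is just an
addition of physvirt_offset, so a stray tag byte survives the sum and
lands in the "physical" address. A standalone sketch (userspace only;
the offsets below are made up for illustration):

  #include <stdint.h>
  #include <stdio.h>

  /* sign-extend from bit 55, mimicking arm64's untagged_addr() */
  static uint64_t tag_reset(uint64_t addr)
  {
          return (uint64_t)((int64_t)(addr << 8) >> 8);
  }

  int main(void)
  {
          uint64_t page_offset     = 0xffff000000000000ULL; /* 48-bit VAs */
          uint64_t phys_offset     = 0x80000000ULL;  /* made-up DRAM base */
          uint64_t physvirt_offset = phys_offset - page_offset;
          uint64_t tagged          = 0x37ff000012345000ULL; /* tag 0x37 */

          /* unmasked: 0x3800000092345000 -- garbage, much like the oops */
          printf("no mask: 0x%016llx\n",
                 (unsigned long long)(tagged + physvirt_offset));
          /* masked:   0x0000000092345000 -- the real physical address */
          printf("masked:  0x%016llx\n",
                 (unsigned long long)(tag_reset(tagged) + physvirt_offset));
          return 0;
  }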

Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 442ab861cab8..47b4dc73b8bf 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -252,7 +252,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
-	phys_addr_t __x = (phys_addr_t)(x);				\
+	phys_addr_t __x = (phys_addr_t)(__tag_reset(x));		\
 	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
 			       __kimg_to_phys(__x);			\
 })
@@ -324,7 +324,8 @@ static inline void *phys_to_virt(phys_addr_t x)
 	((void *)__addr_tag);						\
 })
 
-#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
+#define virt_to_page(vaddr)	\
+	((struct page *)((__virt_to_pgoff(__tag_reset(vaddr))) + VMEMMAP_START))
 #endif
 
 #define virt_addr_valid(addr)	({					\
-- 
2.11.0



* [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
  2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
  2019-08-13 17:01 ` [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:54   ` Steve Capper
  2019-08-14  9:30   ` Catalin Marinas
  2019-08-13 17:01 ` [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation Will Deacon
                   ` (7 subsequent siblings)
  10 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

The default implementations of page_to_virt() and virt_to_page() are
fairly confusing to read, and the former evaluates its 'page' parameter
twice in the macro.

Rewrite them so that the computation is expressed as 'base + index' in
both cases and the parameter is always evaluated exactly once.
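
The double evaluation bites when the argument has side effects, which
is exactly what the statement-expression form guards against. A quick
userspace demonstration of the pattern (GNU C; hypothetical macros, not
the kernel ones):

  #include <stdio.h>

  static int calls;

  static int next(void)
  {
          calls++;
          return 42;
  }

  /* old pattern: the argument is expanded, and evaluated, twice */
  #define TWICE(x)        ((x) + 0 * (x))

  /* new pattern: a statement expression captures the argument once */
  #define ONCE(x)         ({ __typeof__(x) __x = (x); __x; })

  int main(void)
  {
          calls = 0;
          (void)TWICE(next());
          printf("old: %d evaluations\n", calls); /* 2 */

          calls = 0;
          (void)ONCE(next());
          printf("new: %d evaluations\n", calls); /* 1 */
          return 0;
  }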

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 47b4dc73b8bf..77074b3a1025 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #else
-#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
-#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
-
-#define page_to_virt(page)	({					\
-	unsigned long __addr =						\
-		((__page_to_voff(page)) + PAGE_OFFSET);			\
-	const void *__addr_tag =					\
-		__tag_set((void *)__addr, page_kasan_tag(page));	\
-	((void *)__addr_tag);						\
+#define page_to_virt(x)	({						\
+	__typeof__(x) __page = x;					\
+	u64 __idx = ((u64)__page - VMEMMAP_START) / sizeof(struct page);\
+	u64 __addr = PAGE_OFFSET + (__idx * PAGE_SIZE);			\
+	(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
 })
 
-#define virt_to_page(vaddr)	\
-	((struct page *)((__virt_to_pgoff(__tag_reset(vaddr))) + VMEMMAP_START))
+#define virt_to_page(x)	({						\
+	u64 __idx = (__tag_reset((u64)x) - PAGE_OFFSET) / PAGE_SIZE;	\
+	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
+	(struct page *)__addr;						\
+})
 #endif
 
 #define virt_addr_valid(addr)	({					\
-- 
2.11.0



* [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (2 preceding siblings ...)
  2019-08-13 17:01 ` [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page() Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:55   ` Steve Capper
  2019-08-14  9:32   ` Catalin Marinas
  2019-08-13 17:01 ` [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions Will Deacon
                   ` (6 subsequent siblings)
  10 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

Build virt_to_page() on top of virt_to_pfn() so we can avoid the need
for explicit shifting.
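
For reference, virt_to_pfn() already ends in __phys_to_pfn(), which
hides the same shift the old macro open-coded. A trivial check with
userspace stand-ins for the helpers (illustrative only):

  #include <assert.h>
  #include <stdint.h>

  #define PAGE_SHIFT              12
  #define __phys_to_pfn(paddr)    ((unsigned long)((paddr) >> PAGE_SHIFT))

  int main(void)
  {
          uint64_t pa = 0x92345000ULL;

          /* the helper performs the PAGE_SHIFT shift internally */
          assert(__phys_to_pfn(pa) == (pa >> PAGE_SHIFT));
          return 0;
  }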

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 77074b3a1025..56be462c69ce 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -311,7 +311,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
 
 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
-#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #else
 #define page_to_virt(x)	({						\
 	__typeof__(x) __page = x;					\
-- 
2.11.0



* [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (3 preceding siblings ...)
  2019-08-13 17:01 ` [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:55   ` Steve Capper
                     ` (2 more replies)
  2019-08-13 17:01 ` [PATCH 6/8] arm64: memory: Implement __tag_set() as common function Will Deacon
                   ` (5 subsequent siblings)
  10 siblings, 3 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

Rather than subtracting from -1 and then adding 1, we can simply
subtract from 0.
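
The two forms are identical in two's-complement arithmetic; a quick
userspace check:

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
          unsigned int va = 48;
          uint64_t sub = 0xffffffffffffffffULL - (1ULL << va) + 1;
          uint64_t neg = -(1ULL << va);

          assert(sub == neg);     /* both are 0xffff000000000000 */
          return 0;
  }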

Cc: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 56be462c69ce..5552c8cba1e2 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -44,8 +44,7 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
-					(UL(1) << (va)) + 1)
+#define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
 #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
@@ -63,8 +62,7 @@
 #else
 #define VA_BITS_MIN		(VA_BITS)
 #endif
-#define _VA_START(va)		(UL(0xffffffffffffffff) - \
-				(UL(1) << ((va) - 1)) + 1)
+#define _VA_START(va)		(-(UL(1) << ((va) - 1)))
 
 #define KERNEL_START      _text
 #define KERNEL_END        _end
-- 
2.11.0



* [PATCH 6/8] arm64: memory: Implement __tag_set() as common function
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (4 preceding siblings ...)
  2019-08-13 17:01 ` [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:56   ` Steve Capper
  2019-08-14  9:34   ` Catalin Marinas
  2019-08-13 17:01 ` [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks Will Deacon
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

There's no need for __tag_set() to be a complicated macro when
CONFIG_KASAN_SW_TAGS=y and a simple static inline otherwise. Rewrite
the thing as a common static inline function.
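
The trick is that defining __tag_shifted() as 0UL for the
!CONFIG_KASAN_SW_TAGS case makes the common inline collapse to the
identity, which the compiler folds away. A userspace approximation
(illustrative only):

  #include <stdint.h>
  #include <stdio.h>

  #define KASAN_SW_TAGS   0       /* flip to 1 to mimic the =y case */

  #if KASAN_SW_TAGS
  #define __tag_shifted(tag)      ((uint64_t)(tag) << 56)
  #else
  #define __tag_shifted(tag)      0ULL
  #endif

  static const void *__tag_set(const void *addr, uint8_t tag)
  {
          uint64_t __addr = (uint64_t)(uintptr_t)addr & ~__tag_shifted(0xff);

          /* when KASAN_SW_TAGS=0: (addr & ~0) | 0 == addr, the identity */
          return (const void *)(uintptr_t)(__addr | __tag_shifted(tag));
  }

  int main(void)
  {
          const void *p = (const void *)(uintptr_t)0x0000700012345000ULL;

          printf("%p\n", (void *)__tag_set(p, 0x37));
          return 0;
  }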

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5552c8cba1e2..e902132b808c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -218,20 +218,20 @@ static inline unsigned long kaslr_offset(void)
 
 #ifdef CONFIG_KASAN_SW_TAGS
 #define __tag_shifted(tag)	((u64)(tag) << 56)
-#define __tag_set(addr, tag)	(__typeof__(addr))( \
-		((u64)(addr) & ~__tag_shifted(0xff)) | __tag_shifted(tag))
 #define __tag_reset(addr)	untagged_addr(addr)
 #define __tag_get(addr)		(__u8)((u64)(addr) >> 56)
 #else
-static inline const void *__tag_set(const void *addr, u8 tag)
-{
-	return addr;
-}
-
+#define __tag_shifted(tag)	0UL
 #define __tag_reset(addr)	(addr)
 #define __tag_get(addr)		0
 #endif
 
+static inline const void *__tag_set(const void *addr, u8 tag)
+{
+	u64 __addr = (u64)addr & ~__tag_shifted(0xff);
+	return (const void *)(__addr | __tag_shifted(tag));
+}
+
 /*
  * Physical vs virtual RAM address space conversion.  These are
  * private definitions which should NOT be used outside memory.h
-- 
2.11.0



* [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (5 preceding siblings ...)
  2019-08-13 17:01 ` [PATCH 6/8] arm64: memory: Implement __tag_set() as common function Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:57   ` Steve Capper
  2019-08-14  9:35   ` Catalin Marinas
  2019-08-13 17:01 ` [PATCH 8/8] arm64: memory: Cosmetic cleanups Will Deacon
                   ` (3 subsequent siblings)
  10 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

Commenting the #endif of a multi-statement #ifdef block with the
condition which guards it is useful and can save having to scroll back
through the file to figure out which set of Kconfig options apply to
a particular piece of code.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index e902132b808c..d31e4b6e349f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -57,11 +57,13 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+
 #if VA_BITS > 48
 #define VA_BITS_MIN		(48)
 #else
 #define VA_BITS_MIN		(VA_BITS)
 #endif
+
 #define _VA_START(va)		(-(UL(1) << ((va) - 1)))
 
 #define KERNEL_START      _text
@@ -86,7 +88,7 @@
 #else
 #define KASAN_THREAD_SHIFT	0
 #define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
-#endif
+#endif /* CONFIG_KASAN */
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
 
@@ -224,7 +226,7 @@ static inline unsigned long kaslr_offset(void)
 #define __tag_shifted(tag)	0UL
 #define __tag_reset(addr)	(addr)
 #define __tag_get(addr)		0
-#endif
+#endif /* CONFIG_KASAN_SW_TAGS */
 
 static inline const void *__tag_set(const void *addr, u8 tag)
 {
@@ -263,7 +265,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #else
 #define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
-#endif
+#endif /* CONFIG_DEBUG_VIRTUAL */
 
 #define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
@@ -323,14 +325,14 @@ static inline void *phys_to_virt(phys_addr_t x)
 	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
 	(struct page *)__addr;						\
 })
-#endif
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP || CONFIG_DEBUG_VIRTUAL */
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = addr;					\
 	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
 })
 
-#endif
+#endif /* !ASSEMBLY */
 
 /*
  * Given that the GIC architecture permits ITS implementations that can only be
@@ -345,4 +347,4 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #include <asm-generic/memory_model.h>
 
-#endif
+#endif /* __ASM_MEMORY_H */
-- 
2.11.0



* [PATCH 8/8] arm64: memory: Cosmetic cleanups
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (6 preceding siblings ...)
  2019-08-13 17:01 ` [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks Will Deacon
@ 2019-08-13 17:01 ` Will Deacon
  2019-08-13 18:57   ` Steve Capper
  2019-08-14  9:35   ` Catalin Marinas
  2019-08-13 18:53 ` [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Steve Capper
                   ` (2 subsequent siblings)
  10 siblings, 2 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-13 17:01 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, Will Deacon

Clean up memory.h so that the indentation is consistent, remove pointless
line-wrapping and use consistent parameter names for different versions
of the same macro.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d31e4b6e349f..69f4cecb7241 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -12,10 +12,10 @@
 
 #include <linux/compiler.h>
 #include <linux/const.h>
+#include <linux/sizes.h>
 #include <linux/types.h>
 #include <asm/bug.h>
 #include <asm/page-def.h>
-#include <linux/sizes.h>
 
 /*
  * Size of the PCI I/O space. This must remain a power of two so that
@@ -66,8 +66,8 @@
 
 #define _VA_START(va)		(-(UL(1) << ((va) - 1)))
 
-#define KERNEL_START      _text
-#define KERNEL_END        _end
+#define KERNEL_START		_text
+#define KERNEL_END		_end
 
 #ifdef CONFIG_ARM64_VA_BITS_52
 #define MAX_USER_VA_BITS	52
@@ -132,14 +132,14 @@
  * 16 KB granule: 128 level 3 entries, with contiguous bit
  * 64 KB granule:  32 level 3 entries, with contiguous bit
  */
-#define SEGMENT_ALIGN			SZ_2M
+#define SEGMENT_ALIGN		SZ_2M
 #else
 /*
  *  4 KB granule:  16 level 3 entries, with contiguous bit
  * 16 KB granule:   4 level 3 entries, without contiguous bit
  * 64 KB granule:   1 level 3 entry
  */
-#define SEGMENT_ALIGN			SZ_64K
+#define SEGMENT_ALIGN		SZ_64K
 #endif
 
 /*
@@ -253,8 +253,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 
 #define __virt_to_phys_nodebug(x) ({					\
 	phys_addr_t __x = (phys_addr_t)(__tag_reset(x));		\
-	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
-			       __kimg_to_phys(__x);			\
+	__is_lm_address(__x) ? __lm_to_phys(__x) : __kimg_to_phys(__x);	\
 })
 
 #define __pa_symbol_nodebug(x)	__kimg_to_phys((phys_addr_t)(x))
@@ -301,17 +300,17 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __pa_nodebug(x)		__virt_to_phys_nodebug((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
-#define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
-#define sym_to_pfn(x)	    __phys_to_pfn(__pa_symbol(x))
+#define virt_to_pfn(x)		__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
+#define sym_to_pfn(x)		__phys_to_pfn(__pa_symbol(x))
 
 /*
- *  virt_to_page(k)	convert a _valid_ virtual address to struct page *
- *  virt_addr_valid(k)	indicates whether a virtual address is valid
+ *  virt_to_page(x)	convert a _valid_ virtual address to struct page *
+ *  virt_addr_valid(x)	indicates whether a virtual address is valid
  */
 #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
 
 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
-#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
+#define virt_to_page(x)		pfn_to_page(virt_to_pfn(x))
 #else
 #define page_to_virt(x)	({						\
 	__typeof__(x) __page = x;					\
-- 
2.11.0



* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
@ 2019-08-13 18:09   ` Ard Biesheuvel
  2019-08-13 19:11     ` Steve Capper
  2019-08-13 18:53   ` Steve Capper
  2019-08-14  9:19   ` Catalin Marinas
  2 siblings, 1 reply; 44+ messages in thread
From: Ard Biesheuvel @ 2019-08-13 18:09 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, linux-arm-kernel

On Tue, 13 Aug 2019 at 20:02, Will Deacon <will@kernel.org> wrote:
>
> virt_addr_valid() is intended to test whether or not the passed address
> is a valid linear map address. Unfortunately, it relies on
> _virt_addr_is_linear() which is broken because it assumes the linear
> map is at the top of the address space, which it no longer is.
>
> Reimplement virt_addr_valid() using __is_lm_address() and remove
> _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> the macro parameter only once and move it within the __ASSEMBLY__ block.
>
> Reported-by: Qian Cai <cai@lca.pw>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/memory.h | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index afaf512c0e1b..442ab861cab8 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  /*
>   * The linear kernel range starts in the middle of the virtual adddress
>   * space.

This is no longer true either.

> Testing the top bit for the start of the region is a
> - * sufficient check.
> + * sufficient check and avoids having to worry about the tag.
>   */
> -#define __is_lm_address(addr)  (!((addr) & BIT(vabits_actual - 1)))
> +#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
>

... and this assumes that the VA space is split evenly between linear
and vmalloc/vmemmap/etc, which is no longer true when running with
52-bit VAs

>  #define __lm_to_phys(addr)     (((addr) + physvirt_offset))
>  #define __kimg_to_phys(addr)   ((addr) - kimage_voffset)
> @@ -326,13 +326,13 @@ static inline void *phys_to_virt(phys_addr_t x)
>
>  #define virt_to_page(vaddr)    ((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
>  #endif
> -#endif
>
> -#define _virt_addr_is_linear(kaddr)    \
> -       (__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
> +#define virt_addr_valid(addr)  ({                                      \
> +       __typeof__(addr) __addr = addr;                                 \
> +       __is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));      \
> +})
>
> -#define virt_addr_valid(kaddr)         \
> -       (_virt_addr_is_linear(kaddr) && pfn_valid(virt_to_pfn(kaddr)))
> +#endif
>
>  /*
>   * Given that the GIC architecture permits ITS implementations that can only be
> --
> 2.11.0
>
>

* Re: [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (7 preceding siblings ...)
  2019-08-13 17:01 ` [PATCH 8/8] arm64: memory: Cosmetic cleanups Will Deacon
@ 2019-08-13 18:53 ` Steve Capper
  2019-08-14  8:01 ` Geert Uytterhoeven
  2019-08-14 11:29 ` Mark Rutland
  10 siblings, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:53 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:41PM +0100, Will Deacon wrote:
> Hi all,
> 
> This patch series addresses some issues with 52-bit kernel VAs reported
> by Qian Cai and Geert. It's all confined to asm/memory.h and I got a bit
> carried away cleaning that thing up so the patches get more worthless
> as you go through the series. Still, I'd like to queue this on top of
> the 52-bit VA stuff currently sitting in -next.
> 
> Although Geert and Steve tested my initial hacks, I dropped the tags
> because I've split things up and could've easily broken things again.
> 
> Cheers,
> 
> Will
> 

A really big thank you for this, Will! And apologies again for missing
these things in testing.

I've applied this series to for-next/52-bit-kva and tested using 52-bit
VAs with SW TAGS with DEBUG_VIRTUAL and DEBUG_VM, so please feel free to
add:

Tested-by: Steve Capper <steve.capper@arm.com>

Cheers,
-- 
Steve


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
  2019-08-13 18:09   ` Ard Biesheuvel
@ 2019-08-13 18:53   ` Steve Capper
  2019-08-14  9:19   ` Catalin Marinas
  2 siblings, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:53 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:42PM +0100, Will Deacon wrote:
> virt_addr_valid() is intended to test whether or not the passed address
> is a valid linear map address. Unfortunately, it relies on
> _virt_addr_is_linear() which is broken because it assumes the linear
> map is at the top of the address space, which it no longer is.
> 
> Reimplement virt_addr_valid() using __is_lm_address() and remove
> _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> the macro parameter only once and move it within the __ASSEMBLY__ block.
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index afaf512c0e1b..442ab861cab8 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  /*
>   * The linear kernel range starts in the middle of the virtual adddress
>   * space. Testing the top bit for the start of the region is a
> - * sufficient check.
> + * sufficient check and avoids having to worry about the tag.
>   */
> -#define __is_lm_address(addr)	(!((addr) & BIT(vabits_actual - 1)))
> +#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
>  
>  #define __lm_to_phys(addr)	(((addr) + physvirt_offset))
>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> @@ -326,13 +326,13 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
>  #endif
> -#endif
>  
> -#define _virt_addr_is_linear(kaddr)	\
> -	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
> +#define virt_addr_valid(addr)	({					\
> +	__typeof__(addr) __addr = addr;					\
> +	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> +})
>  
> -#define virt_addr_valid(kaddr)		\
> -	(_virt_addr_is_linear(kaddr) && pfn_valid(virt_to_pfn(kaddr)))
> +#endif
>  
>  /*
>   * Given that the GIC architecture permits ITS implementations that can only be
> -- 
> 2.11.0
> 


* Re: [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros
  2019-08-13 17:01 ` [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros Will Deacon
@ 2019-08-13 18:54   ` Steve Capper
  2019-08-14  9:23   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:54 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:43PM +0100, Will Deacon wrote:
> When converting a linear virtual address to a physical address, pfn or
> struct page *, we must make sure that the tag bits are masked before the
> calculation otherwise we end up with corrupt pointers when running with
> CONFIG_KASAN_SW_TAGS=y:
> 
>   | Unable to handle kernel paging request at virtual address 0037fe0007580d08
>   | [0037fe0007580d08] address between user and kernel address ranges
> 
> Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 442ab861cab8..47b4dc73b8bf 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -252,7 +252,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
>  
>  #define __virt_to_phys_nodebug(x) ({					\
> -	phys_addr_t __x = (phys_addr_t)(x);				\
> +	phys_addr_t __x = (phys_addr_t)(__tag_reset(x));		\
>  	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
>  			       __kimg_to_phys(__x);			\
>  })
> @@ -324,7 +324,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>  	((void *)__addr_tag);						\
>  })
>  
> -#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
> +#define virt_to_page(vaddr)	\
> +	((struct page *)((__virt_to_pgoff(__tag_reset(vaddr))) + VMEMMAP_START))
>  #endif
>  
>  #define virt_addr_valid(addr)	({					\
> -- 
> 2.11.0
> 


* Re: [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-13 17:01 ` [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page() Will Deacon
@ 2019-08-13 18:54   ` Steve Capper
  2019-08-14  9:30   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:54 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:44PM +0100, Will Deacon wrote:
> The default implementations of page_to_virt() and virt_to_page() are
> fairly confusing to read, and the former evaluates its 'page' parameter
> twice in the macro.
> 
> Rewrite them so that the computation is expressed as 'base + index' in
> both cases and the parameter is always evaluated exactly once.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 47b4dc73b8bf..77074b3a1025 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>  #else
> -#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
> -#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> -
> -#define page_to_virt(page)	({					\
> -	unsigned long __addr =						\
> -		((__page_to_voff(page)) + PAGE_OFFSET);			\
> -	const void *__addr_tag =					\
> -		__tag_set((void *)__addr, page_kasan_tag(page));	\
> -	((void *)__addr_tag);						\
> +#define page_to_virt(x)	({						\
> +	__typeof__(x) __page = x;					\
> +	u64 __idx = ((u64)__page - VMEMMAP_START) / sizeof(struct page);\
> +	u64 __addr = PAGE_OFFSET + (__idx * PAGE_SIZE);			\
> +	(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
>  })
>  
> -#define virt_to_page(vaddr)	\
> -	((struct page *)((__virt_to_pgoff(__tag_reset(vaddr))) + VMEMMAP_START))
> +#define virt_to_page(x)	({						\
> +	u64 __idx = (__tag_reset((u64)x) - PAGE_OFFSET) / PAGE_SIZE;	\
> +	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
> +	(struct page *)__addr;						\
> +})
>  #endif
>  
>  #define virt_addr_valid(addr)	({					\
> -- 
> 2.11.0
> 


* Re: [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation
  2019-08-13 17:01 ` [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation Will Deacon
@ 2019-08-13 18:55   ` Steve Capper
  2019-08-14  9:32   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:55 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:45PM +0100, Will Deacon wrote:
> Build virt_to_page() on top of virt_to_pfn() so we can avoid the need
> for explicit shifting.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 77074b3a1025..56be462c69ce 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -311,7 +311,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
>  
>  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> -#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> +#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
>  #else
>  #define page_to_virt(x)	({						\
>  	__typeof__(x) __page = x;					\
> -- 
> 2.11.0
> 


* Re: [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  2019-08-13 17:01 ` [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions Will Deacon
@ 2019-08-13 18:55   ` Steve Capper
  2019-08-14  9:33   ` Catalin Marinas
  2019-08-14 11:23   ` Mark Rutland
  2 siblings, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:55 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:46PM +0100, Will Deacon wrote:
> Rather than subtracting from -1 and then adding 1, we can simply
> subtract from 0.
> 
> Cc: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 56be462c69ce..5552c8cba1e2 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -44,8 +44,7 @@
>   * VA_START - the first kernel virtual address.
>   */
>  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> -#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
> -					(UL(1) << (va)) + 1)
> +#define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
>  #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
>  #define KIMAGE_VADDR		(MODULES_END)
>  #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
> @@ -63,8 +62,7 @@
>  #else
>  #define VA_BITS_MIN		(VA_BITS)
>  #endif
> -#define _VA_START(va)		(UL(0xffffffffffffffff) - \
> -				(UL(1) << ((va) - 1)) + 1)
> +#define _VA_START(va)		(-(UL(1) << ((va) - 1)))
>  
>  #define KERNEL_START      _text
>  #define KERNEL_END        _end
> -- 
> 2.11.0
> 


* Re: [PATCH 6/8] arm64: memory: Implement __tag_set() as common function
  2019-08-13 17:01 ` [PATCH 6/8] arm64: memory: Implement __tag_set() as common function Will Deacon
@ 2019-08-13 18:56   ` Steve Capper
  2019-08-14  9:34   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:56 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:47PM +0100, Will Deacon wrote:
> There's no need for __tag_set() to be a complicated macro when
> CONFIG_KASAN_SW_TAGS=y and a simple static inline otherwise. Rewrite
> the thing as a common static inline function.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 5552c8cba1e2..e902132b808c 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -218,20 +218,20 @@ static inline unsigned long kaslr_offset(void)
>  
>  #ifdef CONFIG_KASAN_SW_TAGS
>  #define __tag_shifted(tag)	((u64)(tag) << 56)
> -#define __tag_set(addr, tag)	(__typeof__(addr))( \
> -		((u64)(addr) & ~__tag_shifted(0xff)) | __tag_shifted(tag))
>  #define __tag_reset(addr)	untagged_addr(addr)
>  #define __tag_get(addr)		(__u8)((u64)(addr) >> 56)
>  #else
> -static inline const void *__tag_set(const void *addr, u8 tag)
> -{
> -	return addr;
> -}
> -
> +#define __tag_shifted(tag)	0UL
>  #define __tag_reset(addr)	(addr)
>  #define __tag_get(addr)		0
>  #endif
>  
> +static inline const void *__tag_set(const void *addr, u8 tag)
> +{
> +	u64 __addr = (u64)addr & ~__tag_shifted(0xff);
> +	return (const void *)(__addr | __tag_shifted(tag));
> +}
> +
>  /*
>   * Physical vs virtual RAM address space conversion.  These are
>   * private definitions which should NOT be used outside memory.h
> -- 
> 2.11.0
> 


* Re: [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks
  2019-08-13 17:01 ` [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks Will Deacon
@ 2019-08-13 18:57   ` Steve Capper
  2019-08-14  9:35   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:57 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:48PM +0100, Will Deacon wrote:
> Commenting the #endif of a multi-statement #ifdef block with the
> condition which guards it is useful and can save having to scroll back
> through the file to figure out which set of Kconfig options apply to
> a particular piece of code.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index e902132b808c..d31e4b6e349f 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -57,11 +57,13 @@
>  #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
>  #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
>  #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
> +
>  #if VA_BITS > 48
>  #define VA_BITS_MIN		(48)
>  #else
>  #define VA_BITS_MIN		(VA_BITS)
>  #endif
> +
>  #define _VA_START(va)		(-(UL(1) << ((va) - 1)))
>  
>  #define KERNEL_START      _text
> @@ -86,7 +88,7 @@
>  #else
>  #define KASAN_THREAD_SHIFT	0
>  #define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
> -#endif
> +#endif /* CONFIG_KASAN */
>  
>  #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
>  
> @@ -224,7 +226,7 @@ static inline unsigned long kaslr_offset(void)
>  #define __tag_shifted(tag)	0UL
>  #define __tag_reset(addr)	(addr)
>  #define __tag_get(addr)		0
> -#endif
> +#endif /* CONFIG_KASAN_SW_TAGS */
>  
>  static inline const void *__tag_set(const void *addr, u8 tag)
>  {
> @@ -263,7 +265,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
>  #else
>  #define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
>  #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
> -#endif
> +#endif /* CONFIG_DEBUG_VIRTUAL */
>  
>  #define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
>  #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
> @@ -323,14 +325,14 @@ static inline void *phys_to_virt(phys_addr_t x)
>  	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
>  	(struct page *)__addr;						\
>  })
> -#endif
> +#endif /* !CONFIG_SPARSEMEM_VMEMMAP || CONFIG_DEBUG_VIRTUAL */
>  
>  #define virt_addr_valid(addr)	({					\
>  	__typeof__(addr) __addr = addr;					\
>  	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
>  })
>  
> -#endif
> +#endif /* !ASSEMBLY */
>  
>  /*
>   * Given that the GIC architecture permits ITS implementations that can only be
> @@ -345,4 +347,4 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #include <asm-generic/memory_model.h>
>  
> -#endif
> +#endif /* __ASM_MEMORY_H */
> -- 
> 2.11.0
> 


* Re: [PATCH 8/8] arm64: memory: Cosmetic cleanups
  2019-08-13 17:01 ` [PATCH 8/8] arm64: memory: Cosmetic cleanups Will Deacon
@ 2019-08-13 18:57   ` Steve Capper
  2019-08-14  9:35   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 18:57 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, Geert Uytterhoeven,
	Andrey Konovalov, Qian Cai, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:49PM +0100, Will Deacon wrote:
> Clean up memory.h so that the indentation is consistent, remove pointless
> line-wrapping and use consistent parameter names for different versions
> of the same macro.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/include/asm/memory.h | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index d31e4b6e349f..69f4cecb7241 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -12,10 +12,10 @@
>  
>  #include <linux/compiler.h>
>  #include <linux/const.h>
> +#include <linux/sizes.h>
>  #include <linux/types.h>
>  #include <asm/bug.h>
>  #include <asm/page-def.h>
> -#include <linux/sizes.h>
>  
>  /*
>   * Size of the PCI I/O space. This must remain a power of two so that
> @@ -66,8 +66,8 @@
>  
>  #define _VA_START(va)		(-(UL(1) << ((va) - 1)))
>  
> -#define KERNEL_START      _text
> -#define KERNEL_END        _end
> +#define KERNEL_START		_text
> +#define KERNEL_END		_end
>  
>  #ifdef CONFIG_ARM64_VA_BITS_52
>  #define MAX_USER_VA_BITS	52
> @@ -132,14 +132,14 @@
>   * 16 KB granule: 128 level 3 entries, with contiguous bit
>   * 64 KB granule:  32 level 3 entries, with contiguous bit
>   */
> -#define SEGMENT_ALIGN			SZ_2M
> +#define SEGMENT_ALIGN		SZ_2M
>  #else
>  /*
>   *  4 KB granule:  16 level 3 entries, with contiguous bit
>   * 16 KB granule:   4 level 3 entries, without contiguous bit
>   * 64 KB granule:   1 level 3 entry
>   */
> -#define SEGMENT_ALIGN			SZ_64K
> +#define SEGMENT_ALIGN		SZ_64K
>  #endif
>  
>  /*
> @@ -253,8 +253,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  
>  #define __virt_to_phys_nodebug(x) ({					\
>  	phys_addr_t __x = (phys_addr_t)(__tag_reset(x));		\
> -	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
> -			       __kimg_to_phys(__x);			\
> +	__is_lm_address(__x) ? __lm_to_phys(__x) : __kimg_to_phys(__x);	\
>  })
>  
>  #define __pa_symbol_nodebug(x)	__kimg_to_phys((phys_addr_t)(x))
> @@ -301,17 +300,17 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define __pa_nodebug(x)		__virt_to_phys_nodebug((unsigned long)(x))
>  #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
>  #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
> -#define virt_to_pfn(x)      __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
> -#define sym_to_pfn(x)	    __phys_to_pfn(__pa_symbol(x))
> +#define virt_to_pfn(x)		__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
> +#define sym_to_pfn(x)		__phys_to_pfn(__pa_symbol(x))
>  
>  /*
> - *  virt_to_page(k)	convert a _valid_ virtual address to struct page *
> - *  virt_addr_valid(k)	indicates whether a virtual address is valid
> + *  virt_to_page(x)	convert a _valid_ virtual address to struct page *
> + *  virt_addr_valid(x)	indicates whether a virtual address is valid
>   */
>  #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
>  
>  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> -#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
> +#define virt_to_page(x)		pfn_to_page(virt_to_pfn(x))
>  #else
>  #define page_to_virt(x)	({						\
>  	__typeof__(x) __page = x;					\
> -- 
> 2.11.0
> 


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 18:09   ` Ard Biesheuvel
@ 2019-08-13 19:11     ` Steve Capper
  2019-08-13 19:25       ` Ard Biesheuvel
  2019-08-14  8:32       ` Will Deacon
  0 siblings, 2 replies; 44+ messages in thread
From: Steve Capper @ 2019-08-13 19:11 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Catalin Marinas, Qian Cai, Andrey Konovalov,
	Geert Uytterhoeven, nd, Will Deacon, linux-arm-kernel

Hi Ard,

On Tue, Aug 13, 2019 at 09:09:16PM +0300, Ard Biesheuvel wrote:
> On Tue, 13 Aug 2019 at 20:02, Will Deacon <will@kernel.org> wrote:
> >
> > virt_addr_valid() is intended to test whether or not the passed address
> > is a valid linear map address. Unfortunately, it relies on
> > _virt_addr_is_linear() which is broken because it assumes the linear
> > map is at the top of the address space, which it no longer is.
> >
> > Reimplement virt_addr_valid() using __is_lm_address() and remove
> > _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> > the macro parameter only once and move it within the __ASSEMBLY__ block.
> >
> > Reported-by: Qian Cai <cai@lca.pw>
> > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/memory.h | 14 +++++++-------
> >  1 file changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index afaf512c0e1b..442ab861cab8 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> >  /*
> >   * The linear kernel range starts in the middle of the virtual adddress
> >   * space.
> 
> This is no longer true either.
> 

Whoops, agreed.

> > Testing the top bit for the start of the region is a
> > - * sufficient check.
> > + * sufficient check and avoids having to worry about the tag.
> >   */
> > -#define __is_lm_address(addr)  (!((addr) & BIT(vabits_actual - 1)))
> > +#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
> >
> 
> ... and this assumes that the VA space is split evenly between linear
> and vmalloc/vmemmap/etc, which is no longer true when running with
> 52-bit VAs
> 

For 52-bit VAs we have two possibilities:
  Start                 End                     Size            Use
  -----------------------------------------------------------------------
  0000000000000000      000fffffffffffff           4PB          user
  fff0000000000000      fff7ffffffffffff           2PB          kernel logical memory map
  fff8000000000000      fffd9fffffffffff        1440TB          [gap]
  fffda00000000000      ffff9fffffffffff         512TB          kasan shadow region

and
  Start                End                      Size           Use
  -----------------------------------------------------------------------
  0000000000000000     0000ffffffffffff         256TB          user
  ffff000000000000     ffff7fffffffffff         128TB          kernel logical memory map
  ffff800000000000     ffff9fffffffffff          32TB          kasan shadow region
  ffffa00000000000     ffffa00007ffffff         128MB          bpf jit region

IIUC the definition for __is_lm_address is correct for these cases?
(it's based off vabits_actual).

Cheers,
-- 
Steve


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 19:11     ` Steve Capper
@ 2019-08-13 19:25       ` Ard Biesheuvel
  2019-08-13 20:34         ` Steve Capper
  2019-08-14  8:32       ` Will Deacon
  1 sibling, 1 reply; 44+ messages in thread
From: Ard Biesheuvel @ 2019-08-13 19:25 UTC (permalink / raw)
  To: Steve Capper
  Cc: Mark Rutland, Catalin Marinas, Qian Cai, Andrey Konovalov,
	Geert Uytterhoeven, nd, Will Deacon, linux-arm-kernel

On Tue, 13 Aug 2019 at 22:11, Steve Capper <Steve.Capper@arm.com> wrote:
>
> Hi Ard,
>
> On Tue, Aug 13, 2019 at 09:09:16PM +0300, Ard Biesheuvel wrote:
> > On Tue, 13 Aug 2019 at 20:02, Will Deacon <will@kernel.org> wrote:
> > >
> > > virt_addr_valid() is intended to test whether or not the passed address
> > > is a valid linear map address. Unfortunately, it relies on
> > > _virt_addr_is_linear() which is broken because it assumes the linear
> > > map is at the top of the address space, which it no longer is.
> > >
> > > Reimplement virt_addr_valid() using __is_lm_address() and remove
> > > _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> > > the macro parameter only once and move it within the __ASSEMBLY__ block.
> > >
> > > Reported-by: Qian Cai <cai@lca.pw>
> > > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > > Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/memory.h | 14 +++++++-------
> > >  1 file changed, 7 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index afaf512c0e1b..442ab861cab8 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > >  /*
> > >   * The linear kernel range starts in the middle of the virtual adddress
> > >   * space.
> >
> > This is no longer true either.
> >
>
> Whoops agreed.
>
> > > Testing the top bit for the start of the region is a
> > > - * sufficient check.
> > > + * sufficient check and avoids having to worry about the tag.
> > >   */
> > > -#define __is_lm_address(addr)  (!((addr) & BIT(vabits_actual - 1)))
> > > +#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
> > >
> >
> > ... and this assumes that the VA space is split evenly between linear
> > and vmalloc/vmemmap/etc, which is no longer true when running with
> > 52-bit VAs
> >
>
> For 52-bit VAs we have two possibilities:
>   Start                 End                     Size            Use
>   -----------------------------------------------------------------------
>   0000000000000000      000fffffffffffff           4PB          user
>   fff0000000000000      fff7ffffffffffff           2PB          kernel logical memory map
>   fff8000000000000      fffd9fffffffffff        1440TB          [gap]

Right. I missed the part where we throw away 1/3 of the VA space:
IIRC, the idea was that keeping the size of the upper half of the
48-bit VA space fixed for 52-bit not only allowed compile-time
constant addresses to be used for many of the things that populate it,
but also made a lot more VA space available to the linear region,
which is where we need it the most.

>   fffda00000000000      ffff9fffffffffff         512TB          kasan shadow region
>
> and
>   Start                        End                     Size            Use
>   -----------------------------------------------------------------------
>   0000000000000000     0000ffffffffffff         256TB          user
>   ffff000000000000     ffff7fffffffffff         128TB          kernel logical memory map
>   ffff800000000000     ffff9fffffffffff          32TB          kasan shadow region
>   ffffa00000000000     ffffa00007ffffff         128MB          bpf jit region
>
> IIUC the definition for __is_lm_address is correct for these cases?
> (it's based off vabits_actual).
>

With the gap taken into account, it is correct. But throwing away 1440
TB of address space seems suboptimal to me.


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 19:25       ` Ard Biesheuvel
@ 2019-08-13 20:34         ` Steve Capper
  2019-08-14 15:17           ` Ard Biesheuvel
  0 siblings, 1 reply; 44+ messages in thread
From: Steve Capper @ 2019-08-13 20:34 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Mark Rutland, Catalin Marinas, Qian Cai, Andrey Konovalov,
	Geert Uytterhoeven, nd, Will Deacon, linux-arm-kernel

On Tue, Aug 13, 2019 at 10:25:14PM +0300, Ard Biesheuvel wrote:
> On Tue, 13 Aug 2019 at 22:11, Steve Capper <Steve.Capper@arm.com> wrote:
> >
> > Hi Ard,
> >
> > On Tue, Aug 13, 2019 at 09:09:16PM +0300, Ard Biesheuvel wrote:
> > > On Tue, 13 Aug 2019 at 20:02, Will Deacon <will@kernel.org> wrote:
> > > >
> > > > virt_addr_valid() is intended to test whether or not the passed address
> > > > is a valid linear map address. Unfortunately, it relies on
> > > > _virt_addr_is_linear() which is broken because it assumes the linear
> > > > map is at the top of the address space, which it no longer is.
> > > >
> > > > Reimplement virt_addr_valid() using __is_lm_address() and remove
> > > > _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> > > > the macro parameter only once and move it within the __ASSEMBLY__ block.
> > > >
> > > > Reported-by: Qian Cai <cai@lca.pw>
> > > > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > > > Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > ---
> > > >  arch/arm64/include/asm/memory.h | 14 +++++++-------
> > > >  1 file changed, 7 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > > index afaf512c0e1b..442ab861cab8 100644
> > > > --- a/arch/arm64/include/asm/memory.h
> > > > +++ b/arch/arm64/include/asm/memory.h
> > > > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > > >  /*
> > > >   * The linear kernel range starts in the middle of the virtual adddress
> > > >   * space.
> > >
> > > This is no longer true either.
> > >
> >
> > Whoops agreed.
> >
> > > > Testing the top bit for the start of the region is a
> > > > - * sufficient check.
> > > > + * sufficient check and avoids having to worry about the tag.
> > > >   */
> > > > -#define __is_lm_address(addr)  (!((addr) & BIT(vabits_actual - 1)))
> > > > +#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
> > > >
> > >
> > > ... and this assumes that the VA space is split evenly between linear
> > > and vmalloc/vmemmap/etc, which is no longer true when running with
> > > 52-bit VAs
> > >
> >
> > For 52-bit VAs we have two possibilities:
> >   Start                 End                     Size            Use
> >   -----------------------------------------------------------------------
> >   0000000000000000      000fffffffffffff           4PB          user
> >   fff0000000000000      fff7ffffffffffff           2PB          kernel logical memory map
> >   fff8000000000000      fffd9fffffffffff        1440TB          [gap]
> 
> Right. I missed the part where we throw away 1/3 of the VA space:
> IIRC, the idea was that keeping the size of the upper half of the
> 48-bit VA space fixed for 52-bit not only allowed compile time
> constant addresses to be used for many of the things that populate it,
> it also makes a lot more VA space available to the linear region,
> which is where we need it the most.
> 
> >   fffda00000000000      ffff9fffffffffff         512TB          kasan shadow region
> >
> > and
> >   Start                        End                     Size            Use
> >   -----------------------------------------------------------------------
> >   0000000000000000     0000ffffffffffff         256TB          user
> >   ffff000000000000     ffff7fffffffffff         128TB          kernel logical memory map
> >   ffff800000000000     ffff9fffffffffff          32TB          kasan shadow region
> >   ffffa00000000000     ffffa00007ffffff         128MB          bpf jit region
> >
> > IIUC the definition for __is_lm_address is correct for these cases?
> > (it's based off vabits_actual).
> >
> 
> With the gap taken into account, it is correct. But throwing away 1440
> TB of address space seems suboptimal to me.

When getting the 52-bit kernel VA support ready, I was trying to achieve
functional and performant support in as few steps as possible, to avoid
the risk of breaking things (unfortunately I missed a couple of things
between rebases with the SW KASAN). The big gain from that series is
support for a much larger linear mapping.

The best way I can think of to get rid of the gap is to use it for
vmalloc space, which means changes to VMALLOC_START and VMALLOC_END. I
think it would be better to make this change incrementally, and I'm more
than happy to get hacking on a patch. Or maybe there's a better use for
the gap in other areas...
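
Purely to illustrate the idea, not as a patch (constants lifted from
the 52-bit table above; this ignores the kasan shadow placement, guard
regions and everything else that would need rearranging):

	/* Hypothetical: hand the current [gap] to vmalloc. */
	#define VMALLOC_START	UL(0xfff8000000000000)	/* gap start          */
	#define VMALLOC_END	UL(0xfffda00000000000)	/* kasan shadow start */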

Cheers,
-- 
Steve


* Re: [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (8 preceding siblings ...)
  2019-08-13 18:53 ` [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Steve Capper
@ 2019-08-14  8:01 ` Geert Uytterhoeven
  2019-08-14 11:29 ` Mark Rutland
  10 siblings, 0 replies; 44+ messages in thread
From: Geert Uytterhoeven @ 2019-08-14  8:01 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Linux ARM

Hi Will,

On Tue, Aug 13, 2019 at 7:01 PM Will Deacon <will@kernel.org> wrote:
> This patch series addresses some issues with 52-bit kernel VAs reported
> by Qian Cai and Geert. It's all confined to asm/memory.h and I got a bit
> carried away cleaning that thing up so the patches get more worthless
> as you go through the series. Still, I'd like to queue this on top of
> the 52-bit VA stuff currently sitting in -next.
>
> Although Geert and Steve tested my initial hacks, I dropped the tags
> because I've split things up and could've easily broken things again.

Thanks, this fixes the problem I was seeing, so
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>

> Will Deacon (8):
>   arm64: memory: Fix virt_addr_valid() using __is_lm_address()
>   arm64: memory: Ensure address tag is masked in conversion macros
>   arm64: memory: Rewrite default page_to_virt()/virt_to_page()
>   arm64: memory: Simplify virt_to_page() implementation
>   arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
>   arm64: memory: Implement __tag_set() as common function
>   arm64: memory: Add comments to end of non-trivial #ifdef blocks
>   arm64: memory: Cosmetic cleanups
>
>  arch/arm64/include/asm/memory.h | 89 ++++++++++++++++++++---------------------
>  1 file changed, 44 insertions(+), 45 deletions(-)

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 19:11     ` Steve Capper
  2019-08-13 19:25       ` Ard Biesheuvel
@ 2019-08-14  8:32       ` Will Deacon
  1 sibling, 0 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-14  8:32 UTC (permalink / raw)
  To: Steve Capper
  Cc: Mark Rutland, Ard Biesheuvel, Catalin Marinas, Qian Cai,
	Andrey Konovalov, Geert Uytterhoeven, nd, linux-arm-kernel

On Tue, Aug 13, 2019 at 07:11:26PM +0000, Steve Capper wrote:
> On Tue, Aug 13, 2019 at 09:09:16PM +0300, Ard Biesheuvel wrote:
> > On Tue, 13 Aug 2019 at 20:02, Will Deacon <will@kernel.org> wrote:
> > >
> > > virt_addr_valid() is intended to test whether or not the passed address
> > > is a valid linear map address. Unfortunately, it relies on
> > > _virt_addr_is_linear() which is broken because it assumes the linear
> > > map is at the top of the address space, which it no longer is.
> > >
> > > Reimplement virt_addr_valid() using __is_lm_address() and remove
> > > _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> > > the macro parameter only once and move it within the __ASSEMBLY__ block.
> > >
> > > Reported-by: Qian Cai <cai@lca.pw>
> > > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > > Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/memory.h | 14 +++++++-------
> > >  1 file changed, 7 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index afaf512c0e1b..442ab861cab8 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > >  /*
> > >   * The linear kernel range starts in the middle of the virtual adddress
> > >   * space.
> > 
> > This is no longer true either.
> > 
> 
> Whoops agreed.

Bah, stupid comment. Dunno how I missed that when I was editing it. I'll
change "starts in the middle" to be "starts at the bottom".

We can look at the wasted VA space issue as part of a separate series,
since I'd rather not hold the current patches up on that.

Will


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
  2019-08-13 18:09   ` Ard Biesheuvel
  2019-08-13 18:53   ` Steve Capper
@ 2019-08-14  9:19   ` Catalin Marinas
  2019-08-14  9:48     ` Will Deacon
  2 siblings, 1 reply; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:19 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:42PM +0100, Will Deacon wrote:
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index afaf512c0e1b..442ab861cab8 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  /*
>   * The linear kernel range starts in the middle of the virtual adddress
>   * space. Testing the top bit for the start of the region is a
> - * sufficient check.
> + * sufficient check and avoids having to worry about the tag.
>   */
> -#define __is_lm_address(addr)	(!((addr) & BIT(vabits_actual - 1)))
> +#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
>  
>  #define __lm_to_phys(addr)	(((addr) + physvirt_offset))
>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> @@ -326,13 +326,13 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
>  #endif
> -#endif
>  
> -#define _virt_addr_is_linear(kaddr)	\
> -	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
> +#define virt_addr_valid(addr)	({					\
> +	__typeof__(addr) __addr = addr;					\
> +	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> +})

There is a slight change of semantics here but I don't think it's an
issue currently. __is_lm_address() is true even for a user address, so
at least the first part of virt_addr_valid() now accepts such addresses.
The pfn would eventually be wrong because of the virt-to-phys offsetting,
and pfn_valid() would return false, but we rely on that rather than
checking that it's a kernel address. Slight concern, as this macro is
called from drivers.

Should we keep the PAGE_OFFSET check as well?
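
Concretely, something like this (a sketch only, reusing the
__tag_reset() comparison from the _virt_addr_is_linear() definition
deleted above):

	#define virt_addr_valid(addr)	({				\
		__typeof__(addr) __addr = addr;				\
		__is_lm_address(__addr) &&				\
		__tag_reset((u64)__addr) >= PAGE_OFFSET &&		\
		pfn_valid(virt_to_pfn(__addr));				\
	})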

-- 
Catalin


* Re: [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros
  2019-08-13 17:01 ` [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros Will Deacon
  2019-08-13 18:54   ` Steve Capper
@ 2019-08-14  9:23   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:43PM +0100, Will Deacon wrote:
> When converting a linear virtual address to a physical address, pfn or
> struct page *, we must make sure that the tag bits are masked before the
> calculation otherwise we end up with corrupt pointers when running with
> CONFIG_KASAN_SW_TAGS=y:
> 
>   | Unable to handle kernel paging request at virtual address 0037fe0007580d08
>   | [0037fe0007580d08] address between user and kernel address ranges
> 
> Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
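
As a worked illustration of the failure mode (the tag and addresses are
made up, and __lm_to_phys() is simplified to a plain PAGE_OFFSET
subtraction, with PAGE_OFFSET = 0xffff000000000000):

	/* KASAN_SW_TAGS keeps the tag in bits 63:56 of the pointer. */
	tagged   = 0xf2ff000012345000	/* tag 0xf2 on a linear address */
	tagged   - PAGE_OFFSET  = 0xf300000012345000	/* garbage */

	untagged = 0xffff000012345000	/* tag masked out first         */
	untagged - PAGE_OFFSET  = 0x0000000012345000	/* sane PA */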


* Re: [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-13 17:01 ` [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page() Will Deacon
  2019-08-13 18:54   ` Steve Capper
@ 2019-08-14  9:30   ` Catalin Marinas
  2019-08-14  9:41     ` Will Deacon
  1 sibling, 1 reply; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:30 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:44PM +0100, Will Deacon wrote:
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 47b4dc73b8bf..77074b3a1025 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>  #else
> -#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
> -#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> -
> -#define page_to_virt(page)	({					\
> -	unsigned long __addr =						\
> -		((__page_to_voff(page)) + PAGE_OFFSET);			\
> -	const void *__addr_tag =					\
> -		__tag_set((void *)__addr, page_kasan_tag(page));	\
> -	((void *)__addr_tag);						\
> +#define page_to_virt(x)	({						\
> +	__typeof__(x) __page = x;					\

Why not struct page * directly here?

Either way:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation
  2019-08-13 17:01 ` [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation Will Deacon
  2019-08-13 18:55   ` Steve Capper
@ 2019-08-14  9:32   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:32 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:45PM +0100, Will Deacon wrote:
> Build virt_to_page() on top of virt_to_pfn() so we can avoid the need
> for explicit shifting.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  2019-08-13 17:01 ` [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions Will Deacon
  2019-08-13 18:55   ` Steve Capper
@ 2019-08-14  9:33   ` Catalin Marinas
  2019-08-14 11:23   ` Mark Rutland
  2 siblings, 0 replies; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:33 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:46PM +0100, Will Deacon wrote:
> Rather than subtracting from -1 and then adding 1, we can simply
> subtract from 0.
> 
> Cc: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 6/8] arm64: memory: Implement __tag_set() as common function
  2019-08-13 17:01 ` [PATCH 6/8] arm64: memory: Implement __tag_set() as common function Will Deacon
  2019-08-13 18:56   ` Steve Capper
@ 2019-08-14  9:34   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:34 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:47PM +0100, Will Deacon wrote:
> There's no need for __tag_set() to be a complicated macro when
> CONFIG_KASAN_SW_TAGS=y and a simple static inline otherwise. Rewrite
> the thing as a common static inline function.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks
  2019-08-13 17:01 ` [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks Will Deacon
  2019-08-13 18:57   ` Steve Capper
@ 2019-08-14  9:35   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:35 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:48PM +0100, Will Deacon wrote:
> Commenting the #endif of a multi-statement #ifdef block with the
> condition which guards it is useful and can save having to scroll back
> through the file to figure out which set of Kconfig options apply to
> a particular piece of code.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 8/8] arm64: memory: Cosmetic cleanups
  2019-08-13 17:01 ` [PATCH 8/8] arm64: memory: Cosmetic cleanups Will Deacon
  2019-08-13 18:57   ` Steve Capper
@ 2019-08-14  9:35   ` Catalin Marinas
  1 sibling, 0 replies; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14  9:35 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:49PM +0100, Will Deacon wrote:
> Cleanup memory.h so that the indentation is consistent, remove pointless
> line-wrapping and use consistent parameter names for different versions
> of the same macro.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-14  9:30   ` Catalin Marinas
@ 2019-08-14  9:41     ` Will Deacon
  2019-08-14 10:56       ` Mark Rutland
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2019-08-14  9:41 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 10:30:19AM +0100, Catalin Marinas wrote:
> On Tue, Aug 13, 2019 at 06:01:44PM +0100, Will Deacon wrote:
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 47b4dc73b8bf..77074b3a1025 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
> >  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> >  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> >  #else
> > -#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
> > -#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> > -
> > -#define page_to_virt(page)	({					\
> > -	unsigned long __addr =						\
> > -		((__page_to_voff(page)) + PAGE_OFFSET);			\
> > -	const void *__addr_tag =					\
> > -		__tag_set((void *)__addr, page_kasan_tag(page));	\
> > -	((void *)__addr_tag);						\
> > +#define page_to_virt(x)	({						\
> > +	__typeof__(x) __page = x;					\
> 
> Why not struct page * directly here?

I started out with that, but then you have to deal with const struct page *
as well and it gets pretty messy.

> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks,

Will


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-14  9:19   ` Catalin Marinas
@ 2019-08-14  9:48     ` Will Deacon
  2019-08-14 10:40       ` Catalin Marinas
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2019-08-14  9:48 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 10:19:42AM +0100, Catalin Marinas wrote:
> On Tue, Aug 13, 2019 at 06:01:42PM +0100, Will Deacon wrote:
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index afaf512c0e1b..442ab861cab8 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> >  /*
> >   * The linear kernel range starts in the middle of the virtual adddress
> >   * space. Testing the top bit for the start of the region is a
> > - * sufficient check.
> > + * sufficient check and avoids having to worry about the tag.
> >   */
> > -#define __is_lm_address(addr)	(!((addr) & BIT(vabits_actual - 1)))
> > +#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
> >  
> >  #define __lm_to_phys(addr)	(((addr) + physvirt_offset))
> >  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> > @@ -326,13 +326,13 @@ static inline void *phys_to_virt(phys_addr_t x)
> >  
> >  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
> >  #endif
> > -#endif
> >  
> > -#define _virt_addr_is_linear(kaddr)	\
> > -	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
> > +#define virt_addr_valid(addr)	({					\
> > +	__typeof__(addr) __addr = addr;					\
> > +	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> > +})
> 
> There is a slight change of semantics here but I don't think it's an
> issue currently. __is_lm_address() is true even for a user address, so
> at least the first part of virt_addr_valid() now accepts such addresses.
> The pfn would eventually be wrong because of the virt-to-phys offsetting,
> and pfn_valid() would return false, but we rely on that rather than
> checking that it's a kernel address. Slight concern, as this macro is
> called from drivers.
> 
> Should we keep the PAGE_OFFSET check as well?

In virt_addr_valid() or __is_lm_address()?

To be honest with you, I'm not even sure what virt_addr_valid() is supposed
to do with non-linear kernel addresses: look at powerpc and riscv, who
appear to convert the address straight to a pfn. Many callers check against
is_vmalloc_addr() first, but not all of them.

I think passing in a *user* address would be a huge bug in the caller,
just like it would be if you called virt_to_phys() on a user address.
If we care about that, then I think __is_lm_address() should be the one
doing the check against PAGE_OFFSET.

Thoughts? I'd be inclined to leave this patch as it is.

Will


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-14  9:48     ` Will Deacon
@ 2019-08-14 10:40       ` Catalin Marinas
  2019-08-14 12:02         ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Catalin Marinas @ 2019-08-14 10:40 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 10:48:20AM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 10:19:42AM +0100, Catalin Marinas wrote:
> > On Tue, Aug 13, 2019 at 06:01:42PM +0100, Will Deacon wrote:
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index afaf512c0e1b..442ab861cab8 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > >  /*
> > >   * The linear kernel range starts in the middle of the virtual adddress
> > >   * space. Testing the top bit for the start of the region is a
> > > - * sufficient check.
> > > + * sufficient check and avoids having to worry about the tag.
> > >   */
> > > -#define __is_lm_address(addr)	(!((addr) & BIT(vabits_actual - 1)))
> > > +#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
> > >  
> > >  #define __lm_to_phys(addr)	(((addr) + physvirt_offset))
> > >  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> > > @@ -326,13 +326,13 @@ static inline void *phys_to_virt(phys_addr_t x)
> > >  
> > >  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
> > >  #endif
> > > -#endif
> > >  
> > > -#define _virt_addr_is_linear(kaddr)	\
> > > -	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
> > > +#define virt_addr_valid(addr)	({					\
> > > +	__typeof__(addr) __addr = addr;					\
> > > +	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> > > +})
> > 
> > There is a slight change of semantics here but I don't think it's an
> > issue currently. __is_lm_address() is true even for a user address, so
> > at least the first part of virt_addr_valid() now accepts such addresses.
> > The pfn would eventually be wrong because of the virt-to-phys offsetting,
> > and pfn_valid() would return false, but we rely on that rather than
> > checking that it's a kernel address. Slight concern, as this macro is
> > called from drivers.
> > 
> > Should we keep the PAGE_OFFSET check as well?
> 
> In virt_addr_valid() or __is_lm_address()?
> 
> To be honest with you, I'm not even sure what virt_addr_valid() is supposed
> to do with non-linear kernel addresses: look at powerpc and riscv, who
> appear to convert the address straight to a pfn. Many callers check against
> is_vmalloc_addr() first, but not all of them.

Even if they call is_vmalloc_addr(), it would return false for a user
address. Anyway, at a quick look, I couldn't find any virt_addr_valid()
call site where it would be an issue.

> I think passing in a *user* address would be a huge bug in the caller,
> just like it would be if you called virt_to_phys() on a user address.
> If we care about that, then I think __is_lm_address() should be the one
> doing the check against PAGE_OFFSET.
> 
> Thoughts? I'd be inclined to leave this patch as it is.

Leave it as it is. The way pfn_valid() is written it wouldn't return
true for a user address due to the fact that virt_to_phys() cannot
return the same physical address for a user and linear map one.
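
With made-up numbers, again simplifying the conversion to a PAGE_OFFSET
subtraction (PAGE_OFFSET = 0xffff000000000000):

	0x0000aaaabbcc0000 - 0xffff000000000000 = 0x0001aaaabbcc0000

The resulting "physical" address lands far outside any present memory
section, so pfn_valid() has nothing to match it against and returns
false.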

For this patch:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-14  9:41     ` Will Deacon
@ 2019-08-14 10:56       ` Mark Rutland
  2019-08-14 11:17         ` Will Deacon
  0 siblings, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2019-08-14 10:56 UTC (permalink / raw)
  To: Will Deacon
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 10:41:19AM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 10:30:19AM +0100, Catalin Marinas wrote:
> > On Tue, Aug 13, 2019 at 06:01:44PM +0100, Will Deacon wrote:
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index 47b4dc73b8bf..77074b3a1025 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
> > >  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> > >  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> > >  #else
> > > -#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
> > > -#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> > > -
> > > -#define page_to_virt(page)	({					\
> > > -	unsigned long __addr =						\
> > > -		((__page_to_voff(page)) + PAGE_OFFSET);			\
> > > -	const void *__addr_tag =					\
> > > -		__tag_set((void *)__addr, page_kasan_tag(page));	\
> > > -	((void *)__addr_tag);						\
> > > +#define page_to_virt(x)	({						\
> > > +	__typeof__(x) __page = x;					\
> > 
> > Why not struct page * directly here?
> 
> I started out with that, but then you have to deal with const struct page *
> as well and it gets pretty messy.

What goes wrong if you always use const struct page *__page?

Otherwise this looks sound to me.

Thanks,
Mark.


* Re: [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-14 10:56       ` Mark Rutland
@ 2019-08-14 11:17         ` Will Deacon
  2019-08-14 11:26           ` Mark Rutland
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2019-08-14 11:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 11:56:39AM +0100, Mark Rutland wrote:
> On Wed, Aug 14, 2019 at 10:41:19AM +0100, Will Deacon wrote:
> > On Wed, Aug 14, 2019 at 10:30:19AM +0100, Catalin Marinas wrote:
> > > On Tue, Aug 13, 2019 at 06:01:44PM +0100, Will Deacon wrote:
> > > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > > index 47b4dc73b8bf..77074b3a1025 100644
> > > > --- a/arch/arm64/include/asm/memory.h
> > > > +++ b/arch/arm64/include/asm/memory.h
> > > > @@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
> > > >  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> > > >  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> > > >  #else
> > > > -#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
> > > > -#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> > > > -
> > > > -#define page_to_virt(page)	({					\
> > > > -	unsigned long __addr =						\
> > > > -		((__page_to_voff(page)) + PAGE_OFFSET);			\
> > > > -	const void *__addr_tag =					\
> > > > -		__tag_set((void *)__addr, page_kasan_tag(page));	\
> > > > -	((void *)__addr_tag);						\
> > > > +#define page_to_virt(x)	({						\
> > > > +	__typeof__(x) __page = x;					\
> > > 
> > > Why not struct page * directly here?
> > 
> > I started out with that, but then you have to deal with const struct page *
> > as well and it gets pretty messy.
> 
> What goes wrong if you always use const struct page *__page?

It would probably work, but then I wondered about the possibility of
volatile and decided that __typeof__ was cleaner.
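
For the record, the messy case looks something like this (contrived):

	volatile struct page *vp = NULL;

	/* Hard-coding the macro's local type discards the pointee's
	 * qualifier and triggers a warning: */
	const struct page *cp = vp;	/* warning: discards 'volatile' */

	/* __typeof__ carries the exact argument type instead: */
	__typeof__(vp) __page = vp;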

Will


* Re: [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  2019-08-13 17:01 ` [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions Will Deacon
  2019-08-13 18:55   ` Steve Capper
  2019-08-14  9:33   ` Catalin Marinas
@ 2019-08-14 11:23   ` Mark Rutland
  2019-08-14 12:00     ` Will Deacon
  2 siblings, 1 reply; 44+ messages in thread
From: Mark Rutland @ 2019-08-14 11:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:46PM +0100, Will Deacon wrote:
> Rather than subtracting from -1 and then adding 1, we can simply
> subtract from 0.
> 
> Cc: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/memory.h | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 56be462c69ce..5552c8cba1e2 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -44,8 +44,7 @@
>   * VA_START - the first kernel virtual address.
>   */
>  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> -#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
> -					(UL(1) << (va)) + 1)
> +#define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
>  #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
>  #define KIMAGE_VADDR		(MODULES_END)
>  #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
> @@ -63,8 +62,7 @@
>  #else
>  #define VA_BITS_MIN		(VA_BITS)
>  #endif
> -#define _VA_START(va)		(UL(0xffffffffffffffff) - \
> -				(UL(1) << ((va) - 1)) + 1)
> +#define _VA_START(va)		(-(UL(1) << ((va) - 1)))

This didn't make any sense to me until I realised that we changed the
meaning of VA_START when flipping the VA space. Given that, this cleanup
looks sound to me.
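
(As a sanity check, for va = 48:

	0xffffffffffffffff - (UL(1) << 48) + 1	= 0xffff000000000000
	-(UL(1) << 48)				= 0xffff000000000000

so the two forms agree in 64-bit two's-complement arithmetic.)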

However...

VA_START used to be the start of the TTBR1 address space, which was what
the "first kernel virtual address" comment was trying to say. Now it's
the first non-linear kernel virtual address, which I think is very
confusing.

AFAICT, that change breaks at least:

* is_ttbr1_addr() -- now returns false for linear map addresses
* ptdump_check_wx() -- now skips the linear map
* ptdump_init() -- initialises start_address incorrectly.

... so could we please find a new name for the first non-linear address,
e.g. PAGE_END, and leave VA_START as the first TTBR1 address?
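
To make the is_ttbr1_addr() breakage concrete (the definition below is
an assumed sketch of the macro, not a quote):

	#define is_ttbr1_addr(addr)	((u64)(addr) >= VA_START)

With the flipped VA space the linear map sits below VA_START, so linear
map addresses, which are translated via TTBR1, now fail the check.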

Thanks,
Mark.


* Re: [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page()
  2019-08-14 11:17         ` Will Deacon
@ 2019-08-14 11:26           ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2019-08-14 11:26 UTC (permalink / raw)
  To: Will Deacon
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 12:17:22PM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 11:56:39AM +0100, Mark Rutland wrote:
> > On Wed, Aug 14, 2019 at 10:41:19AM +0100, Will Deacon wrote:
> > > On Wed, Aug 14, 2019 at 10:30:19AM +0100, Catalin Marinas wrote:
> > > > On Tue, Aug 13, 2019 at 06:01:44PM +0100, Will Deacon wrote:
> > > > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > > > index 47b4dc73b8bf..77074b3a1025 100644
> > > > > --- a/arch/arm64/include/asm/memory.h
> > > > > +++ b/arch/arm64/include/asm/memory.h
> > > > > @@ -313,19 +313,18 @@ static inline void *phys_to_virt(phys_addr_t x)
> > > > >  #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> > > > >  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> > > > >  #else
> > > > > -#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
> > > > > -#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> > > > > -
> > > > > -#define page_to_virt(page)	({					\
> > > > > -	unsigned long __addr =						\
> > > > > -		((__page_to_voff(page)) + PAGE_OFFSET);			\
> > > > > -	const void *__addr_tag =					\
> > > > > -		__tag_set((void *)__addr, page_kasan_tag(page));	\
> > > > > -	((void *)__addr_tag);						\
> > > > > +#define page_to_virt(x)	({						\
> > > > > +	__typeof__(x) __page = x;					\
> > > > 
> > > > Why not struct page * directly here?
> > > 
> > > I started out with that, but then you have to deal with const struct page *
> > > as well and it gets pretty messy.
> > 
> > What goes wrong if you always use const struct page *__page?
> 
> It would probably work, but then I wondered about the possibility of
> volatile and decided that __typeof__ was cleaner.

Ok, but I'd suggest that a volatile struct page * is never sane to use
in the first place.

If you don't want to change this, then no worries -- I just can't
follow.

Thanks,
Mark.


* Re: [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing
  2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
                   ` (9 preceding siblings ...)
  2019-08-14  8:01 ` Geert Uytterhoeven
@ 2019-08-14 11:29 ` Mark Rutland
  10 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2019-08-14 11:29 UTC (permalink / raw)
  To: Will Deacon
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Tue, Aug 13, 2019 at 06:01:41PM +0100, Will Deacon wrote:
> Hi all,
> 
> This patch series addresses some issues with 52-bit kernel VAs reported
> by Qian Cai and Geert. It's all confined to asm/memory.h and I got a bit
> carried away cleaning that thing up so the patches get more worthless
> as you go through the series. Still, I'd like to queue this on top of
> the 52-bit VA stuff currently sitting in -next.
> 
> Although Geert and Steve tested my initial hacks, I dropped the tags
> because I've split things up and could've easily broken things again.
> 
> Cheers,
> 
> Will
> 
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> Cc: Qian Cai <cai@lca.pw>
> Cc: Andrey Konovalov <andreyknvl@google.com>
> Cc: Geert Uytterhoeven <geert@linux-m68k.org>

Other than the comments I've made, for the series:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

If you want, I can spin a fix / cleanup for the VA_START issues I
mentioned on top of this series.

Thanks,
Mark.

> 
> --->8
> 
> Will Deacon (8):
>   arm64: memory: Fix virt_addr_valid() using __is_lm_address()
>   arm64: memory: Ensure address tag is masked in conversion macros
>   arm64: memory: Rewrite default page_to_virt()/virt_to_page()
>   arm64: memory: Simplify virt_to_page() implementation
>   arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
>   arm64: memory: Implement __tag_set() as common function
>   arm64: memory: Add comments to end of non-trivial #ifdef blocks
>   arm64: memory: Cosmetic cleanups
> 
>  arch/arm64/include/asm/memory.h | 89 ++++++++++++++++++++---------------------
>  1 file changed, 44 insertions(+), 45 deletions(-)
> 
> -- 
> 2.11.0
> 


* Re: [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  2019-08-14 11:23   ` Mark Rutland
@ 2019-08-14 12:00     ` Will Deacon
  2019-08-14 13:18       ` Mark Rutland
  0 siblings, 1 reply; 44+ messages in thread
From: Will Deacon @ 2019-08-14 12:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 12:23:39PM +0100, Mark Rutland wrote:
> On Tue, Aug 13, 2019 at 06:01:46PM +0100, Will Deacon wrote:
> > Rather than subtracting from -1 and then adding 1, we can simply
> > subtract from 0.
> > 
> > Cc: Steve Capper <steve.capper@arm.com>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/memory.h | 6 ++----
> >  1 file changed, 2 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 56be462c69ce..5552c8cba1e2 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -44,8 +44,7 @@
> >   * VA_START - the first kernel virtual address.
> >   */
> >  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> > -#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
> > -					(UL(1) << (va)) + 1)
> > +#define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
> >  #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
> >  #define KIMAGE_VADDR		(MODULES_END)
> >  #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
> > @@ -63,8 +62,7 @@
> >  #else
> >  #define VA_BITS_MIN		(VA_BITS)
> >  #endif
> > -#define _VA_START(va)		(UL(0xffffffffffffffff) - \
> > -				(UL(1) << ((va) - 1)) + 1)
> > +#define _VA_START(va)		(-(UL(1) << ((va) - 1)))
> 
> This didn't make any sense to me until I realised that we changed the
> meaning of VA_START when flippnig the VA space. Given that, this cleanup
> looks sound to me.
> 
> However...
> 
> VA_START used to be the start of the TTBR1 address space, which was what
> the "first kernel virtual address" comment was trying to say. Now it's
> the first non-linear kernel virtual addres, which I think is very
> confusing.
> 
> AFAICT, that change breaks at least:
> 
> * is_ttbr1_addr() -- now returns false for linear map addresses
> * ptdump_check_wx() -- now skips the linear map
> * ptdump_init() -- initialises start_address inccorrectly.
> 
> ... so could we please find a new name for the first non-linear address,
> e.g. PAGE_END, and leave VA_START as the first TTBR1 address?

I think VA_START becomes PAGE_END and then things like is_ttbr1_addr()
just refer to PAGE_OFFSET instead. ptdump_init() looks ok to me, but I could
be missing something.

Anyway, these seem to be comments on the original patches from Steve rather
than my fixes, so please send additional fixes on top. I'll push out an
updated branch for you to work with...

Will


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-14 10:40       ` Catalin Marinas
@ 2019-08-14 12:02         ` Will Deacon
  0 siblings, 0 replies; 44+ messages in thread
From: Will Deacon @ 2019-08-14 12:02 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 11:40:23AM +0100, Catalin Marinas wrote:
> On Wed, Aug 14, 2019 at 10:48:20AM +0100, Will Deacon wrote:
> > On Wed, Aug 14, 2019 at 10:19:42AM +0100, Catalin Marinas wrote:
> > > On Tue, Aug 13, 2019 at 06:01:42PM +0100, Will Deacon wrote:
> > > > -#define _virt_addr_is_linear(kaddr)	\
> > > > -	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
> > > > +#define virt_addr_valid(addr)	({					\
> > > > +	__typeof__(addr) __addr = addr;					\
> > > > +	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> > > > +})
> > > 
> > > There is a slight change of semantics here but I don't think it's an
> > > issue currently. __is_lm_address() is true even for a user address, so
> > > at least the first part of virt_addr_valid() now accepts such addresses.
> > > The pfn would eventually be wrong because of the virt-to-phys offsetting,
> > > and pfn_valid() would return false, but we rely on that rather than
> > > checking that it's a kernel address. Slight concern, as this macro is
> > > called from drivers.
> > > 
> > > Should we keep the PAGE_OFFSET check as well?
> > 
> > In virt_addr_valid() or __is_lm_address()?
> > 
> > To be honest with you, I'm not even sure what virt_addr_valid() is supposed
> > to do with non-linear kernel addresses: look at powerpc and riscv, who
> > appear to convert the address straight to a pfn. Many callers check against
> > is_vmalloc_addr() first, but not all of them.
> 
> Even if they call is_vmalloc_addr(), it would return false for a user
> address. Anyway, at a quick look, I couldn't find any virt_addr_valid()
> call site where it would be an issue.

Sure, but my point was more that it would be crazy to have a macro that
accepted user and linear addresses, but not vmalloc addresses!

> > I think passing in a *user* address would be a huge bug in the caller,
> > just like it would be if you called virt_to_phys() on a user address.
> > If we care about that, then I think __is_lm_address() should be the one
> > doing the check against PAGE_OFFSET.
> > 
> > Thoughts? I'd be inclined to leave this patch as it is.
> 
> Leave it as it is. The way pfn_valid() is written it wouldn't return
> true for a user address due to the fact that virt_to_phys() cannot
> return the same physical address for a user and linear map one.
> 
> For this patch:
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Ta,

Will


* Re: [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions
  2019-08-14 12:00     ` Will Deacon
@ 2019-08-14 13:18       ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2019-08-14 13:18 UTC (permalink / raw)
  To: Will Deacon
  Cc: Steve Capper, Andrey Konovalov, Geert Uytterhoeven,
	Catalin Marinas, Qian Cai, linux-arm-kernel

On Wed, Aug 14, 2019 at 01:00:00PM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 12:23:39PM +0100, Mark Rutland wrote:
> > On Tue, Aug 13, 2019 at 06:01:46PM +0100, Will Deacon wrote:
> > > Rather than subtracting from -1 and then adding 1, we can simply
> > > subtract from 0.
> > > 
> > > Cc: Steve Capper <steve.capper@arm.com>
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/memory.h | 6 ++----
> > >  1 file changed, 2 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index 56be462c69ce..5552c8cba1e2 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -44,8 +44,7 @@
> > >   * VA_START - the first kernel virtual address.
> > >   */
> > >  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> > > -#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
> > > -					(UL(1) << (va)) + 1)
> > > +#define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
> > >  #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
> > >  #define KIMAGE_VADDR		(MODULES_END)
> > >  #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
> > > @@ -63,8 +62,7 @@
> > >  #else
> > >  #define VA_BITS_MIN		(VA_BITS)
> > >  #endif
> > > -#define _VA_START(va)		(UL(0xffffffffffffffff) - \
> > > -				(UL(1) << ((va) - 1)) + 1)
> > > +#define _VA_START(va)		(-(UL(1) << ((va) - 1)))
> > 
> > This didn't make any sense to me until I realised that we changed the
> > meaning of VA_START when flipping the VA space. Given that, this cleanup
> > looks sound to me.
> > 
> > However...
> > 
> > VA_START used to be the start of the TTBR1 address space, which was what
> > the "first kernel virtual address" comment was trying to say. Now it's
> > the first non-linear kernel virtual addres, which I think is very
> > confusing.
> > 
> > AFAICT, that change breaks at least:
> > 
> > * is_ttbr1_addr() -- now returns false for linear map addresses
> > * ptdump_check_wx() -- now skips the linear map
> > * ptdump_init() -- initialises start_address incorrectly.
> > 
> > ... so could we please find a new name for the first non-linear address,
> > e.g. PAGE_END, and leave VA_START as the first TTBR1 address?
> 
> I think VA_START becomes PAGE_END and then things like is_ttbr1_addr()
> just refer to PAGE_OFFSET instead. ptdump_init() looks ok to me, but I could
> be missing something.

Yes; you're right about ptdump_init().

> Anyway, these seem to be comments on the original patches from Steve rather
> than my fixes, so please send additional fixes on top. I'll push out an
> updated branch for you to work with...

Sure, I'll post a couple of patches momentarily...

Mark.


* Re: [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address()
  2019-08-13 20:34         ` Steve Capper
@ 2019-08-14 15:17           ` Ard Biesheuvel
  0 siblings, 0 replies; 44+ messages in thread
From: Ard Biesheuvel @ 2019-08-14 15:17 UTC (permalink / raw)
  To: Steve Capper
  Cc: Mark Rutland, Catalin Marinas, Qian Cai, Andrey Konovalov,
	Geert Uytterhoeven, nd, Will Deacon, linux-arm-kernel

On Tue, 13 Aug 2019 at 23:34, Steve Capper <Steve.Capper@arm.com> wrote:
>
> On Tue, Aug 13, 2019 at 10:25:14PM +0300, Ard Biesheuvel wrote:
> > On Tue, 13 Aug 2019 at 22:11, Steve Capper <Steve.Capper@arm.com> wrote:
> > >
> > > Hi Ard,
> > >
> > > On Tue, Aug 13, 2019 at 09:09:16PM +0300, Ard Biesheuvel wrote:
> > > > On Tue, 13 Aug 2019 at 20:02, Will Deacon <will@kernel.org> wrote:
> > > > >
> > > > > virt_addr_valid() is intended to test whether or not the passed address
> > > > > is a valid linear map address. Unfortunately, it relies on
> > > > > _virt_addr_is_linear() which is broken because it assumes the linear
> > > > > map is at the top of the address space, which it no longer is.
> > > > >
> > > > > Reimplement virt_addr_valid() using __is_lm_address() and remove
> > > > > _virt_addr_is_linear() entirely. At the same time, ensure we evaluate
> > > > > the macro parameter only once and move it within the __ASSEMBLY__ block.
> > > > >
> > > > > Reported-by: Qian Cai <cai@lca.pw>
> > > > > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > > > > Fixes: 14c127c957c1 ("arm64: mm: Flip kernel VA space")
> > > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > > ---
> > > > >  arch/arm64/include/asm/memory.h | 14 +++++++-------
> > > > >  1 file changed, 7 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > > > index afaf512c0e1b..442ab861cab8 100644
> > > > > --- a/arch/arm64/include/asm/memory.h
> > > > > +++ b/arch/arm64/include/asm/memory.h
> > > > > @@ -244,9 +244,9 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > > > >  /*
> > > > >   * The linear kernel range starts in the middle of the virtual address
> > > > >   * space.
> > > >
> > > > This is no longer true either.
> > > >
> > >
> > > Whoops, agreed.
> > >
> > > > > Testing the top bit for the start of the region is a
> > > > > - * sufficient check.
> > > > > + * sufficient check and avoids having to worry about the tag.
> > > > >   */
> > > > > -#define __is_lm_address(addr)  (!((addr) & BIT(vabits_actual - 1)))
> > > > > +#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
> > > > >
> > > >
> > > > ... and this assumes that the VA space is split evenly between linear
> > > > and vmalloc/vmemmap/etc, which is no longer true when running with
> > > > 52-bit VAs
> > > >
> > >
> > > For 52-bit VAs we have two possibilities:
> > >   Start                 End                     Size            Use
> > >   -----------------------------------------------------------------------
> > >   0000000000000000      000fffffffffffff           4PB          user
> > >   fff0000000000000      fff7ffffffffffff           2PB          kernel logical memory map
> > >   fff8000000000000      fffd9fffffffffff        1440TB          [gap]
> >
> > Right. I missed the part where we throw away 1/3 of the VA space:
> > IIRC, the idea was that keeping the size of the upper half of the
> > 48-bit VA space fixed for 52-bit not only allowed compile-time
> > constant addresses to be used for many of the things that populate it,
> > but also made a lot more VA space available to the linear region,
> > which is where we need it the most.
> >
> > >   fffda00000000000      ffff9fffffffffff         512TB          kasan shadow region
> > >
> > > and
> > >   Start                        End                     Size            Use
> > >   -----------------------------------------------------------------------
> > >   0000000000000000     0000ffffffffffff         256TB          user
> > >   ffff000000000000     ffff7fffffffffff         128TB          kernel logical memory map
> > >   ffff800000000000     ffff9fffffffffff          32TB          kasan shadow region
> > >   ffffa00000000000     ffffa00007ffffff         128MB          bpf jit region
> > >
> > > IIUC the definition for __is_lm_address is correct for these cases?
> > > (it's based off vabits_actual).
> > >
> >
> > With the gap taken into account, it is correct. But throwing away 1440
> > TB of address space seems suboptimal to me.
>
> When getting the 52-bit kernel VA support ready, I was trying to achieve
> functional and performant support in as few steps as possible, to avoid the
> risk of breaking things (unfortunately I missed a couple of SW KASAN issues
> between rebases). The big gain from that series is support for
> a much larger linear mapping.
>

Sure.

> The best way I can think of to get rid of the gap is to use it for
> vmalloc space, which would mean changing VMALLOC_START and VMALLOC_END. I
> think it would be better to make this change incrementally and I'm more
> than happy to get hacking on a patch. Or maybe there's a better use for
> the gap in other areas...
>

Given that it is only the MMU hardware that is preventing us from
running a 52-bit VA kernel on 48-bit VA hardware, I still don't follow
100% why we have all these moving parts. If we start the address space
at 0xfff0_... and start the vmalloc/vmemmap/etc space at 0xffff_8....
(and make sure the vmemmap space is sized so it can cover the entire
span), all we should need to do is make sure we don't map any memory
below 0xffff_0... if the hardware does not support that. I don't think
there is any need to make the start of physical memory coincide with
the lowest kernel VA supported by the hardware, or have other variable
quantities that assume different values based on the actual h/w
config.
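
Concretely, the fixed layout being described would look something like this
(illustrative constants and names only, a sketch of the proposal rather than
what the series implements):

	#include <stdio.h>

	#define PAGE_OFFSET	0xfff0000000000000UL /* linear map start, fixed */
	#define VMALLOC_START	0xffff800000000000UL /* vmalloc/vmemmap/..., fixed */

	int main(void)
	{
		/* On 48-bit hardware the kernel would simply never hand out
		 * linear map VAs below 0xffff000000000000; neither constant
		 * varies with the hardware VA size. */
		printf("linear map: [%#lx, %#lx)\n", PAGE_OFFSET, VMALLOC_START);
		return 0;
	}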

[Apologies for not bringing this up earlier. I am not actually working atm :-)]
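
To make the earlier layout discussion concrete, here is a small host-side
sketch of the fixed-up __is_lm_address() test against the two tables Steve
quotes (illustrative only; the helper and its vabits_actual parameter are
stand-ins for the kernel macro and variable):

	#include <stdio.h>
	#include <stdint.h>

	/* An address is in the linear map iff the top bit of the TTBR1
	 * VA range, bit (vabits_actual - 1), is clear. */
	static int is_lm_address(uint64_t addr, unsigned int vabits_actual)
	{
		return !(addr & (1ULL << (vabits_actual - 1)));
	}

	int main(void)
	{
		/* 52-bit: linear map at fff0..., kasan shadow from fffda... */
		printf("%d\n", is_lm_address(0xfff0000000000000ULL, 52)); /* 1 */
		printf("%d\n", is_lm_address(0xfffda00000000000ULL, 52)); /* 0 */
		/* 48-bit: linear map at ffff0..., the rest from ffff8... */
		printf("%d\n", is_lm_address(0xffff000000000000ULL, 48)); /* 1 */
		printf("%d\n", is_lm_address(0xffff800000000000ULL, 48)); /* 0 */
		return 0;
	}

The 1440TB gap figure also falls out of the quoted table:
0xfffda00000000000 - 0xfff8000000000000 = 0x5a00000000000 bytes, i.e.
90 * 16TB = 1440TB.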



Thread overview: 44+ messages
2019-08-13 17:01 [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Will Deacon
2019-08-13 17:01 ` [PATCH 1/8] arm64: memory: Fix virt_addr_valid() using __is_lm_address() Will Deacon
2019-08-13 18:09   ` Ard Biesheuvel
2019-08-13 19:11     ` Steve Capper
2019-08-13 19:25       ` Ard Biesheuvel
2019-08-13 20:34         ` Steve Capper
2019-08-14 15:17           ` Ard Biesheuvel
2019-08-14  8:32       ` Will Deacon
2019-08-13 18:53   ` Steve Capper
2019-08-14  9:19   ` Catalin Marinas
2019-08-14  9:48     ` Will Deacon
2019-08-14 10:40       ` Catalin Marinas
2019-08-14 12:02         ` Will Deacon
2019-08-13 17:01 ` [PATCH 2/8] arm64: memory: Ensure address tag is masked in conversion macros Will Deacon
2019-08-13 18:54   ` Steve Capper
2019-08-14  9:23   ` Catalin Marinas
2019-08-13 17:01 ` [PATCH 3/8] arm64: memory: Rewrite default page_to_virt()/virt_to_page() Will Deacon
2019-08-13 18:54   ` Steve Capper
2019-08-14  9:30   ` Catalin Marinas
2019-08-14  9:41     ` Will Deacon
2019-08-14 10:56       ` Mark Rutland
2019-08-14 11:17         ` Will Deacon
2019-08-14 11:26           ` Mark Rutland
2019-08-13 17:01 ` [PATCH 4/8] arm64: memory: Simplify virt_to_page() implementation Will Deacon
2019-08-13 18:55   ` Steve Capper
2019-08-14  9:32   ` Catalin Marinas
2019-08-13 17:01 ` [PATCH 5/8] arm64: memory: Simplify _VA_START and _PAGE_OFFSET definitions Will Deacon
2019-08-13 18:55   ` Steve Capper
2019-08-14  9:33   ` Catalin Marinas
2019-08-14 11:23   ` Mark Rutland
2019-08-14 12:00     ` Will Deacon
2019-08-14 13:18       ` Mark Rutland
2019-08-13 17:01 ` [PATCH 6/8] arm64: memory: Implement __tag_set() as common function Will Deacon
2019-08-13 18:56   ` Steve Capper
2019-08-14  9:34   ` Catalin Marinas
2019-08-13 17:01 ` [PATCH 7/8] arm64: memory: Add comments to end of non-trivial #ifdef blocks Will Deacon
2019-08-13 18:57   ` Steve Capper
2019-08-14  9:35   ` Catalin Marinas
2019-08-13 17:01 ` [PATCH 8/8] arm64: memory: Cosmetic cleanups Will Deacon
2019-08-13 18:57   ` Steve Capper
2019-08-14  9:35   ` Catalin Marinas
2019-08-13 18:53 ` [PATCH 0/8] Fix issues with 52-bit kernel virtual addressing Steve Capper
2019-08-14  8:01 ` Geert Uytterhoeven
2019-08-14 11:29 ` Mark Rutland
