linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/2] arm64: memory: VA_START fixups
@ 2019-08-14 13:28 Mark Rutland
  2019-08-14 13:28 ` [PATCH 1/2] arm64: memory: fix flipped VA space fallout Mark Rutland
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Mark Rutland @ 2019-08-14 13:28 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: mark.rutland, catalin.marinas, will, steve.capper

Hi all,

These patches address my concerns with the new VA_START semantic that I
spotted while reviewing Will's 52-bit VA cleanup. The first patch
corrects the newly broken usage of VA_START, and the second renames
VA_START to PAGE_END to make the new semantic clearer.

Both patches are based on the arm64 for-next/52-bit-kva branch, and I've
given a 52-bit VA configuration a build+boot test (on HW without 52-bit
VA support).

Thanks,
Mark.

Mark Rutland (2):
  arm64: memory: fix flipped VA space fallout
  arm64: memory: rename VA_START to PAGE_END

 arch/arm64/include/asm/memory.h  | 20 ++++++++++----------
 arch/arm64/include/asm/pgtable.h |  4 ++--
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/mm/dump.c             |  8 ++++----
 arch/arm64/mm/fault.c            |  2 +-
 arch/arm64/mm/kasan_init.c       |  2 +-
 arch/arm64/mm/mmu.c              |  4 ++--
 7 files changed, 21 insertions(+), 21 deletions(-)

-- 
2.11.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH 1/2] arm64: memory: fix flipped VA space fallout
  2019-08-14 13:28 [PATCH 0/2] arm64: memory: VA_START fixups Mark Rutland
@ 2019-08-14 13:28 ` Mark Rutland
  2019-08-14 14:51   ` Steve Capper
  2019-08-14 13:28 ` [PATCH 2/2] arm64: memory: rename VA_START to PAGE_END Mark Rutland
  2019-08-14 14:51 ` [PATCH 0/2] arm64: memory: VA_START fixups Steve Capper
  2 siblings, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2019-08-14 13:28 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: mark.rutland, catalin.marinas, will, steve.capper

VA_START used to be the start of the TTBR1 address space, but now it's a
point midway through it. In a couple of places we still use VA_START to get
the start of the TTBR1 address space, so let's fix these up to use
PAGE_OFFSET instead.

Fixes: 14c127c957c1c607 ("arm64: mm: Flip kernel VA space")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/mm/dump.c  | 2 +-
 arch/arm64/mm/fault.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 6ec75305828e..8e10b4ba215a 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -400,7 +400,7 @@ void ptdump_check_wx(void)
 		.check_wx = true,
 	};
 
-	walk_pgd(&st, &init_mm, VA_START);
+	walk_pgd(&st, &init_mm, PAGE_OFFSET);
 	note_page(&st, 0, 0, 0);
 	if (st.wx_pages || st.uxn_pages)
 		pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n",
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 75eff57bd9ef..bb4e4f3fffd8 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -109,7 +109,7 @@ static inline bool is_ttbr0_addr(unsigned long addr)
 static inline bool is_ttbr1_addr(unsigned long addr)
 {
 	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
-	return arch_kasan_reset_tag(addr) >= VA_START;
+	return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
 }
 
 /*
-- 
2.11.0



* [PATCH 2/2] arm64: memory: rename VA_START to PAGE_END
  2019-08-14 13:28 [PATCH 0/2] arm64: memory: VA_START fixups Mark Rutland
  2019-08-14 13:28 ` [PATCH 1/2] arm64: memory: fix flipped VA space fallout Mark Rutland
@ 2019-08-14 13:28 ` Mark Rutland
  2019-08-14 14:51   ` Steve Capper
  2019-08-14 14:51 ` [PATCH 0/2] arm64: memory: VA_START fixups Steve Capper
  2 siblings, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2019-08-14 13:28 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: mark.rutland, catalin.marinas, will, steve.capper

Prior to commit:

  14c127c957c1c607 ("arm64: mm: Flip kernel VA space")

... VA_START described the start of the TTBR1 address space for a given
VA size described by VA_BITS, where all kernel mappings began.

Since that commit, VA_START has described a point midway through the
address space, where the linear map ends and other kernel mappings
begin.

To avoid confusion, let's rename VA_START to PAGE_END, making it clear
that it's not the start of the TTBR1 address space and implying that
it's related to PAGE_OFFSET. Comments and other mnemonics are updated
accordingly, along with a typo fix in the description of VMEMMAP_SIZE.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/memory.h  | 20 ++++++++++----------
 arch/arm64/include/asm/pgtable.h |  4 ++--
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/mm/dump.c             |  6 +++---
 arch/arm64/mm/kasan_init.c       |  2 +-
 arch/arm64/mm/mmu.c              |  4 ++--
 6 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d69c2865ae40..a713bad71db5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -28,20 +28,20 @@
  *                a struct page array
  *
  * If we are configured with a 52-bit kernel VA then our VMEMMAP_SIZE
- * neads to cover the memory region from the beginning of the 52-bit
- * PAGE_OFFSET all the way to VA_START for 48-bit. This allows us to
+ * needs to cover the memory region from the beginning of the 52-bit
+ * PAGE_OFFSET all the way to PAGE_END for 48-bit. This allows us to
  * keep a constant PAGE_OFFSET and "fallback" to using the higher end
  * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SIZE ((_VA_START(VA_BITS_MIN) - PAGE_OFFSET) \
+#define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) \
 			>> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))
 
 /*
- * PAGE_OFFSET - the virtual address of the start of the linear map (top
- *		 (VA_BITS - 1))
- * KIMAGE_VADDR - the virtual address of the start of the kernel image
+ * PAGE_OFFSET - the virtual address of the start of the linear map, at the
+ *               start of the TTBR1 address space.
+ * PAGE_END - the end of the linear map, where all other kernel mappings begin.
+ * KIMAGE_VADDR - the virtual address of the start of the kernel image.
  * VA_BITS - the maximum number of bits for virtual addresses.
- * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
@@ -64,7 +64,7 @@
 #define VA_BITS_MIN		(VA_BITS)
 #endif
 
-#define _VA_START(va)		(-(UL(1) << ((va) - 1)))
+#define _PAGE_END(va)		(-(UL(1) << ((va) - 1)))
 
 #define KERNEL_START		_text
 #define KERNEL_END		_end
@@ -87,7 +87,7 @@
 #define KASAN_THREAD_SHIFT	1
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
+#define KASAN_SHADOW_END	(_PAGE_END(VA_BITS_MIN))
 #endif /* CONFIG_KASAN */
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
@@ -173,7 +173,7 @@
 
 #ifndef __ASSEMBLY__
 extern u64			vabits_actual;
-#define VA_START		(_VA_START(vabits_actual))
+#define PAGE_END		(_PAGE_END(vabits_actual))
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4a695b9ee0f0..979e24fadf35 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -856,8 +856,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 
 #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
 
-#define kc_vaddr_to_offset(v)	((v) & ~VA_START)
-#define kc_offset_to_vaddr(o)	((o) | VA_START)
+#define kc_vaddr_to_offset(v)	((v) & ~PAGE_END)
+#define kc_offset_to_vaddr(o)	((o) | PAGE_END)
 
 #ifdef CONFIG_ARM64_PA_BITS_52
 #define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index e130db05d932..e0a7fce0e01c 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -496,7 +496,7 @@ int swsusp_arch_resume(void)
 		rc = -ENOMEM;
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
+	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, PAGE_END);
 	if (rc)
 		goto out;
 
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 8e10b4ba215a..93f9f77582ae 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -28,7 +28,7 @@
 
 enum address_markers_idx {
 	PAGE_OFFSET_NR = 0,
-	VA_START_NR,
+	PAGE_END_NR,
 #ifdef CONFIG_KASAN
 	KASAN_START_NR,
 #endif
@@ -36,7 +36,7 @@ enum address_markers_idx {
 
 static struct addr_marker address_markers[] = {
 	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ 0 /* VA_START */,		"Linear Mapping end" },
+	{ 0 /* PAGE_END */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -411,7 +411,7 @@ void ptdump_check_wx(void)
 
 static int ptdump_init(void)
 {
-	address_markers[VA_START_NR].start_address = VA_START;
+	address_markers[PAGE_END_NR].start_address = PAGE_END;
 #ifdef CONFIG_KASAN
 	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
 #endif
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 725222271474..f87a32484ea8 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -226,7 +226,7 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
 				   (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
 				   (void *)KASAN_SHADOW_END);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 0c8f7e55f859..8e4b7eaff8ce 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -399,7 +399,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
+	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -426,7 +426,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
+	if ((virt >= PAGE_END) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
-- 
2.11.0



* Re: [PATCH 2/2] arm64: memory: rename VA_START to PAGE_END
  2019-08-14 13:28 ` [PATCH 2/2] arm64: memory: rename VA_START to PAGE_END Mark Rutland
@ 2019-08-14 14:51   ` Steve Capper
  0 siblings, 0 replies; 6+ messages in thread
From: Steve Capper @ 2019-08-14 14:51 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Catalin Marinas, nd, will, linux-arm-kernel

On Wed, Aug 14, 2019 at 02:28:48PM +0100, Mark Rutland wrote:
> Prior to commit:
> 
>   14c127c957c1c607 ("arm64: mm: Flip kernel VA space")
> 
> ... VA_START described the start of the TTBR1 address space for a given
> VA size described by VA_BITS, where all kernel mappings began.
> 
> Since that commit, VA_START has described a point midway through the
> address space, where the linear map ends and other kernel mappings
> begin.
> 
> To avoid confusion, let's rename VA_START to PAGE_END, making it clear
> that it's not the start of the TTBR1 address space and implying that
> it's related to PAGE_OFFSET. Comments and other mnemonics are updated
> accordingly, along with a typo fix in the description of VMEMMAP_SIZE.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> Cc: Will Deacon <will@kernel.org>

This looks a lot better, and avoids future use of "VA_START", thanks Mark!

Reviewed-by: Steve Capper <steve.capper@arm.com>


* Re: [PATCH 1/2] arm64: memory: fix flipped VA space fallout
  2019-08-14 13:28 ` [PATCH 1/2] arm64: memory: fix flipped VA space fallout Mark Rutland
@ 2019-08-14 14:51   ` Steve Capper
  0 siblings, 0 replies; 6+ messages in thread
From: Steve Capper @ 2019-08-14 14:51 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Catalin Marinas, nd, will, linux-arm-kernel

On Wed, Aug 14, 2019 at 02:28:47PM +0100, Mark Rutland wrote:
> VA_START used to be the start of the TTBR1 address space, but now it's a
> point midway through it. In a couple of places we still use VA_START to get
> the start of the TTBR1 address space, so let's fix these up to use
> PAGE_OFFSET instead.
> 
> Fixes: 14c127c957c1c607 ("arm64: mm: Flip kernel VA space")
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Steve Capper <steve.capper@arm.com>

> ---
>  arch/arm64/mm/dump.c  | 2 +-
>  arch/arm64/mm/fault.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
> index 6ec75305828e..8e10b4ba215a 100644
> --- a/arch/arm64/mm/dump.c
> +++ b/arch/arm64/mm/dump.c
> @@ -400,7 +400,7 @@ void ptdump_check_wx(void)
>  		.check_wx = true,
>  	};
>  
> -	walk_pgd(&st, &init_mm, VA_START);
> +	walk_pgd(&st, &init_mm, PAGE_OFFSET);
>  	note_page(&st, 0, 0, 0);
>  	if (st.wx_pages || st.uxn_pages)
>  		pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n",
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 75eff57bd9ef..bb4e4f3fffd8 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -109,7 +109,7 @@ static inline bool is_ttbr0_addr(unsigned long addr)
>  static inline bool is_ttbr1_addr(unsigned long addr)
>  {
>  	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
> -	return arch_kasan_reset_tag(addr) >= VA_START;
> +	return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
>  }
>  
>  /*
> -- 
> 2.11.0
> 


* Re: [PATCH 0/2] arm64: memory: VA_START fixups
  2019-08-14 13:28 [PATCH 0/2] arm64: memory: VA_START fixups Mark Rutland
  2019-08-14 13:28 ` [PATCH 1/2] arm64: memory: fix flipped VA space fallout Mark Rutland
  2019-08-14 13:28 ` [PATCH 2/2] arm64: memory: rename VA_START to PAGE_END Mark Rutland
@ 2019-08-14 14:51 ` Steve Capper
  2 siblings, 0 replies; 6+ messages in thread
From: Steve Capper @ 2019-08-14 14:51 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Catalin Marinas, nd, will, linux-arm-kernel

On Wed, Aug 14, 2019 at 02:28:46PM +0100, Mark Rutland wrote:
> Hi all,

Hi Mark,

> 
> These patches address my concerns with the new VA_START semantic that I
> spotted while reviewing Will's 52-bit VA cleanup. The first patch
> corrects the newly broken usage of VA_START, and the second renames
> VA_START to PAGE_END to make the new semantic clearer.
> 
> Both patches are based on the arm64 for-next/52-bit-kva branch, and I've
> given a 52-bit VA configuration a build+boot test (on HW without 52-bit
> VA support).
> 

A big thank you for this!
I have applied this series and tested it with CONFIG_DEBUG_VIRTUAL,
CONFIG_DEBUG_VM and KASAN SW TAGS.

I've tested this with 52-bit and have given the kernel page table dumper
a go too.

FWIW:
Tested-by: Steve Capper <steve.capper@arm.com>

Cheers,
-- 
Steve


end of thread, other threads:[~2019-08-14 14:52 UTC | newest]

Thread overview: 6+ messages
-- links below jump to the message on this page --
2019-08-14 13:28 [PATCH 0/2] arm64: memory: VA_START fixups Mark Rutland
2019-08-14 13:28 ` [PATCH 1/2] arm64: memory: fix flipped VA space fallout Mark Rutland
2019-08-14 14:51   ` Steve Capper
2019-08-14 13:28 ` [PATCH 2/2] arm64: memory: rename VA_START to PAGE_END Mark Rutland
2019-08-14 14:51   ` Steve Capper
2019-08-14 14:51 ` [PATCH 0/2] arm64: memory: VA_START fixups Steve Capper
