From: Ryan Roberts <ryan.roberts@arm.com>
To: Ard Biesheuvel <ardb@kernel.org>, linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Kees Cook <keescook@chromium.org>
Subject: Re: [PATCH v3 35/60] arm64: mm: Use 48-bit virtual addressing for the permanent ID map
Date: Tue, 18 Apr 2023 11:22:25 +0100	[thread overview]
Message-ID: <73a89872-092e-4794-3956-71afe653dae0@arm.com> (raw)
In-Reply-To: <20230307140522.2311461-36-ardb@kernel.org>

On 07/03/2023 14:04, Ard Biesheuvel wrote:
> Even though we support loading kernels anywhere in 48-bit addressable
> physical memory, we create the ID maps based on the number of levels
> that we happened to configure for the kernel VA and user VA spaces.
> 
> The reason for this is that the PGD/PUD/PMD based classification of
> translation levels, along with the associated folding when the number of
> levels is less than 5, does not permit creating a page table hierarchy
> of a set number of levels. This means that, for instance, on 39-bit VA
> kernels we need to configure an additional level above PGD level on the
> fly, and 36-bit VA kernels still only support 47-bit virtual addressing
> with this trick applied.
> 
> Now that we have a separate helper to populate page table hierarchies
> that does not define the levels in terms of PUDs/PMDs/etc. at all, let's
> reuse it to create the permanent ID map with a fixed VA size of 48 bits.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/kernel-pgtable.h |  2 ++
>  arch/arm64/kernel/head.S                |  5 +++
>  arch/arm64/kvm/mmu.c                    | 15 +++------
>  arch/arm64/mm/mmu.c                     | 32 +++++++++++---------
>  arch/arm64/mm/proc.S                    |  9 ++----
>  5 files changed, 31 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 50b5c145358a5d8e..2a2c80ffe59e5307 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -35,6 +35,8 @@
>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
>  #endif
>  
> +#define IDMAP_LEVELS		ARM64_HW_PGTABLE_LEVELS(48)
> +#define IDMAP_ROOT_LEVEL	(4 - IDMAP_LEVELS)
>  
>  /*
>   * If KASLR is enabled, then an offset K is added to the kernel address
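
For reference, if I'm reading pgtable-hwdef.h right,
ARM64_HW_PGTABLE_LEVELS(va_bits) is ((va_bits - 4) / (PAGE_SHIFT - 3)),
so this works out as:

  4K  pages: IDMAP_LEVELS = 44 / 9  = 4, IDMAP_ROOT_LEVEL = 0
  16K pages: IDMAP_LEVELS = 44 / 11 = 4, IDMAP_ROOT_LEVEL = 0
  64K pages: IDMAP_LEVELS = 44 / 13 = 3, IDMAP_ROOT_LEVEL = 1

i.e. the ID map always gets the full set of levels a 48-bit VA needs at
the configured granule, independent of CONFIG_PGTABLE_LEVELS.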
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index e45fd99e8ab4272a..fc6a4076d826b728 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -727,6 +727,11 @@ SYM_FUNC_START_LOCAL(__no_granule_support)
>  SYM_FUNC_END(__no_granule_support)
>  
>  SYM_FUNC_START_LOCAL(__primary_switch)
> +	mrs		x1, tcr_el1
> +	mov		x2, #64 - VA_BITS
> +	tcr_set_t0sz	x1, x2
> +	msr		tcr_el1, x1
> +
>  	adrp	x1, reserved_pg_dir
>  	adrp	x2, init_idmap_pg_dir
>  	bl	__enable_mmu
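
IIUC this is needed because __cpu_setup now programs T0SZ for 48 bits
unconditionally (see the proc.S hunk below), so the configured VA size
has to be restored somewhere; e.g. for VA_BITS=39 this sets
T0SZ = 64 - 39 = 25, giving back the 39-bit TTBR0 range.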
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 7113587222ffe8e1..d64be7b5f6692e8b 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1687,16 +1687,9 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
>  	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
>  
>  	/*
> -	 * The ID map may be configured to use an extended virtual address
> -	 * range. This is only the case if system RAM is out of range for the
> -	 * currently configured page size and VA_BITS_MIN, in which case we will
> -	 * also need the extended virtual range for the HYP ID map, or we won't
> -	 * be able to enable the EL2 MMU.
> -	 *
> -	 * However, in some cases the ID map may be configured for fewer than
> -	 * the number of VA bits used by the regular kernel stage 1. This
> -	 * happens when VA_BITS=52 and the kernel image is placed in PA space
> -	 * below 48 bits.
> +	 * The ID map is always configured for 48 bits of translation, which
> +	 * may be fewer than the number of VA bits used by the regular kernel
> +	 * stage 1, when VA_BITS=52.
>  	 *
>  	 * At EL2, there is only one TTBR register, and we can't switch between
>  	 * translation tables *and* update TCR_EL2.T0SZ at the same time. Bottom
> @@ -1707,7 +1700,7 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
>  	 * 1 VA bits to assure that the hypervisor can both ID map its code page
>  	 * and map any kernel memory.
>  	 */
> -	idmap_bits = 64 - ((idmap_t0sz & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET);
> +	idmap_bits = 48;
>  	kernel_bits = vabits_actual;
>  	*hyp_va_bits = max(idmap_bits, kernel_bits);

This effectively means that the hypervisor always uses at least 48 VA bits.
Previously, I think it would have been 39 for (e.g.) Android builds? Does this
have any performance implications for pKVM?
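
To make that concrete: assuming 4K pages and VA_BITS=39, idmap_bits
would previously have come out as 39 in the common case, giving
*hyp_va_bits = 39 and a 3-level hyp stage 1; now it is always
max(48, 39) = 48, i.e. 4 levels and one extra step on every hyp table
walk.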

>  
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 81e1420d2cc13246..a59433ae4f5f8d02 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -762,22 +762,21 @@ static void __init map_kernel(pgd_t *pgdp)
>  	kasan_copy_shadow(pgdp);
>  }
>  
> +void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
> +		    int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
> +
> +static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
> +	  kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;

I see this new storage introduced, but I don't see you removing the storage for
the old method (I have a vague memory of it being defined in the linker script)?
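
(The sizing of the new arrays looks right to me, though: the root table
is idmap_pg_dir itself, and since the __idmap_text region is tiny, each
of the remaining IDMAP_LEVELS - 1 levels needs at most one table page -
assuming the region never straddles a table boundary at any level.)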

> +
>  static void __init create_idmap(void)
>  {
>  	u64 start = __pa_symbol(__idmap_text_start);
> -	u64 size = __pa_symbol(__idmap_text_end) - start;
> -	pgd_t *pgd = idmap_pg_dir;
> -	u64 pgd_phys;
> -
> -	/* check if we need an additional level of translation */
> -	if (VA_BITS < 48 && idmap_t0sz < (64 - VA_BITS_MIN)) {
> -		pgd_phys = early_pgtable_alloc(PAGE_SHIFT);
> -		set_pgd(&idmap_pg_dir[start >> VA_BITS],
> -			__pgd(pgd_phys | P4D_TYPE_TABLE));
> -		pgd = __va(pgd_phys);
> -	}
> -	__create_pgd_mapping(pgd, start, start, size, PAGE_KERNEL_ROX,
> -			     early_pgtable_alloc, 0);
> +	u64 end   = __pa_symbol(__idmap_text_end);
> +	u64 ptep  = __pa_symbol(idmap_ptes);
> +
> +	__pi_map_range(&ptep, start, end, start, PAGE_KERNEL_ROX,
> +		       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> +		       __phys_to_virt(ptep) - ptep);
>  
>  	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
>  		extern u32 __idmap_kpti_flag;
> @@ -787,8 +786,10 @@ static void __init create_idmap(void)
>  		 * The KPTI G-to-nG conversion code needs a read-write mapping
>  		 * of its synchronization flag in the ID map.
>  		 */
> -		__create_pgd_mapping(pgd, pa, pa, sizeof(u32), PAGE_KERNEL,
> -				     early_pgtable_alloc, 0);
> +		ptep = __pa_symbol(kpti_ptes);
> +		__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
> +			       IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
> +			       __phys_to_virt(ptep) - ptep);
>  	}
>  }
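
For anyone else following along: if I've understood map_range()
correctly, the last argument is the offset it adds to a table's
physical address to obtain a pointer it can write through, so passing
__phys_to_virt(ptep) - ptep here makes it populate the tables via the
linear map (whereas the early pi/ code passes 0, since it runs from the
ID map where VA == PA).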
>  
> @@ -813,6 +814,7 @@ void __init paging_init(void)
>  	memblock_allow_resize();
>  
>  	create_idmap();
> +	idmap_t0sz = TCR_T0SZ(48);
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTPLUG
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 82e88f4521737c0e..c7129b21bfd5191f 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -422,9 +422,9 @@ SYM_FUNC_START(__cpu_setup)
>  	mair	.req	x17
>  	tcr	.req	x16
>  	mov_q	mair, MAIR_EL1_SET
> -	mov_q	tcr, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
> -			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
> -			TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS
> +	mov_q	tcr, TCR_T0SZ(48) | TCR_T1SZ(VA_BITS) | TCR_CACHE_FLAGS | \
> +		     TCR_SMP_FLAGS | TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
> +		     TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS | TCR_MTE_FLAGS

You're hardcoding 48 in several places in this patch (kernel-pgtable.h,
kvm_mmu_init(), paging_init() and here in __cpu_setup()). I wonder if an
IDMAP_VA_BITS macro might help here?
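
Something like the below, perhaps (untested, just to illustrate the
shape of it):

  #define IDMAP_VA_BITS		48
  #define IDMAP_LEVELS		ARM64_HW_PGTABLE_LEVELS(IDMAP_VA_BITS)
  #define IDMAP_ROOT_LEVEL	(4 - IDMAP_LEVELS)

with kvm_mmu_init() then using idmap_bits = IDMAP_VA_BITS, paging_init()
setting idmap_t0sz = TCR_T0SZ(IDMAP_VA_BITS), and __cpu_setup() using
TCR_T0SZ(IDMAP_VA_BITS) (assuming kernel-pgtable.h is in scope there),
so the constant only appears once.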

>  
>  	tcr_clear_errata_bits tcr, x9, x5
>  
> @@ -432,10 +432,7 @@ SYM_FUNC_START(__cpu_setup)
>  	sub		x9, xzr, x0
>  	add		x9, x9, #64
>  	tcr_set_t1sz	tcr, x9
> -#else
> -	idmap_get_t0sz	x9
>  #endif
> -	tcr_set_t0sz	tcr, x9
>  
>  	/*
>  	 * Set the IPS bits in TCR_EL1.


Thread overview: 92 messages
2023-03-07 14:04 [PATCH v3 00/60] arm64: Add support for LPA2 at stage1 and WXN Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 01/60] arm64: kernel: Disable latent_entropy GCC plugin in early C runtime Ard Biesheuvel
2023-04-28 10:37   ` Mark Rutland
2023-04-28 10:54     ` Ard Biesheuvel
2023-04-28 11:48       ` Mark Rutland
2023-03-07 14:04 ` [PATCH v3 02/60] arm64: mm: Take potential load offset into account when KASLR is off Ard Biesheuvel
2023-04-28 10:41   ` Mark Rutland
2023-03-07 14:04 ` [PATCH v3 03/60] arm64: mm: get rid of kimage_vaddr global variable Ard Biesheuvel
2023-04-28 10:42   ` Mark Rutland
2023-03-07 14:04 ` [PATCH v3 04/60] arm64: mm: Move PCI I/O emulation region above the vmemmap region Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 05/60] arm64: mm: Move fixmap region above " Ard Biesheuvel
2023-04-28 11:00   ` Mark Rutland
2023-03-07 14:04 ` [PATCH v3 06/60] arm64: ptdump: Allow VMALLOC_END to be defined at boot Ard Biesheuvel
2023-03-07 16:58   ` Ryan Roberts
2023-03-07 17:01     ` Ard Biesheuvel
2023-04-28 11:25   ` Mark Rutland
2023-03-07 14:04 ` [PATCH v3 07/60] arm64: ptdump: Discover start of vmemmap region at runtime Ard Biesheuvel
2023-03-07 16:36   ` Ryan Roberts
2023-04-28 11:27   ` Mark Rutland
2023-03-07 14:04 ` [PATCH v3 08/60] arm64: vmemmap: Avoid base2 order of struct page size to dimension region Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 09/60] arm64: mm: Reclaim unused vmemmap region for vmalloc use Ard Biesheuvel
2023-03-07 16:42   ` Ryan Roberts
2023-03-07 16:58     ` Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 10/60] arm64: kaslr: Adjust randomization range dynamically Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 11/60] arm64: kaslr: drop special case for ThunderX in kaslr_requires_kpti() Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 12/60] arm64: Turn kaslr_feature_override into a generic SW feature override Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 13/60] arm64: kvm: honour 'nokaslr' command line option for the HYP VA space Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 14/60] arm64: kernel: Manage absolute relocations in code built under pi/ Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 15/60] arm64: kernel: Don't rely on objcopy to make code under pi/ __init Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 16/60] arm64: head: move relocation handling to C code Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 17/60] arm64: idreg-override: Omit non-NULL checks for override pointer Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 18/60] arm64: idreg-override: Prepare for place relative reloc patching Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 19/60] arm64: idreg-override: Avoid parameq() and parameqn() Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 20/60] arm64: idreg-override: avoid strlen() to check for empty strings Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 21/60] arm64: idreg-override: Avoid sprintf() for simple string concatenation Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 22/60] arm64: idreg-override: Avoid kstrtou64() to parse a single hex digit Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 23/60] arm64: idreg-override: Move to early mini C runtime Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 24/60] arm64: kernel: Remove early fdt remap code Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 25/60] arm64: head: Clear BSS and the kernel page tables in one go Ard Biesheuvel
2023-04-17 14:00   ` Ryan Roberts
2023-04-17 14:02     ` Ard Biesheuvel
2023-04-17 14:09       ` Ryan Roberts
2023-03-07 14:04 ` [PATCH v3 26/60] arm64: Move feature overrides into the BSS section Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 27/60] arm64: head: Run feature override detection before mapping the kernel Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 28/60] arm64: head: move dynamic shadow call stack patching into early C runtime Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 29/60] arm64: kaslr: Use feature override instead of parsing the cmdline again Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 30/60] arm64: idreg-override: Create a pseudo feature for rodata=off Ard Biesheuvel
2023-04-17 14:28   ` Ryan Roberts
2023-04-17 14:30     ` Ard Biesheuvel
2023-04-17 14:33       ` Ryan Roberts
2023-03-07 14:04 ` [PATCH v3 31/60] arm64: Add helpers to probe local CPU for PAC/BTI/E0PD support Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 32/60] arm64: head: allocate more pages for the kernel mapping Ard Biesheuvel
2023-04-17 15:48   ` Ryan Roberts
2023-04-17 16:11     ` Ard Biesheuvel
2023-04-17 16:18       ` Ryan Roberts
2023-03-07 14:04 ` [PATCH v3 33/60] arm64: head: move memstart_offset_seed handling to C code Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 34/60] arm64: head: Move early kernel mapping routines into " Ard Biesheuvel
2023-04-18  9:31   ` Ryan Roberts
2023-04-18 10:06     ` Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 35/60] arm64: mm: Use 48-bit virtual addressing for the permanent ID map Ard Biesheuvel
2023-04-18 10:22   ` Ryan Roberts [this message]
2023-03-07 14:04 ` [PATCH v3 36/60] arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels Ard Biesheuvel
2023-03-07 14:04 ` [PATCH v3 37/60] arm64: kernel: Create initial ID map from C code Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 38/60] arm64: mm: avoid fixmap for early swapper_pg_dir updates Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 39/60] arm64: mm: omit redundant remap of kernel image Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 40/60] arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()" Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 41/60] arm64/mm: Add FEAT_LPA2 specific TCR_EL1.DS field Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 42/60] arm64/mm: Add FEAT_LPA2 specific ID_AA64MMFR0.TGRAN[2] Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 43/60] arm64: mm: Handle LVA support as a CPU feature Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 44/60] arm64: mm: Add feature override support for LVA Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 45/60] arm64: mm: Wire up TCR.DS bit to PTE shareability fields Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 46/60] arm64: mm: Add LPA2 support to phys<->pte conversion routines Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 47/60] arm64: mm: Add definitions to support 5 levels of paging Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 48/60] arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 49/60] arm64: Enable LPA2 at boot if supported by the system Ard Biesheuvel
2023-04-18 13:50   ` Ryan Roberts
2023-03-07 14:05 ` [PATCH v3 50/60] arm64: mm: Add 5 level paging support to fixmap and swapper handling Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 51/60] arm64: kasan: Reduce minimum shadow alignment and enable 5 level paging Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 52/60] arm64: mm: Add support for folding PUDs at runtime Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 53/60] arm64: ptdump: Disregard unaddressable VA space Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 54/60] arm64: ptdump: Deal with translation levels folded at runtime Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 55/60] arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels Ard Biesheuvel
2023-04-18 14:29   ` Ryan Roberts
2023-03-07 14:05 ` [PATCH v3 56/60] arm64: kvm: Limit HYP VA and host S2 range to 48 bits when LPA2 is in effect Ard Biesheuvel
2023-04-18 14:33   ` Ryan Roberts
2023-03-07 14:05 ` [PATCH v3 57/60] arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 58/60] arm64: defconfig: Enable LPA2 support Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 59/60] mm: add arch hook to validate mmap() prot flags Ard Biesheuvel
2023-03-07 14:05 ` [PATCH v3 60/60] arm64: mm: add support for WXN memory translation attribute Ard Biesheuvel
2023-03-07 16:28 ` [PATCH v3 00/60] arm64: Add support for LPA2 at stage1 and WXN Ryan Roberts
2023-03-08  8:31   ` Ard Biesheuvel
2023-04-18 15:01 ` Ryan Roberts
