linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
@ 2022-09-23 13:08 Anshuman Khandual
  2022-09-23 13:38 ` Joey Gouly
  0 siblings, 1 reply; 5+ messages in thread
From: Anshuman Khandual @ 2022-09-23 13:08 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Anshuman Khandual, Catalin Marinas, Will Deacon

Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
mapping at the PMD (aka huge page aka block) level is only applicable with a
4K base page, where the resulting 2MB block size is an acceptable alignment
requirement for the linear mapping and the physical memory start address.
The same decision can be made by checking directly against the base page
size itself, so drop the redundant macro ARM64_KERNEL_USES_PMD_MAPS.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
This applies on v6.0-rc6 after the following patch.

https://lore.kernel.org/all/20220920014951.196191-1-wangkefeng.wang@huawei.com/
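
For reference, a quick user-space sketch (illustrative only, not part of the
patch) of why only the 4K granule yields a section size that is reasonable to
require as alignment. On arm64, PMD_SHIFT = PAGE_SHIFT + (PAGE_SHIFT - 3):

/* Illustrative only: section (PMD) sizes per base page granule. */
#include <stdio.h>

int main(void)
{
	int page_shift[] = { 12, 14, 16 };	/* 4K, 16K, 64K */

	for (int i = 0; i < 3; i++) {
		int pmd_shift = page_shift[i] + (page_shift[i] - 3);

		printf("%luK pages -> PMD_SHIFT = %d -> section size = %luM\n",
		       1UL << (page_shift[i] - 10), pmd_shift,
		       (1UL << pmd_shift) >> 20);
	}
	return 0;
}

This prints 2M, 32M and 512M for 4K, 16K and 64K pages respectively. Only the
2M case is a sane alignment requirement for the linear map and the start of
physical memory, which is why the block mapping decision collapses to a plain
CONFIG_ARM64_4K_PAGES check.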

 arch/arm64/include/asm/kernel-pgtable.h | 33 +++++++++----------------
 arch/arm64/mm/mmu.c                     |  2 +-
 2 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 32d14f481f0c..5c2f72bae2ca 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -18,11 +18,6 @@
  * with 4K (section size = 2M) but not with 16K (section size = 32M) or
  * 64K (section size = 512M).
  */
-#ifdef CONFIG_ARM64_4K_PAGES
-#define ARM64_KERNEL_USES_PMD_MAPS 1
-#else
-#define ARM64_KERNEL_USES_PMD_MAPS 0
-#endif
 
 /*
  * The idmap and swapper page tables need some space reserved in the kernel
@@ -34,10 +29,20 @@
  * VA range, so pages required to map highest possible PA are reserved in all
  * cases.
  */
-#if ARM64_KERNEL_USES_PMD_MAPS
+#ifdef CONFIG_ARM64_4K_PAGES
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
+#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
+#define SWAPPER_BLOCK_SIZE	PMD_SIZE
+#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
+#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
+#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
 #else
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
+#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
+#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
+#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
+#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
+#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
 #endif
 
 
@@ -96,15 +101,6 @@
 #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
 
 /* Initial memory map size */
-#if ARM64_KERNEL_USES_PMD_MAPS
-#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
-#define SWAPPER_BLOCK_SIZE	PMD_SIZE
-#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
-#else
-#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
-#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
-#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
-#endif
 
 /*
  * Initial memory map attributes.
@@ -112,13 +108,6 @@
 #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#if ARM64_KERNEL_USES_PMD_MAPS
-#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
-#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
-#else
-#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
-#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
-#endif
 
 /*
  * To make optimal use of block mappings when laying out the linear
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 69deed27dec8..df1eac788c33 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1192,7 +1192,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!ARM64_KERNEL_USES_PMD_MAPS)
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 
 	do {
-- 
2.20.1



* Re: [PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
  2022-09-23 13:08 [PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS Anshuman Khandual
@ 2022-09-23 13:38 ` Joey Gouly
  2022-09-26  3:18   ` Anshuman Khandual
  0 siblings, 1 reply; 5+ messages in thread
From: Joey Gouly @ 2022-09-23 13:38 UTC (permalink / raw)
  To: Anshuman Khandual; +Cc: linux-arm-kernel, Catalin Marinas, Will Deacon, nd

Hi Anshuman,

On Fri, Sep 23, 2022 at 06:38:41PM +0530, Anshuman Khandual wrote:
> Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
> mapping at the PMD (aka huge page aka block) level is only applicable with a
> 4K base page, where the resulting 2MB block size is an acceptable alignment
> requirement for the linear mapping and the physical memory start address.
> The same decision can be made by checking directly against the base page
> size itself, so drop the redundant macro ARM64_KERNEL_USES_PMD_MAPS.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> This applies on v6.0-rc6 after the following patch.
> 
> https://lore.kernel.org/all/20220920014951.196191-1-wangkefeng.wang@huawei.com/
> 
>  arch/arm64/include/asm/kernel-pgtable.h | 33 +++++++++----------------
>  arch/arm64/mm/mmu.c                     |  2 +-
>  2 files changed, 12 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 32d14f481f0c..5c2f72bae2ca 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -18,11 +18,6 @@
>   * with 4K (section size = 2M) but not with 16K (section size = 32M) or
>   * 64K (section size = 512M).
>   */
> -#ifdef CONFIG_ARM64_4K_PAGES
> -#define ARM64_KERNEL_USES_PMD_MAPS 1
> -#else
> -#define ARM64_KERNEL_USES_PMD_MAPS 0
> -#endif

There is now a dangling comment above this. I think it's quite a useful comment,
so it could be moved elsewhere if possible.

Or maybe just keep ARM64_KERNEL_USES_PMD_MAPS, because it's not a big abstraction
and it makes it more obvious why there are differences in SWAPPER_BLOCK_SIZE etc.

>  
>  /*
>   * The idmap and swapper page tables need some space reserved in the kernel
> @@ -34,10 +29,20 @@
>   * VA range, so pages required to map highest possible PA are reserved in all
>   * cases.
>   */
> -#if ARM64_KERNEL_USES_PMD_MAPS
> +#ifdef CONFIG_ARM64_4K_PAGES
>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
> +#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
> +#define SWAPPER_BLOCK_SIZE	PMD_SIZE
> +#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
> +#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
> +#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
>  #else
>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
> +#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
> +#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
> +#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
> +#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
> +#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
>  #endif
>  
>  
> @@ -96,15 +101,6 @@
>  #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
>  
>  /* Initial memory map size */
> -#if ARM64_KERNEL_USES_PMD_MAPS
> -#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
> -#define SWAPPER_BLOCK_SIZE	PMD_SIZE
> -#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
> -#else
> -#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
> -#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
> -#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
> -#endif

Also a dangling comment here.

Thanks,
Joey

>  
>  /*
>   * Initial memory map attributes.
> @@ -112,13 +108,6 @@
>  #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
>  #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
>  
> -#if ARM64_KERNEL_USES_PMD_MAPS
> -#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
> -#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
> -#else
> -#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
> -#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
> -#endif
>  
>  /*
>   * To make optimal use of block mappings when laying out the linear
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 69deed27dec8..df1eac788c33 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1192,7 +1192,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  
>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>  
> -	if (!ARM64_KERNEL_USES_PMD_MAPS)
> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>  		return vmemmap_populate_basepages(start, end, node, altmap);
>  
>  	do {


* Re: [PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
  2022-09-23 13:38 ` Joey Gouly
@ 2022-09-26  3:18   ` Anshuman Khandual
  2022-11-07 15:22     ` Will Deacon
  0 siblings, 1 reply; 5+ messages in thread
From: Anshuman Khandual @ 2022-09-26  3:18 UTC (permalink / raw)
  To: Joey Gouly; +Cc: linux-arm-kernel, Catalin Marinas, Will Deacon, nd



On 9/23/22 19:08, Joey Gouly wrote:
> Hi Anshuman,
> 
> On Fri, Sep 23, 2022 at 06:38:41PM +0530, Anshuman Khandual wrote:
>> Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
>> mapping at the PMD (aka huge page aka block) level is only applicable with a
>> 4K base page, where the resulting 2MB block size is an acceptable alignment
>> requirement for the linear mapping and the physical memory start address.
>> The same decision can be made by checking directly against the base page
>> size itself, so drop the redundant macro ARM64_KERNEL_USES_PMD_MAPS.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> This applies on v6.0-rc6 after the following patch.
>>
>> https://lore.kernel.org/all/20220920014951.196191-1-wangkefeng.wang@huawei.com/
>>
>>  arch/arm64/include/asm/kernel-pgtable.h | 33 +++++++++----------------
>>  arch/arm64/mm/mmu.c                     |  2 +-
>>  2 files changed, 12 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
>> index 32d14f481f0c..5c2f72bae2ca 100644
>> --- a/arch/arm64/include/asm/kernel-pgtable.h
>> +++ b/arch/arm64/include/asm/kernel-pgtable.h
>> @@ -18,11 +18,6 @@
>>   * with 4K (section size = 2M) but not with 16K (section size = 32M) or
>>   * 64K (section size = 512M).
>>   */
>> -#ifdef CONFIG_ARM64_4K_PAGES
>> -#define ARM64_KERNEL_USES_PMD_MAPS 1
>> -#else
>> -#define ARM64_KERNEL_USES_PMD_MAPS 0
>> -#endif
> 
> There is now a dangling comment above this. I think it's quite a useful comment,
> so it could be moved elsewhere if possible.

I have collected both these relevant comment paragraphs before the 4K switch.
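Roughly along these lines, i.e. both comment paragraphs kept together just
above the single CONFIG_ARM64_4K_PAGES switch (a sketch only, paraphrasing
the existing comments; the exact wording will be settled in the next version):

/*
 * Block (section) mapping of the kernel is only possible with 4K pages
 * (section size = 2M) but not with 16K (section size = 32M) or 64K
 * (section size = 512M), since the linear map and the start of physical
 * memory must be section aligned.
 *
 * The idmap and swapper page tables need some space reserved in the kernel
 * image. The number of ID map translation levels could be increased on the
 * fly if system RAM is out of reach for the default VA range, so pages
 * required to map the highest possible PA are reserved in all cases.
 */
#ifdef CONFIG_ARM64_4K_PAGES
#define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
...
#else
#define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
...
#endif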

> 
> Or maybe just keep ARM64_KERNEL_USES_PMD_MAPS, because it's not a big abstraction
> and it makes it more obvious why there are differences in SWAPPER_BLOCK_SIZE etc.

The decision about kernel mapping granularity is static, i.e. it depends just
on the base page size. If that decision needs to be remembered at all in the
form of an abstraction, it can be achieved via a new config option such as the
following, rather than a macro.

config ARM64_KERNEL_USES_PMD_MAPS
	def_bool y
	depends on ARM64_4K_PAGES
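
A rough usage sketch, assuming such a symbol were added (reusing the symbol
name from the Kconfig fragment above); the header and mmu.c checks would then
become:

#ifdef CONFIG_ARM64_KERNEL_USES_PMD_MAPS
#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
#define SWAPPER_BLOCK_SIZE	PMD_SIZE
#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
#else
#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
#endif

	if (!IS_ENABLED(CONFIG_ARM64_KERNEL_USES_PMD_MAPS))
		return vmemmap_populate_basepages(start, end, node, altmap);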

> 
>>  
>>  /*
>>   * The idmap and swapper page tables need some space reserved in the kernel
>> @@ -34,10 +29,20 @@
>>   * VA range, so pages required to map highest possible PA are reserved in all
>>   * cases.
>>   */
>> -#if ARM64_KERNEL_USES_PMD_MAPS
>> +#ifdef CONFIG_ARM64_4K_PAGES
>>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
>> +#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
>> +#define SWAPPER_BLOCK_SIZE	PMD_SIZE
>> +#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
>> +#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
>> +#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
>>  #else
>>  #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
>> +#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
>> +#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
>> +#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
>> +#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
>> +#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
>>  #endif
>>  
>>  
>> @@ -96,15 +101,6 @@
>>  #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
>>  
>>  /* Initial memory map size */
>> -#if ARM64_KERNEL_USES_PMD_MAPS
>> -#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
>> -#define SWAPPER_BLOCK_SIZE	PMD_SIZE
>> -#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
>> -#else
>> -#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
>> -#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
>> -#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
>> -#endif
> 
> Also a dangling comment here.

These comments can be dropped without much problem:

/* Initial memory map size */
/*
 * Initial memory map attributes.
 */

Will try to re-arrange these comments next time around.

- Anshuman

> 
> Thanks,
> Joey
> 
>>  
>>  /*
>>   * Initial memory map attributes.
>> @@ -112,13 +108,6 @@
>>  #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
>>  #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
>>  
>> -#if ARM64_KERNEL_USES_PMD_MAPS
>> -#define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
>> -#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
>> -#else
>> -#define SWAPPER_RW_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
>> -#define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PTE_RDONLY)
>> -#endif
>>  
>>  /*
>>   * To make optimal use of block mappings when laying out the linear
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 69deed27dec8..df1eac788c33 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1192,7 +1192,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>  
>>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>  
>> -	if (!ARM64_KERNEL_USES_PMD_MAPS)
>> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>  		return vmemmap_populate_basepages(start, end, node, altmap);
>>  
>>  	do {


* Re: [PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
  2022-09-26  3:18   ` Anshuman Khandual
@ 2022-11-07 15:22     ` Will Deacon
  2022-11-08  2:32       ` Anshuman Khandual
  0 siblings, 1 reply; 5+ messages in thread
From: Will Deacon @ 2022-11-07 15:22 UTC (permalink / raw)
  To: Anshuman Khandual; +Cc: Joey Gouly, linux-arm-kernel, Catalin Marinas, nd

On Mon, Sep 26, 2022 at 08:48:22AM +0530, Anshuman Khandual wrote:
> On 9/23/22 19:08, Joey Gouly wrote:
> > On Fri, Sep 23, 2022 at 06:38:41PM +0530, Anshuman Khandual wrote:
> >> Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
> >> mapping at the PMD (aka huge page aka block) level is only applicable with a
> >> 4K base page, where the resulting 2MB block size is an acceptable alignment
> >> requirement for the linear mapping and the physical memory start address.
> >> The same decision can be made by checking directly against the base page
> >> size itself, so drop the redundant macro ARM64_KERNEL_USES_PMD_MAPS.
> >>
> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >> Cc: Will Deacon <will@kernel.org>
> >> Cc: linux-arm-kernel@lists.infradead.org
> >> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> >> ---
> >> This applies on v6.0-rc6 after the following patch.
> >>
> >> https://lore.kernel.org/all/20220920014951.196191-1-wangkefeng.wang@huawei.com/
> >>
> >>  arch/arm64/include/asm/kernel-pgtable.h | 33 +++++++++----------------
> >>  arch/arm64/mm/mmu.c                     |  2 +-
> >>  2 files changed, 12 insertions(+), 23 deletions(-)

[...]

> >> @@ -96,15 +101,6 @@
> >>  #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
> >>  
> >>  /* Initial memory map size */
> >> -#if ARM64_KERNEL_USES_PMD_MAPS
> >> -#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
> >> -#define SWAPPER_BLOCK_SIZE	PMD_SIZE
> >> -#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
> >> -#else
> >> -#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
> >> -#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
> >> -#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
> >> -#endif
> > 
> > Also a dangling comment here.
> 
> These comments can be dropped without much problem:
> 
> /* Initial memory map size */
> /*
>  * Initial memory map attributes.
>  */
> 
> Will try to re-arrange these comments next time around.

Did you post another version of this, or change your mind about it?

Will


* Re: [PATCH] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
  2022-11-07 15:22     ` Will Deacon
@ 2022-11-08  2:32       ` Anshuman Khandual
  0 siblings, 0 replies; 5+ messages in thread
From: Anshuman Khandual @ 2022-11-08  2:32 UTC (permalink / raw)
  To: Will Deacon; +Cc: Joey Gouly, linux-arm-kernel, Catalin Marinas, nd



On 11/7/22 20:52, Will Deacon wrote:
> On Mon, Sep 26, 2022 at 08:48:22AM +0530, Anshuman Khandual wrote:
>> On 9/23/22 19:08, Joey Gouly wrote:
>>> On Fri, Sep 23, 2022 at 06:38:41PM +0530, Anshuman Khandual wrote:
>>>> Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
>>>> mapping at the PMD (aka huge page aka block) level is only applicable with a
>>>> 4K base page, where the resulting 2MB block size is an acceptable alignment
>>>> requirement for the linear mapping and the physical memory start address.
>>>> The same decision can be made by checking directly against the base page
>>>> size itself, so drop the redundant macro ARM64_KERNEL_USES_PMD_MAPS.
>>>>
>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>> Cc: Will Deacon <will@kernel.org>
>>>> Cc: linux-arm-kernel@lists.infradead.org
>>>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>>>> ---
>>>> This applies on v6.0-rc6 after the following patch.
>>>>
>>>> https://lore.kernel.org/all/20220920014951.196191-1-wangkefeng.wang@huawei.com/
>>>>
>>>>  arch/arm64/include/asm/kernel-pgtable.h | 33 +++++++++----------------
>>>>  arch/arm64/mm/mmu.c                     |  2 +-
>>>>  2 files changed, 12 insertions(+), 23 deletions(-)
> 
> [...]
> 
>>>> @@ -96,15 +101,6 @@
>>>>  #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
>>>>  
>>>>  /* Initial memory map size */
>>>> -#if ARM64_KERNEL_USES_PMD_MAPS
>>>> -#define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
>>>> -#define SWAPPER_BLOCK_SIZE	PMD_SIZE
>>>> -#define SWAPPER_TABLE_SHIFT	PUD_SHIFT
>>>> -#else
>>>> -#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
>>>> -#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
>>>> -#define SWAPPER_TABLE_SHIFT	PMD_SHIFT
>>>> -#endif
>>>
>>> Also a dangling comment here.
>>
>> These comments can be dropped without much problem:
>>
>> /* Initial memory map size */
>> /*
>>  * Initial memory map attributes.
>>  */
>>
>> Will try to re-arrange these comments next time around.
> 
> Did you post another version of this, or change your mind about it?

Will post another version soon.

