* [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
@ 2020-06-23  6:21 Anshuman Khandual
  2020-06-23  9:24 ` kernel test robot
  2020-06-26 18:42 ` kernel test robot
  0 siblings, 2 replies; 8+ messages in thread
From: Anshuman Khandual @ 2020-06-23  6:21 UTC (permalink / raw)
  To: linux-mm
  Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Mark Rutland,
	Mike Kravetz, Barry Song, Andrew Morton, linux-arm-kernel,
	linux-kernel

Currently the 'hugetlb_cma=' command line argument does not create a CMA
area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it
just ends up with the following warning message, because
hugetlb_cma_reserve() never gets called for these huge page sizes.

[   64.255669] hugetlb_cma: the option isn't supported by current arch

This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES
configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped
in CONFIG_CMA.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
Applies on 5.8-rc2.
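
For reference, the order that ends up being passed to hugetlb_cma_reserve()
works out per page size as below. This is a worked example using the v5.8
arm64 shift values, not kernel code; the numeric constants stand in for the
real PAGE_SHIFT/PMD_SHIFT/PUD_SHIFT/CONT_PMD_SHIFT macros.

	#include <stdio.h>

	int main(void)
	{
		/* 4K:  PUD_SHIFT(30) - PAGE_SHIFT(12) = 18 -> 2^18 * 4K  = 1G  */
		printf("4K  pages: order %d\n", 30 - 12);
		/* 16K: CONT_PMD_SHIFT(5) + PMD_SHIFT(25) - PAGE_SHIFT(14) = 16 -> 2^16 * 16K = 1G  */
		printf("16K pages: order %d\n", 5 + 25 - 14);
		/* 64K: CONT_PMD_SHIFT(5) + PMD_SHIFT(29) - PAGE_SHIFT(16) = 18 -> 2^18 * 64K = 16G */
		printf("64K pages: order %d\n", 5 + 29 - 16);
		return 0;
	}

With this applied, booting a 16K or 64K pages kernel with something like
'hugetlb_cma=8G' should create the CMA area instead of emitting the above
warning, and gigantic pages can then be allocated at runtime through
/sys/kernel/mm/hugepages/hugepages-<size>kB/nr_hugepages.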

 arch/arm64/include/asm/hugetlb.h |  8 ++++++++
 arch/arm64/mm/hugetlbpage.c      | 38 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/init.c             |  4 +---
 3 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 94ba0c5..8eea0e0 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -17,6 +17,14 @@
 extern bool arch_hugetlb_migration_supported(struct hstate *h);
 #endif
 
+#ifdef CONFIG_CMA
+void arm64_hugetlb_cma_reserve(void);
+#else
+static inline void arm64_hugetlb_cma_reserve(void)
+{
+}
+#endif
+
 static inline void arch_clear_hugepage_flags(struct page *page)
 {
 	clear_bit(PG_dcache_clean, &page->flags);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0a52ce4..ea7fb48 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -19,6 +19,44 @@
 #include <asm/tlbflush.h>
 #include <asm/pgalloc.h>
 
+/*
+ * HugeTLB Support Matrix
+ *
+ * ---------------------------------------------------
+ * | Page Size | CONT PTE |  PMD  | CONT PMD |  PUD  |
+ * ---------------------------------------------------
+ * |     4K    |   64K    |   2M  |    32M   |   1G  |
+ * |    16K    |    2M    |  32M  |     1G   |       |
+ * |    64K    |    2M    | 512M  |    16G   |       |
+ * ---------------------------------------------------
+ */
+
+/*
+ * Reserve CMA areas for the largest supported gigantic
+ * huge page when requested. Any other smaller gigantic
+ * huge pages could still be served from those areas.
+ */
+#ifdef CONFIG_CMA
+void __init arm64_hugetlb_cma_reserve(void)
+{
+	int order;
+
+#ifdef CONFIG_ARM64_4K_PAGES
+	order = PUD_SHIFT - PAGE_SHIFT;
+#else
+	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+#endif
+	/*
+	 * HugeTLB CMA reservation is required for gigantic
+	 * huge pages which could not be allocated via the
+	 * page allocator. Just warn if there is any change
+	 * breaking this assumption.
+	 */
+	WARN_ON(order <= MAX_ORDER);
+	hugetlb_cma_reserve(order);
+}
+#endif /* CONFIG_CMA */
+
 #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
 bool arch_hugetlb_migration_supported(struct hstate *h)
 {
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc..fabf8b0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -425,9 +425,7 @@ void __init bootmem_init(void)
 	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
 	 * while allocating required CMA size across online nodes.
 	 */
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
+	arm64_hugetlb_cma_reserve();
 
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
-- 
2.7.4




* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-23  6:21 [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs Anshuman Khandual
@ 2020-06-23  9:24 ` kernel test robot
  2020-06-23 12:48   ` Anshuman Khandual
  2020-06-26 18:42 ` kernel test robot
  1 sibling, 1 reply; 8+ messages in thread
From: kernel test robot @ 2020-06-23  9:24 UTC (permalink / raw)
  To: Anshuman Khandual, linux-mm
  Cc: kbuild-all, Anshuman Khandual, Catalin Marinas, Will Deacon,
	Mark Rutland, Mike Kravetz, Barry Song, Andrew Morton,
	Linux Memory Management List, linux-arm-kernel

[-- Attachment #1: Type: text/plain, Size: 2609 bytes --]

Hi Anshuman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on v5.8-rc2]
[also build test ERROR on next-20200623]
[cannot apply to arm64/for-next/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use  as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Anshuman-Khandual/arm64-hugetlb-Reserve-CMA-areas-for-gigantic-pages-on-16K-and-64K-configs/20200623-142507
base:    48778464bb7d346b47157d21ffde2af6b2d39110
config: arm64-allnoconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/arm64/mm/init.c: In function 'bootmem_init':
>> arch/arm64/mm/init.c:428:2: error: implicit declaration of function 'arm64_hugetlb_cma_reserve'; did you mean 'hugetlb_cma_reserve'? [-Werror=implicit-function-declaration]
     428 |  arm64_hugetlb_cma_reserve();
         |  ^~~~~~~~~~~~~~~~~~~~~~~~~
         |  hugetlb_cma_reserve
   cc1: some warnings being treated as errors

vim +428 arch/arm64/mm/init.c

   408	
   409	void __init bootmem_init(void)
   410	{
   411		unsigned long min, max;
   412	
   413		min = PFN_UP(memblock_start_of_DRAM());
   414		max = PFN_DOWN(memblock_end_of_DRAM());
   415	
   416		early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
   417	
   418		max_pfn = max_low_pfn = max;
   419		min_low_pfn = min;
   420	
   421		arm64_numa_init();
   422	
   423		/*
   424		 * must be done after arm64_numa_init() which calls numa_init() to
   425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
   426		 * while allocating required CMA size across online nodes.
   427		 */
 > 428		arm64_hugetlb_cma_reserve();
   429	
   430		/*
   431		 * Sparsemem tries to allocate bootmem in memory_present(), so must be
   432		 * done after the fixed reservations.
   433		 */
   434		memblocks_present();
   435	
   436		sparse_init();
   437		zone_sizes_init(min, max);
   438	
   439		memblock_dump_all();
   440	}
   441	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 8161 bytes --]


* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-23  9:24 ` kernel test robot
@ 2020-06-23 12:48   ` Anshuman Khandual
  2020-06-23 16:40     ` Robin Murphy
  0 siblings, 1 reply; 8+ messages in thread
From: Anshuman Khandual @ 2020-06-23 12:48 UTC (permalink / raw)
  To: kernel test robot, linux-mm
  Cc: kbuild-all, Catalin Marinas, Will Deacon, Mark Rutland,
	Mike Kravetz, Barry Song, Andrew Morton, linux-arm-kernel


On 06/23/2020 02:54 PM, kernel test robot wrote:
>    423		/*
>    424		 * must be done after arm64_numa_init() which calls numa_init() to
>    425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
>    426		 * while allocating required CMA size across online nodes.
>    427		 */
>  > 428		arm64_hugetlb_cma_reserve();

Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
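
That is, roughly (a sketch of the idea, not the final hunk):

	#ifdef CONFIG_HUGETLB_PAGE
		arm64_hugetlb_cma_reserve();
	#endif

With HUGETLB_PAGE=n the call compiles away, and with HUGETLB_PAGE=y the
declaration (or the !CMA stub) from <asm/hugetlb.h> becomes reachable via
<linux/hugetlb.h>.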



* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-23 12:48   ` Anshuman Khandual
@ 2020-06-23 16:40     ` Robin Murphy
  2020-06-24  0:17       ` Anshuman Khandual
  0 siblings, 1 reply; 8+ messages in thread
From: Robin Murphy @ 2020-06-23 16:40 UTC (permalink / raw)
  To: Anshuman Khandual, kernel test robot, linux-mm
  Cc: Mark Rutland, Barry Song, kbuild-all, Catalin Marinas,
	Andrew Morton, Will Deacon, linux-arm-kernel, Mike Kravetz

On 2020-06-23 13:48, Anshuman Khandual wrote:
> 
> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>     423		/*
>>     424		 * must be done after arm64_numa_init() which calls numa_init() to
>>     425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>     426		 * while allocating required CMA size across online nodes.
>>     427		 */
>>   > 428		arm64_hugetlb_cma_reserve();
> 
> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.

...although it might be nicer to include asm/hugetlb.h directly so that 
you can pick up the stub definition reliably.

Robin.



* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-23 16:40     ` Robin Murphy
@ 2020-06-24  0:17       ` Anshuman Khandual
  2020-06-24  9:45         ` Robin Murphy
  0 siblings, 1 reply; 8+ messages in thread
From: Anshuman Khandual @ 2020-06-24  0:17 UTC (permalink / raw)
  To: Robin Murphy, kernel test robot, linux-mm
  Cc: Mark Rutland, Barry Song, kbuild-all, Catalin Marinas,
	Andrew Morton, Will Deacon, linux-arm-kernel, Mike Kravetz



On 06/23/2020 10:10 PM, Robin Murphy wrote:
> On 2020-06-23 13:48, Anshuman Khandual wrote:
>>
>> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>>     423        /*
>>>     424         * must be done after arm64_numa_init() which calls numa_init() to
>>>     425         * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>>     426         * while allocating required CMA size across online nodes.
>>>     427         */
>>>   > 428        arm64_hugetlb_cma_reserve();
>>
>> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
> 
> ...although it might be nicer to include asm/hugetlb.h directly so that you can pick up the stub definition reliably.

Including <asm/hugetlb.h> directly does not solve the problem, and
<linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
protection with respect to both CMA and HUGETLB_PAGE. I had dropped the
HUGETLB_PAGE guard assuming it would be taken care of, as the stub
itself lives in <asm/hugetlb.h>, which turns out not to be true.
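
To spell out the combinations (a reconstruction, assuming the v5.8 header
layout where <linux/hugetlb.h> pulls in <asm/hugetlb.h> only under
CONFIG_HUGETLB_PAGE, and arch/arm64/mm/hugetlbpage.o is built only when
HUGETLB_PAGE is enabled):

	HUGETLB_PAGE=y, CMA=y: declaration and definition both present -> OK
	HUGETLB_PAGE=y, CMA=n: the static inline stub covers the call  -> OK
	HUGETLB_PAGE=n, via <linux/hugetlb.h>: the stub is never seen
		-> implicit declaration error (what the robot hit)
	HUGETLB_PAGE=n, CMA=y, <asm/hugetlb.h> included directly: the
		declaration is visible but hugetlbpage.o is not built
		-> undefined reference at link time instead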



* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-24  0:17       ` Anshuman Khandual
@ 2020-06-24  9:45         ` Robin Murphy
  2020-06-24 12:22           ` Anshuman Khandual
  0 siblings, 1 reply; 8+ messages in thread
From: Robin Murphy @ 2020-06-24  9:45 UTC (permalink / raw)
  To: Anshuman Khandual, kernel test robot, linux-mm
  Cc: Mark Rutland, Barry Song, kbuild-all, Catalin Marinas,
	Andrew Morton, Will Deacon, linux-arm-kernel, Mike Kravetz

On 2020-06-24 01:17, Anshuman Khandual wrote:
> 
> 
> On 06/23/2020 10:10 PM, Robin Murphy wrote:
>> On 2020-06-23 13:48, Anshuman Khandual wrote:
>>>
>>> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>>>      423        /*
>>>>      424         * must be done after arm64_numa_init() which calls numa_init() to
>>>>      425         * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>>>      426         * while allocating required CMA size across online nodes.
>>>>      427         */
>>>>    > 428        arm64_hugetlb_cma_reserve();
>>>
>>> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
>>
>> ...although it might be nicer to include asm/hugetlb.h directly so that you can pick up the stub definition reliably.
> 
> Including <asm/hugetlb.h> directly does not solve the problem, and
> <linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
> protection with respect to both CMA and HUGETLB_PAGE. I had dropped the
> HUGETLB_PAGE guard assuming it would be taken care of, as the stub
> itself lives in <asm/hugetlb.h>, which turns out not to be true.

Sure, I wasn't suggesting that the implementation of the header itself 
wouldn't need tweaking - the point I was trying to get at is that it's 
preferable to have *either* a stub definition in an always-reachable 
header, *or* inline #ifdefs around the caller. Mixing both such that 
there are 3 or 4 possible combinations just isn't nice to maintain.

Robin.
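
A minimal illustration of the two patterns being contrasted (generic names,
not from the patch):

	/* Pattern 1: stub in an always-reachable header; callers need no guard. */
	#ifdef CONFIG_FOO
	void foo_reserve(void);
	#else
	static inline void foo_reserve(void) { }
	#endif

	/* Pattern 2: bare declaration; guard every call site instead. */
	void foo_reserve(void);

	#ifdef CONFIG_FOO
		foo_reserve();
	#endif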



* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-24  9:45         ` Robin Murphy
@ 2020-06-24 12:22           ` Anshuman Khandual
  0 siblings, 0 replies; 8+ messages in thread
From: Anshuman Khandual @ 2020-06-24 12:22 UTC (permalink / raw)
  To: Robin Murphy, kernel test robot, linux-mm
  Cc: Mark Rutland, Barry Song, kbuild-all, Catalin Marinas,
	Andrew Morton, Will Deacon, linux-arm-kernel, Mike Kravetz



On 06/24/2020 03:15 PM, Robin Murphy wrote:
> On 2020-06-24 01:17, Anshuman Khandual wrote:
>>
>>
>> On 06/23/2020 10:10 PM, Robin Murphy wrote:
>>> On 2020-06-23 13:48, Anshuman Khandual wrote:
>>>>
>>>> On 06/23/2020 02:54 PM, kernel test robot wrote:
>>>>>      423        /*
>>>>>      424         * must be done after arm64_numa_init() which calls numa_init() to
>>>>>      425         * initialize node_online_map that gets used in hugetlb_cma_reserve()
>>>>>      426         * while allocating required CMA size across online nodes.
>>>>>      427         */
>>>>>    > 428        arm64_hugetlb_cma_reserve();
>>>>
>>>> Wrapping this call site with CONFIG_HUGETLB_PAGE solves the problem.
>>>
>>> ...although it might be nicer to include asm/hugetlb.h directly so that you can pick up the stub definition reliably.
>>
>> Including <asm/hugetlb.h> directly does not solve the problem, and
>> <linux/hugetlb.h> is no better. arm64_hugetlb_cma_reserve() needs
>> protection with respect to both CMA and HUGETLB_PAGE. I had dropped the
>> HUGETLB_PAGE guard assuming it would be taken care of, as the stub
>> itself lives in <asm/hugetlb.h>, which turns out not to be true.
> 
> Sure, I wasn't suggesting that the implementation of the header itself wouldn't need tweaking - the point I was trying to get at is that it's preferable to have *either* a stub definition in an always-reachable header, *or* inline #ifdefs around the caller. Mixing both such that there are 3 or 4 possible combinations just isn't nice to maintain.
> 

Makes sense. Which one of these arrangements would be preferred instead?

1. Call site is protected with HUGETLB_PAGE and CMA
2. Definition is protected with CMA and HUGETLB_PAGE - in arch/arm64/mm/hugetlb.c
3. Declaration is in arch/arm64/include/asm/hugetlb.h (without any #ifdef)

OR

1. Definition is protected with CMA and HUGETLB_PAGE - in arch/arm64/mm/hugetlb.c
2. Stub definition and declaration with #ifdefs (HUGETLB_PAGE and CMA) in arch/arm64/mm/init.c
3. Call site is without any #ifdef
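
For concreteness, a sketch of what the first arrangement could look like
(illustrative only, not a tested patch):

	/* arch/arm64/include/asm/hugetlb.h: bare declaration, no stub */
	void arm64_hugetlb_cma_reserve(void);

	/* arch/arm64/mm/hugetlbpage.c: definition stays under CONFIG_CMA
	 * (the file itself is only built when HUGETLB_PAGE is enabled) */

	/* arch/arm64/mm/init.c: single guard at the call site */
	#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
		arm64_hugetlb_cma_reserve();
	#endif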



* Re: [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
  2020-06-23  6:21 [PATCH] arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs Anshuman Khandual
  2020-06-23  9:24 ` kernel test robot
@ 2020-06-26 18:42 ` kernel test robot
  1 sibling, 0 replies; 8+ messages in thread
From: kernel test robot @ 2020-06-26 18:42 UTC (permalink / raw)
  To: Anshuman Khandual, linux-mm
  Cc: kbuild-all, clang-built-linux, Anshuman Khandual,
	Catalin Marinas, Will Deacon, Mark Rutland, Mike Kravetz,
	Barry Song, Andrew Morton, Linux Memory Management List,
	linux-arm-kernel

[-- Attachment #1: Type: text/plain, Size: 2902 bytes --]

Hi Anshuman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on v5.8-rc2]
[also build test ERROR on next-20200626]
[cannot apply to arm64/for-next/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use  as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Anshuman-Khandual/arm64-hugetlb-Reserve-CMA-areas-for-gigantic-pages-on-16K-and-64K-configs/20200623-142507
base:    48778464bb7d346b47157d21ffde2af6b2d39110
config: arm64-randconfig-r025-20200624 (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project 6e11ed52057ffc39941cb2de6d93cae522db4782)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> arch/arm64/mm/init.c:428:2: error: implicit declaration of function 'arm64_hugetlb_cma_reserve' [-Werror,-Wimplicit-function-declaration]
           arm64_hugetlb_cma_reserve();
           ^
   arch/arm64/mm/init.c:428:2: note: did you mean 'hugetlb_cma_reserve'?
   include/linux/hugetlb.h:915:27: note: 'hugetlb_cma_reserve' declared here
   static inline __init void hugetlb_cma_reserve(int order)
                             ^
   1 error generated.

vim +/arm64_hugetlb_cma_reserve +428 arch/arm64/mm/init.c

   408	
   409	void __init bootmem_init(void)
   410	{
   411		unsigned long min, max;
   412	
   413		min = PFN_UP(memblock_start_of_DRAM());
   414		max = PFN_DOWN(memblock_end_of_DRAM());
   415	
   416		early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
   417	
   418		max_pfn = max_low_pfn = max;
   419		min_low_pfn = min;
   420	
   421		arm64_numa_init();
   422	
   423		/*
   424		 * must be done after arm64_numa_init() which calls numa_init() to
   425		 * initialize node_online_map that gets used in hugetlb_cma_reserve()
   426		 * while allocating required CMA size across online nodes.
   427		 */
 > 428		arm64_hugetlb_cma_reserve();
   429	
   430		/*
   431		 * Sparsemem tries to allocate bootmem in memory_present(), so must be
   432		 * done after the fixed reservations.
   433		 */
   434		memblocks_present();
   435	
   436		sparse_init();
   437		zone_sizes_init(min, max);
   438	
   439		memblock_dump_all();
   440	}
   441	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 31797 bytes --]
