mm/hugetlb: use some helper functions to cleanup code

Message ID 20210210065346.21958-1-linmiaohe@huawei.com
State Accepted
Commit 04adbc3f7bff403a97355531da0190a263d66ea5
Series
  • mm/hugetlb: use some helper functions to cleanup code

Commit Message

Miaohe Lin Feb. 10, 2021, 6:53 a.m. UTC
Use pages_per_huge_page() to get the number of pages per hugepage,
hstate_index() to calculate the hstate index, and hstate_is_gigantic()
to check whether an hstate is gigantic. This makes the code more succinct.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 fs/hugetlbfs/inode.c | 2 +-
 mm/hugetlb.c         | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)
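
For context, the three helpers named above live in include/linux/hugetlb.h.
Roughly (a paraphrased sketch of the definitions around the v5.11/v5.12 era,
not a verbatim copy):

	static inline unsigned long pages_per_huge_page(struct hstate *h)
	{
		return 1 << h->order;
	}

	static inline int hstate_index(struct hstate *h)
	{
		return h - hstates;
	}

	static inline bool hstate_is_gigantic(struct hstate *h)
	{
		return huge_page_order(h) >= MAX_ORDER;
	}

Each helper wraps exactly the open-coded expression that the patch removes,
so the substitutions are behaviour-preserving.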

Comments

David Hildenbrand Feb. 10, 2021, 8:35 a.m. UTC | #1
On 10.02.21 07:53, Miaohe Lin wrote:
> Use pages_per_huge_page() to get the number of pages per hugepage,
> hstate_index() to calculate the hstate index, and hstate_is_gigantic()
> to check whether an hstate is gigantic. This makes the code more succinct.
> 

Another suggestion: please collect and group your cleanups for a
subsystem and send them as a single cleanup patch series where possible.
Again, this makes life easier for reviewers and maintainers.

Thanks!
Miaohe Lin Feb. 10, 2021, 8:51 a.m. UTC | #2
On 2021/2/10 16:35, David Hildenbrand wrote:
> On 10.02.21 07:53, Miaohe Lin wrote:
>> Use pages_per_huge_page() to get the number of pages per hugepage,
>> hstate_index() to calculate the hstate index, and hstate_is_gigantic()
>> to check whether an hstate is gigantic. This makes the code more succinct.
>>
> 
> Another suggestion: please collect and group your cleanups for a subsystem and send them as a single cleanup patch series where possible. Again, this makes life easier for reviewers and maintainers.
> 

Many thanks again for your suggestion. I will keep it in mind. :)

> Thanks!
> 

Thanks a lot.

>
Miaohe Lin March 2, 2021, 11:46 a.m. UTC | #3
On 2021/2/10 14:53, Miaohe Lin wrote:
> Use pages_per_huge_page() to get the number of pages per hugepage,
> hstate_index() to calculate the hstate index, and hstate_is_gigantic()
> to check whether an hstate is gigantic. This makes the code more succinct.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Friendly ping after the busy merge window. :)

> ---
>  fs/hugetlbfs/inode.c | 2 +-
>  mm/hugetlb.c         | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 701c82c36138..c262566f7c5d 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -1435,7 +1435,7 @@ static int get_hstate_idx(int page_size_log)
>  
>  	if (!h)
>  		return -1;
> -	return h - hstates;
> +	return hstate_index(h);
>  }
>  
>  /*
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8f6c98096476..da347047ea10 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1271,7 +1271,7 @@ static void free_gigantic_page(struct page *page, unsigned int order)
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  		int nid, nodemask_t *nodemask)
>  {
> -	unsigned long nr_pages = 1UL << huge_page_order(h);
> +	unsigned long nr_pages = pages_per_huge_page(h);
>  	if (nid == NUMA_NO_NODE)
>  		nid = numa_mem_id();
>  
> @@ -3262,10 +3262,10 @@ static int __init hugepages_setup(char *s)
>  
>  	/*
>  	 * Global state is always initialized later in hugetlb_init.
> -	 * But we need to allocate >= MAX_ORDER hstates here early to still
> +	 * But we need to allocate gigantic hstates here early to still
>  	 * use the bootmem allocator.
>  	 */
> -	if (hugetlb_max_hstate && parsed_hstate->order >= MAX_ORDER)
> +	if (hugetlb_max_hstate && hstate_is_gigantic(parsed_hstate))
>  		hugetlb_hstate_alloc_pages(parsed_hstate);
>  
>  	last_mhp = mhp;
>
Mike Kravetz March 2, 2021, 10:18 p.m. UTC | #4
On 2/9/21 10:53 PM, Miaohe Lin wrote:
> Use pages_per_huge_page() to get the number of pages per hugepage,
> hstate_index() to calculate the hstate index, and hstate_is_gigantic()
> to check whether an hstate is gigantic. This makes the code more succinct.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

These are all straightforward substitutions of open-coded calculations
with the appropriate helper routines.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
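
To make the "straightforward substitutions" point concrete, each helper is
equivalent to the expression it replaces. A hypothetical self-check (the
hugetlb symbols are real; the function itself is made up for illustration
and is not part of the patch):

	/* Hypothetical illustration only: each helper used by the patch
	 * evaluates to the same value as the open-coded expression it
	 * replaces in the diff quoted below.
	 */
	static void check_hugetlb_helper_equivalence(struct hstate *h)
	{
		WARN_ON(hstate_index(h) != h - hstates);
		WARN_ON(pages_per_huge_page(h) != (1UL << huge_page_order(h)));
		WARN_ON(hstate_is_gigantic(h) != (huge_page_order(h) >= MAX_ORDER));
	}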

> ---
>  fs/hugetlbfs/inode.c | 2 +-
>  mm/hugetlb.c         | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 701c82c36138..c262566f7c5d 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -1435,7 +1435,7 @@ static int get_hstate_idx(int page_size_log)
>  
>  	if (!h)
>  		return -1;
> -	return h - hstates;
> +	return hstate_index(h);
>  }
>  
>  /*
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8f6c98096476..da347047ea10 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1271,7 +1271,7 @@ static void free_gigantic_page(struct page *page, unsigned int order)
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  		int nid, nodemask_t *nodemask)
>  {
> -	unsigned long nr_pages = 1UL << huge_page_order(h);
> +	unsigned long nr_pages = pages_per_huge_page(h);
>  	if (nid == NUMA_NO_NODE)
>  		nid = numa_mem_id();
>  
> @@ -3262,10 +3262,10 @@ static int __init hugepages_setup(char *s)
>  
>  	/*
>  	 * Global state is always initialized later in hugetlb_init.
> -	 * But we need to allocate >= MAX_ORDER hstates here early to still
> +	 * But we need to allocate gigantic hstates here early to still
>  	 * use the bootmem allocator.
>  	 */
> -	if (hugetlb_max_hstate && parsed_hstate->order >= MAX_ORDER)
> +	if (hugetlb_max_hstate && hstate_is_gigantic(parsed_hstate))
>  		hugetlb_hstate_alloc_pages(parsed_hstate);
>  
>  	last_mhp = mhp;
>

Patch

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 701c82c36138..c262566f7c5d 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1435,7 +1435,7 @@ static int get_hstate_idx(int page_size_log)
 
 	if (!h)
 		return -1;
-	return h - hstates;
+	return hstate_index(h);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8f6c98096476..da347047ea10 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1271,7 +1271,7 @@ static void free_gigantic_page(struct page *page, unsigned int order)
 static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
-	unsigned long nr_pages = 1UL << huge_page_order(h);
+	unsigned long nr_pages = pages_per_huge_page(h);
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
@@ -3262,10 +3262,10 @@ static int __init hugepages_setup(char *s)
 
 	/*
 	 * Global state is always initialized later in hugetlb_init.
-	 * But we need to allocate >= MAX_ORDER hstates here early to still
+	 * But we need to allocate gigantic hstates here early to still
 	 * use the bootmem allocator.
 	 */
-	if (hugetlb_max_hstate && parsed_hstate->order >= MAX_ORDER)
+	if (hugetlb_max_hstate && hstate_is_gigantic(parsed_hstate))
 		hugetlb_hstate_alloc_pages(parsed_hstate);
 
 	last_mhp = mhp;