linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] Some cleanups for hugetlb
@ 2021-03-08 11:28 Miaohe Lin
  2021-03-08 11:28 ` [PATCH 1/5] mm/hugetlb: use some helper functions to cleanup code Miaohe Lin
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-03-08 11:28 UTC (permalink / raw)
  To: akpm, mike.kravetz; +Cc: linux-kernel, linux-mm, linmiaohe

Hi all,
This series contains cleanups to remove an unnecessary VM_BUG_ON_PAGE, use
helper functions, and so on. I have also collected some previous patches into
this series so they are not forgotten.
More details can be found in the respective changelogs. Thanks!

Miaohe Lin (5):
  mm/hugetlb: use some helper functions to cleanup code
  mm/hugetlb: optimize the surplus state transfer code in
    move_hugetlb_state()
  hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in
    hugetlb_cgroup_migrate()
  mm/hugetlb: simplify the code when alloc_huge_page() failed in
    hugetlb_no_page()
  mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case

 fs/hugetlbfs/inode.c |  6 +++---
 mm/hugetlb.c         | 21 ++++++++++++---------
 mm/hugetlb_cgroup.c  |  1 -
 3 files changed, 15 insertions(+), 13 deletions(-)

-- 
2.19.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/5] mm/hugetlb: use some helper functions to cleanup code
  2021-03-08 11:28 [PATCH 0/5] Some cleanups for hugetlb Miaohe Lin
@ 2021-03-08 11:28 ` Miaohe Lin
  2021-03-08 11:28 ` [PATCH 2/5] mm/hugetlb: optimize the surplus state transfer code in move_hugetlb_state() Miaohe Lin
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-03-08 11:28 UTC (permalink / raw)
  To: akpm, mike.kravetz; +Cc: linux-kernel, linux-mm, linmiaohe

We can use pages_per_huge_page() to get the number of pages per hugepage,
use hstate_index() to calculate the hstate index, and use hstate_is_gigantic()
to check whether an hstate is gigantic, making the code more succinct.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 fs/hugetlbfs/inode.c | 2 +-
 mm/hugetlb.c         | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 701c82c36138..c262566f7c5d 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1435,7 +1435,7 @@ static int get_hstate_idx(int page_size_log)
 
 	if (!h)
 		return -1;
-	return h - hstates;
+	return hstate_index(h);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9059d2519962..33a3edf79022 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1271,7 +1271,7 @@ static void free_gigantic_page(struct page *page, unsigned int order)
 static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
-	unsigned long nr_pages = 1UL << huge_page_order(h);
+	unsigned long nr_pages = pages_per_huge_page(h);
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
@@ -3265,10 +3265,10 @@ static int __init hugepages_setup(char *s)
 
 	/*
 	 * Global state is always initialized later in hugetlb_init.
-	 * But we need to allocate >= MAX_ORDER hstates here early to still
+	 * But we need to allocate gigantic hstates here early to still
 	 * use the bootmem allocator.
 	 */
-	if (hugetlb_max_hstate && parsed_hstate->order >= MAX_ORDER)
+	if (hugetlb_max_hstate && hstate_is_gigantic(parsed_hstate))
 		hugetlb_hstate_alloc_pages(parsed_hstate);
 
 	last_mhp = mhp;
-- 
2.19.1
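
The helpers this patch switches to can be sketched in userspace C. This is a toy model, not kernel code: the `struct hstate` here is a stand-in with only an `order` field, and the global `hstates[]` array is invented for illustration; only the arithmetic mirrors the kernel helpers.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for the kernel's struct hstate: only the order field. */
struct hstate {
	unsigned int order;	/* hugepage size is 2^order base pages */
};

/* Hypothetical global hstate table, mirroring the kernel's hstates[]. */
static struct hstate hstates[2] = { { .order = 9 }, { .order = 18 } };

/* pages_per_huge_page(): base pages per hugepage, i.e. 1UL << order,
 * which is exactly what the open-coded expression in
 * alloc_gigantic_page() computed before this patch. */
static unsigned long pages_per_huge_page(struct hstate *h)
{
	return 1UL << h->order;
}

/* hstate_index(): position of h in the global hstates[] array,
 * the same value the open-coded "h - hstates" produced. */
static int hstate_index(struct hstate *h)
{
	return (int)(h - hstates);
}
```

With a 2MB hugepage (order 9 on x86-64), `pages_per_huge_page()` returns 512, matching `1UL << huge_page_order(h)`.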



* [PATCH 2/5] mm/hugetlb: optimize the surplus state transfer code in move_hugetlb_state()
  2021-03-08 11:28 [PATCH 0/5] Some cleanups for hugetlb Miaohe Lin
  2021-03-08 11:28 ` [PATCH 1/5] mm/hugetlb: use some helper functions to cleanup code Miaohe Lin
@ 2021-03-08 11:28 ` Miaohe Lin
  2021-03-08 11:28 ` [PATCH 3/5] hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate() Miaohe Lin
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-03-08 11:28 UTC (permalink / raw)
  To: akpm, mike.kravetz; +Cc: linux-kernel, linux-mm, linmiaohe

There is no need to transfer the per-node surplus state when we do not cross
the node, so return early in that case to save some cpu cycles.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 33a3edf79022..695603071f2c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5621,6 +5621,12 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 		SetHPageTemporary(oldpage);
 		ClearHPageTemporary(newpage);
 
+		/*
+		 * There is no need to transfer the per-node surplus state
+		 * when we do not cross the node.
+		 */
+		if (new_nid == old_nid)
+			return;
 		spin_lock(&hugetlb_lock);
 		if (h->surplus_huge_pages_node[old_nid]) {
 			h->surplus_huge_pages_node[old_nid]--;
-- 
2.19.1
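
The effect of the early return can be sketched with a toy userspace model. This is not kernel code: `transfer_surplus()` and the bare counter array are invented for illustration, and the real function also takes hugetlb_lock around the update; only the control flow mirrors the patch.

```c
#include <assert.h>

#define MAX_NUMNODES 4

/* Toy sketch of the surplus transfer in move_hugetlb_state():
 * the per-node counters are only touched (and, in the kernel, the
 * lock only taken) when the page actually moved between nodes. */
static void transfer_surplus(unsigned long surplus_node[MAX_NUMNODES],
			     int old_nid, int new_nid)
{
	/* No cross-node move: nothing to transfer, skip the lock. */
	if (new_nid == old_nid)
		return;

	/* spin_lock(&hugetlb_lock) would be taken here in the kernel. */
	if (surplus_node[old_nid]) {
		surplus_node[old_nid]--;
		surplus_node[new_nid]++;
	}
}
```

When migration stays on one node (the common case for `alloc_migration_target()` with a node-bound nodemask), the counters are provably unchanged, so the lock round-trip is pure overhead.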



* [PATCH 3/5] hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate()
  2021-03-08 11:28 [PATCH 0/5] Some cleanups for hugetlb Miaohe Lin
  2021-03-08 11:28 ` [PATCH 1/5] mm/hugetlb: use some helper functions to cleanup code Miaohe Lin
  2021-03-08 11:28 ` [PATCH 2/5] mm/hugetlb: optimize the surplus state transfer code in move_hugetlb_state() Miaohe Lin
@ 2021-03-08 11:28 ` Miaohe Lin
  2021-03-12 19:46   ` Mike Kravetz
  2021-03-08 11:28 ` [PATCH 4/5] mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page() Miaohe Lin
  2021-03-08 11:28 ` [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case Miaohe Lin
  4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-03-08 11:28 UTC (permalink / raw)
  To: akpm, mike.kravetz; +Cc: linux-kernel, linux-mm, linmiaohe

!PageHuge(oldhpage) is implicitly checked by page_hstate() above, so we can
remove this explicit check.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb_cgroup.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 8668ba87cfe4..3dde6ddf0170 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -785,7 +785,6 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
 	if (hugetlb_cgroup_disabled())
 		return;
 
-	VM_BUG_ON_PAGE(!PageHuge(oldhpage), oldhpage);
 	spin_lock(&hugetlb_lock);
 	h_cg = hugetlb_cgroup_from_page(oldhpage);
 	h_cg_rsvd = hugetlb_cgroup_from_page_rsvd(oldhpage);
-- 
2.19.1



* [PATCH 4/5] mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page()
  2021-03-08 11:28 [PATCH 0/5] Some cleanups for hugetlb Miaohe Lin
                   ` (2 preceding siblings ...)
  2021-03-08 11:28 ` [PATCH 3/5] hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate() Miaohe Lin
@ 2021-03-08 11:28 ` Miaohe Lin
  2021-03-12 19:58   ` Mike Kravetz
  2021-03-08 11:28 ` [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case Miaohe Lin
  4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-03-08 11:28 UTC (permalink / raw)
  To: akpm, mike.kravetz; +Cc: linux-kernel, linux-mm, linmiaohe

Rework the error handling code for when alloc_huge_page() fails, to remove
some duplicated code and simplify it slightly.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 695603071f2c..69b8de866a24 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4337,13 +4337,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			 * sure there really is no pte entry.
 			 */
 			ptl = huge_pte_lock(h, mm, ptep);
-			if (!huge_pte_none(huge_ptep_get(ptep))) {
-				ret = 0;
-				spin_unlock(ptl);
-				goto out;
-			}
+			ret = 0;
+			if (huge_pte_none(huge_ptep_get(ptep)))
+				ret = vmf_error(PTR_ERR(page));
 			spin_unlock(ptl);
-			ret = vmf_error(PTR_ERR(page));
 			goto out;
 		}
 		clear_huge_page(page, address, pages_per_huge_page(h));
-- 
2.19.1
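
The reworked error path can be sketched as a toy function. This is not kernel code: `handle_alloc_failure()` and the `VMF_ERROR` constant are invented stand-ins for the locked pte recheck and `vmf_error(PTR_ERR(page))`; only the branch structure mirrors the patch.

```c
#include <assert.h>

#define VMF_ERROR 1	/* stand-in for vmf_error(PTR_ERR(page)) */

/* Toy sketch of the simplified error path in hugetlb_no_page():
 * ret defaults to 0, is promoted to an error only when no racing
 * fault filled the pte while the lock was dropped, and a single
 * unlock/goto is shared by both outcomes. */
static int handle_alloc_failure(int pte_is_none)
{
	int ret = 0;

	if (pte_is_none)	/* no racing fault filled the pte */
		ret = VMF_ERROR;
	/* spin_unlock(ptl); goto out;  -- shared by both outcomes */
	return ret;
}
```

Both the race case (another fault already populated the pte, so return 0) and the genuine failure reach the same unlock and `goto out`, which is what lets the duplicated `spin_unlock(ptl); goto out;` be dropped.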



* [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case
  2021-03-08 11:28 [PATCH 0/5] Some cleanups for hugetlb Miaohe Lin
                   ` (3 preceding siblings ...)
  2021-03-08 11:28 ` [PATCH 4/5] mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page() Miaohe Lin
@ 2021-03-08 11:28 ` Miaohe Lin
  2021-03-12 20:03   ` Mike Kravetz
  4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-03-08 11:28 UTC (permalink / raw)
  To: akpm, mike.kravetz; +Cc: linux-kernel, linux-mm, linmiaohe

The fault_mutex hashing overhead can be avoided in the truncate_op case
because page faults cannot race with truncation in this routine. So calculate
the fault_mutex hash only in the !truncate_op case to save some cpu cycles.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 fs/hugetlbfs/inode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index c262566f7c5d..d81f52b87bd7 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -482,10 +482,9 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 
 		for (i = 0; i < pagevec_count(&pvec); ++i) {
 			struct page *page = pvec.pages[i];
-			u32 hash;
+			u32 hash = 0;
 
 			index = page->index;
-			hash = hugetlb_fault_mutex_hash(mapping, index);
 			if (!truncate_op) {
 				/*
 				 * Only need to hold the fault mutex in the
@@ -493,6 +492,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 				 * page faults.  Races are not possible in the
 				 * case of truncation.
 				 */
+				hash = hugetlb_fault_mutex_hash(mapping, index);
 				mutex_lock(&hugetlb_fault_mutex_table[hash]);
 			}
 
-- 
2.19.1
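
The hoisted hash computation can be modelled in userspace. This is a toy sketch, not kernel code: `toy_hash()`, `process_page()`, and the `hash_calls` counter are invented stand-ins for `hugetlb_fault_mutex_hash()` and the loop body of `remove_inode_hugepages()`; only the placement of the hash call mirrors the patch.

```c
#include <assert.h>

static int hash_calls;	/* counts how often the hash is computed */

/* Stand-in for hugetlb_fault_mutex_hash(mapping, index). */
static unsigned int toy_hash(unsigned long index)
{
	hash_calls++;
	return (unsigned int)(index % 8);
}

/* Toy sketch of one loop iteration in remove_inode_hugepages():
 * the hash is only computed when the fault mutex will actually
 * be taken, i.e. in the !truncate_op case. */
static void process_page(unsigned long index, int truncate_op)
{
	unsigned int hash = 0;	/* init silences "maybe uninitialized" */

	if (!truncate_op) {
		hash = toy_hash(index);
		/* mutex_lock(&hugetlb_fault_mutex_table[hash]); */
	}
	(void)hash;
	/* ... remove the page, then unlock if !truncate_op ... */
}
```

In the truncate path the hash function is never entered, which is the whole saving; the `= 0` initializer exists only to keep the compiler quiet, as discussed downthread.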



* Re: [PATCH 3/5] hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate()
  2021-03-08 11:28 ` [PATCH 3/5] hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate() Miaohe Lin
@ 2021-03-12 19:46   ` Mike Kravetz
  0 siblings, 0 replies; 13+ messages in thread
From: Mike Kravetz @ 2021-03-12 19:46 UTC (permalink / raw)
  To: Miaohe Lin, akpm; +Cc: linux-kernel, linux-mm

On 3/8/21 3:28 AM, Miaohe Lin wrote:
> !PageHuge(oldhpage) is implicitly checked in page_hstate() above, so we
> remove this explicit one.

Correct.  In addition, this will only be called for hugetlb pages.  The
VM_BUG_ON_PAGE is unnecessary.

> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb_cgroup.c | 1 -
>  1 file changed, 1 deletion(-)

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

> 
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> index 8668ba87cfe4..3dde6ddf0170 100644
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -785,7 +785,6 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
>  	if (hugetlb_cgroup_disabled())
>  		return;
>  
> -	VM_BUG_ON_PAGE(!PageHuge(oldhpage), oldhpage);
>  	spin_lock(&hugetlb_lock);
>  	h_cg = hugetlb_cgroup_from_page(oldhpage);
>  	h_cg_rsvd = hugetlb_cgroup_from_page_rsvd(oldhpage);
> 


* Re: [PATCH 4/5] mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page()
  2021-03-08 11:28 ` [PATCH 4/5] mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page() Miaohe Lin
@ 2021-03-12 19:58   ` Mike Kravetz
  2021-03-13  2:54     ` Miaohe Lin
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Kravetz @ 2021-03-12 19:58 UTC (permalink / raw)
  To: Miaohe Lin, akpm; +Cc: linux-kernel, linux-mm

On 3/8/21 3:28 AM, Miaohe Lin wrote:
> Rework the error handling code when alloc_huge_page() failed to remove some
> duplicated code and simplify the code slightly.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 695603071f2c..69b8de866a24 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4337,13 +4337,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  			 * sure there really is no pte entry.
>  			 */
>  			ptl = huge_pte_lock(h, mm, ptep);
> -			if (!huge_pte_none(huge_ptep_get(ptep))) {
> -				ret = 0;
> -				spin_unlock(ptl);
> -				goto out;
> -			}
> +			ret = 0;
> +			if (huge_pte_none(huge_ptep_get(ptep)))
> +				ret = vmf_error(PTR_ERR(page));

This new code is simpler.

The !huge_pte_none() catches an unlikely race.  IMO, the existing code
made that very clear.  Would have been even more clear with an unlikely
modifier.  In any case, the lengthy comment above this code makes it
clear why the check is there.  Code changes are fine.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

-- 
Mike Kravetz

>  			spin_unlock(ptl);
> -			ret = vmf_error(PTR_ERR(page));
>  			goto out;
>  		}
>  		clear_huge_page(page, address, pages_per_huge_page(h));
> 


* Re: [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case
  2021-03-08 11:28 ` [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case Miaohe Lin
@ 2021-03-12 20:03   ` Mike Kravetz
  2021-03-13  2:49     ` Miaohe Lin
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Kravetz @ 2021-03-12 20:03 UTC (permalink / raw)
  To: Miaohe Lin, akpm; +Cc: linux-kernel, linux-mm

On 3/8/21 3:28 AM, Miaohe Lin wrote:
> The fault_mutex hashing overhead can be avoided in truncate_op case because
> page faults can not race with truncation in this routine. So calculate hash
> for fault_mutex only in !truncate_op case to save some cpu cycles.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  fs/hugetlbfs/inode.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index c262566f7c5d..d81f52b87bd7 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -482,10 +482,9 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  
>  		for (i = 0; i < pagevec_count(&pvec); ++i) {
>  			struct page *page = pvec.pages[i];
> -			u32 hash;
> +			u32 hash = 0;

Do we need to initialize hash here?
I would not bring this up normally, but the purpose of the patch is to save
cpu cycles.
-- 
Mike Kravetz

>  
>  			index = page->index;
> -			hash = hugetlb_fault_mutex_hash(mapping, index);
>  			if (!truncate_op) {
>  				/*
>  				 * Only need to hold the fault mutex in the
> @@ -493,6 +492,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  				 * page faults.  Races are not possible in the
>  				 * case of truncation.
>  				 */
> +				hash = hugetlb_fault_mutex_hash(mapping, index);
>  				mutex_lock(&hugetlb_fault_mutex_table[hash]);
>  			}
>  
> 


* Re: [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case
  2021-03-12 20:03   ` Mike Kravetz
@ 2021-03-13  2:49     ` Miaohe Lin
  2021-03-13 21:17       ` Mike Kravetz
  0 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-03-13  2:49 UTC (permalink / raw)
  To: Mike Kravetz, akpm; +Cc: linux-kernel, linux-mm

Hi:
On 2021/3/13 4:03, Mike Kravetz wrote:
> On 3/8/21 3:28 AM, Miaohe Lin wrote:
>> The fault_mutex hashing overhead can be avoided in truncate_op case because
>> page faults can not race with truncation in this routine. So calculate hash
>> for fault_mutex only in !truncate_op case to save some cpu cycles.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  fs/hugetlbfs/inode.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>> index c262566f7c5d..d81f52b87bd7 100644
>> --- a/fs/hugetlbfs/inode.c
>> +++ b/fs/hugetlbfs/inode.c
>> @@ -482,10 +482,9 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>>  
>>  		for (i = 0; i < pagevec_count(&pvec); ++i) {
>>  			struct page *page = pvec.pages[i];
>> -			u32 hash;
>> +			u32 hash = 0;
> 
> Do we need to initialize hash here?
> I would not bring this up normally, but the purpose of the patch is to save
> cpu cycles.

The hash is initialized here to avoid a false-positive
"uninitialized local variable used" warning. Or is it indeed unnecessary?

Many thanks for review.

> 



* Re: [PATCH 4/5] mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page()
  2021-03-12 19:58   ` Mike Kravetz
@ 2021-03-13  2:54     ` Miaohe Lin
  0 siblings, 0 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-03-13  2:54 UTC (permalink / raw)
  To: Mike Kravetz, akpm; +Cc: linux-kernel, linux-mm

On 2021/3/13 3:58, Mike Kravetz wrote:
> On 3/8/21 3:28 AM, Miaohe Lin wrote:
>> Rework the error handling code when alloc_huge_page() failed to remove some
>> duplicated code and simplify the code slightly.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/hugetlb.c | 9 +++------
>>  1 file changed, 3 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 695603071f2c..69b8de866a24 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4337,13 +4337,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>  			 * sure there really is no pte entry.
>>  			 */
>>  			ptl = huge_pte_lock(h, mm, ptep);
>> -			if (!huge_pte_none(huge_ptep_get(ptep))) {
>> -				ret = 0;
>> -				spin_unlock(ptl);
>> -				goto out;
>> -			}
>> +			ret = 0;
>> +			if (huge_pte_none(huge_ptep_get(ptep)))
>> +				ret = vmf_error(PTR_ERR(page));
> 
> This new code is simpler.
> 
> The !huge_pte_none() catches an unlikely race.  IMO, the existing code
> made that very clear.  Would have been even more clear with an unlikely
> modifier.  In any case, the lengthy comment above this code makes it
> clear why the check is there.  Code changes are fine.
> 

Yep, the lengthy comment above this code makes it much clearer why we need the check.
Thanks for the careful review! :)

> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>


* Re: [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case
  2021-03-13  2:49     ` Miaohe Lin
@ 2021-03-13 21:17       ` Mike Kravetz
  2021-03-15  1:50         ` Miaohe Lin
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Kravetz @ 2021-03-13 21:17 UTC (permalink / raw)
  To: Miaohe Lin, akpm; +Cc: linux-kernel, linux-mm

On 3/12/21 6:49 PM, Miaohe Lin wrote:
> Hi:
> On 2021/3/13 4:03, Mike Kravetz wrote:
>> On 3/8/21 3:28 AM, Miaohe Lin wrote:
>>> The fault_mutex hashing overhead can be avoided in truncate_op case because
>>> page faults can not race with truncation in this routine. So calculate hash
>>> for fault_mutex only in !truncate_op case to save some cpu cycles.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>>  fs/hugetlbfs/inode.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>>> index c262566f7c5d..d81f52b87bd7 100644
>>> --- a/fs/hugetlbfs/inode.c
>>> +++ b/fs/hugetlbfs/inode.c
>>> @@ -482,10 +482,9 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>>>  
>>>  		for (i = 0; i < pagevec_count(&pvec); ++i) {
>>>  			struct page *page = pvec.pages[i];
>>> -			u32 hash;
>>> +			u32 hash = 0;
>>
>> Do we need to initialize hash here?
>> I would not bring this up normally, but the purpose of the patch is to save
>> cpu cycles.
> 
> The hash is initialized here in order to avoid false positive
> "uninitialized local variable used" warning. Or this is indeed unnecessary?
> 

Of course.  In this case we know more about the usage than the compiler.
You can add:

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz


* Re: [PATCH 5/5] mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case
  2021-03-13 21:17       ` Mike Kravetz
@ 2021-03-15  1:50         ` Miaohe Lin
  0 siblings, 0 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-03-15  1:50 UTC (permalink / raw)
  To: Mike Kravetz, akpm; +Cc: linux-kernel, linux-mm

On 2021/3/14 5:17, Mike Kravetz wrote:
> On 3/12/21 6:49 PM, Miaohe Lin wrote:
>> Hi:
>> On 2021/3/13 4:03, Mike Kravetz wrote:
>>> On 3/8/21 3:28 AM, Miaohe Lin wrote:
>>>> The fault_mutex hashing overhead can be avoided in truncate_op case because
>>>> page faults can not race with truncation in this routine. So calculate hash
>>>> for fault_mutex only in !truncate_op case to save some cpu cycles.
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  fs/hugetlbfs/inode.c | 4 ++--
>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>>>> index c262566f7c5d..d81f52b87bd7 100644
>>>> --- a/fs/hugetlbfs/inode.c
>>>> +++ b/fs/hugetlbfs/inode.c
>>>> @@ -482,10 +482,9 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>>>>  
>>>>  		for (i = 0; i < pagevec_count(&pvec); ++i) {
>>>>  			struct page *page = pvec.pages[i];
>>>> -			u32 hash;
>>>> +			u32 hash = 0;
>>>
>>> Do we need to initialize hash here?
>>> I would not bring this up normally, but the purpose of the patch is to save
>>> cpu cycles.
>>
>> The hash is initialized here in order to avoid false positive
>> "uninitialized local variable used" warning. Or this is indeed unnecessary?
>>
> 
> Of course.  In this case we know more about usage then the compiler.
> You can add:
> 

I see. Many thanks. Am I supposed to resend the whole v2 patch series, or just
a single v2 patch with the change mentioned above?
Please let me know which is easiest for you.

> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>

