* [PATCH] hugetlb: fix huge_pmd_unshare address update
@ 2022-05-24 20:50 Mike Kravetz
  2022-05-26  8:01 ` Muchun Song
  0 siblings, 1 reply; 2+ messages in thread
From: Mike Kravetz @ 2022-05-24 20:50 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Andrew Morton, Mike Kravetz, stable

The routine huge_pmd_unshare is passed a pointer to an address
associated with an area that may be unshared.  If the unshare is
successful, this address is updated to 'optimize' callers iterating
over huge page addresses.  For the optimization to work correctly, the
address should be updated to the last huge page in the
unmapped/unshared area.  However, in the common case where the passed
address is PUD_SIZE aligned, the address is incorrectly updated to the
address of the preceding huge page.  That wastes CPU cycles, as the
unmapped/unshared range is scanned twice.
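
As an illustration, the following user-space sketch shows the effect of
the old and new updates for a PUD_SIZE aligned address (illustrative
only; it assumes x86-64 sizes, 2 MiB PMD_SIZE and 1 GiB PUD_SIZE, with
the constants defined locally rather than taken from kernel headers):

  #include <stdio.h>

  /* Illustrative values only: x86-64 with 2 MiB huge pages. */
  #define PMD_SIZE	(1UL << 21)
  #define PUD_SIZE	(1UL << 30)
  #define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
  	unsigned long addr = PUD_SIZE;	/* already PUD_SIZE aligned */

  	/*
  	 * Old update: ALIGN() is a no-op for an aligned address, so
  	 * this points at the huge page preceding the cleared area.
  	 */
  	unsigned long old_addr = ALIGN(addr, PUD_SIZE) - PMD_SIZE;

  	/*
  	 * New update: last huge page inside the cleared PUD_SIZE area,
  	 * so a caller doing 'addr += PMD_SIZE' moves just past it.
  	 */
  	unsigned long new_addr = addr | (PUD_SIZE - PMD_SIZE);

  	printf("old: %#lx\n", old_addr);	/* 0x3fe00000 */
  	printf("new: %#lx\n", new_addr);	/* 0x7fe00000 */
  	return 0;
  }

With the old update, the caller's next step of PMD_SIZE lands back at
the start of the just-cleared area and rescans it; with the new update
it lands on the first page past that area.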

Cc: <stable@vger.kernel.org>
Fixes: 39dde65c9940 ("shared page table for hugetlb page")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 01f0e2e5ab48..7c468ac1d069 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6755,7 +6755,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	pud_clear(pud);
 	put_page(virt_to_page(ptep));
 	mm_dec_nr_pmds(mm);
-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+	/*
+	 * This update of the passed address optimizes loops that
+	 * sequentially process addresses in huge page (PMD_SIZE)
+	 * increments.  Clearing the pud unmaps a PUD_SIZE area, so
+	 * update the address to the last huge page in that area; the
+	 * calling loop can then move to the first page past this area.
+	 */
+	*addr |= PUD_SIZE - PMD_SIZE;
 	return 1;
 }
 
-- 
2.35.3
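
For context, the loops this update optimizes have roughly the following
shape (a schematic sketch only, not the actual kernel code; the unmap
step is elided):

  	for (address = start; address < end; address += sz) {
  		ptep = huge_pte_offset(mm, address, sz);
  		if (!ptep)
  			continue;
  		/*
  		 * On success this unmaps a whole PUD_SIZE area and
  		 * advances 'address' so the next iteration starts
  		 * just past that area instead of rescanning it.
  		 */
  		if (huge_pmd_unshare(mm, vma, &address, ptep))
  			continue;
  		/* ... unmap the single huge page at address ... */
  	}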



* Re: [PATCH] hugetlb: fix huge_pmd_unshare address update
  2022-05-24 20:50 [PATCH] hugetlb: fix huge_pmd_unshare address update Mike Kravetz
@ 2022-05-26  8:01 ` Muchun Song
  0 siblings, 0 replies; 2+ messages in thread
From: Muchun Song @ 2022-05-26  8:01 UTC (permalink / raw)
  To: Mike Kravetz; +Cc: linux-mm, linux-kernel, Andrew Morton, stable

On Tue, May 24, 2022 at 01:50:03PM -0700, Mike Kravetz wrote:
> The routine huge_pmd_unshare is passed a pointer to an address
> associated with an area that may be unshared.  If the unshare is
> successful, this address is updated to 'optimize' callers iterating
> over huge page addresses.  For the optimization to work correctly, the
> address should be updated to the last huge page in the
> unmapped/unshared area.  However, in the common case where the passed
> address is PUD_SIZE aligned, the address is incorrectly updated to the
> address of the preceding huge page.  That wastes CPU cycles, as the
> unmapped/unshared range is scanned twice.
> 
> Cc: <stable@vger.kernel.org>
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

Acked-by: Muchun Song <songmuchun@bytedance.com>

Thanks.

