From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com,
	mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/5] mm/hugetlb: fix races when looking up a CONT-PMD size hugetlb page
Date: Tue, 23 Aug 2022 15:50:03 +0800
Message-ID: <e057d7392325d7fb8a16d24c41b1394359daae23.1661240170.git.baolin.wang@linux.alibaba.com>
In-Reply-To: <cover.1661240170.git.baolin.wang@linux.alibaba.com>

On some architectures (like ARM64), CONT-PTE/PMD size hugetlb pages are
supported, which means they can provide not only PMD/PUD size hugetlb
pages (2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M, i.e. 16
contiguous 4K PTEs or 16 contiguous 2M PMDs) when a 4K base page size
is used.

When looking up a CONT-PMD size hugetlb page via follow_page(),
follow_huge_pmd() always takes the PMD split page-table lock to protect
the pmd entry. However, that is not the correct lock for a CONT-PMD size
hugetlb page, so the pmd entry is unstable under it: the page can still
be migrated or poisoned concurrently, and the lookup may not return the
correct CONT-PMD size page.

Thus switch to huge_pte_lock() to take the correct pmd entry lock for
CONT-PMD size hugetlb pages and fix the potential race.
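For context, huge_pte_lock() is the right helper here because it picks
the page-table lock from the hstate's huge page size instead of assuming
a PMD-level entry; for a 32M CONT-PMD hstate it falls back to
mm->page_table_lock rather than the split PMD lock. A rough sketch of the
helpers, paraphrased from include/linux/hugetlb.h around the time of this
series (details may differ between kernel versions):

/*
 * Illustration only: paraphrase of the locking helpers, not part of
 * this patch.
 */
static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					   struct mm_struct *mm, pte_t *pte)
{
	/* Only a real PMD-size hstate uses the split PMD lock. */
	if (huge_page_size(h) == PMD_SIZE)
		return pmd_lockptr(mm, (pmd_t *) pte);
	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
	/* Everything else (e.g. CONT-PMD) serializes on the mm-wide lock. */
	return &mm->page_table_lock;
}

static inline spinlock_t *huge_pte_lock(struct hstate *h,
					struct mm_struct *mm, pte_t *pte)
{
	spinlock_t *ptl = huge_pte_lockptr(h, mm, pte);

	spin_lock(ptl);
	return ptl;
}

Taking the lock this way means the lookup serializes against the other
hugetlb paths (migration, poisoning) that use the same lock for the
CONT-PMD entry.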

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h | 4 ++--
 mm/gup.c                | 2 +-
 mm/hugetlb.c            | 7 ++++---
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4b172a7..3a96f67 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -209,7 +209,7 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
 			    int flags, int pdshift);
 struct page *follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
 			     pmd_t *pmd, int flags);
-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+struct page *follow_huge_pmd(struct vm_area_struct *vma, unsigned long address,
 				pmd_t *pmd, int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int flags);
@@ -320,7 +320,7 @@ static inline struct page *follow_huge_pte(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_huge_pmd(struct mm_struct *mm,
+static inline struct page *follow_huge_pmd(struct vm_area_struct *vma,
 				unsigned long address, pmd_t *pmd, int flags)
 {
 	return NULL;
diff --git a/mm/gup.c b/mm/gup.c
index 87a94f5..014accd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -673,7 +673,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	if (pmd_none(pmdval))
 		return no_page_table(vma, flags);
 	if (pmd_huge(pmdval) && is_vm_hugetlb_page(vma)) {
-		page = follow_huge_pmd(mm, address, pmd, flags);
+		page = follow_huge_pmd(vma, address, pmd, flags);
 		if (page)
 			return page;
 		return no_page_table(vma, flags);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cf742d1..2c4048a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7035,9 +7035,11 @@ struct page * __weak
 }
 
 struct page * __weak
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+follow_huge_pmd(struct vm_area_struct *vma, unsigned long address,
 		pmd_t *pmd, int flags)
 {
+	struct mm_struct *mm = vma->vm_mm;
+	struct hstate *hstate = hstate_vma(vma);
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t pte;
@@ -7050,8 +7052,7 @@ struct page * __weak
 		return NULL;
 
 retry:
-	ptl = pmd_lockptr(mm, pmd);
-	spin_lock(ptl);
+	ptl = huge_pte_lock(hstate, mm, (pte_t *)pmd);
 	/*
 	 * make sure that the address range covered by this pmd is not
 	 * unmapped from other threads.
-- 
1.8.3.1

