linux-mm.kvack.org archive mirror
* [PATCH v2] mm/hugetlb: fix race when migrate pages
@ 2016-07-19 13:45 zhongjiang
  2016-07-20  7:38 ` Michal Hocko
  0 siblings, 1 reply; 9+ messages in thread
From: zhongjiang @ 2016-07-19 13:45 UTC (permalink / raw)
  To: mhocko, vbabka, qiuxishi, akpm; +Cc: linux-mm

From: zhong jiang <zhongjiang@huawei.com>

I hit the following BUG_ON in huge_pte_alloc() while running a database
workload and onlining/offlining memory on the system:

BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));

When PMD sharing is enabled, we may obtain a shared pmd entry. With a
memory offline in progress, that pmd entry can be turned into a migration
entry while the huge page it maps is migrated away. A migration entry is
neither none nor a present huge pte, so the BUG_ON fires.
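
For context, the BUG_ON sits at the end of huge_pte_alloc(). Roughly, from
the mm/hugetlb.c of this era (a paraphrased sketch; the exact code varies
by kernel version and architecture):

pte_t *huge_pte_alloc(struct mm_struct *mm,
			unsigned long addr, unsigned long sz)
{
	pgd_t *pgd;
	pud_t *pud;
	pte_t *pte = NULL;

	pgd = pgd_offset(mm, addr);
	pud = pud_alloc(mm, pgd, addr);
	if (pud) {
		if (sz == PUD_SIZE) {
			pte = (pte_t *)pud;
		} else {
			BUG_ON(sz != PMD_SIZE);
			if (want_pmd_share() && pud_none(*pud))
				/* may hand back a pmd whose entry is a migration entry */
				pte = huge_pmd_share(mm, addr, pud);
			else
				pte = (pte_t *)pmd_alloc(mm, pud, addr);
		}
	}
	/*
	 * A migration entry is neither none nor a present huge pte,
	 * so a shared pmd under migration trips this check.
	 */
	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));

	return pte;
}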

Fix it by rechecking the shared pmd entry once the page table lock is
held: if the entry is a migration (or hwpoisoned) entry, do not share it
and fall back to allocating a new pmd instead.
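
For reference, the check added below leans on the existing hugetlb helpers
that classify non-present entries; approximately (as they looked in
mm/hugetlb.c around this time):

static int is_hugetlb_entry_migration(pte_t pte)
{
	swp_entry_t swp;

	if (huge_pte_none(pte) || pte_present(pte))
		return 0;
	swp = pte_to_swp_entry(pte);
	if (non_swap_entry(swp) && is_migration_entry(swp))
		return 1;
	else
		return 0;
}

When the new check fires, huge_pmd_share() unlocks and bails out without
populating the pud, so the pmd_alloc() at the out: label allocates a
fresh, private pmd for this mm instead of sharing the one under migration.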

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 mm/hugetlb.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6384dfd..797db55 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4213,7 +4213,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	struct vm_area_struct *svma;
 	unsigned long saddr;
 	pte_t *spte = NULL;
-	pte_t *pte;
+	pte_t *pte, entry;
 	spinlock_t *ptl;
 
 	if (!vma_shareable(vma, addr))
@@ -4240,6 +4240,11 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 
 	ptl = huge_pte_lockptr(hstate_vma(vma), mm, spte);
 	spin_lock(ptl);
+	entry = huge_ptep_get(spte);
+	if (is_hugetlb_entry_migration(entry) ||
+			is_hugetlb_entry_hwpoisoned(entry)) {
+		goto out_unlock;
+	}
 	if (pud_none(*pud)) {
 		pud_populate(mm, pud,
 				(pmd_t *)((unsigned long)spte & PAGE_MASK));
@@ -4247,6 +4252,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 		put_page(virt_to_page(spte));
 		mm_dec_nr_pmds(mm);
 	}
+
+out_unlock:
 	spin_unlock(ptl);
 out:
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
-- 
1.8.3.1


* [PATCH v2] mm/hugetlb: fix race when migrate pages
@ 2016-07-19 13:04 zhongjiang
  0 siblings, 0 replies; 9+ messages in thread
From: zhongjiang @ 2016-07-19 13:04 UTC (permalink / raw)
  To: mhocko, vbabka, qiuxishi, akpm; +Cc: linux-mm

From: zhong jiang <zhongjiang@huawei.com>

I hit the following BUG_ON in huge_pte_alloc() while running a database
workload and onlining/offlining memory on the system:

BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));

When PMD sharing is enabled, we may obtain a shared pmd entry. With a
memory offline in progress, that pmd entry can be turned into a migration
entry while the huge page it maps is migrated away. A migration entry is
neither none nor a present huge pte, so the BUG_ON fires.

Fix it by rechecking the shared pmd entry once the page table lock is
held: if the entry is a migration (or hwpoisoned) entry, do not share it
and fall back to allocating a new pmd instead.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 mm/hugetlb.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6384dfd..3454051 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4213,7 +4213,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	struct vm_area_struct *svma;
 	unsigned long saddr;
 	pte_t *spte = NULL;
-	pte_t *pte;
+	pte_t *pte, entry;
 	spinlock_t *ptl;
 
 	if (!vma_shareable(vma, addr))
@@ -4240,6 +4240,11 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 
 	ptl = huge_pte_lockptr(hstate_vma(vma), mm, spte);
 	spin_lock(ptl);
+	entry = huge_ptep_get(spte);
+	if (is_hugetlb_entry_migration(entry) ||
+			is_hugetlb_entry_hwpoisoned(entry)) {
+		goto out_unlock;
+	}
 	if (pud_none(*pud)) {
 		pud_populate(mm, pud,
 				(pmd_t *)((unsigned long)spte & PAGE_MASK));
@@ -4247,6 +4252,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 		put_page(virt_to_page(spte));
 		mm_dec_nr_pmds(mm);
 	}
+out_unlock:
 	spin_unlock(ptl);
 out:
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
-- 
1.8.3.1



Thread overview: 9+ messages
2016-07-19 13:45 [PATCH v2] mm/hugetlb: fix race when migrate pages zhongjiang
2016-07-20  7:38 ` Michal Hocko
2016-07-20 10:03   ` zhong jiang
2016-07-20 12:16     ` Michal Hocko
2016-07-20 12:45       ` Michal Hocko
2016-07-20 13:00         ` Michal Hocko
2016-07-20 13:24           ` Michal Hocko
2016-07-21  9:25             ` zhong jiang
  -- strict thread matches above, loose matches on Subject: below --
2016-07-19 13:04 zhongjiang
