* [PATCH] mm/hmm: Cleanup hmm_vma_walk_pud()/walk_pud_range()
@ 2019-12-20 15:38 Steven Price
  2019-12-20 15:54 ` Thomas Hellström (VMware)
From: Steven Price @ 2019-12-20 15:38 UTC (permalink / raw)
  To: Jérôme Glisse, Andrew Morton
  Cc: linux-kernel, linux-mm, Steven Price, Thomas Hellström

There are a number of minor misuses of the page table APIs in
hmm_vma_walk_pud():

If the pud_trans_huge_lock() hasn't been obtained it might be because
the PUD is unstable, so we should retry.

If it has been obtained then there's no need for a READ_ONCE, and the
PUD cannot be pud_none() or !pud_present() so these paths are dead code.

Finally in walk_pud_range(), after a call to split_huge_pud() the code
should check pud_trans_unstable() rather than pud_none() to decide
whether the PUD should be retried.

Suggested-by: Thomas Hellström (VMware) <thomas_os@shipmail.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
This is based on top of my "Generic page walk and ptdump" series and
fixes some pre-existing bugs spotted by Thomas.
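
As a side note for reviewers, the retry decision this patch changes can be
modeled in plain userspace C. This is a hypothetical sketch, not kernel code:
the enum and the model_* helpers below are stand-ins for pud_none() and
pud_trans_unstable() (the real pud_trans_unstable() additionally treats a
corrupted "bad" entry as unstable), used only to illustrate why the old
pud_none() check misses the huge-entry case.

```c
#include <stdbool.h>

/* Hypothetical states a PUD entry can be observed in by a walker. */
enum pud_state {
	PUD_STATE_NONE,       /* no entry: pud_none() is true          */
	PUD_STATE_TRANS_HUGE, /* huge entry a concurrent split may rewrite */
	PUD_STATE_TABLE,      /* stable entry pointing at a PMD table  */
};

/* Models pud_none(): the check the old walk_pud_range() code used. */
static bool model_pud_none(enum pud_state s)
{
	return s == PUD_STATE_NONE;
}

/* Models pud_trans_unstable(): the entry may still change under us,
 * either because it is none or because it is a huge entry in flux. */
static bool model_pud_trans_unstable(enum pud_state s)
{
	return s == PUD_STATE_NONE || s == PUD_STATE_TRANS_HUGE;
}

/* Decision after split_huge_pud(): should the walker goto again?
 * The old code retried only on none; the fix also retries when the
 * entry was re-established as (or remained) a huge entry. */
static bool should_retry_old(enum pud_state s)
{
	return model_pud_none(s);
}

static bool should_retry_new(enum pud_state s)
{
	return model_pud_trans_unstable(s);
}
```

The interesting case is PUD_STATE_TRANS_HUGE: should_retry_old() returns
false there and the walker would descend into a huge entry as if it were a
table, while should_retry_new() correctly loops back.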

 mm/hmm.c      | 16 +++++-----------
 mm/pagewalk.c |  2 +-
 2 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index a71295e99968..d4aae4dcc6e8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -480,28 +480,22 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 	int ret = 0;
 	spinlock_t *ptl = pud_trans_huge_lock(pudp, walk->vma);
 
-	if (!ptl)
+	if (!ptl) {
+		if (pud_trans_unstable(pudp))
+			walk->action = ACTION_AGAIN;
 		return 0;
+	}
 
 	/* Normally we don't want to split the huge page */
 	walk->action = ACTION_CONTINUE;
 
-	pud = READ_ONCE(*pudp);
-	if (pud_none(pud)) {
-		ret = hmm_vma_walk_hole(start, end, -1, walk);
-		goto out_unlock;
-	}
+	pud = *pudp;
 
 	if (pud_huge(pud) && pud_devmap(pud)) {
 		unsigned long i, npages, pfn;
 		uint64_t *pfns, cpu_flags;
 		bool fault, write_fault;
 
-		if (!pud_present(pud)) {
-			ret = hmm_vma_walk_hole(start, end, -1, walk);
-			goto out_unlock;
-		}
-
 		i = (addr - range->start) >> PAGE_SHIFT;
 		npages = (end - addr) >> PAGE_SHIFT;
 		pfns = &range->pfns[i];
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 5895ce4f1a85..4598f545b869 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -154,7 +154,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 
 		if (walk->vma)
 			split_huge_pud(walk->vma, pud, addr);
-		if (pud_none(*pud))
+		if (pud_trans_unstable(pud))
 			goto again;
 
 		err = walk_pmd_range(pud, addr, next, walk);
-- 
2.20.1



* Re: [PATCH] mm/hmm: Cleanup hmm_vma_walk_pud()/walk_pud_range()
  2019-12-20 15:38 [PATCH] mm/hmm: Cleanup hmm_vma_walk_pud()/walk_pud_range() Steven Price
@ 2019-12-20 15:54 ` Thomas Hellström (VMware)
From: Thomas Hellström (VMware) @ 2019-12-20 15:54 UTC (permalink / raw)
  To: Steven Price, Jérôme Glisse, Andrew Morton
  Cc: linux-kernel, linux-mm

On 12/20/19 4:38 PM, Steven Price wrote:
> There are a number of minor misuses of the page table APIs in
> hmm_vma_walk_pud():
>
> If the pud_trans_huge_lock() hasn't been obtained it might be because
> the PUD is unstable, so we should retry.
>
> If it has been obtained then there's no need for a READ_ONCE, and the
> PUD cannot be pud_none() or !pud_present() so these paths are dead code.
>
> Finally in walk_pud_range(), after a call to split_huge_pud() the code
> should check pud_trans_unstable() rather than pud_none() to decide
> whether the PUD should be retried.
>
> Suggested-by: Thomas Hellström (VMware) <thomas_os@shipmail.org>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> This is based on top of my "Generic page walk and ptdump" series and
> fixes some pre-existing bugs spotted by Thomas.
>
>   mm/hmm.c      | 16 +++++-----------
>   mm/pagewalk.c |  2 +-
>   2 files changed, 6 insertions(+), 12 deletions(-)

LGTM.

Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>



