linux-mm.kvack.org archive mirror
* misc hmm cleanups
@ 2020-03-16 13:53 Christoph Hellwig
  2020-03-16 13:53 ` [PATCH 1/5] mm: don't provide a stub for hmm_range_fault Christoph Hellwig
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 13:53 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

Hi Jason,

I started looking at bits in my backlog that shouldn't get too much
in your way.  Here is a bunch of simple patches, on top of the series
you sent out.



* [PATCH 1/5] mm: don't provide a stub for hmm_range_fault
  2020-03-16 13:53 misc hmm cleanups Christoph Hellwig
@ 2020-03-16 13:53 ` Christoph Hellwig
  2020-03-16 14:37   ` Zi Yan
  2020-03-16 16:45   ` Jason Gunthorpe
  2020-03-16 13:53 ` [PATCH 2/5] mm: remove the unused HMM_FAULT_ALLOW_RETRY flag Christoph Hellwig
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 13:53 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

All callers of hmm_range_fault depend on CONFIG_HMM_MIRROR, so
don't bother with a stub.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/hmm.h | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index ddf9f7144c43..c102e359b59d 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -225,17 +225,10 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
 /* Don't fault in missing PTEs, just snapshot the current state. */
 #define HMM_FAULT_SNAPSHOT		(1 << 1)
 
-#ifdef CONFIG_HMM_MIRROR
 /*
  * Please see Documentation/vm/hmm.rst for how to use the range API.
  */
 long hmm_range_fault(struct hmm_range *range, unsigned int flags);
-#else
-static inline long hmm_range_fault(struct hmm_range *range, unsigned int flags)
-{
-	return -EOPNOTSUPP;
-}
-#endif
 
 /*
  * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
-- 
2.24.1
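
For illustration, a minimal sketch of the kind of call site this relies
on (hypothetical driver code, not from this series; example_snapshot_range
is a made-up name).  Such a file is only built when the driver selects
CONFIG_HMM_MIRROR, which is why the -EOPNOTSUPP stub can never be reached:

#include <linux/hmm.h>

/* Hypothetical caller: snapshot the CPU page table state for a range. */
static long example_snapshot_range(struct hmm_range *range)
{
        /* Don't fault in missing PTEs, just report the current state. */
        return hmm_range_fault(range, HMM_FAULT_SNAPSHOT);
}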




* [PATCH 2/5] mm: remove the unused HMM_FAULT_ALLOW_RETRY flag
  2020-03-16 13:53 misc hmm cleanups Christoph Hellwig
  2020-03-16 13:53 ` [PATCH 1/5] mm: don't provide a stub for hmm_range_fault Christoph Hellwig
@ 2020-03-16 13:53 ` Christoph Hellwig
  2020-03-16 16:44   ` Jason Gunthorpe
  2020-03-16 13:53 ` [PATCH 3/5] mm: simplify hmm_vma_walk_hugetlb_entry Christoph Hellwig
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 13:53 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

The HMM_FAULT_ALLOW_RETRY flag isn't used anywhere in the tree.  Remove
it and the weird -EAGAIN handling where handle_mm_fault drops the
mmap_sem.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/hmm.h | 5 -----
 mm/hmm.c            | 7 -------
 2 files changed, 12 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index c102e359b59d..4bf8d6997b12 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -217,11 +217,6 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
 		range->flags[HMM_PFN_VALID];
 }
 
-/*
- * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
- */
-#define HMM_FAULT_ALLOW_RETRY		(1 << 0)
-
 /* Don't fault in missing PTEs, just snapshot the current state. */
 #define HMM_FAULT_SNAPSHOT		(1 << 1)
 
diff --git a/mm/hmm.c b/mm/hmm.c
index 6d9da4b0f0a9..d7ad71d2a998 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -45,16 +45,10 @@ static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
 	if (!vma)
 		goto err;
 
-	if (hmm_vma_walk->flags & HMM_FAULT_ALLOW_RETRY)
-		flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (write_fault)
 		flags |= FAULT_FLAG_WRITE;
 
 	ret = handle_mm_fault(vma, addr, flags);
-	if (ret & VM_FAULT_RETRY) {
-		/* Note, handle_mm_fault did up_read(&mm->mmap_sem)) */
-		return -EAGAIN;
-	}
 	if (ret & VM_FAULT_ERROR)
 		goto err;
 
@@ -640,7 +634,6 @@ static const struct mm_walk_ops hmm_walk_ops = {
  * -ENOMEM:	Out of memory.
  * -EPERM:	Invalid permission (e.g., asking for write and range is read
  *		only).
- * -EAGAIN:	A page fault needs to be retried and mmap_sem was dropped.
  * -EBUSY:	The range has been invalidated and the caller needs to wait for
  *		the invalidation to finish.
  * -EFAULT:	Invalid (i.e., either no valid vma or it is illegal to access
-- 
2.24.1
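
For context, a rough sketch of the caller pattern this assumes
(hypothetical driver code, not from this series; example_fault_range is
a made-up helper and the mmu notifier sequencing a real caller also
needs is omitted): the caller takes mmap_sem itself and simply retries
on -EBUSY, so nothing ever wanted FAULT_FLAG_ALLOW_RETRY / -EAGAIN:

#include <linux/mm.h>
#include <linux/hmm.h>

/* Hypothetical helper: fault in a range for a device, retrying while
 * hmm_range_fault() reports that the range was invalidated (-EBUSY). */
static long example_fault_range(struct mm_struct *mm, struct hmm_range *range)
{
        long ret;

        do {
                down_read(&mm->mmap_sem);
                ret = hmm_range_fault(range, 0);
                up_read(&mm->mmap_sem);
        } while (ret == -EBUSY);

        return ret;
}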




* [PATCH 3/5] mm: simplify hmm_vma_walk_hugetlb_entry
  2020-03-16 13:53 misc hmm cleanups Christoph Hellwig
  2020-03-16 13:53 ` [PATCH 1/5] mm: don't provide a stub for hmm_range_fault Christoph Hellwig
  2020-03-16 13:53 ` [PATCH 2/5] mm: remove the unused HMM_FAULT_ALLOW_RETRY flag Christoph Hellwig
@ 2020-03-16 13:53 ` Christoph Hellwig
  2020-03-16 16:43   ` Jason Gunthorpe
  2020-03-16 13:53 ` [PATCH 4/5] mm: don't handle the non-fault case in hmm_vma_walk_hole_ Christoph Hellwig
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 13:53 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

Remove the rather confusing goto label and just handle the fault case
directly in the branch checking for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/hmm.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index d7ad71d2a998..6d636373181a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -537,7 +537,6 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	bool fault, write_fault;
 	spinlock_t *ptl;
 	pte_t entry;
-	int ret = 0;
 
 	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
 	entry = huge_ptep_get(pte);
@@ -550,8 +549,8 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
 			   &fault, &write_fault);
 	if (fault || write_fault) {
-		ret = -ENOENT;
-		goto unlock;
+		spin_unlock(ptl);
+		return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
@@ -559,14 +558,8 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 		range->pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
 				 cpu_flags;
 	hmm_vma_walk->last = end;
-
-unlock:
 	spin_unlock(ptl);
-
-	if (ret == -ENOENT)
-		return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
-
-	return ret;
+	return 0;
 }
 #else
 #define hmm_vma_walk_hugetlb_entry NULL
-- 
2.24.1




* [PATCH 4/5] mm: don't handle the non-fault case in hmm_vma_walk_hole_
  2020-03-16 13:53 misc hmm cleanups Christoph Hellwig
                   ` (2 preceding siblings ...)
  2020-03-16 13:53 ` [PATCH 3/5] mm: simplify hmm_vma_walk_hugetlb_entry Christoph Hellwig
@ 2020-03-16 13:53 ` Christoph Hellwig
  2020-03-16 16:43   ` Jason Gunthorpe
  2020-03-16 13:53 ` [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_ Christoph Hellwig
  2020-03-17 18:38 ` misc hmm cleanups Jason Gunthorpe
  5 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 13:53 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

There is just a single caller using hmm_vma_walk_hole_ for the non-fault
case.  Use hmm_pfns_fill to fill the whole pfn array with zeroes in that
caller, and remove the non-fault path from hmm_vma_walk_hole_.

Also rename the function to hmm_vma_fault to better describe what it
does.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/hmm.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 6d636373181a..707edba850de 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -73,45 +73,42 @@ static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 }
 
 /*
- * hmm_vma_walk_hole_() - handle a range lacking valid pmd or pte(s)
+ * hmm_vma_fault() - fault in a range lacking valid pmd or pte(s)
  * @addr: range virtual start address (inclusive)
  * @end: range virtual end address (exclusive)
  * @fault: should we fault or not ?
  * @write_fault: write fault ?
  * @walk: mm_walk structure
- * Return: 0 on success, -EBUSY after page fault, or page fault error
+ * Return: -EBUSY after page fault, or page fault error
  *
  * This function will be called whenever pmd_none() or pte_none() returns true,
  * or whenever there is no page directory covering the virtual address range.
  */
-static int hmm_vma_walk_hole_(unsigned long addr, unsigned long end,
+static int hmm_vma_fault(unsigned long addr, unsigned long end,
 			      bool fault, bool write_fault,
 			      struct mm_walk *walk)
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
 	uint64_t *pfns = range->pfns;
-	unsigned long i;
+	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
 
+	WARN_ON_ONCE(!fault && !write_fault);
 	hmm_vma_walk->last = addr;
-	i = (addr - range->start) >> PAGE_SHIFT;
 
 	if (write_fault && walk->vma && !(walk->vma->vm_flags & VM_WRITE))
 		return -EPERM;
 
 	for (; addr < end; addr += PAGE_SIZE, i++) {
-		pfns[i] = range->values[HMM_PFN_NONE];
-		if (fault || write_fault) {
-			int ret;
+		int ret;
 
-			ret = hmm_vma_do_fault(walk, addr, write_fault,
-					       &pfns[i]);
-			if (ret != -EBUSY)
-				return ret;
-		}
+		pfns[i] = range->values[HMM_PFN_NONE];
+		ret = hmm_vma_do_fault(walk, addr, write_fault, &pfns[i]);
+		if (ret != -EBUSY)
+			return ret;
 	}
 
-	return (fault || write_fault) ? -EBUSY : 0;
+	return -EBUSY;
 }
 
 static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
@@ -193,7 +190,10 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
 	pfns = &range->pfns[i];
 	hmm_range_need_fault(hmm_vma_walk, pfns, npages,
 			     0, &fault, &write_fault);
-	return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
+	if (fault || write_fault)
+		return hmm_vma_fault(addr, end, fault, write_fault, walk);
+	hmm_vma_walk->last = addr;
+	return hmm_pfns_fill(addr, end, range, HMM_PFN_NONE);
 }
 
 static inline uint64_t pmd_to_hmm_pfn_flags(struct hmm_range *range, pmd_t pmd)
@@ -221,7 +221,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 			     &fault, &write_fault);
 
 	if (fault || write_fault)
-		return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
+		return hmm_vma_fault(addr, end, fault, write_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
@@ -352,7 +352,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 fault:
 	pte_unmap(ptep);
 	/* Fault any virtual address we were asked to fault */
-	return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
+	return hmm_vma_fault(addr, end, fault, write_fault, walk);
 }
 
 static int hmm_vma_walk_pmd(pmd_t *pmdp,
@@ -494,7 +494,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 				     cpu_flags, &fault, &write_fault);
 		if (fault || write_fault) {
 			spin_unlock(ptl);
-			return hmm_vma_walk_hole_(addr, end, fault, write_fault,
+			return hmm_vma_fault(addr, end, fault, write_fault,
 						  walk);
 		}
 
@@ -550,7 +550,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 			   &fault, &write_fault);
 	if (fault || write_fault) {
 		spin_unlock(ptl);
-		return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
+		return hmm_vma_fault(addr, end, fault, write_fault, walk);
 	}
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
-- 
2.24.1




* [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_
  2020-03-16 13:53 misc hmm cleanups Christoph Hellwig
                   ` (3 preceding siblings ...)
  2020-03-16 13:53 ` [PATCH 4/5] mm: don't handle the non-fault case in hmm_vma_walk_hole_ Christoph Hellwig
@ 2020-03-16 13:53 ` Christoph Hellwig
  2020-03-16 16:41   ` Jason Gunthorpe
  2020-03-17 18:38 ` misc hmm cleanups Jason Gunthorpe
  5 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 13:53 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

There is no good reason for this split, as it just obfuscates the flow.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/hmm.c | 49 ++++++++++++++++---------------------------------
 1 file changed, 16 insertions(+), 33 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 707edba850de..180e398170b0 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -33,32 +33,6 @@ struct hmm_vma_walk {
 	unsigned int		flags;
 };
 
-static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
-			    bool write_fault, uint64_t *pfn)
-{
-	unsigned int flags = FAULT_FLAG_REMOTE;
-	struct hmm_vma_walk *hmm_vma_walk = walk->private;
-	struct hmm_range *range = hmm_vma_walk->range;
-	struct vm_area_struct *vma = walk->vma;
-	vm_fault_t ret;
-
-	if (!vma)
-		goto err;
-
-	if (write_fault)
-		flags |= FAULT_FLAG_WRITE;
-
-	ret = handle_mm_fault(vma, addr, flags);
-	if (ret & VM_FAULT_ERROR)
-		goto err;
-
-	return -EBUSY;
-
-err:
-	*pfn = range->values[HMM_PFN_ERROR];
-	return -EFAULT;
-}
-
 static int hmm_pfns_fill(unsigned long addr, unsigned long end,
 		struct hmm_range *range, enum hmm_pfn_value_e value)
 {
@@ -90,25 +64,34 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
+	struct vm_area_struct *vma = walk->vma;
 	uint64_t *pfns = range->pfns;
 	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
+	unsigned int fault_flags = FAULT_FLAG_REMOTE;
 
 	WARN_ON_ONCE(!fault && !write_fault);
 	hmm_vma_walk->last = addr;
 
-	if (write_fault && walk->vma && !(walk->vma->vm_flags & VM_WRITE))
-		return -EPERM;
+	if (!vma)
+		goto out_error;
+
+	if (write_fault) {
+		if (!(vma->vm_flags & VM_WRITE))
+			return -EPERM;
+		fault_flags |= FAULT_FLAG_WRITE;
+	}
 
 	for (; addr < end; addr += PAGE_SIZE, i++) {
-		int ret;
-
+		if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
+			goto out_error;
 		pfns[i] = range->values[HMM_PFN_NONE];
-		ret = hmm_vma_do_fault(walk, addr, write_fault, &pfns[i]);
-		if (ret != -EBUSY)
-			return ret;
 	}
 
 	return -EBUSY;
+
+out_error:
+	pfns[i] = range->values[HMM_PFN_ERROR];
+	return -EFAULT;
 }
 
 static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
-- 
2.24.1
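
For readability, the merged function as it reads with this patch
applied, reassembled from the hunk above (the unchanged declarations
come from the surrounding context lines; this is a reading aid, not a
separate change):

static int hmm_vma_fault(unsigned long addr, unsigned long end,
                         bool fault, bool write_fault,
                         struct mm_walk *walk)
{
        struct hmm_vma_walk *hmm_vma_walk = walk->private;
        struct hmm_range *range = hmm_vma_walk->range;
        struct vm_area_struct *vma = walk->vma;
        uint64_t *pfns = range->pfns;
        unsigned long i = (addr - range->start) >> PAGE_SHIFT;
        unsigned int fault_flags = FAULT_FLAG_REMOTE;

        WARN_ON_ONCE(!fault && !write_fault);
        hmm_vma_walk->last = addr;

        if (!vma)
                goto out_error;

        if (write_fault) {
                if (!(vma->vm_flags & VM_WRITE))
                        return -EPERM;
                fault_flags |= FAULT_FLAG_WRITE;
        }

        for (; addr < end; addr += PAGE_SIZE, i++) {
                if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
                        goto out_error;
                pfns[i] = range->values[HMM_PFN_NONE];
        }

        /* Always -EBUSY: the caller retries the walk once the pages exist. */
        return -EBUSY;

out_error:
        pfns[i] = range->values[HMM_PFN_ERROR];
        return -EFAULT;
}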




* Re: [PATCH 1/5] mm: don't provide a stub for hmm_range_fault
  2020-03-16 13:53 ` [PATCH 1/5] mm: don't provide a stub for hmm_range_fault Christoph Hellwig
@ 2020-03-16 14:37   ` Zi Yan
  2020-03-16 16:45   ` Jason Gunthorpe
  1 sibling, 0 replies; 17+ messages in thread
From: Zi Yan @ 2020-03-16 14:37 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jason Gunthorpe, Jerome Glisse, Ralph Campbell, linux-mm


On 16 Mar 2020, at 9:53, Christoph Hellwig wrote:

> All callers of hmm_range_fault depend on CONFIG_HMM_MIRROR, so
> don't bother with a stub.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/hmm.h | 7 -------
>  1 file changed, 7 deletions(-)
>
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index ddf9f7144c43..c102e359b59d 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -225,17 +225,10 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
>  /* Don't fault in missing PTEs, just snapshot the current state. */
>  #define HMM_FAULT_SNAPSHOT		(1 << 1)
>
> -#ifdef CONFIG_HMM_MIRROR
>  /*
>   * Please see Documentation/vm/hmm.rst for how to use the range API.
>   */
>  long hmm_range_fault(struct hmm_range *range, unsigned int flags);
> -#else
> -static inline long hmm_range_fault(struct hmm_range *range, unsigned int flags)
> -{
> -	return -EOPNOTSUPP;
> -}
> -#endif
>
>  /*
>   * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
> -- 
> 2.24.1

LGTM. It compiles with and without CONFIG_HMM_MIRROR.

Reviewed-by: Zi Yan <ziy@nvidia.com>

—
Best Regards,
Yan Zi



* Re: [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_
  2020-03-16 13:53 ` [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_ Christoph Hellwig
@ 2020-03-16 16:41   ` Jason Gunthorpe
  2020-03-16 16:51     ` Christoph Hellwig
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-16 16:41 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 02:53:10PM +0100, Christoph Hellwig wrote:
> There is no good reason for this split, as it just obfuscates the flow.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
>  mm/hmm.c | 49 ++++++++++++++++---------------------------------
>  1 file changed, 16 insertions(+), 33 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 707edba850de..180e398170b0 100644
> +++ b/mm/hmm.c
> @@ -33,32 +33,6 @@ struct hmm_vma_walk {
>  	unsigned int		flags;
>  };
>  
> -static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
> -			    bool write_fault, uint64_t *pfn)
> -{
> -	unsigned int flags = FAULT_FLAG_REMOTE;
> -	struct hmm_vma_walk *hmm_vma_walk = walk->private;
> -	struct hmm_range *range = hmm_vma_walk->range;
> -	struct vm_area_struct *vma = walk->vma;
> -	vm_fault_t ret;
> -
> -	if (!vma)
> -		goto err;
> -
> -	if (write_fault)
> -		flags |= FAULT_FLAG_WRITE;
> -
> -	ret = handle_mm_fault(vma, addr, flags);
> -	if (ret & VM_FAULT_ERROR)
> -		goto err;
> -
> -	return -EBUSY;
> -
> -err:
> -	*pfn = range->values[HMM_PFN_ERROR];
> -	return -EFAULT;
> -}
> -
>  static int hmm_pfns_fill(unsigned long addr, unsigned long end,
>  		struct hmm_range *range, enum hmm_pfn_value_e value)
>  {
> @@ -90,25 +64,34 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
>  {
>  	struct hmm_vma_walk *hmm_vma_walk = walk->private;
>  	struct hmm_range *range = hmm_vma_walk->range;
> +	struct vm_area_struct *vma = walk->vma;
>  	uint64_t *pfns = range->pfns;
>  	unsigned long i = (addr - range->start) >> PAGE_SHIFT;
> +	unsigned int fault_flags = FAULT_FLAG_REMOTE;
>  
>  	WARN_ON_ONCE(!fault && !write_fault);
>  	hmm_vma_walk->last = addr;
>  
> -	if (write_fault && walk->vma && !(walk->vma->vm_flags & VM_WRITE))
> -		return -EPERM;
> +	if (!vma)
> +		goto out_error;
> +
> +	if (write_fault) {
> +		if (!(vma->vm_flags & VM_WRITE))
> +			return -EPERM;
> +		fault_flags |= FAULT_FLAG_WRITE;
> +	}
>  
>  	for (; addr < end; addr += PAGE_SIZE, i++) {
> -		int ret;
> -
> +		if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
> +			goto out_error;
>  		pfns[i] = range->values[HMM_PFN_NONE];
> -		ret = hmm_vma_do_fault(walk, addr, write_fault, &pfns[i]);
> -		if (ret != -EBUSY)
> -			return ret;
>  	}

Yes, this is much better

>  	return -EBUSY;
> +
> +out_error:
> +	pfns[i] = range->values[HMM_PFN_ERROR];
> +	return -EFAULT;

I've also got a patch deleting these confusing HMM_PFN_ERRORs. They are
not applied consistently, so no caller can rely on scanning the output
for ERROR on failure; for instance, the above doesn't set it on -EPERM.

So this can just be 'return -EFAULT' instead of 'goto out_error'
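
I.e., roughly this on top (a sketch of the suggestion, not the actual
commit; the !vma check would do the same and the out_error label goes
away):

 	for (; addr < end; addr += PAGE_SIZE, i++) {
 		if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
-			goto out_error;
+			return -EFAULT;
 		pfns[i] = range->values[HMM_PFN_NONE];
 	}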

I can fold that in if you agree:

Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>

Jason



* Re: [PATCH 4/5] mm: don't handle the non-fault case in hmm_vma_walk_hole_
  2020-03-16 13:53 ` [PATCH 4/5] mm: don't handle the non-fault case in hmm_vma_walk_hole_ Christoph Hellwig
@ 2020-03-16 16:43   ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-16 16:43 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 02:53:09PM +0100, Christoph Hellwig wrote:
> There is just a single caller using hmm_vma_walk_hole_ for the non-fault
> case.  Use hmm_pfns_fill to fill the whole pfn array with zeroes in that
> caller, and remove the non-fault path from hmm_vma_walk_hole_.
> 
> Also rename the function to hmm_vma_fault to better describe what it
> does.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  mm/hmm.c | 38 +++++++++++++++++++-------------------
>  1 file changed, 19 insertions(+), 19 deletions(-)

Ah, I did nearly the same in the next series I'm cooking:

https://github.com/jgunthorpe/linux/commit/078e10ca5919f2c263c245784fb5fe63ddbb61f4

But this can go first; I was probably going to break mine up anyhow.

> +               pfns[i] = range->values[HMM_PFN_NONE];

Since the function always returns -EBUSY now, it should not set an
output; that is a pre-existing bug. And since the line serves no purpose
now that the caller does the fill, this patch should drop it and carry a
Fixes: line.
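
With that line dropped, the fault loop would end up as just (a sketch of
the suggestion, not the commit as applied):

	for (; addr < end; addr += PAGE_SIZE, i++) {
		int ret;

		ret = hmm_vma_do_fault(walk, addr, write_fault, &pfns[i]);
		if (ret != -EBUSY)
			return ret;
	}

	return -EBUSY;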

>  static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
> @@ -193,7 +190,10 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
>  	pfns = &range->pfns[i];
>  	hmm_range_need_fault(hmm_vma_walk, pfns, npages,
>  			     0, &fault, &write_fault);
> -	return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
> +	if (fault || write_fault)
> +		return hmm_vma_fault(addr, end, fault, write_fault, walk);
> +	hmm_vma_walk->last = addr;
> +	return hmm_pfns_fill(addr, end, range, HMM_PFN_NONE);

This is also the only caller of hmm_vma_fault() that could have
walk->vma == NULL, and NULL vma + a fault == -EFAULT

Jason



* Re: [PATCH 3/5] mm: simplify hmm_vma_walk_hugetlb_entry
  2020-03-16 13:53 ` [PATCH 3/5] mm: simplify hmm_vma_walk_hugetlb_entry Christoph Hellwig
@ 2020-03-16 16:43   ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-16 16:43 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 02:53:08PM +0100, Christoph Hellwig wrote:
> Remove the rather confusing goto label and just handle the fault case
> directly in the branch checking for it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  mm/hmm.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)

Yep, I had a version of this too in progress

Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>

Jason



* Re: [PATCH 2/5] mm: remove the unused HMM_FAULT_ALLOW_RETRY flag
  2020-03-16 13:53 ` [PATCH 2/5] mm: remove the unused HMM_FAULT_ALLOW_RETRY flag Christoph Hellwig
@ 2020-03-16 16:44   ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-16 16:44 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 02:53:07PM +0100, Christoph Hellwig wrote:
> The HMM_FAULT_ALLOW_RETRY flag isn't used anywhere in the tree.  Remove
> it and the weird -EAGAIN handling where handle_mm_fault drops the
> mmap_sem.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/hmm.h | 5 -----
>  mm/hmm.c            | 7 -------
>  2 files changed, 12 deletions(-)

No objection from me

Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>

Jason



* Re: [PATCH 1/5] mm: don't provide a stub for hmm_range_fault
  2020-03-16 13:53 ` [PATCH 1/5] mm: don't provide a stub for hmm_range_fault Christoph Hellwig
  2020-03-16 14:37   ` Zi Yan
@ 2020-03-16 16:45   ` Jason Gunthorpe
  1 sibling, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-16 16:45 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 02:53:06PM +0100, Christoph Hellwig wrote:
> All callers of hmm_range_fault depend on CONFIG_HMM_MIRROR, so
> don't bother with a stub.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/hmm.h | 7 -------
>  1 file changed, 7 deletions(-)

I agree, there is no reason not to just enable CONFIG_HMM_MIRROR if
code calls it.

Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
 
Arguably, at this point the config should be renamed to
CONFIG_HMM_RANGE_FAULT, as all it does now is enable compiling this
code; there is no runtime cost or downside to enabling it.

Jason



* Re: [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_
  2020-03-16 16:41   ` Jason Gunthorpe
@ 2020-03-16 16:51     ` Christoph Hellwig
  2020-03-16 18:01       ` Christoph Hellwig
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 16:51 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 01:41:10PM -0300, Jason Gunthorpe wrote:
> > +out_error:
> > +	pfns[i] = range->values[HMM_PFN_ERROR];
> > +	return -EFAULT;
> 
> I've also got a patch deleting these confusing HMM_PFN_ERRORs. They are
> not applied consistently, so no caller can rely on scanning the output
> for ERROR on failure; for instance, the above doesn't set it on -EPERM.
> 
> So this can just be 'return -EFAULT' instead of 'goto out_error'
> 
> I can fold that in if you agree:

If you want to fix this up and the other ->pfns initialization, feel
free.  Otherwise I'd be happy to resend.



* Re: [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_
  2020-03-16 16:51     ` Christoph Hellwig
@ 2020-03-16 18:01       ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-16 18:01 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 05:51:54PM +0100, Christoph Hellwig wrote:
> > I've also got a patch deleting these confusing HMM_PFN_ERRORs. They are
> > not applied consistently, so no caller can rely on scanning the output
> > for ERROR on failure; for instance, the above doesn't set it on -EPERM.
> > 
> > So this can just be 'return -EFAULT' instead of 'goto out_error'
> > 
> > I can fold that in if you agree:
> 
> If you want to fix this up and the other ->pfns initialization, feel
> free.  Otherwise I'd be happy to resend.

While we are at it: hmm_vma_walk_hole_ in the subject should be
hmm_vma_fault.  I renamed the function on the last rebase as the old
name was so horrible.



* Re: misc hmm cleanups
  2020-03-16 13:53 misc hmm cleanups Christoph Hellwig
                   ` (4 preceding siblings ...)
  2020-03-16 13:53 ` [PATCH 5/5] mm: merge hmm_vma_do_fault into hmm_vma_walk_hole_ Christoph Hellwig
@ 2020-03-17 18:38 ` Jason Gunthorpe
  2020-03-17 18:55   ` Christoph Hellwig
  5 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-17 18:38 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Mon, Mar 16, 2020 at 02:53:05PM +0100, Christoph Hellwig wrote:
> Hi Jason,
> 
> I started looking at bits in my backlog that shouldn't get too much
> in your way.  Here is a bunch of simple patches, on top of the series
> you sent out.

I applied these two, I made the edits discussed to the last two
patches, and ran the series over the hmm tester Ralph has.

Please take a quick check:

https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=2a60cfb32b260468dc81dfc1f56fc0f9ebe9f2e5
https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=ee37d9e6e47439fbd3a72b7c6a13b8ce216fb125

- Edited commit message to point out that the NONE set is a bug and
  add fixes line.
- Drop the pfn error and update the commit message, remove now unused
  pfn and i vars

Thanks,
Jason



* Re: misc hmm cleanups
  2020-03-17 18:38 ` misc hmm cleanups Jason Gunthorpe
@ 2020-03-17 18:55   ` Christoph Hellwig
  2020-03-19  0:27     ` Jason Gunthorpe
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2020-03-17 18:55 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Jerome Glisse, Ralph Campbell, linux-mm

On Tue, Mar 17, 2020 at 03:38:23PM -0300, Jason Gunthorpe wrote:
> On Mon, Mar 16, 2020 at 02:53:05PM +0100, Christoph Hellwig wrote:
> > Hi Jason,
> > 
> > I started looking at bits in my backlog that shouldn't get too much
> > in your way.  Here is a bunch of simple patches, on top of the series
> > you sent out.
> 
> I applied these two, I made the edits discussed to the last two
> patches, and ran the series over the hmm tester Ralph has.
> 
> Please take a quick check:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=2a60cfb32b260468dc81dfc1f56fc0f9ebe9f2e5
> https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=ee37d9e6e47439fbd3a72b7c6a13b8ce216fb125
> 
> - Edited commit message to point out that the NONE set is a bug and
>   add fixes line.
> - Drop the pfn error and update the commit message, remove now unused
>   pfn and i vars

I wonder if just having those fixes as separate commits would have
been more clear.  But the updated patches do look correct to me.



* Re: misc hmm cleanups
  2020-03-17 18:55   ` Christoph Hellwig
@ 2020-03-19  0:27     ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-03-19  0:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jerome Glisse, Ralph Campbell, linux-mm

On Tue, Mar 17, 2020 at 07:55:37PM +0100, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 03:38:23PM -0300, Jason Gunthorpe wrote:
> > On Mon, Mar 16, 2020 at 02:53:05PM +0100, Christoph Hellwig wrote:
> > > Hi Jason,
> > > 
> > > I started looking at bits in my backlog that shouldn't get too much
> > > in your way.  Here is a bunch of simple patches, on top of the series
> > > you sent out.
> > 
> > I applied these two, I made the edits discussed to the last two
> > patches, and ran the series over the hmm tester Ralph has.
> > 
> > Please take a quick check:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=2a60cfb32b260468dc81dfc1f56fc0f9ebe9f2e5
> > https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=ee37d9e6e47439fbd3a72b7c6a13b8ce216fb125
> > 
> > - Edited commit message to point out that the NONE set is a bug and
> >   add fixes line.
> > - Drop the pfn error and update the commit message, remove now unused
> >   pfn and i vars
> 
> I wonder if just having those fixes as separate commits would have
> been more clear.  But the updated patches do look correct to me.

Ok, I agree the last one removing the pfns is a bit of overreach, even
if the -EPERM exit looks bad when it is de-inlined. I will sort it in
a following patch

I like the NONE one though

So, I swapped the last back to your original

I hope to rebase some of my remaining series tomorrow. Things are
crazy here right now.

https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=hmm&id=ede58c05bec9e5ccf659cf001bb969963806bed0

Thanks,
Jason


