linux-mm.kvack.org archive mirror
* [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
@ 2019-09-27  7:00 Wei Yang
  2019-09-27  7:00 ` [PATCH v2 2/3] userfaultfd: remove unnecessary warn_on in __mcopy_atomic_hugetlb Wei Yang
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Wei Yang @ 2019-09-27  7:00 UTC
  To: akpm, aarcange, hughd, mike.kravetz; +Cc: linux-mm, Wei Yang

In __mcopy_atomic_hugetlb(), we use two variables to deal with the
huge page size: vma_hpagesize and huge_page_size.

Since they hold the same value, there is no need for two different
mechanisms. This patch makes the code consistent by using
vma_hpagesize throughout.
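
A minimal sketch of the identities relied on here (assuming a hugetlb
vma, as in this function; illustration only, not part of the patch):

	struct hstate *h = hstate_vma(dst_vma);
	unsigned long vma_hpagesize = vma_kernel_pagesize(dst_vma);

	/* For a hugetlb vma these all describe the same page size: */
	VM_BUG_ON(vma_hpagesize != huge_page_size(h));
	VM_BUG_ON((vma_hpagesize - 1) != ~huge_page_mask(h));
	VM_BUG_ON(vma_hpagesize / PAGE_SIZE != pages_per_huge_page(h));

This is why each hstate-based expression in the hunks below can be
replaced by its vma_hpagesize form without changing behavior.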

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 mm/userfaultfd.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index a998c1b4d8a1..01ad48621bb7 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		pte_t dst_pteval;
 
 		BUG_ON(dst_addr >= dst_start + len);
-		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
+		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
 
 		/*
 		 * Serialize via hugetlb_fault_mutex
@@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		err = -ENOMEM;
-		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
+		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
 		if (!dst_pte) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			goto out_unlock;
@@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 
 			err = copy_huge_page_from_user(page,
 						(const void __user *)src_addr,
-						pages_per_huge_page(h), true);
+						vma_hpagesize / PAGE_SIZE,
+						true);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
-- 
2.17.1




* [PATCH v2 2/3] userfaultfd: remove unnecessary warn_on in __mcopy_atomic_hugetlb
  2019-09-27  7:00 [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Wei Yang
@ 2019-09-27  7:00 ` Wei Yang
  2019-09-27  7:00 ` [PATCH v2 3/3] userfaultfd: wrap the common dst_vma check into an inlined function Wei Yang
  2019-09-27 22:10 ` [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Andrew Morton
  2 siblings, 0 replies; 13+ messages in thread
From: Wei Yang @ 2019-09-27  7:00 UTC
  To: akpm, aarcange, hughd, mike.kravetz; +Cc: linux-mm, Wei Yang

These warnings make sure that the address (dst_addr) and the length
(len - copied) are huge page size aligned.

This is already ensured because:

    dst_start and len are huge page size aligned
    dst_addr starts at dst_start and increases by the huge page size
    each iteration
    copied increases by the huge page size each iteration

This means these warnings can never be triggered, as the sketch below
shows.
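
A minimal sketch of that invariant (simplified loop skeleton, not the
literal code):

	/* the entry check already rejects misaligned ranges: */
	if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
		goto out_unlock;

	dst_addr = dst_start;	/* aligned by the check above */
	copied = 0;		/* trivially aligned */
	while (src_addr < src_start + len) {
		...
		dst_addr += vma_hpagesize;	/* stays aligned */
		copied += vma_hpagesize;	/* stays aligned */
	}

Since dst_addr and (len - copied) remain multiples of vma_hpagesize on
every iteration, the removed WARN_ON and VM_BUG_ON conditions are
unreachable.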

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>

---
v2:
  * remove another related warning as suggested by Mike Kravetz
---
 mm/userfaultfd.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 01ad48621bb7..df11743f2196 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -243,10 +243,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		vm_shared = dst_vma->vm_flags & VM_SHARED;
 	}
 
-	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
-		    (len - copied) & (vma_hpagesize - 1)))
-		goto out_unlock;
-
 	/*
 	 * If not shared, ensure the dst_vma has a anon_vma.
 	 */
@@ -262,7 +258,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		pte_t dst_pteval;
 
 		BUG_ON(dst_addr >= dst_start + len);
-		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
 
 		/*
 		 * Serialize via hugetlb_fault_mutex
-- 
2.17.1




* [PATCH v2 3/3] userfaultfd: wrap the common dst_vma check into an inlined function
  2019-09-27  7:00 [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Wei Yang
  2019-09-27  7:00 ` [PATCH v2 2/3] userfaultfd: remove unnecessary warn_on in __mcopy_atomic_hugetlb Wei Yang
@ 2019-09-27  7:00 ` Wei Yang
  2019-09-27 22:10 ` [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Andrew Morton
  2 siblings, 0 replies; 13+ messages in thread
From: Wei Yang @ 2019-09-27  7:00 UTC
  To: akpm, aarcange, hughd, mike.kravetz; +Cc: linux-mm, Wei Yang

When doing UFFDIO_COPY, it is necessary to find the correct destination
vma and make sure the fault range lies within it.

Since two places need to do the same task, wrap the common checks into
an inline function; a usage sketch follows.
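
A sketch of the resulting call pattern (both call sites reduce to
this; the hugetlb path additionally checks is_vm_hugetlb_page()):

	err = -ENOENT;
	dst_vma = find_dst_vma(dst_mm, dst_start, len);
	if (!dst_vma)
		goto out_unlock;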

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 mm/userfaultfd.c | 56 +++++++++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 24 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index df11743f2196..0a40746a5b1e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -18,6 +18,36 @@
 #include <asm/tlbflush.h>
 #include "internal.h"
 
+static __always_inline
+struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
+				    unsigned long dst_start,
+				    unsigned long len)
+{
+	/*
+	 * Make sure that the dst range is both valid and fully within a
+	 * single existing vma.
+	 */
+	struct vm_area_struct *dst_vma;
+
+	dst_vma = find_vma(dst_mm, dst_start);
+	if (!dst_vma)
+		return NULL;
+
+	if (dst_start < dst_vma->vm_start ||
+	    dst_start + len > dst_vma->vm_end)
+		return NULL;
+
+	/*
+	 * Check the vma is registered in uffd, this is required to
+	 * enforce the VM_MAYWRITE check done at uffd registration
+	 * time.
+	 */
+	if (!dst_vma->vm_userfaultfd_ctx.ctx)
+		return NULL;
+
+	return dst_vma;
+}
+
 static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    pmd_t *dst_pmd,
 			    struct vm_area_struct *dst_vma,
@@ -221,20 +251,9 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 	 */
 	if (!dst_vma) {
 		err = -ENOENT;
-		dst_vma = find_vma(dst_mm, dst_start);
+		dst_vma = find_dst_vma(dst_mm, dst_start, len);
 		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
 			goto out_unlock;
-		/*
-		 * Check the vma is registered in uffd, this is
-		 * required to enforce the VM_MAYWRITE check done at
-		 * uffd registration time.
-		 */
-		if (!dst_vma->vm_userfaultfd_ctx.ctx)
-			goto out_unlock;
-
-		if (dst_start < dst_vma->vm_start ||
-		    dst_start + len > dst_vma->vm_end)
-			goto out_unlock;
 
 		err = -EINVAL;
 		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
@@ -471,20 +490,9 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	 * both valid and fully within a single existing vma.
 	 */
 	err = -ENOENT;
-	dst_vma = find_vma(dst_mm, dst_start);
+	dst_vma = find_dst_vma(dst_mm, dst_start, len);
 	if (!dst_vma)
 		goto out_unlock;
-	/*
-	 * Check the vma is registered in uffd, this is required to
-	 * enforce the VM_MAYWRITE check done at uffd registration
-	 * time.
-	 */
-	if (!dst_vma->vm_userfaultfd_ctx.ctx)
-		goto out_unlock;
-
-	if (dst_start < dst_vma->vm_start ||
-	    dst_start + len > dst_vma->vm_end)
-		goto out_unlock;
 
 	err = -EINVAL;
 	/*
-- 
2.17.1




* Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
  2019-09-27  7:00 [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Wei Yang
  2019-09-27  7:00 ` [PATCH v2 2/3] userfaultfd: remove unnecessary warn_on in __mcopy_atomic_hugetlb Wei Yang
  2019-09-27  7:00 ` [PATCH v2 3/3] userfaultfd: wrap the common dst_vma check into an inlined function Wei Yang
@ 2019-09-27 22:10 ` Andrew Morton
  2019-09-27 22:21   ` Mike Kravetz
                     ` (2 more replies)
  2 siblings, 3 replies; 13+ messages in thread
From: Andrew Morton @ 2019-09-27 22:10 UTC
  To: Wei Yang; +Cc: aarcange, hughd, mike.kravetz, linux-mm

On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:

> In __mcopy_atomic_hugetlb(), we use two variables to deal with the
> huge page size: vma_hpagesize and huge_page_size.
> 
> Since they hold the same value, there is no need for two different
> mechanisms. This patch makes the code consistent by using
> vma_hpagesize throughout.
> 
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  		pte_t dst_pteval;
>  
>  		BUG_ON(dst_addr >= dst_start + len);
> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>  
>  		/*
>  		 * Serialize via hugetlb_fault_mutex
> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>  
>  		err = -ENOMEM;
> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>  		if (!dst_pte) {
>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>  			goto out_unlock;
> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  
>  			err = copy_huge_page_from_user(page,
>  						(const void __user *)src_addr,
> -						pages_per_huge_page(h), true);
> +						vma_hpagesize / PAGE_SIZE,
> +						true);
>  			if (unlikely(err)) {
>  				err = -EFAULT;
>  				goto out;

Looks right.

We could go ahead and remove local variable `h', given that
hugetlb_fault_mutex_hash() doesn't actually use its first arg..



* Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
  2019-09-27 22:10 ` [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Andrew Morton
@ 2019-09-27 22:21   ` Mike Kravetz
  2019-10-05  0:34     ` Wei Yang
  2019-09-29  0:45   ` Wei Yang
  2019-10-05  0:33   ` [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Wei Yang
  2 siblings, 1 reply; 13+ messages in thread
From: Mike Kravetz @ 2019-09-27 22:21 UTC
  To: Andrew Morton, Wei Yang; +Cc: aarcange, hughd, linux-mm

On 9/27/19 3:10 PM, Andrew Morton wrote:
> On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
> 
>> In __mcopy_atomic_hugetlb(), we use two variables to deal with the
>> huge page size: vma_hpagesize and huge_page_size.
>>
>> Since they hold the same value, there is no need for two different
>> mechanisms. This patch makes the code consistent by using
>> vma_hpagesize throughout.
>>
>> --- a/mm/userfaultfd.c
>> +++ b/mm/userfaultfd.c
>> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  		pte_t dst_pteval;
>>  
>>  		BUG_ON(dst_addr >= dst_start + len);
>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>>  
>>  		/*
>>  		 * Serialize via hugetlb_fault_mutex
>> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>>  
>>  		err = -ENOMEM;
>> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>>  		if (!dst_pte) {
>>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>>  			goto out_unlock;
>> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  
>>  			err = copy_huge_page_from_user(page,
>>  						(const void __user *)src_addr,
>> -						pages_per_huge_page(h), true);
>> +						vma_hpagesize / PAGE_SIZE,
>> +						true);
>>  			if (unlikely(err)) {
>>  				err = -EFAULT;
>>  				goto out;
> 
> Looks right.
> 
> We could go ahead and remove local variable `h', given that
> hugetlb_fault_mutex_hash() doesn't actually use its first arg..

Good catch, Andrew.  I missed that, but I also wrote the original code that
is being cleaned up. :)

You can add,
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
to the series.
-- 
Mike Kravetz



* Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
  2019-09-27 22:10 ` [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Andrew Morton
  2019-09-27 22:21   ` Mike Kravetz
@ 2019-09-29  0:45   ` Wei Yang
  2019-10-07 22:55     ` Mike Kravetz
  2019-10-05  0:33   ` [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Wei Yang
  2 siblings, 1 reply; 13+ messages in thread
From: Wei Yang @ 2019-09-29  0:45 UTC
  To: Andrew Morton; +Cc: Wei Yang, aarcange, hughd, mike.kravetz, linux-mm

On Fri, Sep 27, 2019 at 03:10:33PM -0700, Andrew Morton wrote:
>On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
>
>> In __mcopy_atomic_hugetlb(), we use two variables to deal with the
>> huge page size: vma_hpagesize and huge_page_size.
>> 
>> Since they hold the same value, there is no need for two different
>> mechanisms. This patch makes the code consistent by using
>> vma_hpagesize throughout.
>> 
>> --- a/mm/userfaultfd.c
>> +++ b/mm/userfaultfd.c
>> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  		pte_t dst_pteval;
>>  
>>  		BUG_ON(dst_addr >= dst_start + len);
>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>>  
>>  		/*
>>  		 * Serialize via hugetlb_fault_mutex
>> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>>  
>>  		err = -ENOMEM;
>> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>>  		if (!dst_pte) {
>>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>>  			goto out_unlock;
>> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  
>>  			err = copy_huge_page_from_user(page,
>>  						(const void __user *)src_addr,
>> -						pages_per_huge_page(h), true);
>> +						vma_hpagesize / PAGE_SIZE,
>> +						true);
>>  			if (unlikely(err)) {
>>  				err = -EFAULT;
>>  				goto out;
>
>Looks right.
>
>We could go ahead and remove local variable `h', given that
>hugetlb_fault_mutex_hash() doesn't actually use its first arg..

Oops, I hadn't noticed that h is not used in the function.

Is there any historical reason to pass h to hugetlb_fault_mutex_hash()?
Neither of its two definitions uses it.

-- 
Wei Yang
Help you, Help me



* [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash()
  2019-09-27 22:10 ` [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Andrew Morton
  2019-09-27 22:21   ` Mike Kravetz
  2019-09-29  0:45   ` Wei Yang
@ 2019-10-05  0:33   ` Wei Yang
  2019-10-05  5:42     ` kbuild test robot
                       ` (2 more replies)
  2 siblings, 3 replies; 13+ messages in thread
From: Wei Yang @ 2019-10-05  0:33 UTC
  To: mike.kravetz, akpm, hughd, aarcange; +Cc: linux-mm, linux-kernel, Wei Yang

The first parameter, hstate, of hugetlb_fault_mutex_hash() is no longer
used.

This patch removes it (a sketch of the resulting hash body follows).
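
For reference, a sketch of the CONFIG_SMP hash body after this change,
reconstructed from the hunk context below (the jhash2 line and the
num_fault_mutexes mask are my reconstruction of the era's code and may
differ in detail). The hash only mixes the mapping and the index, which
is why hstate never participated:

	u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx,
					unsigned long address)
	{
		unsigned long key[2];
		u32 hash;

		key[0] = (unsigned long) mapping;
		key[1] = idx;

		hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);

		return hash & (num_fault_mutexes - 1);
	}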

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
---
 fs/hugetlbfs/inode.c    |  4 ++--
 include/linux/hugetlb.h |  4 ++--
 mm/hugetlb.c            | 12 ++++++------
 mm/userfaultfd.c        |  5 +----
 4 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a478df035651..715db1e34174 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -440,7 +440,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 			u32 hash;
 
 			index = page->index;
-			hash = hugetlb_fault_mutex_hash(h, mapping, index, 0);
+			hash = hugetlb_fault_mutex_hash(mapping, index, 0);
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 			/*
@@ -644,7 +644,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 		addr = index * hpage_size;
 
 		/* mutex taken here, fault path and hole punch */
-		hash = hugetlb_fault_mutex_hash(h, mapping, index, addr);
+		hash = hugetlb_fault_mutex_hash(mapping, index, addr);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		/* See if already present in mapping to avoid alloc/free */
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 53fc34f930d0..40e9e3fad1cf 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -105,8 +105,8 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
 void free_huge_page(struct page *page);
 void hugetlb_fix_reserve_counts(struct inode *inode);
 extern struct mutex *hugetlb_fault_mutex_table;
-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-				pgoff_t idx, unsigned long address);
+u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx,
+				unsigned long address);
 
 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ef37c85423a5..e0f033baac9d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3916,7 +3916,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			 * handling userfault.  Reacquire after handling
 			 * fault to make calling code simpler.
 			 */
-			hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+			hash = hugetlb_fault_mutex_hash(mapping, idx, haddr);
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
@@ -4043,8 +4043,8 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 }
 
 #ifdef CONFIG_SMP
-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			    pgoff_t idx, unsigned long address)
+u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx,
+				unsigned long address)
 {
 	unsigned long key[2];
 	u32 hash;
@@ -4061,8 +4061,8 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
  * For uniprocesor systems we always use a single mutex, so just
  * return 0 and avoid the hashing overhead.
  */
-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			    pgoff_t idx, unsigned long address)
+u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx,
+				unsigned long address)
 {
 	return 0;
 }
@@ -4106,7 +4106,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * get spurious allocation failures if two CPUs race to instantiate
 	 * the same page in the page cache.
 	 */
-	hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+	hash = hugetlb_fault_mutex_hash(mapping, idx, haddr);
 	mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 	entry = huge_ptep_get(ptep);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 0a40746a5b1e..5c0a80626cf0 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -214,7 +214,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 	unsigned long src_addr, dst_addr;
 	long copied;
 	struct page *page;
-	struct hstate *h;
 	unsigned long vma_hpagesize;
 	pgoff_t idx;
 	u32 hash;
@@ -271,8 +270,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 			goto out_unlock;
 	}
 
-	h = hstate_vma(dst_vma);
-
 	while (src_addr < src_start + len) {
 		pte_t dst_pteval;
 
@@ -283,7 +280,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		 */
 		idx = linear_page_index(dst_vma, dst_addr);
 		mapping = dst_vma->vm_file->f_mapping;
-		hash = hugetlb_fault_mutex_hash(h, mapping, idx, dst_addr);
+		hash = hugetlb_fault_mutex_hash(mapping, idx, dst_addr);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		err = -ENOMEM;
-- 
2.17.1




* Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
  2019-09-27 22:21   ` Mike Kravetz
@ 2019-10-05  0:34     ` Wei Yang
  0 siblings, 0 replies; 13+ messages in thread
From: Wei Yang @ 2019-10-05  0:34 UTC
  To: Mike Kravetz; +Cc: Andrew Morton, Wei Yang, aarcange, hughd, linux-mm

On Fri, Sep 27, 2019 at 03:21:38PM -0700, Mike Kravetz wrote:
>On 9/27/19 3:10 PM, Andrew Morton wrote:
>> On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
>> 
>>> In __mcopy_atomic_hugetlb(), we use two variables to deal with the
>>> huge page size: vma_hpagesize and huge_page_size.
>>>
>>> Since they hold the same value, there is no need for two different
>>> mechanisms. This patch makes the code consistent by using
>>> vma_hpagesize throughout.
>>>
>>> --- a/mm/userfaultfd.c
>>> +++ b/mm/userfaultfd.c
>>> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>  		pte_t dst_pteval;
>>>  
>>>  		BUG_ON(dst_addr >= dst_start + len);
>>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>>>  
>>>  		/*
>>>  		 * Serialize via hugetlb_fault_mutex
>>> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>>>  
>>>  		err = -ENOMEM;
>>> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>>> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>>>  		if (!dst_pte) {
>>>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>>>  			goto out_unlock;
>>> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>  
>>>  			err = copy_huge_page_from_user(page,
>>>  						(const void __user *)src_addr,
>>> -						pages_per_huge_page(h), true);
>>> +						vma_hpagesize / PAGE_SIZE,
>>> +						true);
>>>  			if (unlikely(err)) {
>>>  				err = -EFAULT;
>>>  				goto out;
>> 
>> Looks right.
>> 
>> We could go ahead and remove local variable `h', given that
>> hugetlb_fault_mutex_hash() doesn't actually use its first arg..
>
>Good catch Andrew.  I missed that, but I also wrote the original code that
>is being cleaned up. :)
>

I did a cleanup to remove the first parameter of
hugetlb_fault_mutex_hash(). Looking forward to your comments.

>You can add,
>Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
>to the series.
>-- 
>Mike Kravetz

-- 
Wei Yang
Help you, Help me



* Re: [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash()
  2019-10-05  0:33   ` [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Wei Yang
@ 2019-10-05  5:42     ` kbuild test robot
  2019-10-05  5:43     ` kbuild test robot
  2019-10-06  2:25     ` kbuild test robot
  2 siblings, 0 replies; 13+ messages in thread
From: kbuild test robot @ 2019-10-05  5:42 UTC
  To: Wei Yang
  Cc: kbuild-all, mike.kravetz, akpm, hughd, aarcange, linux-mm,
	linux-kernel, Wei Yang

[-- Attachment #1: Type: text/plain, Size: 6189 bytes --]

Hi Wei,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[cannot apply to v5.4-rc1 next-20191004]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Wei-Yang/hugetlb-remove-unused-hstate-in-hugetlb_fault_mutex_hash/20191005-090034
config: x86_64-fedora-25 (attached as .config)
compiler: gcc-7 (Debian 7.4.0-13) 7.4.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from include/asm-generic/bug.h:5:0,
                    from arch/x86/include/asm/bug.h:83,
                    from include/linux/bug.h:5,
                    from include/linux/mmdebug.h:5,
                    from include/linux/mm.h:9,
                    from mm/userfaultfd.c:8:
   mm/userfaultfd.c: In function '__mcopy_atomic_hugetlb':
   mm/userfaultfd.c:262:40: error: 'h' undeclared (first use in this function)
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
                                           ^
   include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
    # define unlikely(x) __builtin_expect(!!(x), 0)
                                             ^
>> include/linux/mmdebug.h:18:25: note: in expansion of macro 'BUG_ON'
    #define VM_BUG_ON(cond) BUG_ON(cond)
                            ^~~~~~
   mm/userfaultfd.c:262:3: note: in expansion of macro 'VM_BUG_ON'
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
      ^~~~~~~~~
   mm/userfaultfd.c:262:40: note: each undeclared identifier is reported only once for each function it appears in
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
                                           ^
   include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
    # define unlikely(x) __builtin_expect(!!(x), 0)
                                             ^
>> include/linux/mmdebug.h:18:25: note: in expansion of macro 'BUG_ON'
    #define VM_BUG_ON(cond) BUG_ON(cond)
                            ^~~~~~
   mm/userfaultfd.c:262:3: note: in expansion of macro 'VM_BUG_ON'
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
      ^~~~~~~~~

vim +/BUG_ON +18 include/linux/mmdebug.h

309381feaee564 Sasha Levin           2014-01-23  16  
59ea746337c69f Jiri Slaby            2008-06-12  17  #ifdef CONFIG_DEBUG_VM
59ea746337c69f Jiri Slaby            2008-06-12 @18  #define VM_BUG_ON(cond) BUG_ON(cond)
309381feaee564 Sasha Levin           2014-01-23  19  #define VM_BUG_ON_PAGE(cond, page)					\
e4f674229ce63d Dave Hansen           2014-06-04  20  	do {								\
e4f674229ce63d Dave Hansen           2014-06-04  21  		if (unlikely(cond)) {					\
e4f674229ce63d Dave Hansen           2014-06-04  22  			dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond)")");\
e4f674229ce63d Dave Hansen           2014-06-04  23  			BUG();						\
e4f674229ce63d Dave Hansen           2014-06-04  24  		}							\
e4f674229ce63d Dave Hansen           2014-06-04  25  	} while (0)
fa3759ccd5651c Sasha Levin           2014-10-09  26  #define VM_BUG_ON_VMA(cond, vma)					\
fa3759ccd5651c Sasha Levin           2014-10-09  27  	do {								\
fa3759ccd5651c Sasha Levin           2014-10-09  28  		if (unlikely(cond)) {					\
fa3759ccd5651c Sasha Levin           2014-10-09  29  			dump_vma(vma);					\
fa3759ccd5651c Sasha Levin           2014-10-09  30  			BUG();						\
fa3759ccd5651c Sasha Levin           2014-10-09  31  		}							\
fa3759ccd5651c Sasha Levin           2014-10-09  32  	} while (0)
31c9afa6db122a Sasha Levin           2014-10-09  33  #define VM_BUG_ON_MM(cond, mm)						\
31c9afa6db122a Sasha Levin           2014-10-09  34  	do {								\
31c9afa6db122a Sasha Levin           2014-10-09  35  		if (unlikely(cond)) {					\
31c9afa6db122a Sasha Levin           2014-10-09  36  			dump_mm(mm);					\
31c9afa6db122a Sasha Levin           2014-10-09  37  			BUG();						\
31c9afa6db122a Sasha Levin           2014-10-09  38  		}							\
31c9afa6db122a Sasha Levin           2014-10-09  39  	} while (0)
91241681c62a5a Michal Hocko          2018-04-05  40  #define VM_WARN_ON(cond) (void)WARN_ON(cond)
91241681c62a5a Michal Hocko          2018-04-05  41  #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
91241681c62a5a Michal Hocko          2018-04-05  42  #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
91241681c62a5a Michal Hocko          2018-04-05  43  #define VM_WARN(cond, format...) (void)WARN(cond, format)
59ea746337c69f Jiri Slaby            2008-06-12  44  #else
02602a18c32d76 Konstantin Khlebnikov 2012-05-29  45  #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
309381feaee564 Sasha Levin           2014-01-23  46  #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
fa3759ccd5651c Sasha Levin           2014-10-09  47  #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
31c9afa6db122a Sasha Levin           2014-10-09  48  #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
02a8efeda894d3 Andrew Morton         2014-06-04  49  #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
02a8efeda894d3 Andrew Morton         2014-06-04  50  #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
ef6b571fb8920d Andrew Morton         2014-08-06  51  #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
a54f9aebaa9f0e Aneesh Kumar K.V      2016-07-26  52  #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
59ea746337c69f Jiri Slaby            2008-06-12  53  #endif
59ea746337c69f Jiri Slaby            2008-06-12  54  

:::::: The code at line 18 was first introduced by commit
:::::: 59ea746337c69f6a5f1bc4d5e8544b3cbf12f801 MM: virtual address debug

:::::: TO: Jiri Slaby <jirislaby@gmail.com>
:::::: CC: Ingo Molnar <mingo@elte.hu>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 51184 bytes --]


* Re: [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash()
  2019-10-05  0:33   ` [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Wei Yang
  2019-10-05  5:42     ` kbuild test robot
@ 2019-10-05  5:43     ` kbuild test robot
  2019-10-06  2:25     ` kbuild test robot
  2 siblings, 0 replies; 13+ messages in thread
From: kbuild test robot @ 2019-10-05  5:43 UTC
  To: Wei Yang
  Cc: kbuild-all, mike.kravetz, akpm, hughd, aarcange, linux-mm,
	linux-kernel, Wei Yang

[-- Attachment #1: Type: text/plain, Size: 20198 bytes --]

Hi Wei,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.4-rc1 next-20191004]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Wei-Yang/hugetlb-remove-unused-hstate-in-hugetlb_fault_mutex_hash/20191005-090034
config: x86_64-lkp (attached as .config)
compiler: gcc-7 (Debian 7.4.0-13) 7.4.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>):

   In file included from include/linux/kernel.h:16:0,
                    from include/asm-generic/bug.h:19,
                    from arch/x86/include/asm/bug.h:83,
                    from include/linux/bug.h:5,
                    from include/linux/mmdebug.h:5,
                    from include/linux/mm.h:9,
                    from mm/userfaultfd.c:8:
   mm/userfaultfd.c: In function '__mcopy_atomic_hugetlb':
>> mm/userfaultfd.c:262:40: error: 'h' undeclared (first use in this function)
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
                                           ^
   include/linux/build_bug.h:30:63: note: in definition of macro 'BUILD_BUG_ON_INVALID'
    #define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
                                                                  ^
>> mm/userfaultfd.c:262:3: note: in expansion of macro 'VM_BUG_ON'
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
      ^~~~~~~~~
   mm/userfaultfd.c:262:40: note: each undeclared identifier is reported only once for each function it appears in
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
                                           ^
   include/linux/build_bug.h:30:63: note: in definition of macro 'BUILD_BUG_ON_INVALID'
    #define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
                                                                  ^
>> mm/userfaultfd.c:262:3: note: in expansion of macro 'VM_BUG_ON'
      VM_BUG_ON(dst_addr & ~huge_page_mask(h));
      ^~~~~~~~~

vim +/h +262 mm/userfaultfd.c

c1a4de99fada21 Andrea Arcangeli 2015-09-04  167  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  168  #ifdef CONFIG_HUGETLB_PAGE
60d4d2d2b40e44 Mike Kravetz     2017-02-22  169  /*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  170   * __mcopy_atomic processing for HUGETLB vmas.  Note that this routine is
60d4d2d2b40e44 Mike Kravetz     2017-02-22  171   * called with mmap_sem held, it will release mmap_sem before returning.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  172   */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  173  static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  174  					      struct vm_area_struct *dst_vma,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  175  					      unsigned long dst_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  176  					      unsigned long src_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  177  					      unsigned long len,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  178  					      bool zeropage)
60d4d2d2b40e44 Mike Kravetz     2017-02-22  179  {
1c9e8def43a345 Mike Kravetz     2017-02-22  180  	int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
1c9e8def43a345 Mike Kravetz     2017-02-22  181  	int vm_shared = dst_vma->vm_flags & VM_SHARED;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  182  	ssize_t err;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  183  	pte_t *dst_pte;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  184  	unsigned long src_addr, dst_addr;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  185  	long copied;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  186  	struct page *page;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  187  	unsigned long vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  188  	pgoff_t idx;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  189  	u32 hash;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  190  	struct address_space *mapping;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  191  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  192  	/*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  193  	 * There is no default zero huge page for all huge page sizes as
60d4d2d2b40e44 Mike Kravetz     2017-02-22  194  	 * supported by hugetlb.  A PMD_SIZE huge pages may exist as used
60d4d2d2b40e44 Mike Kravetz     2017-02-22  195  	 * by THP.  Since we can not reliably insert a zero page, this
60d4d2d2b40e44 Mike Kravetz     2017-02-22  196  	 * feature is not supported.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  197  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  198  	if (zeropage) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  199  		up_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  200  		return -EINVAL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  201  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  202  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  203  	src_addr = src_start;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  204  	dst_addr = dst_start;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  205  	copied = 0;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  206  	page = NULL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  207  	vma_hpagesize = vma_kernel_pagesize(dst_vma);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  208  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  209  	/*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  210  	 * Validate alignment based on huge page size
60d4d2d2b40e44 Mike Kravetz     2017-02-22  211  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  212  	err = -EINVAL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  213  	if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  214  		goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  215  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  216  retry:
60d4d2d2b40e44 Mike Kravetz     2017-02-22  217  	/*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  218  	 * On routine entry dst_vma is set.  If we had to drop mmap_sem and
60d4d2d2b40e44 Mike Kravetz     2017-02-22  219  	 * retry, dst_vma will be set to NULL and we must lookup again.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  220  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  221  	if (!dst_vma) {
27d02568f529e9 Mike Rapoport    2017-02-24  222  		err = -ENOENT;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  223  		dst_vma = find_vma(dst_mm, dst_start);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  224  		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  225  			goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  226  		/*
29ec90660d68bb Andrea Arcangeli 2018-11-30  227  		 * Check the vma is registered in uffd, this is
29ec90660d68bb Andrea Arcangeli 2018-11-30  228  		 * required to enforce the VM_MAYWRITE check done at
29ec90660d68bb Andrea Arcangeli 2018-11-30  229  		 * uffd registration time.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  230  		 */
27d02568f529e9 Mike Rapoport    2017-02-24  231  		if (!dst_vma->vm_userfaultfd_ctx.ctx)
27d02568f529e9 Mike Rapoport    2017-02-24  232  			goto out_unlock;
27d02568f529e9 Mike Rapoport    2017-02-24  233  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  234  		if (dst_start < dst_vma->vm_start ||
60d4d2d2b40e44 Mike Kravetz     2017-02-22  235  		    dst_start + len > dst_vma->vm_end)
60d4d2d2b40e44 Mike Kravetz     2017-02-22  236  			goto out_unlock;
1c9e8def43a345 Mike Kravetz     2017-02-22  237  
27d02568f529e9 Mike Rapoport    2017-02-24  238  		err = -EINVAL;
27d02568f529e9 Mike Rapoport    2017-02-24  239  		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
27d02568f529e9 Mike Rapoport    2017-02-24  240  			goto out_unlock;
27d02568f529e9 Mike Rapoport    2017-02-24  241  
1c9e8def43a345 Mike Kravetz     2017-02-22  242  		vm_shared = dst_vma->vm_flags & VM_SHARED;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  243  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  244  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  245  	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
60d4d2d2b40e44 Mike Kravetz     2017-02-22  246  		    (len - copied) & (vma_hpagesize - 1)))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  247  		goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  248  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  249  	/*
1c9e8def43a345 Mike Kravetz     2017-02-22  250  	 * If not shared, ensure the dst_vma has a anon_vma.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  251  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  252  	err = -ENOMEM;
1c9e8def43a345 Mike Kravetz     2017-02-22  253  	if (!vm_shared) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  254  		if (unlikely(anon_vma_prepare(dst_vma)))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  255  			goto out_unlock;
1c9e8def43a345 Mike Kravetz     2017-02-22  256  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  257  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  258  	while (src_addr < src_start + len) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  259  		pte_t dst_pteval;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  260  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  261  		BUG_ON(dst_addr >= dst_start + len);
60d4d2d2b40e44 Mike Kravetz     2017-02-22 @262  		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
60d4d2d2b40e44 Mike Kravetz     2017-02-22  263  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  264  		/*
ddeaab32a89f04 Mike Kravetz     2019-01-08  265  		 * Serialize via hugetlb_fault_mutex
60d4d2d2b40e44 Mike Kravetz     2017-02-22  266  		 */
b43a9990055958 Mike Kravetz     2018-12-28  267  		idx = linear_page_index(dst_vma, dst_addr);
ddeaab32a89f04 Mike Kravetz     2019-01-08  268  		mapping = dst_vma->vm_file->f_mapping;
2b52c262f0e75d Wei Yang         2019-10-05  269  		hash = hugetlb_fault_mutex_hash(mapping, idx, dst_addr);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  270  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  271  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  272  		err = -ENOMEM;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  273  		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
60d4d2d2b40e44 Mike Kravetz     2017-02-22  274  		if (!dst_pte) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  275  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  276  			goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  277  		}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  278  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  279  		err = -EEXIST;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  280  		dst_pteval = huge_ptep_get(dst_pte);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  281  		if (!huge_pte_none(dst_pteval)) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  282  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  283  			goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  284  		}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  285  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  286  		err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  287  						dst_addr, src_addr, &page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  288  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  289  		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
1c9e8def43a345 Mike Kravetz     2017-02-22  290  		vm_alloc_shared = vm_shared;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  291  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  292  		cond_resched();
60d4d2d2b40e44 Mike Kravetz     2017-02-22  293  
9e368259ad9883 Andrea Arcangeli 2018-11-30  294  		if (unlikely(err == -ENOENT)) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  295  			up_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  296  			BUG_ON(!page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  297  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  298  			err = copy_huge_page_from_user(page,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  299  						(const void __user *)src_addr,
810a56b943e265 Mike Kravetz     2017-02-22  300  						pages_per_huge_page(h), true);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  301  			if (unlikely(err)) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  302  				err = -EFAULT;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  303  				goto out;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  304  			}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  305  			down_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  306  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  307  			dst_vma = NULL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  308  			goto retry;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  309  		} else
60d4d2d2b40e44 Mike Kravetz     2017-02-22  310  			BUG_ON(page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  311  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  312  		if (!err) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  313  			dst_addr += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  314  			src_addr += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  315  			copied += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  316  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  317  			if (fatal_signal_pending(current))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  318  				err = -EINTR;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  319  		}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  320  		if (err)
60d4d2d2b40e44 Mike Kravetz     2017-02-22  321  			break;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  322  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  323  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  324  out_unlock:
60d4d2d2b40e44 Mike Kravetz     2017-02-22  325  	up_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  326  out:
21205bf8f77b23 Mike Kravetz     2017-02-22  327  	if (page) {
21205bf8f77b23 Mike Kravetz     2017-02-22  328  		/*
21205bf8f77b23 Mike Kravetz     2017-02-22  329  		 * We encountered an error and are about to free a newly
1c9e8def43a345 Mike Kravetz     2017-02-22  330  		 * allocated huge page.
1c9e8def43a345 Mike Kravetz     2017-02-22  331  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  332  		 * Reservation handling is very subtle, and is different for
1c9e8def43a345 Mike Kravetz     2017-02-22  333  		 * private and shared mappings.  See the routine
1c9e8def43a345 Mike Kravetz     2017-02-22  334  		 * restore_reserve_on_error for details.  Unfortunately, we
1c9e8def43a345 Mike Kravetz     2017-02-22  335  		 * can not call restore_reserve_on_error now as it would
1c9e8def43a345 Mike Kravetz     2017-02-22  336  		 * require holding mmap_sem.
1c9e8def43a345 Mike Kravetz     2017-02-22  337  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  338  		 * If a reservation for the page existed in the reservation
1c9e8def43a345 Mike Kravetz     2017-02-22  339  		 * map of a private mapping, the map was modified to indicate
1c9e8def43a345 Mike Kravetz     2017-02-22  340  		 * the reservation was consumed when the page was allocated.
1c9e8def43a345 Mike Kravetz     2017-02-22  341  		 * We clear the PagePrivate flag now so that the global
21205bf8f77b23 Mike Kravetz     2017-02-22  342  		 * reserve count will not be incremented in free_huge_page.
21205bf8f77b23 Mike Kravetz     2017-02-22  343  		 * The reservation map will still indicate the reservation
21205bf8f77b23 Mike Kravetz     2017-02-22  344  		 * was consumed and possibly prevent later page allocation.
1c9e8def43a345 Mike Kravetz     2017-02-22  345  		 * This is better than leaking a global reservation.  If no
1c9e8def43a345 Mike Kravetz     2017-02-22  346  		 * reservation existed, it is still safe to clear PagePrivate
1c9e8def43a345 Mike Kravetz     2017-02-22  347  		 * as no adjustments to reservation counts were made during
1c9e8def43a345 Mike Kravetz     2017-02-22  348  		 * allocation.
1c9e8def43a345 Mike Kravetz     2017-02-22  349  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  350  		 * The reservation map for shared mappings indicates which
1c9e8def43a345 Mike Kravetz     2017-02-22  351  		 * pages have reservations.  When a huge page is allocated
1c9e8def43a345 Mike Kravetz     2017-02-22  352  		 * for an address with a reservation, no change is made to
1c9e8def43a345 Mike Kravetz     2017-02-22  353  		 * the reserve map.  In this case PagePrivate will be set
1c9e8def43a345 Mike Kravetz     2017-02-22  354  		 * to indicate that the global reservation count should be
1c9e8def43a345 Mike Kravetz     2017-02-22  355  		 * incremented when the page is freed.  This is the desired
1c9e8def43a345 Mike Kravetz     2017-02-22  356  		 * behavior.  However, when a huge page is allocated for an
1c9e8def43a345 Mike Kravetz     2017-02-22  357  		 * address without a reservation a reservation entry is added
1c9e8def43a345 Mike Kravetz     2017-02-22  358  		 * to the reservation map, and PagePrivate will not be set.
1c9e8def43a345 Mike Kravetz     2017-02-22  359  		 * When the page is freed, the global reserve count will NOT
1c9e8def43a345 Mike Kravetz     2017-02-22  360  		 * be incremented and it will appear as though we have leaked
1c9e8def43a345 Mike Kravetz     2017-02-22  361  		 * reserved page.  In this case, set PagePrivate so that the
1c9e8def43a345 Mike Kravetz     2017-02-22  362  		 * global reserve count will be incremented to match the
1c9e8def43a345 Mike Kravetz     2017-02-22  363  		 * reservation map entry which was created.
1c9e8def43a345 Mike Kravetz     2017-02-22  364  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  365  		 * Note that vm_alloc_shared is based on the flags of the vma
1c9e8def43a345 Mike Kravetz     2017-02-22  366  		 * for which the page was originally allocated.  dst_vma could
1c9e8def43a345 Mike Kravetz     2017-02-22  367  		 * be different or NULL on error.
21205bf8f77b23 Mike Kravetz     2017-02-22  368  		 */
1c9e8def43a345 Mike Kravetz     2017-02-22  369  		if (vm_alloc_shared)
1c9e8def43a345 Mike Kravetz     2017-02-22  370  			SetPagePrivate(page);
1c9e8def43a345 Mike Kravetz     2017-02-22  371  		else
21205bf8f77b23 Mike Kravetz     2017-02-22  372  			ClearPagePrivate(page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  373  		put_page(page);
21205bf8f77b23 Mike Kravetz     2017-02-22  374  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  375  	BUG_ON(copied < 0);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  376  	BUG_ON(err > 0);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  377  	BUG_ON(!copied && !err);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  378  	return copied ? copied : err;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  379  }
60d4d2d2b40e44 Mike Kravetz     2017-02-22  380  #else /* !CONFIG_HUGETLB_PAGE */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  381  /* fail at build time if gcc attempts to use this */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  382  extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  383  				      struct vm_area_struct *dst_vma,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  384  				      unsigned long dst_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  385  				      unsigned long src_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  386  				      unsigned long len,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  387  				      bool zeropage);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  388  #endif /* CONFIG_HUGETLB_PAGE */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  389  

:::::: The code at line 262 was first introduced by commit
:::::: 60d4d2d2b40e44cd36bfb6049e8d9e2055a24f8a userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY

:::::: TO: Mike Kravetz <mike.kravetz@oracle.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 28633 bytes --]


* Re: [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash()
  2019-10-05  0:33   ` [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Wei Yang
  2019-10-05  5:42     ` kbuild test robot
  2019-10-05  5:43     ` kbuild test robot
@ 2019-10-06  2:25     ` kbuild test robot
  2 siblings, 0 replies; 13+ messages in thread
From: kbuild test robot @ 2019-10-06  2:25 UTC
  To: Wei Yang
  Cc: kbuild-all, mike.kravetz, akpm, hughd, aarcange, linux-mm,
	linux-kernel, Wei Yang

Hi Wei,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[cannot apply to v5.4-rc1 next-20191004]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Wei-Yang/hugetlb-remove-unused-hstate-in-hugetlb_fault_mutex_hash/20191005-090034
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-rc1-42-g38eda53-dirty
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

   mm/userfaultfd.c:262:17: sparse: sparse: undefined identifier 'h'
   mm/userfaultfd.c:273:75: sparse: sparse: undefined identifier 'h'
   mm/userfaultfd.c:300:69: sparse: sparse: undefined identifier 'h'
>> mm/userfaultfd.c:262:17: sparse: sparse: incorrect type in argument 1 (different base types) @@    expected struct hstate *h @@    got hstate *h @@
>> mm/userfaultfd.c:262:17: sparse:    expected struct hstate *h
>> mm/userfaultfd.c:262:17: sparse:    got bad type
   mm/userfaultfd.c:273:75: sparse: sparse: incorrect type in argument 1 (different base types) @@    expected struct hstate *h @@    got hstate *h @@
   mm/userfaultfd.c:273:75: sparse:    expected struct hstate *h
   mm/userfaultfd.c:273:75: sparse:    got bad type
   mm/userfaultfd.c:300:69: sparse: sparse: incorrect type in argument 1 (different base types) @@    expected struct hstate *h @@    got hstate *h @@
   mm/userfaultfd.c:300:69: sparse:    expected struct hstate *h
   mm/userfaultfd.c:300:69: sparse:    got bad type

vim +262 mm/userfaultfd.c

c1a4de99fada21 Andrea Arcangeli 2015-09-04  167  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  168  #ifdef CONFIG_HUGETLB_PAGE
60d4d2d2b40e44 Mike Kravetz     2017-02-22  169  /*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  170   * __mcopy_atomic processing for HUGETLB vmas.  Note that this routine is
60d4d2d2b40e44 Mike Kravetz     2017-02-22  171   * called with mmap_sem held, it will release mmap_sem before returning.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  172   */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  173  static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  174  					      struct vm_area_struct *dst_vma,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  175  					      unsigned long dst_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  176  					      unsigned long src_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  177  					      unsigned long len,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  178  					      bool zeropage)
60d4d2d2b40e44 Mike Kravetz     2017-02-22  179  {
1c9e8def43a345 Mike Kravetz     2017-02-22  180  	int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
1c9e8def43a345 Mike Kravetz     2017-02-22  181  	int vm_shared = dst_vma->vm_flags & VM_SHARED;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  182  	ssize_t err;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  183  	pte_t *dst_pte;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  184  	unsigned long src_addr, dst_addr;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  185  	long copied;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  186  	struct page *page;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  187  	unsigned long vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  188  	pgoff_t idx;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  189  	u32 hash;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  190  	struct address_space *mapping;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  191  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  192  	/*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  193  	 * There is no default zero huge page for all huge page sizes as
60d4d2d2b40e44 Mike Kravetz     2017-02-22  194  	 * supported by hugetlb.  A PMD_SIZE huge pages may exist as used
60d4d2d2b40e44 Mike Kravetz     2017-02-22  195  	 * by THP.  Since we can not reliably insert a zero page, this
60d4d2d2b40e44 Mike Kravetz     2017-02-22  196  	 * feature is not supported.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  197  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  198  	if (zeropage) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  199  		up_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  200  		return -EINVAL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  201  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  202  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  203  	src_addr = src_start;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  204  	dst_addr = dst_start;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  205  	copied = 0;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  206  	page = NULL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  207  	vma_hpagesize = vma_kernel_pagesize(dst_vma);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  208  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  209  	/*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  210  	 * Validate alignment based on huge page size
60d4d2d2b40e44 Mike Kravetz     2017-02-22  211  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  212  	err = -EINVAL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  213  	if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  214  		goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  215  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  216  retry:
60d4d2d2b40e44 Mike Kravetz     2017-02-22  217  	/*
60d4d2d2b40e44 Mike Kravetz     2017-02-22  218  	 * On routine entry dst_vma is set.  If we had to drop mmap_sem and
60d4d2d2b40e44 Mike Kravetz     2017-02-22  219   * retry, dst_vma will be set to NULL and we must look it up again.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  220  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  221  	if (!dst_vma) {
27d02568f529e9 Mike Rapoport    2017-02-24  222  		err = -ENOENT;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  223  		dst_vma = find_vma(dst_mm, dst_start);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  224  		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  225  			goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  226  		/*
29ec90660d68bb Andrea Arcangeli 2018-11-30  227  		 * Check that the vma is registered in uffd; this is
29ec90660d68bb Andrea Arcangeli 2018-11-30  228  		 * required to enforce the VM_MAYWRITE check done at
29ec90660d68bb Andrea Arcangeli 2018-11-30  229  		 * uffd registration time.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  230  		 */
27d02568f529e9 Mike Rapoport    2017-02-24  231  		if (!dst_vma->vm_userfaultfd_ctx.ctx)
27d02568f529e9 Mike Rapoport    2017-02-24  232  			goto out_unlock;
27d02568f529e9 Mike Rapoport    2017-02-24  233  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  234  		if (dst_start < dst_vma->vm_start ||
60d4d2d2b40e44 Mike Kravetz     2017-02-22  235  		    dst_start + len > dst_vma->vm_end)
60d4d2d2b40e44 Mike Kravetz     2017-02-22  236  			goto out_unlock;
1c9e8def43a345 Mike Kravetz     2017-02-22  237  
27d02568f529e9 Mike Rapoport    2017-02-24  238  		err = -EINVAL;
27d02568f529e9 Mike Rapoport    2017-02-24  239  		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
27d02568f529e9 Mike Rapoport    2017-02-24  240  			goto out_unlock;
27d02568f529e9 Mike Rapoport    2017-02-24  241  
1c9e8def43a345 Mike Kravetz     2017-02-22  242  		vm_shared = dst_vma->vm_flags & VM_SHARED;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  243  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  244  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  245  	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
60d4d2d2b40e44 Mike Kravetz     2017-02-22  246  		    (len - copied) & (vma_hpagesize - 1)))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  247  		goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  248  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  249  	/*
1c9e8def43a345 Mike Kravetz     2017-02-22  250  	 * If not shared, ensure the dst_vma has an anon_vma.
60d4d2d2b40e44 Mike Kravetz     2017-02-22  251  	 */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  252  	err = -ENOMEM;
1c9e8def43a345 Mike Kravetz     2017-02-22  253  	if (!vm_shared) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  254  		if (unlikely(anon_vma_prepare(dst_vma)))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  255  			goto out_unlock;
1c9e8def43a345 Mike Kravetz     2017-02-22  256  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  257  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  258  	while (src_addr < src_start + len) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  259  		pte_t dst_pteval;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  260  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  261  		BUG_ON(dst_addr >= dst_start + len);
60d4d2d2b40e44 Mike Kravetz     2017-02-22 @262  		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
60d4d2d2b40e44 Mike Kravetz     2017-02-22  263  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  264  		/*
ddeaab32a89f04 Mike Kravetz     2019-01-08  265  		 * Serialize via hugetlb_fault_mutex
60d4d2d2b40e44 Mike Kravetz     2017-02-22  266  		 */
b43a9990055958 Mike Kravetz     2018-12-28  267  		idx = linear_page_index(dst_vma, dst_addr);
ddeaab32a89f04 Mike Kravetz     2019-01-08  268  		mapping = dst_vma->vm_file->f_mapping;
2b52c262f0e75d Wei Yang         2019-10-05  269  		hash = hugetlb_fault_mutex_hash(mapping, idx, dst_addr);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  270  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  271  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  272  		err = -ENOMEM;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  273  		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
60d4d2d2b40e44 Mike Kravetz     2017-02-22  274  		if (!dst_pte) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  275  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  276  			goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  277  		}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  278  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  279  		err = -EEXIST;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  280  		dst_pteval = huge_ptep_get(dst_pte);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  281  		if (!huge_pte_none(dst_pteval)) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  282  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  283  			goto out_unlock;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  284  		}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  285  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  286  		err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  287  						dst_addr, src_addr, &page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  288  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  289  		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
1c9e8def43a345 Mike Kravetz     2017-02-22  290  		vm_alloc_shared = vm_shared;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  291  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  292  		cond_resched();
60d4d2d2b40e44 Mike Kravetz     2017-02-22  293  
9e368259ad9883 Andrea Arcangeli 2018-11-30  294  		if (unlikely(err == -ENOENT)) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  295  			up_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  296  			BUG_ON(!page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  297  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  298  			err = copy_huge_page_from_user(page,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  299  						(const void __user *)src_addr,
810a56b943e265 Mike Kravetz     2017-02-22  300  						pages_per_huge_page(h), true);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  301  			if (unlikely(err)) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  302  				err = -EFAULT;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  303  				goto out;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  304  			}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  305  			down_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  306  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  307  			dst_vma = NULL;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  308  			goto retry;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  309  		} else
60d4d2d2b40e44 Mike Kravetz     2017-02-22  310  			BUG_ON(page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  311  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  312  		if (!err) {
60d4d2d2b40e44 Mike Kravetz     2017-02-22  313  			dst_addr += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  314  			src_addr += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  315  			copied += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  316  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  317  			if (fatal_signal_pending(current))
60d4d2d2b40e44 Mike Kravetz     2017-02-22  318  				err = -EINTR;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  319  		}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  320  		if (err)
60d4d2d2b40e44 Mike Kravetz     2017-02-22  321  			break;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  322  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  323  
60d4d2d2b40e44 Mike Kravetz     2017-02-22  324  out_unlock:
60d4d2d2b40e44 Mike Kravetz     2017-02-22  325  	up_read(&dst_mm->mmap_sem);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  326  out:
21205bf8f77b23 Mike Kravetz     2017-02-22  327  	if (page) {
21205bf8f77b23 Mike Kravetz     2017-02-22  328  		/*
21205bf8f77b23 Mike Kravetz     2017-02-22  329  		 * We encountered an error and are about to free a newly
1c9e8def43a345 Mike Kravetz     2017-02-22  330  		 * allocated huge page.
1c9e8def43a345 Mike Kravetz     2017-02-22  331  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  332  		 * Reservation handling is very subtle, and is different for
1c9e8def43a345 Mike Kravetz     2017-02-22  333  		 * private and shared mappings.  See the routine
1c9e8def43a345 Mike Kravetz     2017-02-22  334  		 * restore_reserve_on_error for details.  Unfortunately, we
1c9e8def43a345 Mike Kravetz     2017-02-22  335  		 * can not call restore_reserve_on_error now as it would
1c9e8def43a345 Mike Kravetz     2017-02-22  336  		 * require holding mmap_sem.
1c9e8def43a345 Mike Kravetz     2017-02-22  337  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  338  		 * If a reservation for the page existed in the reservation
1c9e8def43a345 Mike Kravetz     2017-02-22  339  		 * map of a private mapping, the map was modified to indicate
1c9e8def43a345 Mike Kravetz     2017-02-22  340  		 * the reservation was consumed when the page was allocated.
1c9e8def43a345 Mike Kravetz     2017-02-22  341  		 * We clear the PagePrivate flag now so that the global
21205bf8f77b23 Mike Kravetz     2017-02-22  342  		 * reserve count will not be incremented in free_huge_page.
21205bf8f77b23 Mike Kravetz     2017-02-22  343  		 * The reservation map will still indicate the reservation
21205bf8f77b23 Mike Kravetz     2017-02-22  344  		 * was consumed and possibly prevent later page allocation.
1c9e8def43a345 Mike Kravetz     2017-02-22  345  		 * This is better than leaking a global reservation.  If no
1c9e8def43a345 Mike Kravetz     2017-02-22  346  		 * reservation existed, it is still safe to clear PagePrivate
1c9e8def43a345 Mike Kravetz     2017-02-22  347  		 * as no adjustments to reservation counts were made during
1c9e8def43a345 Mike Kravetz     2017-02-22  348  		 * allocation.
1c9e8def43a345 Mike Kravetz     2017-02-22  349  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  350  		 * The reservation map for shared mappings indicates which
1c9e8def43a345 Mike Kravetz     2017-02-22  351  		 * pages have reservations.  When a huge page is allocated
1c9e8def43a345 Mike Kravetz     2017-02-22  352  		 * for an address with a reservation, no change is made to
1c9e8def43a345 Mike Kravetz     2017-02-22  353  		 * the reserve map.  In this case PagePrivate will be set
1c9e8def43a345 Mike Kravetz     2017-02-22  354  		 * to indicate that the global reservation count should be
1c9e8def43a345 Mike Kravetz     2017-02-22  355  		 * incremented when the page is freed.  This is the desired
1c9e8def43a345 Mike Kravetz     2017-02-22  356  		 * behavior.  However, when a huge page is allocated for an
1c9e8def43a345 Mike Kravetz     2017-02-22  357  		 * address without a reservation, a reservation entry is added
1c9e8def43a345 Mike Kravetz     2017-02-22  358  		 * to the reservation map, and PagePrivate will not be set.
1c9e8def43a345 Mike Kravetz     2017-02-22  359  		 * When the page is freed, the global reserve count will NOT
1c9e8def43a345 Mike Kravetz     2017-02-22  360  		 * be incremented and it will appear as though we have leaked
1c9e8def43a345 Mike Kravetz     2017-02-22  361  		 * a reserved page.  In this case, set PagePrivate so that the
1c9e8def43a345 Mike Kravetz     2017-02-22  362  		 * global reserve count will be incremented to match the
1c9e8def43a345 Mike Kravetz     2017-02-22  363  		 * reservation map entry which was created.
1c9e8def43a345 Mike Kravetz     2017-02-22  364  		 *
1c9e8def43a345 Mike Kravetz     2017-02-22  365  		 * Note that vm_alloc_shared is based on the flags of the vma
1c9e8def43a345 Mike Kravetz     2017-02-22  366  		 * for which the page was originally allocated.  dst_vma could
1c9e8def43a345 Mike Kravetz     2017-02-22  367  		 * be different or NULL on error.
21205bf8f77b23 Mike Kravetz     2017-02-22  368  		 */
1c9e8def43a345 Mike Kravetz     2017-02-22  369  		if (vm_alloc_shared)
1c9e8def43a345 Mike Kravetz     2017-02-22  370  			SetPagePrivate(page);
1c9e8def43a345 Mike Kravetz     2017-02-22  371  		else
21205bf8f77b23 Mike Kravetz     2017-02-22  372  			ClearPagePrivate(page);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  373  		put_page(page);
21205bf8f77b23 Mike Kravetz     2017-02-22  374  	}
60d4d2d2b40e44 Mike Kravetz     2017-02-22  375  	BUG_ON(copied < 0);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  376  	BUG_ON(err > 0);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  377  	BUG_ON(!copied && !err);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  378  	return copied ? copied : err;
60d4d2d2b40e44 Mike Kravetz     2017-02-22  379  }
60d4d2d2b40e44 Mike Kravetz     2017-02-22  380  #else /* !CONFIG_HUGETLB_PAGE */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  381  /* fail at build time if gcc attempts to use this */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  382  extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  383  				      struct vm_area_struct *dst_vma,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  384  				      unsigned long dst_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  385  				      unsigned long src_start,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  386  				      unsigned long len,
60d4d2d2b40e44 Mike Kravetz     2017-02-22  387  				      bool zeropage);
60d4d2d2b40e44 Mike Kravetz     2017-02-22  388  #endif /* CONFIG_HUGETLB_PAGE */
60d4d2d2b40e44 Mike Kravetz     2017-02-22  389  

:::::: The code at line 262 was first introduced by commit
:::::: 60d4d2d2b40e44cd36bfb6049e8d9e2055a24f8a userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY

:::::: TO: Mike Kravetz <mike.kravetz@oracle.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>
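
For context on the series being tested here: huge_page_mask(h) is
~(huge_page_size(h) - 1), and vma_kernel_pagesize() returns
huge_page_size(h) for a hugetlb vma, so replacing
"dst_addr & ~huge_page_mask(h)" with "dst_addr & (vma_hpagesize - 1)"
should be an exact equivalence. A minimal user-space sketch of that
identity (the names below are local to the example, not kernel code):

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		/* Assume 2MB huge pages for the demonstration. */
		unsigned long hpagesize = 2UL << 20;
		/* What huge_page_mask() computes: the alignment mask. */
		unsigned long hpage_mask = ~(hpagesize - 1);
		unsigned long addr = 0x7f0040212345UL;	/* arbitrary address */

		/* Both forms extract the offset within the huge page. */
		assert((addr & ~hpage_mask) == (addr & (hpagesize - 1)));
		printf("offset within huge page: 0x%lx\n",
		       addr & (hpagesize - 1));
		return 0;
	}

The assert holds for any power-of-two page size, which is why the two
alignment checks are interchangeable once vma_hpagesize is known to
equal huge_page_size(h).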

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
  2019-09-29  0:45   ` Wei Yang
@ 2019-10-07 22:55     ` Mike Kravetz
  2019-10-08  0:57       ` Wei Yang
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Kravetz @ 2019-10-07 22:55 UTC (permalink / raw)
  To: Wei Yang, Andrew Morton; +Cc: aarcange, hughd, linux-mm

On 9/28/19 5:45 PM, Wei Yang wrote:
> On Fri, Sep 27, 2019 at 03:10:33PM -0700, Andrew Morton wrote:
>> On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
>>
>>> In function __mcopy_atomic_hugetlb, we use two variables to deal with
>>> huge page size: vma_hpagesize and huge_page_size.
>>>
>>> Since they are the same, it is not necessary to use two different
>>> mechanisms. This patch makes it consistent by using vma_hpagesize throughout.
>>>
>>> --- a/mm/userfaultfd.c
>>> +++ b/mm/userfaultfd.c
>>> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>  		pte_t dst_pteval;
>>>  
>>>  		BUG_ON(dst_addr >= dst_start + len);
>>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>>>  
>>>  		/*
>>>  		 * Serialize via hugetlb_fault_mutex
>>> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>>>  
>>>  		err = -ENOMEM;
>>> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>>> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>>>  		if (!dst_pte) {
>>>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>>>  			goto out_unlock;
>>> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>  
>>>  			err = copy_huge_page_from_user(page,
>>>  						(const void __user *)src_addr,
>>> -						pages_per_huge_page(h), true);
>>> +						vma_hpagesize / PAGE_SIZE,
>>> +						true);
>>>  			if (unlikely(err)) {
>>>  				err = -EFAULT;
>>>  				goto out;
>>
>> Looks right.
>>
>> We could go ahead and remove local variable `h', given that
>> hugetlb_fault_mutex_hash() doesn't actually use its first arg..
> 
> Oops, I hadn't realized h is not used in the function.
> 
> 
> Is there any historical reason to pass h to hugetlb_fault_mutex_hash()? Neither
> of its two definitions uses it.

See 1b426bac66e6 ("hugetlb: use same fault hash key for shared and private
mappings").  Prior to that change, the hash key for private mappings was
created by:

	key[0] = (unsigned long) mm;
	key[1] = address >> huge_page_shift(h);

When removing that code, I should have removed 'h'.
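
For readers without the tree at hand, here is a simplified sketch of the
function as it stands after 1b426bac66e6 (abridged; the exact code in
mm/hugetlb.c may differ slightly, and it relies on kernel declarations
such as jhash2 and num_fault_mutexes). Shared and private mappings now
hash the same (mapping, idx) key, which is why neither 'h' nor, for that
matter, 'address' contributes anything:

	u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
				     pgoff_t idx, unsigned long address)
	{
		unsigned long key[2];
		u32 hash;

		/* One key for both mapping types; 'h' and 'address' no
		 * longer feed the hash.
		 */
		key[0] = (unsigned long) mapping;
		key[1] = idx;

		hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);
		return hash & (num_fault_mutexes - 1);
	}
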
-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
  2019-10-07 22:55     ` Mike Kravetz
@ 2019-10-08  0:57       ` Wei Yang
  0 siblings, 0 replies; 13+ messages in thread
From: Wei Yang @ 2019-10-08  0:57 UTC (permalink / raw)
  To: Mike Kravetz; +Cc: Wei Yang, Andrew Morton, aarcange, hughd, linux-mm

On Mon, Oct 07, 2019 at 03:55:21PM -0700, Mike Kravetz wrote:
>On 9/28/19 5:45 PM, Wei Yang wrote:
>> On Fri, Sep 27, 2019 at 03:10:33PM -0700, Andrew Morton wrote:
>>> On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
>>>
>>>> In function __mcopy_atomic_hugetlb, we use two variables to deal with
>>>> huge page size: vma_hpagesize and huge_page_size.
>>>>
>>>> Since they are the same, it is not necessary to use two different
>>>> mechanisms. This patch makes it consistent by using vma_hpagesize throughout.
>>>>
>>>> --- a/mm/userfaultfd.c
>>>> +++ b/mm/userfaultfd.c
>>>> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>>  		pte_t dst_pteval;
>>>>  
>>>>  		BUG_ON(dst_addr >= dst_start + len);
>>>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>>> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>>>>  
>>>>  		/*
>>>>  		 * Serialize via hugetlb_fault_mutex
>>>> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>>>>  
>>>>  		err = -ENOMEM;
>>>> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>>>> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>>>>  		if (!dst_pte) {
>>>>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>>>>  			goto out_unlock;
>>>> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>>  
>>>>  			err = copy_huge_page_from_user(page,
>>>>  						(const void __user *)src_addr,
>>>> -						pages_per_huge_page(h), true);
>>>> +						vma_hpagesize / PAGE_SIZE,
>>>> +						true);
>>>>  			if (unlikely(err)) {
>>>>  				err = -EFAULT;
>>>>  				goto out;
>>>
>>> Looks right.
>>>
>>> We could go ahead and remove local variable `h', given that
>>> hugetlb_fault_mutex_hash() doesn't actually use its first arg..
>> 
>> Oops, I hadn't realized h is not used in the function.
>> 
>> 
>> Is there any historical reason to pass h to hugetlb_fault_mutex_hash()? Neither
>> of its two definitions uses it.
>
>See 1b426bac66e6 ("hugetlb: use same fault hash key for shared and private
>mappings").  Prior to that change, the hash key for private mappings was
>created by:
>
>	key[0] = (unsigned long) mm;
>	key[1] = address >> huge_page_shift(h);
>
>When removing that code, I should have removed 'h'.

Thanks for this information.

>-- 
>Mike Kravetz

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2019-10-08  0:57 UTC | newest]

Thread overview: 13+ messages
-- links below jump to the message on this page --
2019-09-27  7:00 [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Wei Yang
2019-09-27  7:00 ` [PATCH v2 2/3] userfaultfd: remove unnecessary warn_on in __mcopy_atomic_hugetlb Wei Yang
2019-09-27  7:00 ` [PATCH v2 3/3] userfaultfd: wrap the common dst_vma check into an inlined function Wei Yang
2019-09-27 22:10 ` [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation Andrew Morton
2019-09-27 22:21   ` Mike Kravetz
2019-10-05  0:34     ` Wei Yang
2019-09-29  0:45   ` Wei Yang
2019-10-07 22:55     ` Mike Kravetz
2019-10-08  0:57       ` Wei Yang
2019-10-05  0:33   ` [PATCH] hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Wei Yang
2019-10-05  5:42     ` kbuild test robot
2019-10-05  5:43     ` kbuild test robot
2019-10-06  2:25     ` kbuild test robot
