linux-kernel.vger.kernel.org archive mirror
* [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas
@ 2019-08-29 11:37 zhigang lu
  2019-08-29 11:54 ` Matthew Wilcox
  0 siblings, 1 reply; 3+ messages in thread
From: zhigang lu @ 2019-08-29 11:37 UTC (permalink / raw)
  To: mike.kravetz, linux-mm, linux-kernel; +Cc: tonnylu, hzhongzhang, knightzhang

From: Zhigang Lu <tonnylu@tencent.com>

This change greatly decreases the time of mmapping a file in hugetlbfs.
With the MAP_POPULATE flag, it takes about 50 milliseconds to mmap an
existing 128GB file in hugetlbfs. With this change, it takes less
than 1 millisecond.

Signed-off-by: Zhigang Lu <tonnylu@tencent.com>
Reviewed-by: Haozhong Zhang <hzhongzhang@tencent.com>
Reviewed-by: Zongming Zhang <knightzhang@tencent.com>
---
 mm/hugetlb.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6d7296d..2df941a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4391,6 +4391,17 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				break;
 			}
 		}
+
+		if (!pages && !vmas && !pfn_offset &&
+		    (vaddr + huge_page_size(h) < vma->vm_end) &&
+		    (remainder >= pages_per_huge_page(h))) {
+			vaddr += huge_page_size(h);
+			remainder -= pages_per_huge_page(h);
+			i += pages_per_huge_page(h);
+			spin_unlock(ptl);
+			continue;
+		}
+
 same_page:
 		if (pages) {
 			pages[i] = mem_map_offset(page, pfn_offset);
-- 
1.8.3.1


* Re: [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas
  2019-08-29 11:37 [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas zhigang lu
@ 2019-08-29 11:54 ` Matthew Wilcox
  2019-08-29 12:36   ` Re: [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas (Internet mail) tonnylu(陆志刚)
  0 siblings, 1 reply; 3+ messages in thread
From: Matthew Wilcox @ 2019-08-29 11:54 UTC (permalink / raw)
  To: zhigang lu
  Cc: mike.kravetz, linux-mm, linux-kernel, tonnylu, hzhongzhang, knightzhang

On Thu, Aug 29, 2019 at 07:37:22PM +0800, zhigang lu wrote:
> This change greatly decreases the time of mmapping a file in hugetlbfs.
> With the MAP_POPULATE flag, it takes about 50 milliseconds to mmap an
> existing 128GB file in hugetlbfs. With this change, it takes less
> than 1 millisecond.

You're going to need to find a new way of sending patches; this patch is
mangled by your mail system.

> @@ -4391,6 +4391,17 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  				break;
>  			}
>  		}
> +
> +		if (!pages && !vmas && !pfn_offset &&
> +		    (vaddr + huge_page_size(h) < vma->vm_end) &&
> +		    (remainder >= pages_per_huge_page(h))) {
> +			vaddr += huge_page_size(h);
> +			remainder -= pages_per_huge_page(h);
> +			i += pages_per_huge_page(h);
> +			spin_unlock(ptl);
> +			continue;
> +		}

The concept seems good to me.  The description above could do with some
better explanation though.


* Re: [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas (Internet mail)
  2019-08-29 11:54 ` Matthew Wilcox
@ 2019-08-29 12:36   ` tonnylu(陆志刚)
  0 siblings, 0 replies; 3+ messages in thread
From: tonnylu(陆志刚) @ 2019-08-29 12:36 UTC (permalink / raw)
  To: Matthew Wilcox, zhigang lu
  Cc: mike.kravetz, linux-mm, linux-kernel,
	hzhongzhang(张昊中),
	knightzhang(张宗明)



-----Original Message-----
From: Matthew Wilcox <willy@infradead.org>
Sent: August 29, 2019, 19:55
To: zhigang lu <luzhigang001@gmail.com>
Cc: mike.kravetz@oracle.com; linux-mm@kvack.org; linux-kernel@vger.kernel.org; tonnylu(陆志刚) <tonnylu@tencent.com>; hzhongzhang(张昊中) <hzhongzhang@tencent.com>; knightzhang(张宗明) <knightzhang@tencent.com>
Subject: Re: [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas (Internet mail)

On Thu, Aug 29, 2019 at 07:37:22PM +0800, zhigang lu wrote:
> This change greatly decreases the time of mmapping a file in hugetlbfs.
> With the MAP_POPULATE flag, it takes about 50 milliseconds to mmap an
> existing 128GB file in hugetlbfs. With this change, it takes less
> than 1 millisecond.

You're going to need to find a new way of sending patches; this patch is
mangled by your mail system.


> @@ -4391,6 +4391,17 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  				break;
>  			}
>  		}
> +
> +		if (!pages && !vmas && !pfn_offset &&
> +		    (vaddr + huge_page_size(h) < vma->vm_end) &&
> +		    (remainder >= pages_per_huge_page(h))) {
> +			vaddr += huge_page_size(h);
> +			remainder -= pages_per_huge_page(h);
> +			i += pages_per_huge_page(h);
> +			spin_unlock(ptl);
> +			continue;
> +		}

The concept seems good to me.  The description above could do with some
better explanation though.

Thanks, Willy. I will add more explanation and resend the patches in plain text mode.


end of thread, other threads:[~2019-08-29 12:36 UTC | newest]

Thread overview: 3+ messages
-- links below jump to the message on this page --
2019-08-29 11:37 [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas zhigang lu
2019-08-29 11:54 ` Matthew Wilcox
2019-08-29 12:36   ` Re: [PATCH] mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas (Internet mail) tonnylu(陆志刚)
