Linux-mm Archive on lore.kernel.org
* [PATCH v2] hugetlbfs: Take read_lock on i_mmap for PMD sharing
@ 2019-11-07 21:18 Waiman Long
  2019-11-08  2:03 ` Davidlohr Bueso
  0 siblings, 1 reply; 3+ messages in thread
From: Waiman Long @ 2019-11-07 21:18 UTC (permalink / raw)
  To: Mike Kravetz, Andrew Morton
  Cc: linux-kernel, linux-mm, Davidlohr Bueso, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Matthew Wilcox, Waiman Long

A customer running large SMP systems (up to 16 sockets) with an application
that uses a large amount of static hugepages (~500-1500GB) is experiencing
random multisecond delays. These delays were caused by the long time it
took to scan the VMA interval tree with mmap_sem held.

The sharing of huge PMDs does not require any change to the i_mmap at all.
Therefore, we can just take the read lock and let other threads search for
the right VMA to share in parallel. Once the right VMA is found, either
the PMD lock (2M huge page for x86-64) or the mm->page_table_lock will be
acquired to perform the actual PMD sharing.

Lock contention, if present, will happen in the spinlock. That is much
better than contention in the rwsem, where the time needed to scan the
interval tree is indeterminate.

With this patch applied, the customer is seeing significant performance
improvement over the unpatched kernel.

Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b45a95363a84..f78891f92765 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4842,7 +4842,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	if (!vma_shareable(vma, addr))
 		return (pte_t *)pmd_alloc(mm, pud, addr);
 
-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
 		if (svma == vma)
 			continue;
@@ -4872,7 +4872,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	spin_unlock(ptl);
 out:
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
-	i_mmap_unlock_write(mapping);
+	i_mmap_unlock_read(mapping);
 	return pte;
 }
 
-- 
2.18.1




* Re: [PATCH v2] hugetlbfs: Take read_lock on i_mmap for PMD sharing
  2019-11-07 21:18 [PATCH v2] hugetlbfs: Take read_lock on i_mmap for PMD sharing Waiman Long
@ 2019-11-08  2:03 ` Davidlohr Bueso
  2019-11-08 18:44   ` Waiman Long
  0 siblings, 1 reply; 3+ messages in thread
From: Davidlohr Bueso @ 2019-11-08  2:03 UTC (permalink / raw)
  To: Waiman Long
  Cc: Mike Kravetz, Andrew Morton, linux-kernel, linux-mm,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Matthew Wilcox

On Thu, 07 Nov 2019, Waiman Long wrote:
>With this patch applied, the customer is seeing significant performance
>improvement over the unpatched kernel.

Could you give more details here?

Thanks,
Davidlohr



* Re: [PATCH v2] hugetlbfs: Take read_lock on i_mmap for PMD sharing
  2019-11-08  2:03 ` Davidlohr Bueso
@ 2019-11-08 18:44   ` Waiman Long
  0 siblings, 0 replies; 3+ messages in thread
From: Waiman Long @ 2019-11-08 18:44 UTC (permalink / raw)
  To: Mike Kravetz, Andrew Morton, linux-kernel, linux-mm,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Matthew Wilcox

On 11/7/19 9:03 PM, Davidlohr Bueso wrote:
> On Thu, 07 Nov 2019, Waiman Long wrote:
>> With this patch applied, the customer is seeing significant performance
>> improvement over the unpatched kernel.
>
> Could you give more details here? 

Red Hat has a customer that is running a transactional database workload.
In this particular case, ~500-1500GB of static hugepages are allocated.
The database then allocates a single large shared memory segment in those
hugepages to use primarily as a database buffer for 8kB blocks from disk
(there are also other database structures in that shared memory, but it is
mostly buffer). Then thousands of separate processes reference and load
data into that buffer. They were seeing multi-second pauses when starting
up the database.

I first gave them a patched kernel that disabled PMD sharing. That fixed
their problem. After that, I gave them another test kernel that contained
this patch. They said there was significant improvement compared with the
unpatched kernel. There is still some degradation compared to the kernel
with huge shared PMDs disabled entirely, but the two are pretty close in
performance.

Cheers,
Longman


