From: Joel Fernandes <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	Michal Hocko <mhocko@kernel.org>,
	linux-mm@kvack.org, lokeshgidra@google.com,
	"Joel Fernandes \(Google\)" <joel@joelfernandes.org>,
	linux-riscv@lists.infradead.org, elfring@users.sourceforge.net,
	Jonas Bonn <jonas@southpole.se>,
	kvmarm@lists.cs.columbia.edu, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org,
	linux-hexagon@vger.kernel.org, Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE 32-BIT AND 64-BIT" <x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, anton.ivanov@kot-begemot.co.uk,
	Ingo Molnar <mingo@redhat.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	linux-snps-arc@lists.infradead.org, kernel-team@android.com,
	Sam Creasey <sammy@sammy.net>, Fenghua Yu <fenghua.yu@intel.com>,
	linux-s390@vger.kernel.org, Jeff Dike <jdike@addtoit.com>,
	linux-um@lists.infradead.org,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Julia Lawall <Julia.Lawall@lip6.fr>,
	linux-m68k@lists.linux-m68k.org, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	nios2-dev@lists.rocketboards.org,
	"Kirill A . Shutemov" <kirill@shutemov.name>,
	Stafford Horne <shorne@gmail.com>, Guan Xuetao <gxt@pku.edu.cn>,
	Chris Zankel <chris@zankel.net>, Tony Luck <tony.luck@intel.com>,
	Richard Weinberger <richard@nod.at>,
	linux-parisc@vger.kernel.org, Max Filippov <jcmvbkbc@gmail.com>,
	pantin@google.com, minchan@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-alpha@vger.kernel.org, Ley Foon Tan <lftan@altera.com>,
	akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>
Subject: [PATCH -next 2/3] mm: speed up mremap by 20x on large regions (v4)
Date: Fri,  2 Nov 2018 21:00:40 -0700
Message-ID: <20181103040041.7085-3-joelaf@google.com>
In-Reply-To: <20181103040041.7085-1-joelaf@google.com>

From: "Joel Fernandes (Google)" <joel@joelfernandes.org>

Android needs to mremap large regions of memory during memory management
related operations. The mremap system call can be really slow if THP is
not enabled: the bottleneck is move_page_tables, which copies one PTE at
a time and can be very slow across a large map. Turning on THP may not
be a viable option, and is not for us. This patch speeds up mremap on
non-THP systems by copying at the PMD level when possible; with 4K pages
and 2MB PMDs, moving a 1GB region then updates 512 PMD entries instead
of copying 262,144 PTEs.

The speedup is an order of magnitude on x86 (~20x). On a 1GB mremap,
the completion time drops from 3.4-3.6 milliseconds to 144-160
microseconds.

Before:
Total mremap time for 1GB data: 3521942 nanoseconds.
Total mremap time for 1GB data: 3449229 nanoseconds.
Total mremap time for 1GB data: 3488230 nanoseconds.

After:
Total mremap time for 1GB data: 150279 nanoseconds.
Total mremap time for 1GB data: 144665 nanoseconds.
Total mremap time for 1GB data: 158708 nanoseconds.
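
The numbers were obtained by timing mremap() on a 1GB mapping; a
minimal sketch of one way to measure this is below. It is illustrative
only and not necessarily the exact program used. Both source and
destination are aligned to PMD_SIZE, since the PMD-level move only
triggers for PMD-aligned, PMD-sized extents, and it only kicks in on
kernels built with HAVE_MOVE_PMD selected:

/*
 * Illustrative mremap benchmark sketch (not necessarily the exact
 * program used for the numbers above). Maps 1GB of anonymous memory,
 * faults it in, then times a forced move to a PMD-aligned target.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define GIG	(1UL << 30)	/* size of the test region: 1GB */
#define PMD_SZ	(2UL << 20)	/* PMD size with 4K pages: 2MB  */

/* Map size + PMD_SZ bytes and return a PMD-aligned pointer inside. */
static void *alloc_aligned(size_t size, int prot)
{
	char *p = mmap(NULL, size + PMD_SZ, prot,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return p;
	return (void *)(((uintptr_t)p + PMD_SZ - 1) & ~(PMD_SZ - 1));
}

int main(void)
{
	void *src = alloc_aligned(GIG, PROT_READ | PROT_WRITE);
	void *dst = alloc_aligned(GIG, PROT_NONE);	/* move target */
	struct timespec ts, te;

	if (src == MAP_FAILED || dst == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault in every page so the page tables are fully populated. */
	memset(src, 1, GIG);

	clock_gettime(CLOCK_MONOTONIC, &ts);
	if (mremap(src, GIG, GIG, MREMAP_MAYMOVE | MREMAP_FIXED,
		   dst) == MAP_FAILED) {
		perror("mremap");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &te);

	printf("Total mremap time for 1GB data: %ld nanoseconds.\n",
	       (te.tv_sec - ts.tv_sec) * 1000000000L +
	       (te.tv_nsec - ts.tv_nsec));
	return 0;
}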

When THP is enabled, the optimization is mostly skipped except in
certain situations.

Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---

Note that since the bug fix in [1], we now have to flush the TLB on
every PMD move. The above numbers were obtained on x86 with a flush
done on every move. For arm64, I previously encountered performance
issues when flushing on every move; however, Will Deacon says [2] the
performance should be better with a recent release. Until we can
evaluate arm64, I am dropping the HAVE_MOVE_PMD config enable patch for
arm64 for now. It can be added back once we finish the performance
evaluation. Also of note is that the speedup on arm64 with this patch,
but without the TLB flush on every PMD move, is around 500x.

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=1695
[2] https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg140837.html
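
For reference, wiring the optimization up on an architecture is a
one-line Kconfig select (patch 3/3 in this series does this for x86).
The snippet below is an illustrative sketch, not the actual patch:

config X86
	...
	select HAVE_MOVE_PMD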

 arch/Kconfig |  5 +++++
 mm/mremap.c  | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index e1e540ffa979..b70c952ac838 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -535,6 +535,11 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PMD
+	bool
+	help
+	  Archs that select this are able to move page tables at the PMD level.
+
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	bool
 
diff --git a/mm/mremap.c b/mm/mremap.c
index 7c9ab747f19d..7cf6b0943090 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -191,6 +191,50 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		  unsigned long new_addr, unsigned long old_end,
+		  pmd_t *old_pmd, pmd_t *new_pmd)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmd;
+
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
+	    || old_end - old_addr < PMD_SIZE)
+		return false;
+
+	/*
+	 * The destination pmd shouldn't be established; free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON(!pmd_none(*new_pmd)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_sem prevents deadlock.
+	 */
+	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
+	new_ptl = pmd_lockptr(mm, new_pmd);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pmd */
+	pmd = *old_pmd;
+	pmd_clear(old_pmd);
+
+	VM_BUG_ON(!pmd_none(*new_pmd));
+
+	/* Set the new pmd */
+	set_pmd_at(mm, new_addr, new_pmd, pmd);
+	flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
@@ -237,7 +281,23 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
+		} else if (extent == PMD_SIZE && IS_ENABLED(CONFIG_HAVE_MOVE_PMD)) {
+			/*
+			 * If the extent is PMD-sized, try to speed the move by
+			 * moving at the PMD level if possible.
+			 */
+			bool moved;
+
+			if (need_rmap_locks)
+				take_rmap_locks(vma);
+			moved = move_normal_pmd(vma, old_addr, new_addr,
+					old_end, old_pmd, new_pmd);
+			if (need_rmap_locks)
+				drop_rmap_locks(vma);
+			if (moved)
+				continue;
 		}
+
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;
-- 
2.19.1.930.g4563a0d9d0-goog
