From: Joel Fernandes <joel@joelfernandes.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
	sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, Jonas Bonn <jonas@southpole.se>,
	linux-s390@vger.kernel.org, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Max Filippov <jcmvbkbc@gmail.com>,
	linux-hexagon@vger.kernel.org, Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE 32-BIT AND 64-BIT" <x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, elfring@users.sourceforge.net,
	Ingo Molnar <mingo@redhat.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	linux-snps-arc@lists.infradead.org, kernel-team@android.com,
	Sam Creasey <sammy@sammy.net>,
	linux-xtensa@linux-xtensa.org, Jeff Dike <jdike@addtoit.com>,
	linux-alpha@vger.kernel.org, linux-um@lists.infradead.org,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Julia Lawall <Julia.Lawall@lip6.fr>,
	linux-m68k@lists.linux-m68k.org, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Ley Foon Tan <lftan@altera.com>,
	kirill@shutemov.name, Stafford Horne <shorne@gmail.com>,
	Guan Xuetao <gxt@pku.edu.cn>, Chris Zankel <chris@zankel.net>,
	Tony Luck <tony.luck@intel.com>,
	linux-parisc@vger.kernel.org, pantin@google.com,
	linux-kernel@vger.kernel.org, Fenghua Yu <fenghua.yu@intel.com>,
	minchan@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Richard Weinberger <richard@nod.at>,
	anton.ivanov@kot-begemot.co.uk, nios2-dev@lists.rocketboards.org,
	akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH 2/4] mm: speed up mremap by 500x on large regions (v2)
Date: Mon, 15 Oct 2018 15:33:03 -0700	[thread overview]
Message-ID: <20181015223303.GA164293@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181015094209.GA31999@infradead.org>

On Mon, Oct 15, 2018 at 02:42:09AM -0700, Christoph Hellwig wrote:
> On Fri, Oct 12, 2018 at 06:31:58PM -0700, Joel Fernandes (Google) wrote:
> > Android needs to mremap large regions of memory during memory management
> > related operations.
> 
> Just curious: why?

In Android we have a requirement to move a large memory range (up to a GB
today, but it may grow in the future) from one location to another. This move
has to happen while the application threads are paused. An inefficient move
as it stands today (for example, 250ms on arm64) therefore causes response
time issues for applications, which is not acceptable. Huge pages cannot be
used in such memory ranges to avoid this inefficiency, because (when the
application threads are running) our fault handlers are designed to process
4KB pages at a time, to keep response times low. So using huge pages in this
context would, again, cause response time issues.

Also, the mremap syscall taking a quarter of a second for a large move is
quite odd, and we ought to improve it where possible.
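
For reference, the kind of 1GB timing quoted in the patch below can be
reproduced from userspace with something like the sketch here. This is only
an illustration of the measurement, not the exact benchmark behind those
numbers; the buffer layout, MAP_NORESERVE, and the MREMAP_FIXED destination
are assumptions made for the example:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

#define SZ (1UL << 30)	/* 1GB */

int main(void)
{
	struct timespec t0, t1;
	long long ns;
	void *base, *old_p, *new_p;

	/* Reserve a 2GB window so there is a destination to move into. */
	base = mmap(NULL, 2 * SZ, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	if (base == MAP_FAILED)
		return 1;
	old_p = base;
	new_p = (char *)base + SZ;
	munmap(new_p, SZ);		/* keep only the source half mapped */
	memset(old_p, 1, SZ);		/* populate the PTEs that mremap will move */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mremap(old_p, SZ, SZ, MREMAP_MAYMOVE | MREMAP_FIXED, new_p) == MAP_FAILED)
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL +
	     (t1.tv_nsec - t0.tv_nsec);
	printf("Total mremap time for 1GB data: %lld nanoseconds.\n", ns);
	return 0;
}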

> > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > +	    || old_end - old_addr < PMD_SIZE)
> 
> The || goes on the first line.

Ok, fixed.

> > +		} else if (extent == PMD_SIZE && IS_ENABLED(CONFIG_HAVE_MOVE_PMD)) {
> 
> Overly long line.

Ok, fixed. Preview of updated patch is below.

thanks,

 - Joel

------8<---
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Subject: [PATCH 2/4] mm: speed up mremap by 500x on large regions (v3)

Android needs to mremap large regions of memory during memory management
related operations. The mremap system call can be really slow if THP is
not enabled. The bottleneck is move_page_tables, which copies one pte at
a time and can be really slow across a large map. Turning on THP may not
be a viable option, and is not for us. This patch speeds up mremap on
non-THP systems by copying at the PMD level when possible.

The speedup is three orders of magnitude. On a 1GB mremap, the mremap
completion time drops from 160-250 milliseconds to 380-400 microseconds.

Before:
Total mremap time for 1GB data: 242321014 nanoseconds.
Total mremap time for 1GB data: 196842467 nanoseconds.
Total mremap time for 1GB data: 167051162 nanoseconds.

After:
Total mremap time for 1GB data: 385781 nanoseconds.
Total mremap time for 1GB data: 388959 nanoseconds.
Total mremap time for 1GB data: 402813 nanoseconds.

In case THP is enabled, the optimization is mostly skipped except in
certain situations. I also flush the TLB every time we do this
optimization since I couldn't find a way to determine if the low-level
PTEs are dirty. The cost of doing so is small compared to the
improvement, on both x86-64 and arm64.

Cc: minchan@kernel.org
Cc: pantin@google.com
Cc: hughd@google.com
Cc: lokeshgidra@google.com
Cc: dancol@google.com
Cc: mhocko@kernel.org
Cc: kirill@shutemov.name
Cc: akpm@linux-foundation.org
Cc: kernel-team@android.com
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 arch/Kconfig |  5 ++++
 mm/mremap.c  | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 6801123932a5..9724fe39884f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -518,6 +518,11 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PMD
+	bool
+	help
+	  Archs that select this are able to move page tables at the PMD level.
+
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	bool
 
diff --git a/mm/mremap.c b/mm/mremap.c
index 9e68a02a52b1..a8dd98a59975 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		  unsigned long new_addr, unsigned long old_end,
+		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) ||
+	    old_end - old_addr < PMD_SIZE)
+		return false;
+
+	/*
+	 * The destination pmd shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON(!pmd_none(*new_pmd)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_sem prevents deadlock.
+	 */
+	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
+	if (old_ptl) {
+		pmd_t pmd;
+
+		new_ptl = pmd_lockptr(mm, new_pmd);
+		if (new_ptl != old_ptl)
+			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+		/* Clear the pmd */
+		pmd = *old_pmd;
+		pmd_clear(old_pmd);
+
+		VM_BUG_ON(!pmd_none(*new_pmd));
+
+		/* Set the new pmd */
+		set_pmd_at(mm, new_addr, new_pmd, pmd);
+		if (new_ptl != old_ptl)
+			spin_unlock(new_ptl);
+		spin_unlock(old_ptl);
+
+		*need_flush = true;
+		return true;
+	}
+	return false;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
@@ -239,7 +287,25 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
+		} else if (extent == PMD_SIZE &&
+			   IS_ENABLED(CONFIG_HAVE_MOVE_PMD)) {
+			/*
+			 * If the extent is PMD-sized, try to speed the move by
+			 * moving at the PMD level if possible.
+			 */
+			bool moved;
+
+			if (need_rmap_locks)
+				take_rmap_locks(vma);
+			moved = move_normal_pmd(vma, old_addr, new_addr,
+						old_end, old_pmd, new_pmd,
+						&need_flush);
+			if (need_rmap_locks)
+				drop_rmap_locks(vma);
+			if (moved)
+				continue;
 		}
+
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;
-- 
2.19.1.331.ge82ca0e54c-goog
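
(For completeness: architectures opt in to this code path by selecting the
Kconfig symbol introduced above; patches 3/4 of this series do that for arm64
and x86. A sketch of what the selection looks like, using arm64 as the
example, with unrelated lines elided:)

config ARM64
	...
	select HAVE_MOVE_PMD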

Thread overview: 192+ messages
2018-10-13  1:31 [PATCH 0/4] Add support for fast mremap Joel Fernandes (Google)
2018-10-13  1:31 ` [PATCH 1/4] treewide: remove unused address argument from pte_alloc functions (v2) Joel Fernandes (Google)
2018-10-24  8:37   ` Peter Zijlstra
2018-10-25  2:21     ` Joel Fernandes
2018-10-26  8:52       ` Peter Zijlstra
2018-10-25 10:47     ` Kirill A. Shutemov
2018-10-26  8:50       ` Peter Zijlstra
2018-10-13  1:31 ` [PATCH 2/4] mm: speed up mremap by 500x on large regions (v2) Joel Fernandes (Google)
2018-10-15  9:42   ` Christoph Hellwig
2018-10-15 22:33     ` Joel Fernandes [this message]
2018-10-16 11:29       ` Vlastimil Babka
2018-10-16 19:43         ` Joel Fernandes
2018-10-17  7:38           ` Vlastimil Babka
2018-10-24 10:12   ` Kirill A. Shutemov
2018-10-24 11:57     ` Balbir Singh
2018-10-24 12:57       ` Kirill A. Shutemov
2018-10-25  2:09         ` Joel Fernandes
2018-10-25 10:19           ` Kirill A. Shutemov
2018-10-26 21:11             ` Joel Fernandes
2018-10-29 10:28               ` Will Deacon
2018-10-25  2:13       ` Joel Fernandes
2018-10-27 10:21         ` Balbir Singh
2018-10-27 19:39           ` Joel Fernandes
2018-10-28 22:40             ` Balbir Singh
2018-10-13  1:31 ` [PATCH 3/4] arm64: select HAVE_MOVE_PMD for faster mremap (v1) Joel Fernandes (Google)
2018-10-13  1:32 ` [PATCH 4/4] x86: " Joel Fernandes (Google)
