From: Joel Fernandes <joel@joelfernandes.org>
To: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
	sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
	elfring@users.sourceforge.net, Jonas Bonn <jonas@southpole.se>,
	linux-s390@vger.kernel.org, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org,
	Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE \(32-BIT AND 64-BIT\)"
	<x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
	Christian Borntraeger <borntraeger@de.ibm.com>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com>
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up
> > > performance on non-THP systems by copying at the PMD level when possible.
> > > 
> > > The speed-up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion time drops from 160-250 milliseconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > In case THP is enabled, the optimization is skipped. I also flush the
> > > TLB every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. The cost of doing so is
> > > small compared to the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan@kernel.org
> > > Cc: pantin@google.com
> > > Cc: hughd@google.com
> > > Cc: lokeshgidra@google.com
> > > Cc: dancol@google.com
> > > Cc: mhocko@kernel.org
> > > Cc: kirill@shutemov.name
> > > Cc: akpm@linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established; free_pgtables()
> > > +	 * should have released it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other words, we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree that an architecture like s390 may need additional explicit instructions
to avoid such an unrecoverable failure. The good news is that in the last patch
I sent, I put this behind an architecture flag (HAVE_MOVE_PMD), so we don't
have to enable it on architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html
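
For the curious, here is a minimal sketch of what that gating could look like
at the call site in move_page_tables() (assuming the CONFIG_HAVE_MOVE_PMD
naming from that patch; an architecture would opt in by selecting
HAVE_MOVE_PMD in its Kconfig):

#ifdef CONFIG_HAVE_MOVE_PMD
	/*
	 * Only attempted for a fully PMD-aligned extent; anything else
	 * falls through to the pte-by-pte move_ptes() path.
	 */
	if (extent == PMD_SIZE &&
	    move_normal_pmd(vma, old_addr, new_addr, old_end,
			    old_pmd, new_pmd, &need_flush))
		continue;
#endif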

Also, we trigger this optimization only if the page is not a transparent huge
page, by checking pmd_trans_huge(). For regular pages, it should be safe to
skip the atomic get_and_clear, AIUI, because Linux doesn't use any bits in
the PMD (such as the dirty bit) when THP is not in use, and the processors I
looked at (not s390) should not store anything in those bits anyway when the
page is not a huge page. I have gone through various scenarios and read both
the arm 32-bit and 64-bit and the x86 64-bit manuals, and I believe it to be safe.

For s390, let's not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate on why you feel it may not be?

Are you concerned that moving the PMD interferes with the lockless page walk?
In case the page table tree changes during the walk, the number of pages pinned
by get_user_pages_fast may be less than the number requested. In that case,
get_user_pages_fast falls back to the slow path, which should be synchronized
with the mremap courtesy of mm->mmap_sem. But please let me know the scenario
you have in mind and whether I missed something.
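
For reference, a condensed sketch of that fallback, loosely modeled on the
mm/gup.c of this era (signatures abbreviated and details elided, so treat it
as illustrative rather than the exact code):

int get_user_pages_fast(unsigned long start, int nr_pages, int write,
			struct page **pages)
{
	unsigned long end = start + ((unsigned long)nr_pages << PAGE_SHIFT);
	int nr = 0, ret;

	local_irq_disable();
	/* Lockless walk: gives up on any entry that changes underneath it. */
	gup_pgd_range(start, end, write, pages, &nr);
	local_irq_enable();
	ret = nr;

	if (nr < nr_pages) {
		/*
		 * Slow path: takes mm->mmap_sem for the remaining pages,
		 * and so serializes against a concurrent mremap.
		 */
		ret = get_user_pages_unlocked(start +
					      ((unsigned long)nr << PAGE_SHIFT),
					      nr_pages - nr, pages + nr,
					      write ? FOLL_WRITE : 0);
		if (ret >= 0)
			ret += nr;
	}
	return ret;
}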

thanks,

 - Joel

WARNING: multiple messages have this Message-ID (diff)
From: Joel Fernandes <joel@joelfernandes.org>
To: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
	sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
	elfring@users.sourceforge.net, Jonas Bonn <jonas@southpole.se>,
	linux-s390@vger.kernel.org, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org,
	Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
	<x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
	Christian Borntraeger <borntraeger@de.ibm.co>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700	[thread overview]
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP system by copying at the PMD level when possible.
> > > 
> > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion times drops from 160-250 millesconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > Incase THP is enabled, the optimization is skipped. I also flush the
> > > tlb every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan@kernel.org
> > > Cc: pantin@google.com
> > > Cc: hughd@google.com
> > > Cc: lokeshgidra@google.com
> > > Cc: dancol@google.com
> > > Cc: mhocko@kernel.org
> > > Cc: kirill@shutemov.name
> > > Cc: akpm@linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have release it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other word we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree architecture like s390 may need additional explicit instructions to
avoid any unrecoverable failure. So the good news is in my last patch I sent, I
have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have
to enable it with architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also we are triggering this optimization only if the page is not a transparent
huge page by calling pmd_trans_huge(). For regular pages, it should be safe to
not do the atomic get_and_clear AIUI because Linux doesn't use any bits from
the PMD like the dirty bit if THP is not in use (and the processors that I
saw (not s390) should not storing anything in the bits anyway when the page
is not a huge page. I have gone through various scenarios and read both arm
32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe.

For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate why you feel it may not be?

Are you concerned that the PMD moving interferes with the page walk? Incase
the tree changes during page-walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In this case,
get_user_pages_fast would fall back to the slow path which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please let
me know the scenario you have in mind and if I missed something.

thanks,

 - Joel

WARNING: multiple messages have this Message-ID (diff)
From: Joel Fernandes <joel@joelfernandes.org>
To: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>,
	linux-kernel@vger.kernel.org, kernel-team@android.com,
	minchan@kernel.org, pantin@google.com, hughd@google.com,
	lokeshgidra@google.com, dancol@google.com, mhocko@kernel.org,
	kirill@shutemov.name, akpm@linux-foundation.org,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Andy Lutomirski <luto@kernel.org>, Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Chris Zankel <chris@zankel.net>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	elfring@users.sourceforge.net, Fenghua Yu <fenghua.yu@intel.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Guan Xuetao <gxt@pku.edu.cn>, Helge Deller <deller@gmx.de>,
	Ingo Molnar <mingo@redhat.com>,
	"James E.J. Bottomley" <jejb@parisc-linux.org>,
	Jeff Dike <jdike@addtoit.com>, Jonas Bonn <jonas@southpole.se>,
	Julia Lawall <Julia.Lawall@lip6.fr>,
	kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
	Ley Foon Tan <lftan@altera.com>,
	linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-mips@linux-mips.org,
	linux-mm@kvack.org, linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
	linux-xtensa@linux-xtensa.org, Max Filippov <jcmvbkbc@gmail.com>,
	nios2-dev@lists.rocketboards.org, openrisc@lists.librecores.org,
	Peter Zijlstra <peterz@infradead.org>,
	Richard Weinberger <richard@nod.at>,
	Rich Felker <dalias@libc.org>, Sam Creasey <sammy@sammy.net>,
	sparclinux@vger.kernel.org, Stafford Horne <shorne@gmail.com>,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Thomas Gleixner <tglx@linutronix.de>,
	Tony Luck <tony.luck@intel.com>,
	Will Deacon <will.deacon@arm.com>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
	<x86@kernel.org>, Yoshinori Sato <ysato@users.sourceforge.jp>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700	[thread overview]
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP system by copying at the PMD level when possible.
> > > 
> > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion times drops from 160-250 millesconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > Incase THP is enabled, the optimization is skipped. I also flush the
> > > tlb every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan@kernel.org
> > > Cc: pantin@google.com
> > > Cc: hughd@google.com
> > > Cc: lokeshgidra@google.com
> > > Cc: dancol@google.com
> > > Cc: mhocko@kernel.org
> > > Cc: kirill@shutemov.name
> > > Cc: akpm@linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have release it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other word we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree architecture like s390 may need additional explicit instructions to
avoid any unrecoverable failure. So the good news is in my last patch I sent, I
have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have
to enable it with architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also we are triggering this optimization only if the page is not a transparent
huge page by calling pmd_trans_huge(). For regular pages, it should be safe to
not do the atomic get_and_clear AIUI because Linux doesn't use any bits from
the PMD like the dirty bit if THP is not in use (and the processors that I
saw (not s390) should not storing anything in the bits anyway when the page
is not a huge page. I have gone through various scenarios and read both arm
32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe.

For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate why you feel it may not be?

Are you concerned that the PMD moving interferes with the page walk? Incase
the tree changes during page-walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In this case,
get_user_pages_fast would fall back to the slow path which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please let
me know the scenario you have in mind and if I missed something.

thanks,

 - Joel

WARNING: multiple messages have this Message-ID (diff)
From: joel@joelfernandes.org (Joel Fernandes)
To: linux-riscv@lists.infradead.org
Subject: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700	[thread overview]
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP system by copying at the PMD level when possible.
> > > 
> > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion times drops from 160-250 millesconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > Incase THP is enabled, the optimization is skipped. I also flush the
> > > tlb every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan at kernel.org
> > > Cc: pantin at google.com
> > > Cc: hughd at google.com
> > > Cc: lokeshgidra at google.com
> > > Cc: dancol at google.com
> > > Cc: mhocko at kernel.org
> > > Cc: kirill at shutemov.name
> > > Cc: akpm at linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have release it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other word we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree architecture like s390 may need additional explicit instructions to
avoid any unrecoverable failure. So the good news is in my last patch I sent, I
have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have
to enable it with architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also we are triggering this optimization only if the page is not a transparent
huge page by calling pmd_trans_huge(). For regular pages, it should be safe to
not do the atomic get_and_clear AIUI because Linux doesn't use any bits from
the PMD like the dirty bit if THP is not in use (and the processors that I
saw (not s390) should not storing anything in the bits anyway when the page
is not a huge page. I have gone through various scenarios and read both arm
32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe.

For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate why you feel it may not be?

Are you concerned that the PMD moving interferes with the page walk? Incase
the tree changes during page-walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In this case,
get_user_pages_fast would fall back to the slow path which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please let
me know the scenario you have in mind and if I missed something.

thanks,

 - Joel

WARNING: multiple messages have this Message-ID (diff)
From: Joel Fernandes <joel@joelfernandes.org>
To: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
	sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
	elfring@users.sourceforge.net, Jonas Bonn <jonas@southpole.se>,
	linux-s390@vger.kernel.org, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org,
	Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE \(32-BIT AND 64-BIT\)"
	<x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Ingo Molnar <mingo@redhat.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	linux-snps-arc@lists.infradead.org, kernel-team@android.com,
	Sam Creasey <sammy@sammy.net>, Fenghua Yu <fenghua.yu@intel.com>,
	Jeff Dike <jdike@addtoit.com>,
	linux-um@lists.infradead.org,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Julia Lawall <Julia.Lawall@lip6.fr>,
	linux-m68k@lists.linux-m68k.org, openrisc@lists.librecores.org,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	nios2-dev@lists.rocketboards.org, kirill@shutemov.name,
	Stafford Horne <shorne@gmail.com>, Guan Xuetao <gxt@pku.edu.cn>,
	linux-arm-kernel@lists.infradead.org,
	Chris Zankel <chris@zankel.net>, Tony Luck <tony.luck@intel.com>,
	Richard Weinberger <richard@nod.at>,
	linux-parisc@vger.kernel.org, pantin@google.com,
	Max Filippov <jcmvbkbc@gmail.com>,
	linux-kernel@vger.kernel.org, minchan@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-alpha@vger.kernel.org, Ley Foon Tan <lftan@altera.com>,
	akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700	[thread overview]
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> (raw)
Message-ID: <20181016020853.1r_1OBIkV5k0vt29CnHsBNWmnWNPM_0_FF2LSTIybEk@z> (raw)
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP system by copying at the PMD level when possible.
> > > 
> > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion times drops from 160-250 millesconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > Incase THP is enabled, the optimization is skipped. I also flush the
> > > tlb every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan@kernel.org
> > > Cc: pantin@google.com
> > > Cc: hughd@google.com
> > > Cc: lokeshgidra@google.com
> > > Cc: dancol@google.com
> > > Cc: mhocko@kernel.org
> > > Cc: kirill@shutemov.name
> > > Cc: akpm@linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have release it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other word we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree architecture like s390 may need additional explicit instructions to
avoid any unrecoverable failure. So the good news is in my last patch I sent, I
have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have
to enable it with architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also we are triggering this optimization only if the page is not a transparent
huge page by calling pmd_trans_huge(). For regular pages, it should be safe to
not do the atomic get_and_clear AIUI because Linux doesn't use any bits from
the PMD like the dirty bit if THP is not in use (and the processors that I
saw (not s390) should not storing anything in the bits anyway when the page
is not a huge page. I have gone through various scenarios and read both arm
32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe.

For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate why you feel it may not be?

Are you concerned that the PMD moving interferes with the page walk? Incase
the tree changes during page-walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In this case,
get_user_pages_fast would fall back to the slow path which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please let
me know the scenario you have in mind and if I missed something.

thanks,

 - Joel


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

WARNING: multiple messages have this Message-ID (diff)
From: Joel Fernandes <joel@joelfernandes.org>
To: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
	sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
	elfring@users.sourceforge.net, Jonas Bonn <jonas@southpole.se>,
	linux-s390@vger.kernel.org, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org,
	Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE \(32-BIT AND 64-BIT\)"
	<x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Ingo Molnar <mingo@redhat.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	linux-snps-arc@lists.infradead.org, kernel-team@android.com,
	Sam Creasey <sammy@sammy.net>, Fenghua Yu <fenghua.yu@intel.com>,
	Jeff Dike <jdike@addtoit.com>,
	linux-um@lists.infradead.org,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Julia Lawall <Julia.Lawall@lip6.fr>,
	linux-m68k@lists.linux-m68k.org, openrisc@lists.librecores.org,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	nios2-dev@lists.rocketboards.org, kirill@shutemov.name,
	Stafford Horne <shorne@gmail.com>, Guan Xuetao <gxt@pku.edu.cn>,
	linux-arm-kernel@lists.infradead.org,
	Chris Zankel <chris@zankel.net>, Tony Luck <tony.luck@intel.com>,
	Richard Weinberger <richard@nod.at>,
	linux-parisc@vger.kernel.org, pantin@google.com,
	Max Filippov <jcmvbkbc@gmail.com>,
	linux-kernel@vger.kernel.org, minchan@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-alpha@vger.kernel.org, Ley Foon Tan <lftan@altera.com>,
	akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700	[thread overview]
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP system by copying at the PMD level when possible.
> > > 
> > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion times drops from 160-250 millesconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > Incase THP is enabled, the optimization is skipped. I also flush the
> > > tlb every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan@kernel.org
> > > Cc: pantin@google.com
> > > Cc: hughd@google.com
> > > Cc: lokeshgidra@google.com
> > > Cc: dancol@google.com
> > > Cc: mhocko@kernel.org
> > > Cc: kirill@shutemov.name
> > > Cc: akpm@linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have release it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other word we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree architecture like s390 may need additional explicit instructions to
avoid any unrecoverable failure. So the good news is in my last patch I sent, I
have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have
to enable it with architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also we are triggering this optimization only if the page is not a transparent
huge page by calling pmd_trans_huge(). For regular pages, it should be safe to
not do the atomic get_and_clear AIUI because Linux doesn't use any bits from
the PMD like the dirty bit if THP is not in use (and the processors that I
saw (not s390) should not storing anything in the bits anyway when the page
is not a huge page. I have gone through various scenarios and read both arm
32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe.

For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate why you feel it may not be?

Are you concerned that the PMD moving interferes with the page walk? Incase
the tree changes during page-walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In this case,
get_user_pages_fast would fall back to the slow path which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please let
me know the scenario you have in mind and if I missed something.

thanks,

 - Joel


WARNING: multiple messages have this Message-ID (diff)
From: joel@joelfernandes.org (Joel Fernandes)
To: linux-snps-arc@lists.infradead.org
Subject: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700	[thread overview]
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181015101814.306d257c@mschwideX1>

On Mon, Oct 15, 2018@10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which is copying each
> > > pte at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP system by copying at the PMD level when possible.
> > > 
> > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion times drops from 160-250 millesconds to 380-400 microseconds.
> > > 
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > 
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > 
> > > Incase THP is enabled, the optimization is skipped. I also flush the
> > > tlb every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared the improvement, on both x86-64 and arm64.
> > > 
> > > Cc: minchan at kernel.org
> > > Cc: pantin at google.com
> > > Cc: hughd at google.com
> > > Cc: lokeshgidra at google.com
> > > Cc: dancol at google.com
> > > Cc: mhocko at kernel.org
> > > Cc: kirill at shutemov.name
> > > Cc: akpm at linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel at joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > > 
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >  
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have release it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);  
> > 
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other word we might need to introduce
> > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE
> > to modify page table entries that are still in use. Otherwise you can get a 
> > delayed access exception which is - in contrast to page faults - not recoverable.
> 
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
> 
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
> 
> Just like pmdp_huge_get_and_clear() in fact.

I agree architecture like s390 may need additional explicit instructions to
avoid any unrecoverable failure. So the good news is in my last patch I sent, I
have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have
to enable it with architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also we are triggering this optimization only if the page is not a transparent
huge page by calling pmd_trans_huge(). For regular pages, it should be safe to
not do the atomic get_and_clear AIUI because Linux doesn't use any bits from
the PMD like the dirty bit if THP is not in use (and the processors that I
saw (not s390) should not storing anything in the bits anyway when the page
is not a huge page. I have gone through various scenarios and read both arm
32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe.

For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
> 
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate why you feel it may not be?

Are you concerned that the PMD moving interferes with the page walk? Incase
the tree changes during page-walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In this case,
get_user_pages_fast would fall back to the slow path which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please let
me know the scenario you have in mind and if I missed something.

thanks,

 - Joel

Thread overview: 317+ messages (per-list duplicate postings collapsed below)
2018-10-12  1:37 [PATCH v2 1/2] treewide: remove unused address argument from pte_alloc functions Joel Fernandes (Google)
2018-10-12  1:37 ` [PATCH v2 2/2] mm: speed up mremap by 500x on large regions Joel Fernandes (Google)
2018-10-12  6:40   ` Anton Ivanov
2018-10-12 11:30   ` Kirill A. Shutemov
2018-10-12 11:36     ` Kirill A. Shutemov
2018-10-12 12:50     ` Joel Fernandes
2018-10-12 13:19       ` Kirill A. Shutemov
2018-10-12 16:57         ` Joel Fernandes
2018-10-12 21:33           ` Kirill A. Shutemov
2018-10-12 18:18       ` David Miller
2018-10-13  1:35         ` Joel Fernandes
2018-10-13  1:39           ` Daniel Colascione
2018-10-13  1:44             ` Joel Fernandes
2018-10-13  1:54               ` Daniel Colascione
2018-10-13  2:10                 ` Joel Fernandes
2018-10-13  2:25                   ` Daniel Colascione
2018-10-13 17:50                     ` Joel Fernandes
2018-10-12 18:02     ` David Miller
2018-10-12 14:09   ` Anton Ivanov
2018-10-12 14:37     ` Kirill A. Shutemov
2018-10-12 14:48       ` Anton Ivanov
2018-10-12 16:42         ` Anton Ivanov
2018-10-12 16:50           ` Joel Fernandes
2018-10-12 16:58             ` Anton Ivanov
2018-10-12 17:06               ` Joel Fernandes
2018-10-12 21:40           ` Kirill A. Shutemov
2018-10-13  6:10             ` Anton Ivanov
2018-10-15  7:10   ` Christian Borntraeger
2018-10-15  8:18     ` Martin Schwidefsky
2018-10-16  2:08       ` Joel Fernandes [this message]
2018-10-12 11:09 ` [PATCH v2 1/2] treewide: remove unused address argument from pte_alloc functions Kirill A. Shutemov
2018-10-12 16:37   ` Joel Fernandes
2018-10-12 13:56 ` Anton Ivanov
2018-10-12 16:34   ` Joel Fernandes
2018-10-12 16:38     ` Julia Lawall
2018-10-12 16:46       ` Joel Fernandes
2018-10-12 18:51 ` SF Markus Elfring
2018-10-12 19:42   ` Joel Fernandes
2018-10-13  9:22     ` SF Markus Elfring
