From: Joel Fernandes <joel@joelfernandes.org>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: linux-mips@linux-mips.org, Rich Felker <dalias@libc.org>,
	linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Will Deacon <will.deacon@arm.com>,
	mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
	sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
	elfring@users.sourceforge.net, Jonas Bonn <jonas@southpole.se>,
	linux-s390@vger.kernel.org, dancol@google.com,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org,
	Helge Deller <deller@gmx.de>,
	"maintainer:X86 ARCHITECTURE \(32-BIT AND 64-BIT\)"
	<x86@kernel.org>,
	hughd@google.com, "James E.J. Bottomley" <jejb@parisc-linux.org>,
	kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
	Ingo Molnar <mingo@redhat.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	linux-snps-arc@lists.infradead.org, kernel-team@android.com,
	Sam Creasey <sammy@sammy.net>, Fenghua Yu <fenghua.yu@intel.com>,
	Jeff Dike <jdike@addtoit.com>,
	linux-um@lists.infradead.org,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Julia Lawall <Julia.Lawall@lip6.fr>,
	linux-m68k@lists.linux-m68k.org, openrisc@lists.librecores.org,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	nios2-dev@lists.rocketboards.org,
	Stafford Horne <shorne@gmail.com>, Guan Xuetao <gxt@pku.edu.cn>,
	linux-arm-kernel@lists.infradead.org,
	Chris Zankel <chris@zankel.net>, Tony Luck <tony.luck@intel.com>,
	Richard Weinberger <richard@nod.at>,
	linux-parisc@vger.kernel.org, pantin@google.com,
	Max Filippov <jcmvbkbc@gmail.com>,
	linux-kernel@vger.kernel.org, minchan@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-alpha@vger.kernel.org, Ley Foon Tan <lftan@altera.com>,
	akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Fri, 12 Oct 2018 09:57:19 -0700	[thread overview]
Message-ID: <20181012165719.GE223066@joelaf.mtv.corp.google.com> (raw)
In-Reply-To: <20181012131946.zoab2lpfmrycmuju@kshutemo-mobl1>

On Fri, Oct 12, 2018 at 04:19:46PM +0300, Kirill A. Shutemov wrote:
> On Fri, Oct 12, 2018 at 05:50:46AM -0700, Joel Fernandes wrote:
> > On Fri, Oct 12, 2018 at 02:30:56PM +0300, Kirill A. Shutemov wrote:
> > > On Thu, Oct 11, 2018 at 06:37:56PM -0700, Joel Fernandes (Google) wrote:
> > > > Android needs to mremap large regions of memory during memory-management
> > > > related operations. The mremap system call can be really slow if THP is
> > > > not enabled. The bottleneck is move_page_tables, which copies one PTE
> > > > at a time and can be really slow across a large map. Turning on THP
> > > > may not be a viable option, and is not for us. This patch speeds up the
> > > > performance for non-THP systems by copying at the PMD level when possible.
> > > > 
> > > > The speed-up is three orders of magnitude. On a 1GB mremap, the mremap
> > > > completion time drops from 160-250 milliseconds to 380-400 microseconds.
> > > > 
> > > > Before:
> > > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > > > 
> > > > After:
> > > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > > Total mremap time for 1GB data: 402813 nanoseconds.
> > > > 
> > > > In case THP is enabled, the optimization is skipped. I also flush the
> > > > TLB every time we do this optimization since I couldn't find a way to
> > > > determine if the low-level PTEs are dirty. The cost of doing so is
> > > > small compared to the improvement, on both x86-64 and arm64.
> > > 
> > > I looked into the code more and noticed the move_pte() helper called from
> > > move_ptes(). It changes the PTE entry to suit the new address.
> > > 
> > > It is only defined in a non-trivial way on Sparc. I don't know much about
> > > Sparc, and it's hard for me to say whether the optimization will break
> > > anything there.
> > 
> > Sparc's move_pte seems to flush the D-cache to prevent aliasing; it does not
> > modify the PTE itself, AFAICS:
> > 
> > #ifdef DCACHE_ALIASING_POSSIBLE
> > #define __HAVE_ARCH_MOVE_PTE
> > #define move_pte(pte, prot, old_addr, new_addr)                         \
> > ({                                                                      \
> >         pte_t newpte = (pte);                                           \
> >         if (tlb_type != hypervisor && pte_present(pte)) {               \
> >                 unsigned long this_pfn = pte_pfn(pte);                  \
> >                                                                         \
> >                 if (pfn_valid(this_pfn) &&                              \
> >                     (((old_addr) ^ (new_addr)) & (1 << 13)))            \
> >                         flush_dcache_page_all(current->mm,              \
> >                                               pfn_to_page(this_pfn));   \
> >         }                                                               \
> >         newpte;                                                         \
> > })
> > #endif
> > 
> > If it's an issue, then how do transparent huge pages work on Sparc? I don't
> > see the huge page code (move_huge_pmd) during mremap doing anything special
> > for the Sparc architecture when moving PMDs.
> 
> My *guess* is that it will work fine on Sparc, as it apparently only cares
> about a change in bit 13 of the virtual address. That will never happen for
> huge pages or when PTE page tables move.
> 
> But I just realized that the problem is bigger: since we pass new_addr to
> set_pte_at(), we would need to audit all implementations to make sure they
> are safe with just moving the PTE page table.
> 
> I would rather go with per-architecture enabling. It's much safer.

I'm OK with the per-arch enabling; I agree it's safer. So I should be adding a
new __HAVE_ARCH_MOVE_PMD then, right? Or did you have a better name for that?
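
To make sure we're talking about the same shape, here is roughly what I have
in mind for the gating. This is purely illustrative -- the helper name, the
checks and the locking below are a sketch of my work in progress, not final
code:

#ifdef __HAVE_ARCH_MOVE_PMD
static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
			    unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
{
	struct mm_struct *mm = vma->vm_mm;
	spinlock_t *old_ptl, *new_ptl;
	pmd_t pmd;

	/* Only when both addresses are PMD-aligned and the destination is empty. */
	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) ||
	    WARN_ON(!pmd_none(*new_pmd)))
		return false;

	old_ptl = pmd_lock(mm, old_pmd);
	new_ptl = pmd_lockptr(mm, new_pmd);
	if (new_ptl != old_ptl)
		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

	/* Move the whole PTE page table in one go; the caller flushes the TLB. */
	pmd = *old_pmd;
	pmd_clear(old_pmd);
	set_pmd_at(mm, new_addr, new_pmd, pmd);

	if (new_ptl != old_ptl)
		spin_unlock(new_ptl);
	spin_unlock(old_ptl);
	return true;
}
#endif

Architectures that are known to be safe would define __HAVE_ARCH_MOVE_PMD in
their asm/pgtable.h, and move_page_tables() would fall back to the existing
PTE-by-PTE copy everywhere else.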

Also, do you feel we still need to remove the address argument from
pte_alloc? Or should we leave that alone if we go per-arch?
I figure I've spent a bunch of time on that already, and it's a clean-up in
its own right, so we may as well do it. But perhaps that "pte_alloc cleanup"
can then be a separate patch, independent of this series?
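
For context, that cleanup (patch 1/2) boils down to dropping the unused
parameter from the pte_alloc family. Roughly the following -- simplified and
from memory, so treat the exact prototypes as approximate:

/* Before: an address is threaded through but never used by the allocator. */
int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address);
#define pte_alloc(mm, pmd, address) \
	(unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd, address))

/* After: the argument is dropped and all callers are updated treewide. */
int __pte_alloc(struct mm_struct *mm, pmd_t *pmd);
#define pte_alloc(mm, pmd) \
	(unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd))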

> > Also, do we not flush the caches from any path when we munmap part of the
> > address space? We do call do_munmap on the old mapping from mremap after
> > moving to the new one.
> 
> Are you sure about that? It could be hidden deeper in architecture-specific
> code.

I am sure we do call do_munmap; I was asking whether we flush the caches as
well. If we're enabling this per architecture, then I guess it does not matter
for the purposes of this patch.

thanks,

 - Joel

  reply	other threads:[~2018-10-12 16:57 UTC|newest]

Thread overview: 317+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-10-12  1:37 [PATCH v2 1/2] treewide: remove unused address argument from pte_alloc functions Joel Fernandes (Google)
2018-10-12  1:37 ` [OpenRISC] " Joel Fernandes
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` Joel Fernandes (Google)
2018-10-12  1:37 ` [PATCH v2 2/2] mm: speed up mremap by 500x on large regions Joel Fernandes (Google)
2018-10-12  1:37   ` [OpenRISC] " Joel Fernandes
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  1:37   ` Joel Fernandes (Google)
2018-10-12  6:40   ` Anton Ivanov
2018-10-12 11:30   ` Kirill A. Shutemov
2018-10-12 11:30     ` [OpenRISC] " Kirill A. Shutemov
2018-10-12 11:30     ` Kirill A. Shutemov
2018-10-12 11:30     ` Kirill A. Shutemov
2018-10-12 11:30     ` Kirill A. Shutemov
2018-10-12 11:30     ` Kirill A. Shutemov
2018-10-12 11:30     ` Kirill A. Shutemov
2018-10-12 11:30     ` Kirill A. Shutemov
2018-10-12 11:36     ` Kirill A. Shutemov
2018-10-12 11:36       ` [OpenRISC] " Kirill A. Shutemov
2018-10-12 11:36       ` Kirill A. Shutemov
2018-10-12 11:36       ` Kirill A. Shutemov
2018-10-12 11:36       ` Kirill A. Shutemov
2018-10-12 11:36       ` Kirill A. Shutemov
2018-10-12 11:36       ` Kirill A. Shutemov
2018-10-12 11:36       ` Kirill A. Shutemov
2018-10-12 12:50     ` Joel Fernandes
2018-10-12 12:50       ` [OpenRISC] " Joel Fernandes
2018-10-12 12:50       ` Joel Fernandes
2018-10-12 12:50       ` Joel Fernandes
2018-10-12 12:50       ` Joel Fernandes
2018-10-12 12:50       ` Joel Fernandes
2018-10-12 12:50       ` Joel Fernandes
2018-10-12 12:50       ` Joel Fernandes
2018-10-12 13:19       ` Kirill A. Shutemov
2018-10-12 13:19         ` [OpenRISC] " Kirill A. Shutemov
2018-10-12 13:19         ` Kirill A. Shutemov
2018-10-12 13:19         ` Kirill A. Shutemov
2018-10-12 13:19         ` Kirill A. Shutemov
2018-10-12 13:19         ` Kirill A. Shutemov
2018-10-12 13:19         ` Kirill A. Shutemov
2018-10-12 13:19         ` Kirill A. Shutemov
2018-10-12 16:57         ` Joel Fernandes [this message]
2018-10-12 16:57           ` [OpenRISC] " Joel Fernandes
2018-10-12 16:57           ` Joel Fernandes
2018-10-12 16:57           ` Joel Fernandes
2018-10-12 16:57           ` Joel Fernandes
2018-10-12 16:57           ` Joel Fernandes
2018-10-12 16:57           ` Joel Fernandes
2018-10-12 16:57           ` Joel Fernandes
2018-10-12 21:33           ` Kirill A. Shutemov
2018-10-12 21:33             ` [OpenRISC] " Kirill A. Shutemov
2018-10-12 21:33             ` Kirill A. Shutemov
2018-10-12 21:33             ` Kirill A. Shutemov
2018-10-12 21:33             ` Kirill A. Shutemov
2018-10-12 21:33             ` Kirill A. Shutemov
2018-10-12 21:33             ` Kirill A. Shutemov
2018-10-12 21:33             ` Kirill A. Shutemov
2018-10-12 18:18       ` David Miller
2018-10-12 18:18         ` [OpenRISC] " David Miller
2018-10-12 18:18         ` David Miller
2018-10-12 18:18         ` David Miller
2018-10-12 18:18         ` David Miller
2018-10-12 18:18         ` David Miller
2018-10-12 18:18         ` David Miller
2018-10-12 18:18         ` David Miller
2018-10-13  1:35         ` Joel Fernandes
2018-10-13  1:35           ` Joel Fernandes
2018-10-13  1:35           ` Joel Fernandes
2018-10-13  1:35           ` Joel Fernandes
2018-10-13  1:35           ` Joel Fernandes
2018-10-13  1:39           ` Daniel Colascione
2018-10-13  1:44             ` Joel Fernandes
2018-10-13  1:54               ` Daniel Colascione
2018-10-13  2:10                 ` Joel Fernandes
2018-10-13  2:25                   ` Daniel Colascione
2018-10-13 17:50                     ` Joel Fernandes
2018-10-12 18:02     ` David Miller
2018-10-12 14:09   ` Anton Ivanov
2018-10-12 14:37     ` Kirill A. Shutemov
2018-10-12 14:48       ` Anton Ivanov
2018-10-12 16:42         ` Anton Ivanov
2018-10-12 16:50           ` Joel Fernandes
2018-10-12 16:58             ` Anton Ivanov
2018-10-12 17:06               ` Joel Fernandes
2018-10-12 21:40           ` Kirill A. Shutemov
2018-10-13  6:10             ` Anton Ivanov
2018-10-15  7:10   ` Christian Borntraeger
2018-10-15  8:18     ` Martin Schwidefsky
2018-10-16  2:08       ` Joel Fernandes
2018-10-12 11:09 ` [PATCH v2 1/2] treewide: remove unused address argument from pte_alloc functions Kirill A. Shutemov
2018-10-12 16:37   ` Joel Fernandes
2018-10-12 13:56 ` Anton Ivanov
2018-10-12 16:34   ` Joel Fernandes
2018-10-12 16:38     ` Julia Lawall
2018-10-12 16:46       ` Joel Fernandes
2018-10-12 18:51 ` SF Markus Elfring
2018-10-12 19:42   ` Joel Fernandes
2018-10-13  9:22     ` SF Markus Elfring

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox (a short example follows below)

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
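
  For example, a minimal sketch of the mbox method above (not part of the
  original instructions; the mbox path and the mutt mail client are
  assumptions, and any client that can import an mbox works just as well):

    # open the saved thread mbox (hypothetical path) in mutt
    mutt -f /path/to/thread.mbox
    # select the message, then press "g" (group-reply) to reply to all,
    # quoting inline rather than top-posting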

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20181012165719.GE223066@joelaf.mtv.corp.google.com \
    --to=joel@joelfernandes.org \
    --cc=catalin.marinas@arm.com \
    --cc=dalias@libc.org \
    --cc=dancol@google.com \
    --cc=dave.hansen@linux.intel.com \
    --cc=deller@gmx.de \
    --cc=elfring@users.sourceforge.net \
    --cc=hughd@google.com \
    --cc=jejb@parisc-linux.org \
    --cc=jonas@southpole.se \
    --cc=kasan-dev@googlegroups.com \
    --cc=kirill@shutemov.name \
    --cc=kvmarm@lists.cs.columbia.edu \
    --cc=linux-hexagon@vger.kernel.org \
    --cc=linux-ia64@vger.kernel.org \
    --cc=linux-mips@linux-mips.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-riscv@lists.infradead.org \
    --cc=linux-s390@vger.kernel.org \
    --cc=linux-sh@vger.kernel.org \
    --cc=linux-xtensa@linux-xtensa.org \
    --cc=lokeshgidra@google.com \
    --cc=mhocko@kernel.org \
    --cc=mingo@redhat.com \
    --cc=peterz@infradead.org \
    --cc=sparclinux@vger.kernel.org \
    --cc=will.deacon@arm.com \
    --cc=x86@kernel.org \
    --cc=ysato@users.sourceforge.jp \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.