From mboxrd@z Thu Jan  1 00:00:00 1970
From: Joel Fernandes <joel@joelfernandes.org>
To: Martin Schwidefsky
Cc: linux-mips@linux-mips.org, Rich Felker, linux-ia64@vger.kernel.org,
 linux-sh@vger.kernel.org, Peter Zijlstra, Catalin Marinas, Dave Hansen,
 Will Deacon, mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
 sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
 elfring@users.sourceforge.net, Jonas Bonn, linux-s390@vger.kernel.org,
 dancol@google.com, Yoshinori Sato, linux-xtensa@linux-xtensa.org,
 linux-hexagon@vger.kernel.org, Helge Deller,
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", hughd@google.com,
 "James E.J. Bottomley", kasan-dev@googlegroups.com,
 kvmarm@lists.cs.columbia.edu, Christian Borntraeger
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 19:08:53 -0700
Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com>
In-Reply-To: <20181015101814.306d257c@mschwideX1>
References: <20181012013756.11285-1-joel@joelfernandes.org>
 <20181012013756.11285-2-joel@joelfernandes.org>
 <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com>
 <20181015101814.306d257c@mschwideX1>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"

On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote:
> On Mon, 15 Oct 2018 09:10:53 +0200
> Christian Borntraeger wrote:
>
> > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > > Android needs to mremap large regions of memory during memory management
> > > related operations. The mremap system call can be really slow if THP is
> > > not enabled. The bottleneck is move_page_tables, which copies each pte
> > > one at a time, and can be really slow across a large map. Turning on THP
> > > may not be a viable option, and is not for us. This patch speeds up the
> > > performance for non-THP systems by copying at the PMD level when possible.
> > >
> > > The speedup is three orders of magnitude. On a 1GB mremap, the mremap
> > > completion time drops from 160-250 milliseconds to 380-400 microseconds.
> > >
> > > Before:
> > > Total mremap time for 1GB data: 242321014 nanoseconds.
> > > Total mremap time for 1GB data: 196842467 nanoseconds.
> > > Total mremap time for 1GB data: 167051162 nanoseconds.
> > >
> > > After:
> > > Total mremap time for 1GB data: 385781 nanoseconds.
> > > Total mremap time for 1GB data: 388959 nanoseconds.
> > > Total mremap time for 1GB data: 402813 nanoseconds.
> > >
> > > In case THP is enabled, the optimization is skipped. I also flush the
> > > TLB every time we do this optimization since I couldn't find a way to
> > > determine if the low-level PTEs are dirty. It is seen that the cost of
> > > doing so is not much compared to the improvement, on both x86-64 and arm64.
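
[For context, a minimal userspace sketch of how a 1GB mremap could be timed
to produce numbers in this format. This is an illustration, not the author's
actual test program: the 1GB size, the page-touch loop, and the use of a
reserved destination with MREMAP_FIXED are assumptions; note also that the
PMD-copy path additionally needs 2MB-aligned source and destination
addresses, which this sketch does not guarantee.]

/*
 * Hypothetical benchmark sketch (not the patch author's program):
 * map 1GB of anonymous memory, fault it in, then time one mremap()
 * that is forced to move by MREMAP_FIXED into a reserved target area.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;	/* 1GB */
	struct timespec t0, t1;
	long long ns;
	void *old, *dst, *new;

	old = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	dst = mmap(NULL, len, PROT_NONE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (old == MAP_FAILED || dst == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(old, 1, len);	/* populate the page tables */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	new = mremap(old, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (new == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	     (t1.tv_nsec - t0.tv_nsec);
	printf("Total mremap time for 1GB data: %lld nanoseconds.\n", ns);
	munmap(new, len);
	return 0;
}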
> > >
> > > Cc: minchan@kernel.org
> > > Cc: pantin@google.com
> > > Cc: hughd@google.com
> > > Cc: lokeshgidra@google.com
> > > Cc: dancol@google.com
> > > Cc: mhocko@kernel.org
> > > Cc: kirill@shutemov.name
> > > Cc: akpm@linux-foundation.org
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > >
> > > diff --git a/mm/mremap.c b/mm/mremap.c
> > > index 9e68a02a52b1..d82c485822ef 100644
> > > --- a/mm/mremap.c
> > > +++ b/mm/mremap.c
> > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > >  		drop_rmap_locks(vma);
> > >  }
> > >
> > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > > +		  unsigned long new_addr, unsigned long old_end,
> > > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > > +{
> > > +	spinlock_t *old_ptl, *new_ptl;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +
> > > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > > +	    || old_end - old_addr < PMD_SIZE)
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * The destination pmd shouldn't be established, free_pgtables()
> > > +	 * should have released it.
> > > +	 */
> > > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * We don't have to worry about the ordering of src and dst
> > > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > > +	 */
> > > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > > +	if (old_ptl) {
> > > +		pmd_t pmd;
> > > +
> > > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > > +
> > > +		/* Clear the pmd */
> > > +		pmd = *old_pmd;
> > > +		pmd_clear(old_pmd);
> >
> > Adding Martin Schwidefsky.
> > Is this mapping maybe still in use on other CPUs? If yes, I think for
> > s390 we need to flush here as well (in other words we might need to
> > introduce pmd_clear_flush). On s390 you have to use instructions like
> > CRDTE, IPTE or IDTE to modify page table entries that are still in use.
> > Otherwise you can get a delayed access exception which is - in contrast
> > to page faults - not recoverable.
>
> Just clearing an active pmd would be broken for s390. We need the equivalent
> of the ptep_get_and_clear() function for pmds. For s390 this function would
> look like this:
>
> static inline pte_t pmdp_get_and_clear(struct mm_struct *mm,
>                                        unsigned long addr, pmd_t *pmdp)
> {
>         return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
> }
>
> Just like pmdp_huge_get_and_clear() in fact.

I agree that an architecture like s390 may need additional explicit
instructions to avoid any unrecoverable failure.

The good news is that in the last patch I sent, I have put this behind an
architecture flag (HAVE_MOVE_PMD), so we don't have to enable it on
architectures that cannot handle it:
https://www.spinics.net/lists/linux-mm/msg163621.html

Also, we trigger this optimization only if the page is not a transparent
huge page, by calling pmd_trans_huge(). For regular pages, it should be
safe to skip the atomic get_and_clear, as I understand it, because Linux
doesn't use any bits from the PMD (such as the dirty bit) when THP is not
in use, and the processors I looked at (not s390) should not store anything
in those bits anyway when the page is not a huge page. I have gone through
various scenarios and read both the arm 32-bit and 64-bit manuals and the
x86 64-bit manual, and I believe it to be safe.
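
[To make the gating concrete, here is a rough sketch of how a HAVE_MOVE_PMD
opt-in could be checked from the copy loop in move_page_tables(). It assumes
the usual pattern of an architecture selecting HAVE_MOVE_PMD in its Kconfig
so that CONFIG_HAVE_MOVE_PMD is defined; this is an illustration, not the
exact hunk from the linked patch.]

/*
 * Sketch only: inside move_page_tables(), after old_pmd/new_pmd have been
 * looked up and a full PMD-sized extent is being moved. An architecture
 * opts in by selecting HAVE_MOVE_PMD in its Kconfig.
 */
	if (IS_ENABLED(CONFIG_HAVE_MOVE_PMD) && extent == PMD_SIZE &&
	    !pmd_trans_huge(*old_pmd)) {
		/* Move the whole pte table in one go and skip move_ptes(). */
		if (move_normal_pmd(vma, old_addr, new_addr, old_end,
				    old_pmd, new_pmd, &need_flush))
			continue;
	}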
For s390, let's not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
>
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate on why you feel it may not be? Are you concerned that
the PMD move interferes with the page walk?

If the tree changes while get_user_pages_fast is walking it, the number of
pages it pins may be less than the number requested. In that case,
get_user_pages_fast falls back to the slow path, which should be
synchronized with the mremap by courtesy of the mm->mmap_sem. But please
let me know the scenario you have in mind and whether I missed something.
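
[To spell out the fallback mentioned above, here is a simplified sketch of
the shape of get_user_pages_fast(): a lockless walk that may pin fewer pages
than requested, followed by a slow path that takes mmap_sem. The helpers
lockless_walk_and_pin() and slow_path_pin() are hypothetical stand-ins, and
the whole thing is a paraphrase of the general pattern, not the verbatim
mm/gup.c code of any particular release.]

/* Hypothetical helpers standing in for the real gup internals. */
int lockless_walk_and_pin(unsigned long start, int nr_pages, int write,
			  struct page **pages);
int slow_path_pin(unsigned long start, int nr_pages, int write,
		  struct page **pages);	/* takes mmap_sem for reading */

int gup_fast_sketch(unsigned long start, int nr_pages, int write,
		    struct page **pages)
{
	/*
	 * Lockless walk: if a pmd is cleared underneath us (for example by
	 * a concurrent mremap moving a pte table), we simply pin fewer
	 * pages than requested.
	 */
	int nr = lockless_walk_and_pin(start, nr_pages, write, pages);

	if (nr < nr_pages) {
		/*
		 * Slow path for the remainder; acquiring mmap_sem here is
		 * what serializes against mremap(), which holds mmap_sem
		 * for writing while it moves page tables.
		 */
		int ret = slow_path_pin(start + ((unsigned long)nr << PAGE_SHIFT),
					nr_pages - nr, write, pages + nr);
		return ret < 0 ? (nr ? nr : ret) : nr + ret;
	}
	return nr;
}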
thanks, - Joel From mboxrd@z Thu Jan 1 00:00:00 1970 Received: with ECARTIS (v1.0.0; list linux-mips); Tue, 16 Oct 2018 04:09:12 +0200 (CEST) Received: from mail-pg1-x543.google.com ([IPv6:2607:f8b0:4864:20::543]:34199 "EHLO mail-pg1-x543.google.com" rhost-flags-OK-OK-OK-OK) by eddie.linux-mips.org with ESMTP id S23990392AbeJPCJCzfS8H (ORCPT ); Tue, 16 Oct 2018 04:09:02 +0200 Received: by mail-pg1-x543.google.com with SMTP id g12-v6so10051427pgs.1 for ; Mon, 15 Oct 2018 19:09:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=joelfernandes.org; s=google; h=date:from:to:cc:subject:message-id:references:mime-version :content-disposition:in-reply-to:user-agent; bh=Cz67bHFDMvGJFAPVrnGd7nRR9VaEfFbq9lmMgtQ7ALI=; b=Ok8PoT2K5GRd0KamvoP9eqKuZhLNGS9A/TQQUFKNVaiKMTPdWZBOlKBJnP/SBNp0fB eaNg0TIhicPK/28Gv9xiSnJPwjChRHhy71Hj8UEp7IXgBHdi4gGzcrB8K+qvQHybEgT/ 4RZeGqEX+CUKeDt+Zqt+rRqDUsxY24+s4U2g8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:from:to:cc:subject:message-id:references :mime-version:content-disposition:in-reply-to:user-agent; bh=Cz67bHFDMvGJFAPVrnGd7nRR9VaEfFbq9lmMgtQ7ALI=; b=IZPBBpgbX06DQZ5bDWsGrVm45qdxKqHYckF+1AV57d1BmHb+tV6R1ijXKUrGmgt2hz 5bS+DVRPCBCUtwVswRGh6nwaV/EYyGxPTobWhu+Mpe3p2euotHc/f2hTA6JTECQlIvJz 1kA8bSJnz20U3B8JB/14RxELoECSP6L5bXBgi8YUp0ptSXAHipJxP59KgTxHyYilReh8 FNP/qg51FWPBlfNX2Ws4k2q7SnIfogMkoNZEAqSfcEaCNvtlx55ZpRKd6tnIbhdKS6MW AVRQbHVe0L8H+m4VFPNMg+apJFlNtazkW734pBAaTMMat8ywtUkYWjkBDsWU3qDXsbzt +j7w== X-Gm-Message-State: ABuFfoiTHbXvlX7tjBxBW/Yc+PIIEB7ODxjdtdO7OFOOK0OYT5fuKeVY J9WXhrHrxAWjYbY2Dt4xTORd2A== X-Google-Smtp-Source: ACcGV61tXshD0kdUz1rx/VWy95C7tPdFHs9NIxbQ7U95KgElxYyhNYgyjAz7jKjVcYQ4X5DlhOsyjw== X-Received: by 2002:a62:6643:: with SMTP id a64-v6mr19935671pfc.202.1539655735629; Mon, 15 Oct 2018 19:08:55 -0700 (PDT) Received: from localhost ([2620:0:1000:1601:3aef:314f:b9ea:889f]) by smtp.gmail.com with ESMTPSA id m15-v6sm19964319pgt.28.2018.10.15.19.08.53 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 15 Oct 2018 19:08:54 -0700 (PDT) Date: Mon, 15 Oct 2018 19:08:53 -0700 From: Joel Fernandes To: Martin Schwidefsky Cc: Christian Borntraeger , linux-kernel@vger.kernel.org, kernel-team@android.com, minchan@kernel.org, pantin@google.com, hughd@google.com, lokeshgidra@google.com, dancol@google.com, mhocko@kernel.org, kirill@shutemov.name, akpm@linux-foundation.org, Andrey Ryabinin , Andy Lutomirski , Borislav Petkov , Catalin Marinas , Chris Zankel , Dave Hansen , "David S. Miller" , elfring@users.sourceforge.net, Fenghua Yu , Geert Uytterhoeven , Guan Xuetao , Helge Deller , Ingo Molnar , "James E.J. 
Bottomley" , Jeff Dike , Jonas Bonn , Julia Lawall , kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu, Ley Foon Tan , linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@linux-mips.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org, Max Filippov , nios2-dev@lists.rocketboards.org, openrisc@lists.librecores.org, Peter Zijlstra , Richard Weinberger , Rich Felker , Sam Creasey , sparclinux@vger.kernel.org, Stafford Horne , Stefan Kristiansson , Thomas Gleixner , Tony Luck , Will Deacon , "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" , Yoshinori Sato Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> References: <20181012013756.11285-1-joel@joelfernandes.org> <20181012013756.11285-2-joel@joelfernandes.org> <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com> <20181015101814.306d257c@mschwideX1> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20181015101814.306d257c@mschwideX1> User-Agent: Mutt/1.10.1 (2018-07-13) Return-Path: X-Envelope-To: <"|/home/ecartis/ecartis -s linux-mips"> (uid 0) X-Orcpt: rfc822;linux-mips@linux-mips.org Original-Recipient: rfc822;linux-mips@linux-mips.org X-archive-position: 66860 X-ecartis-version: Ecartis v1.0.0 Sender: linux-mips-bounce@linux-mips.org Errors-to: linux-mips-bounce@linux-mips.org X-original-sender: joel@joelfernandes.org Precedence: bulk List-help: List-unsubscribe: List-software: Ecartis version 1.0.0 List-Id: linux-mips X-List-ID: linux-mips List-subscribe: List-owner: List-post: List-archive: X-list: linux-mips On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote: > On Mon, 15 Oct 2018 09:10:53 +0200 > Christian Borntraeger wrote: > > > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote: > > > Android needs to mremap large regions of memory during memory management > > > related operations. The mremap system call can be really slow if THP is > > > not enabled. The bottleneck is move_page_tables, which is copying each > > > pte at a time, and can be really slow across a large map. Turning on THP > > > may not be a viable option, and is not for us. This patch speeds up the > > > performance for non-THP system by copying at the PMD level when possible. > > > > > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap > > > completion times drops from 160-250 millesconds to 380-400 microseconds. > > > > > > Before: > > > Total mremap time for 1GB data: 242321014 nanoseconds. > > > Total mremap time for 1GB data: 196842467 nanoseconds. > > > Total mremap time for 1GB data: 167051162 nanoseconds. > > > > > > After: > > > Total mremap time for 1GB data: 385781 nanoseconds. > > > Total mremap time for 1GB data: 388959 nanoseconds. > > > Total mremap time for 1GB data: 402813 nanoseconds. > > > > > > Incase THP is enabled, the optimization is skipped. I also flush the > > > tlb every time we do this optimization since I couldn't find a way to > > > determine if the low-level PTEs are dirty. It is seen that the cost of > > > doing so is not much compared the improvement, on both x86-64 and arm64. 
> > > > > > Cc: minchan@kernel.org > > > Cc: pantin@google.com > > > Cc: hughd@google.com > > > Cc: lokeshgidra@google.com > > > Cc: dancol@google.com > > > Cc: mhocko@kernel.org > > > Cc: kirill@shutemov.name > > > Cc: akpm@linux-foundation.org > > > Signed-off-by: Joel Fernandes (Google) > > > --- > > > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 62 insertions(+) > > > > > > diff --git a/mm/mremap.c b/mm/mremap.c > > > index 9e68a02a52b1..d82c485822ef 100644 > > > --- a/mm/mremap.c > > > +++ b/mm/mremap.c > > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, > > > drop_rmap_locks(vma); > > > } > > > > > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, > > > + unsigned long new_addr, unsigned long old_end, > > > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) > > > +{ > > > + spinlock_t *old_ptl, *new_ptl; > > > + struct mm_struct *mm = vma->vm_mm; > > > + > > > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) > > > + || old_end - old_addr < PMD_SIZE) > > > + return false; > > > + > > > + /* > > > + * The destination pmd shouldn't be established, free_pgtables() > > > + * should have release it. > > > + */ > > > + if (WARN_ON(!pmd_none(*new_pmd))) > > > + return false; > > > + > > > + /* > > > + * We don't have to worry about the ordering of src and dst > > > + * ptlocks because exclusive mmap_sem prevents deadlock. > > > + */ > > > + old_ptl = pmd_lock(vma->vm_mm, old_pmd); > > > + if (old_ptl) { > > > + pmd_t pmd; > > > + > > > + new_ptl = pmd_lockptr(mm, new_pmd); > > > + if (new_ptl != old_ptl) > > > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); > > > + > > > + /* Clear the pmd */ > > > + pmd = *old_pmd; > > > + pmd_clear(old_pmd); > > > > Adding Martin Schwidefsky. > > Is this mapping maybe still in use on other CPUs? If yes, I think for > > s390 we need to flush here as well (in other word we might need to introduce > > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE > > to modify page table entries that are still in use. Otherwise you can get a > > delayed access exception which is - in contrast to page faults - not recoverable. > > Just clearing an active pmd would be broken for s390. We need the equivalent > of the ptep_get_and_clear() function for pmds. For s390 this function would > look like this: > > static inline pte_t pmdp_get_and_clear(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID)); > } > > Just like pmdp_huge_get_and_clear() in fact. I agree architecture like s390 may need additional explicit instructions to avoid any unrecoverable failure. So the good news is in my last patch I sent, I have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have to enable it with architectures that cannot handle it: https://www.spinics.net/lists/linux-mm/msg163621.html Also we are triggering this optimization only if the page is not a transparent huge page by calling pmd_trans_huge(). For regular pages, it should be safe to not do the atomic get_and_clear AIUI because Linux doesn't use any bits from the PMD like the dirty bit if THP is not in use (and the processors that I saw (not s390) should not storing anything in the bits anyway when the page is not a huge page. I have gone through various scenarios and read both arm 32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe. 
For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you? > > > + > > > + VM_BUG_ON(!pmd_none(*new_pmd)); > > > + > > > + /* Set the new pmd */ > > > + set_pmd_at(mm, new_addr, new_pmd, pmd); > > > + if (new_ptl != old_ptl) > > > + spin_unlock(new_ptl); > > > + spin_unlock(old_ptl); > > > + > > > + *need_flush = true; > > > + return true; > > > + } > > > + return false; > > > +} > > > + > > So the idea is to move the pmd entry to the new location, dragging > the whole pte table to a new location with a different address. > I wonder if that is safe in regard to get_user_pages_fast(). Could you elaborate why you feel it may not be? Are you concerned that the PMD moving interferes with the page walk? Incase the tree changes during page-walking, the number of pages pinned by get_user_pages_fast may be less than the number requested. In this case, get_user_pages_fast would fall back to the slow path which should be synchronized with the mremap by courtesy of the mm->mmap_sem. But please let me know the scenario you have in mind and if I missed something. thanks, - Joel From mboxrd@z Thu Jan 1 00:00:00 1970 From: joel@joelfernandes.org (Joel Fernandes) Date: Mon, 15 Oct 2018 19:08:53 -0700 Subject: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions In-Reply-To: <20181015101814.306d257c@mschwideX1> References: <20181012013756.11285-1-joel@joelfernandes.org> <20181012013756.11285-2-joel@joelfernandes.org> <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com> <20181015101814.306d257c@mschwideX1> Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> To: linux-riscv@lists.infradead.org List-Id: linux-riscv.lists.infradead.org On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote: > On Mon, 15 Oct 2018 09:10:53 +0200 > Christian Borntraeger wrote: > > > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote: > > > Android needs to mremap large regions of memory during memory management > > > related operations. The mremap system call can be really slow if THP is > > > not enabled. The bottleneck is move_page_tables, which is copying each > > > pte at a time, and can be really slow across a large map. Turning on THP > > > may not be a viable option, and is not for us. This patch speeds up the > > > performance for non-THP system by copying at the PMD level when possible. > > > > > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap > > > completion times drops from 160-250 millesconds to 380-400 microseconds. > > > > > > Before: > > > Total mremap time for 1GB data: 242321014 nanoseconds. > > > Total mremap time for 1GB data: 196842467 nanoseconds. > > > Total mremap time for 1GB data: 167051162 nanoseconds. > > > > > > After: > > > Total mremap time for 1GB data: 385781 nanoseconds. > > > Total mremap time for 1GB data: 388959 nanoseconds. > > > Total mremap time for 1GB data: 402813 nanoseconds. > > > > > > Incase THP is enabled, the optimization is skipped. I also flush the > > > tlb every time we do this optimization since I couldn't find a way to > > > determine if the low-level PTEs are dirty. It is seen that the cost of > > > doing so is not much compared the improvement, on both x86-64 and arm64. 
> > > > > > Cc: minchan at kernel.org > > > Cc: pantin at google.com > > > Cc: hughd at google.com > > > Cc: lokeshgidra at google.com > > > Cc: dancol at google.com > > > Cc: mhocko at kernel.org > > > Cc: kirill at shutemov.name > > > Cc: akpm at linux-foundation.org > > > Signed-off-by: Joel Fernandes (Google) > > > --- > > > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 62 insertions(+) > > > > > > diff --git a/mm/mremap.c b/mm/mremap.c > > > index 9e68a02a52b1..d82c485822ef 100644 > > > --- a/mm/mremap.c > > > +++ b/mm/mremap.c > > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, > > > drop_rmap_locks(vma); > > > } > > > > > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, > > > + unsigned long new_addr, unsigned long old_end, > > > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) > > > +{ > > > + spinlock_t *old_ptl, *new_ptl; > > > + struct mm_struct *mm = vma->vm_mm; > > > + > > > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) > > > + || old_end - old_addr < PMD_SIZE) > > > + return false; > > > + > > > + /* > > > + * The destination pmd shouldn't be established, free_pgtables() > > > + * should have release it. > > > + */ > > > + if (WARN_ON(!pmd_none(*new_pmd))) > > > + return false; > > > + > > > + /* > > > + * We don't have to worry about the ordering of src and dst > > > + * ptlocks because exclusive mmap_sem prevents deadlock. > > > + */ > > > + old_ptl = pmd_lock(vma->vm_mm, old_pmd); > > > + if (old_ptl) { > > > + pmd_t pmd; > > > + > > > + new_ptl = pmd_lockptr(mm, new_pmd); > > > + if (new_ptl != old_ptl) > > > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); > > > + > > > + /* Clear the pmd */ > > > + pmd = *old_pmd; > > > + pmd_clear(old_pmd); > > > > Adding Martin Schwidefsky. > > Is this mapping maybe still in use on other CPUs? If yes, I think for > > s390 we need to flush here as well (in other word we might need to introduce > > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE > > to modify page table entries that are still in use. Otherwise you can get a > > delayed access exception which is - in contrast to page faults - not recoverable. > > Just clearing an active pmd would be broken for s390. We need the equivalent > of the ptep_get_and_clear() function for pmds. For s390 this function would > look like this: > > static inline pte_t pmdp_get_and_clear(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID)); > } > > Just like pmdp_huge_get_and_clear() in fact. I agree architecture like s390 may need additional explicit instructions to avoid any unrecoverable failure. So the good news is in my last patch I sent, I have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have to enable it with architectures that cannot handle it: https://www.spinics.net/lists/linux-mm/msg163621.html Also we are triggering this optimization only if the page is not a transparent huge page by calling pmd_trans_huge(). For regular pages, it should be safe to not do the atomic get_and_clear AIUI because Linux doesn't use any bits from the PMD like the dirty bit if THP is not in use (and the processors that I saw (not s390) should not storing anything in the bits anyway when the page is not a huge page. I have gone through various scenarios and read both arm 32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe. 
For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you? > > > + > > > + VM_BUG_ON(!pmd_none(*new_pmd)); > > > + > > > + /* Set the new pmd */ > > > + set_pmd_at(mm, new_addr, new_pmd, pmd); > > > + if (new_ptl != old_ptl) > > > + spin_unlock(new_ptl); > > > + spin_unlock(old_ptl); > > > + > > > + *need_flush = true; > > > + return true; > > > + } > > > + return false; > > > +} > > > + > > So the idea is to move the pmd entry to the new location, dragging > the whole pte table to a new location with a different address. > I wonder if that is safe in regard to get_user_pages_fast(). Could you elaborate why you feel it may not be? Are you concerned that the PMD moving interferes with the page walk? Incase the tree changes during page-walking, the number of pages pinned by get_user_pages_fast may be less than the number requested. In this case, get_user_pages_fast would fall back to the slow path which should be synchronized with the mremap by courtesy of the mm->mmap_sem. But please let me know the scenario you have in mind and if I missed something. thanks, - Joel From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_PASS,URIBL_BLOCKED,USER_AGENT_MUTT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34880C04EB9 for ; Tue, 16 Oct 2018 02:09:42 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id F0C942089E for ; Tue, 16 Oct 2018 02:09:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="uBGd1ZcM"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=joelfernandes.org header.i=@joelfernandes.org header.b="Ok8PoT2K" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F0C942089E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=joelfernandes.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-riscv-bounces+infradead-linux-riscv=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:In-Reply-To:MIME-Version:References: Message-ID:Subject:To:From:Date:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=a/zypxRt4u9HeE4YG4/v/j0pjNQpmj0Y5dp6iNWBkaM=; b=uBGd1ZcMQXYeEm Ncmbm7nNx15mBYW4N7F2rjRd6ZoBzJy/XrTQXBR4cwxJyRYb7MYe08yUGHIDtu62dAOc+DCZ4bxmh oAWzAhNmm4anw8wrFsupfYQuS9aC1GqjD/JnXQkiyJTCPphyk2BDmX5zMOXCC3Jg5sbzs5w0hZPCx LDQYRfVbOG70BPNuKpkGY5DCKwcF7YqVkgy/PWJQgBo0AIcsOo5if4/Kq/7c4VQDvvtb7nhXpbQmj KZ/VEWkrh+NnFhaOpzGRDnXzEKUAWGLAf/Zq/p0Nu+g+rB1JpTDGvoOr+5JdsGiQ4Tjf5GOPvNsUb v32bt1aARQreTXx6LeVg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 
1gCEnh-0005kY-GE; Tue, 16 Oct 2018 02:09:37 +0000 Received: from mail-pg1-x544.google.com ([2607:f8b0:4864:20::544]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1gCEnE-0005OC-5g for linux-riscv@lists.infradead.org; Tue, 16 Oct 2018 02:09:29 +0000 Received: by mail-pg1-x544.google.com with SMTP id t70-v6so10035975pgd.12 for ; Mon, 15 Oct 2018 19:08:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=joelfernandes.org; s=google; h=date:from:to:cc:subject:message-id:references:mime-version :content-disposition:in-reply-to:user-agent; bh=Cz67bHFDMvGJFAPVrnGd7nRR9VaEfFbq9lmMgtQ7ALI=; b=Ok8PoT2K5GRd0KamvoP9eqKuZhLNGS9A/TQQUFKNVaiKMTPdWZBOlKBJnP/SBNp0fB eaNg0TIhicPK/28Gv9xiSnJPwjChRHhy71Hj8UEp7IXgBHdi4gGzcrB8K+qvQHybEgT/ 4RZeGqEX+CUKeDt+Zqt+rRqDUsxY24+s4U2g8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:from:to:cc:subject:message-id:references :mime-version:content-disposition:in-reply-to:user-agent; bh=Cz67bHFDMvGJFAPVrnGd7nRR9VaEfFbq9lmMgtQ7ALI=; b=jyccON+9pTWpaeQ47azzdqPk8kTKv6gG+SFmJWnx3iATQ//HnOFxb3hOTJOCEuGyyS /wViIASKLzEQz/3etW8uuuSeqk1XzVYEsi3Zw5jaariJ2RfIesa6ioxfLWyysYEYdHEA TReMPeeH3XCmFYE5JvayirUuUAc5hYe1ZWIsP64grcsyDpfBrnOx+qsSsc65ZuqhWqv0 u9lGioIZknZZKJgvg9eVCOM2oPcZDXmM+uNKsSfk6H62onhbCbb06VfB/8FRE2zKOYcR eOREkycdoS6ZW0pz4CKAAgerZj2gQLJUIjAfiEtmLsD5HoGrsCj3plnEm3RDBv+ULABx h7nw== X-Gm-Message-State: ABuFfog58LV1mljMp9gxfwygI66ouA+il4tYl2RweorPxyY9GUN5yEaS ZKZU1pgzFhGPHAmCe4STRkE59w== X-Google-Smtp-Source: ACcGV61tXshD0kdUz1rx/VWy95C7tPdFHs9NIxbQ7U95KgElxYyhNYgyjAz7jKjVcYQ4X5DlhOsyjw== X-Received: by 2002:a62:6643:: with SMTP id a64-v6mr19935671pfc.202.1539655735629; Mon, 15 Oct 2018 19:08:55 -0700 (PDT) Received: from localhost ([2620:0:1000:1601:3aef:314f:b9ea:889f]) by smtp.gmail.com with ESMTPSA id m15-v6sm19964319pgt.28.2018.10.15.19.08.53 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 15 Oct 2018 19:08:54 -0700 (PDT) Date: Mon, 15 Oct 2018 19:08:53 -0700 From: Joel Fernandes To: Martin Schwidefsky Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> References: <20181012013756.11285-1-joel@joelfernandes.org> <20181012013756.11285-2-joel@joelfernandes.org> <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com> <20181015101814.306d257c@mschwideX1> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20181015101814.306d257c@mschwideX1> User-Agent: Mutt/1.10.1 (2018-07-13) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20181015_190908_317437_C07725AE X-CRM114-Status: GOOD ( 36.11 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-mips@linux-mips.org, Rich Felker , linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org, Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com, sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org, elfring@users.sourceforge.net, Jonas Bonn , linux-s390@vger.kernel.org, dancol@google.com, Yoshinori Sato , linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org, Helge Deller , "maintainer:X86 ARCHITECTURE \(32-BIT AND 64-BIT\)" , hughd@google.com, "James E.J. 
Bottomley" , kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu, Christian Borntraeger , Ingo Molnar , Geert Uytterhoeven , Andrey Ryabinin , linux-snps-arc@lists.infradead.org, kernel-team@android.com, Sam Creasey , Fenghua Yu , Jeff Dike , linux-um@lists.infradead.org, Stefan Kristiansson , Julia Lawall , linux-m68k@lists.linux-m68k.org, openrisc@lists.librecores.org, Borislav Petkov , Andy Lutomirski , nios2-dev@lists.rocketboards.org, kirill@shutemov.name, Stafford Horne , Guan Xuetao , linux-arm-kernel@lists.infradead.org, Chris Zankel , Tony Luck , Richard Weinberger , linux-parisc@vger.kernel.org, pantin@google.com, Max Filippov , linux-kernel@vger.kernel.org, minchan@kernel.org, Thomas Gleixner , linux-alpha@vger.kernel.org, Ley Foon Tan , akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org, "David S. Miller" Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-riscv" Errors-To: linux-riscv-bounces+infradead-linux-riscv=archiver.kernel.org@lists.infradead.org Message-ID: <20181016020853.1r_1OBIkV5k0vt29CnHsBNWmnWNPM_0_FF2LSTIybEk@z> On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote: > On Mon, 15 Oct 2018 09:10:53 +0200 > Christian Borntraeger wrote: > > > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote: > > > Android needs to mremap large regions of memory during memory management > > > related operations. The mremap system call can be really slow if THP is > > > not enabled. The bottleneck is move_page_tables, which is copying each > > > pte at a time, and can be really slow across a large map. Turning on THP > > > may not be a viable option, and is not for us. This patch speeds up the > > > performance for non-THP system by copying at the PMD level when possible. > > > > > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap > > > completion times drops from 160-250 millesconds to 380-400 microseconds. > > > > > > Before: > > > Total mremap time for 1GB data: 242321014 nanoseconds. > > > Total mremap time for 1GB data: 196842467 nanoseconds. > > > Total mremap time for 1GB data: 167051162 nanoseconds. > > > > > > After: > > > Total mremap time for 1GB data: 385781 nanoseconds. > > > Total mremap time for 1GB data: 388959 nanoseconds. > > > Total mremap time for 1GB data: 402813 nanoseconds. > > > > > > Incase THP is enabled, the optimization is skipped. I also flush the > > > tlb every time we do this optimization since I couldn't find a way to > > > determine if the low-level PTEs are dirty. It is seen that the cost of > > > doing so is not much compared the improvement, on both x86-64 and arm64. 
> > > > > > Cc: minchan@kernel.org > > > Cc: pantin@google.com > > > Cc: hughd@google.com > > > Cc: lokeshgidra@google.com > > > Cc: dancol@google.com > > > Cc: mhocko@kernel.org > > > Cc: kirill@shutemov.name > > > Cc: akpm@linux-foundation.org > > > Signed-off-by: Joel Fernandes (Google) > > > --- > > > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 62 insertions(+) > > > > > > diff --git a/mm/mremap.c b/mm/mremap.c > > > index 9e68a02a52b1..d82c485822ef 100644 > > > --- a/mm/mremap.c > > > +++ b/mm/mremap.c > > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, > > > drop_rmap_locks(vma); > > > } > > > > > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, > > > + unsigned long new_addr, unsigned long old_end, > > > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) > > > +{ > > > + spinlock_t *old_ptl, *new_ptl; > > > + struct mm_struct *mm = vma->vm_mm; > > > + > > > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) > > > + || old_end - old_addr < PMD_SIZE) > > > + return false; > > > + > > > + /* > > > + * The destination pmd shouldn't be established, free_pgtables() > > > + * should have release it. > > > + */ > > > + if (WARN_ON(!pmd_none(*new_pmd))) > > > + return false; > > > + > > > + /* > > > + * We don't have to worry about the ordering of src and dst > > > + * ptlocks because exclusive mmap_sem prevents deadlock. > > > + */ > > > + old_ptl = pmd_lock(vma->vm_mm, old_pmd); > > > + if (old_ptl) { > > > + pmd_t pmd; > > > + > > > + new_ptl = pmd_lockptr(mm, new_pmd); > > > + if (new_ptl != old_ptl) > > > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); > > > + > > > + /* Clear the pmd */ > > > + pmd = *old_pmd; > > > + pmd_clear(old_pmd); > > > > Adding Martin Schwidefsky. > > Is this mapping maybe still in use on other CPUs? If yes, I think for > > s390 we need to flush here as well (in other word we might need to introduce > > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE > > to modify page table entries that are still in use. Otherwise you can get a > > delayed access exception which is - in contrast to page faults - not recoverable. > > Just clearing an active pmd would be broken for s390. We need the equivalent > of the ptep_get_and_clear() function for pmds. For s390 this function would > look like this: > > static inline pte_t pmdp_get_and_clear(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID)); > } > > Just like pmdp_huge_get_and_clear() in fact. I agree architecture like s390 may need additional explicit instructions to avoid any unrecoverable failure. So the good news is in my last patch I sent, I have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have to enable it with architectures that cannot handle it: https://www.spinics.net/lists/linux-mm/msg163621.html Also we are triggering this optimization only if the page is not a transparent huge page by calling pmd_trans_huge(). For regular pages, it should be safe to not do the atomic get_and_clear AIUI because Linux doesn't use any bits from the PMD like the dirty bit if THP is not in use (and the processors that I saw (not s390) should not storing anything in the bits anyway when the page is not a huge page. I have gone through various scenarios and read both arm 32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe. 
For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you? > > > + > > > + VM_BUG_ON(!pmd_none(*new_pmd)); > > > + > > > + /* Set the new pmd */ > > > + set_pmd_at(mm, new_addr, new_pmd, pmd); > > > + if (new_ptl != old_ptl) > > > + spin_unlock(new_ptl); > > > + spin_unlock(old_ptl); > > > + > > > + *need_flush = true; > > > + return true; > > > + } > > > + return false; > > > +} > > > + > > So the idea is to move the pmd entry to the new location, dragging > the whole pte table to a new location with a different address. > I wonder if that is safe in regard to get_user_pages_fast(). Could you elaborate why you feel it may not be? Are you concerned that the PMD moving interferes with the page walk? Incase the tree changes during page-walking, the number of pages pinned by get_user_pages_fast may be less than the number requested. In this case, get_user_pages_fast would fall back to the slow path which should be synchronized with the mremap by courtesy of the mm->mmap_sem. But please let me know the scenario you have in mind and if I missed something. thanks, - Joel _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.1 required=3.0 tests=DKIM_INVALID,DKIM_SIGNED, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_PASS,USER_AGENT_MUTT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B965C04EB9 for ; Tue, 16 Oct 2018 03:30:50 +0000 (UTC) Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8E71E20869 for ; Tue, 16 Oct 2018 03:30:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=joelfernandes.org header.i=@joelfernandes.org header.b="Ok8PoT2K" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8E71E20869 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=joelfernandes.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=linuxppc-dev-bounces+linuxppc-dev=archiver.kernel.org@lists.ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 42Z16b3xQfzF3Sp for ; Tue, 16 Oct 2018 14:30:47 +1100 (AEDT) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=joelfernandes.org Authentication-Results: lists.ozlabs.org; dkim=fail reason="signature verification failed" (1024-bit key; unprotected) header.d=joelfernandes.org header.i=@joelfernandes.org header.b="Ok8PoT2K"; dkim-atps=neutral Authentication-Results: lists.ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=joelfernandes.org (client-ip=2607:f8b0:4864:20::543; helo=mail-pg1-x543.google.com; envelope-from=joel@joelfernandes.org; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=none (p=none dis=none) header.from=joelfernandes.org Authentication-Results: lists.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=joelfernandes.org header.i=@joelfernandes.org header.b="Ok8PoT2K"; dkim-atps=neutral 
Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 42YzJB5KTzzF3Ds for ; Tue, 16 Oct 2018 13:08:58 +1100 (AEDT) Received: by mail-pg1-x543.google.com with SMTP id n31-v6so10045508pgm.7 for ; Mon, 15 Oct 2018 19:08:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=joelfernandes.org; s=google; h=date:from:to:cc:subject:message-id:references:mime-version :content-disposition:in-reply-to:user-agent; bh=Cz67bHFDMvGJFAPVrnGd7nRR9VaEfFbq9lmMgtQ7ALI=; b=Ok8PoT2K5GRd0KamvoP9eqKuZhLNGS9A/TQQUFKNVaiKMTPdWZBOlKBJnP/SBNp0fB eaNg0TIhicPK/28Gv9xiSnJPwjChRHhy71Hj8UEp7IXgBHdi4gGzcrB8K+qvQHybEgT/ 4RZeGqEX+CUKeDt+Zqt+rRqDUsxY24+s4U2g8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:from:to:cc:subject:message-id:references :mime-version:content-disposition:in-reply-to:user-agent; bh=Cz67bHFDMvGJFAPVrnGd7nRR9VaEfFbq9lmMgtQ7ALI=; b=mvvrNTejrOUkV4Xo3WmkPZCwjnm+T23nl27Xv3ZBjFlTX1S6JfIw3YKrkVfRUe4f2v tJRMdAXqdOWTOzde7+Xfu1qK1WkN+YqGZYl7blz6WKabbdJ6PnFag6oWmmPND8FR8hnL V+8qW8/6+ICE6+vXjBFuyGNVihPujN4WPj1tb/ik9KC/xTVgmOnlo0moEn6H0WCwiFrS tvBMBcNgbZoOEVKfRFZB/ZRLfk84dIMjo6c2PumXfjBJjSyTg/qsB4h2EObdjWmepd9Q Aenj0YOmQQRNygcs+j0hAUhprxf7mW33pJxWn5zklRbosmQDZL0o6b1eLti62KZ0byPP TISw== X-Gm-Message-State: ABuFfohst0sLqw5nKEfstgkGzsMq6/qzM6qdXcalTPuR6THzWjNcZ/Aj XGne0+Yw6uoVJSGq0787q5klbQ== X-Google-Smtp-Source: ACcGV61tXshD0kdUz1rx/VWy95C7tPdFHs9NIxbQ7U95KgElxYyhNYgyjAz7jKjVcYQ4X5DlhOsyjw== X-Received: by 2002:a62:6643:: with SMTP id a64-v6mr19935671pfc.202.1539655735629; Mon, 15 Oct 2018 19:08:55 -0700 (PDT) Received: from localhost ([2620:0:1000:1601:3aef:314f:b9ea:889f]) by smtp.gmail.com with ESMTPSA id m15-v6sm19964319pgt.28.2018.10.15.19.08.53 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 15 Oct 2018 19:08:54 -0700 (PDT) Date: Mon, 15 Oct 2018 19:08:53 -0700 From: Joel Fernandes To: Martin Schwidefsky Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> References: <20181012013756.11285-1-joel@joelfernandes.org> <20181012013756.11285-2-joel@joelfernandes.org> <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com> <20181015101814.306d257c@mschwideX1> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20181015101814.306d257c@mschwideX1> User-Agent: Mutt/1.10.1 (2018-07-13) X-Mailman-Approved-At: Tue, 16 Oct 2018 13:52:47 +1100 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-mips@linux-mips.org, Rich Felker , linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org, Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com, sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org, elfring@users.sourceforge.net, Jonas Bonn , linux-s390@vger.kernel.org, dancol@google.com, Yoshinori Sato , linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org, Helge Deller , "maintainer:X86 ARCHITECTURE \(32-BIT AND 64-BIT\)" , hughd@google.com, "James E.J. 
Bottomley" , kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu, Christian Borntraeger , Ingo Molnar , Geert Uytterhoeven , Andrey Ryabinin , linux-snps-arc@lists.infradead.org, kernel-team@android.com, Sam Creasey , Fenghua Yu , Jeff Dike , linux-um@lists.infradead.org, Stefan Kristiansson , Julia Lawall , linux-m68k@lists.linux-m68k.org, openrisc@lists.librecores.org, Borislav Petkov , Andy Lutomirski , nios2-dev@lists.rocketboards.org, kirill@shutemov.name, Stafford Horne , Guan Xuetao , linux-arm-kernel@lists.infradead.org, Chris Zankel , Tony Luck , Richard Weinberger , linux-parisc@vger.kernel.org, pantin@google.com, Max Filippov , linux-kernel@vger.kernel.org, minchan@kernel.org, Thomas Gleixner , linux-alpha@vger.kernel.org, Ley Foon Tan , akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org, "David S. Miller" Errors-To: linuxppc-dev-bounces+linuxppc-dev=archiver.kernel.org@lists.ozlabs.org Sender: "Linuxppc-dev" On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote: > On Mon, 15 Oct 2018 09:10:53 +0200 > Christian Borntraeger wrote: > > > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote: > > > Android needs to mremap large regions of memory during memory management > > > related operations. The mremap system call can be really slow if THP is > > > not enabled. The bottleneck is move_page_tables, which is copying each > > > pte at a time, and can be really slow across a large map. Turning on THP > > > may not be a viable option, and is not for us. This patch speeds up the > > > performance for non-THP system by copying at the PMD level when possible. > > > > > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap > > > completion times drops from 160-250 millesconds to 380-400 microseconds. > > > > > > Before: > > > Total mremap time for 1GB data: 242321014 nanoseconds. > > > Total mremap time for 1GB data: 196842467 nanoseconds. > > > Total mremap time for 1GB data: 167051162 nanoseconds. > > > > > > After: > > > Total mremap time for 1GB data: 385781 nanoseconds. > > > Total mremap time for 1GB data: 388959 nanoseconds. > > > Total mremap time for 1GB data: 402813 nanoseconds. > > > > > > Incase THP is enabled, the optimization is skipped. I also flush the > > > tlb every time we do this optimization since I couldn't find a way to > > > determine if the low-level PTEs are dirty. It is seen that the cost of > > > doing so is not much compared the improvement, on both x86-64 and arm64. 
> > > > > > Cc: minchan@kernel.org > > > Cc: pantin@google.com > > > Cc: hughd@google.com > > > Cc: lokeshgidra@google.com > > > Cc: dancol@google.com > > > Cc: mhocko@kernel.org > > > Cc: kirill@shutemov.name > > > Cc: akpm@linux-foundation.org > > > Signed-off-by: Joel Fernandes (Google) > > > --- > > > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 62 insertions(+) > > > > > > diff --git a/mm/mremap.c b/mm/mremap.c > > > index 9e68a02a52b1..d82c485822ef 100644 > > > --- a/mm/mremap.c > > > +++ b/mm/mremap.c > > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, > > > drop_rmap_locks(vma); > > > } > > > > > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, > > > + unsigned long new_addr, unsigned long old_end, > > > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) > > > +{ > > > + spinlock_t *old_ptl, *new_ptl; > > > + struct mm_struct *mm = vma->vm_mm; > > > + > > > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) > > > + || old_end - old_addr < PMD_SIZE) > > > + return false; > > > + > > > + /* > > > + * The destination pmd shouldn't be established, free_pgtables() > > > + * should have release it. > > > + */ > > > + if (WARN_ON(!pmd_none(*new_pmd))) > > > + return false; > > > + > > > + /* > > > + * We don't have to worry about the ordering of src and dst > > > + * ptlocks because exclusive mmap_sem prevents deadlock. > > > + */ > > > + old_ptl = pmd_lock(vma->vm_mm, old_pmd); > > > + if (old_ptl) { > > > + pmd_t pmd; > > > + > > > + new_ptl = pmd_lockptr(mm, new_pmd); > > > + if (new_ptl != old_ptl) > > > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); > > > + > > > + /* Clear the pmd */ > > > + pmd = *old_pmd; > > > + pmd_clear(old_pmd); > > > > Adding Martin Schwidefsky. > > Is this mapping maybe still in use on other CPUs? If yes, I think for > > s390 we need to flush here as well (in other word we might need to introduce > > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE > > to modify page table entries that are still in use. Otherwise you can get a > > delayed access exception which is - in contrast to page faults - not recoverable. > > Just clearing an active pmd would be broken for s390. We need the equivalent > of the ptep_get_and_clear() function for pmds. For s390 this function would > look like this: > > static inline pte_t pmdp_get_and_clear(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID)); > } > > Just like pmdp_huge_get_and_clear() in fact. I agree architecture like s390 may need additional explicit instructions to avoid any unrecoverable failure. So the good news is in my last patch I sent, I have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have to enable it with architectures that cannot handle it: https://www.spinics.net/lists/linux-mm/msg163621.html Also we are triggering this optimization only if the page is not a transparent huge page by calling pmd_trans_huge(). For regular pages, it should be safe to not do the atomic get_and_clear AIUI because Linux doesn't use any bits from the PMD like the dirty bit if THP is not in use (and the processors that I saw (not s390) should not storing anything in the bits anyway when the page is not a huge page. I have gone through various scenarios and read both arm 32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe. 
For s390, let's not set the HAVE_MOVE_PMD flag. Does that work for you?

> > > +
> > > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > > +
> > > +		/* Set the new pmd */
> > > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > > +		if (new_ptl != old_ptl)
> > > +			spin_unlock(new_ptl);
> > > +		spin_unlock(old_ptl);
> > > +
> > > +		*need_flush = true;
> > > +		return true;
> > > +	}
> > > +	return false;
> > > +}
> > > +
>
> So the idea is to move the pmd entry to the new location, dragging
> the whole pte table to a new location with a different address.
> I wonder if that is safe in regard to get_user_pages_fast().

Could you elaborate on why you feel it may not be? Are you concerned that the
PMD moving interferes with the page walk?

In case the tree changes during page walking, the number of pages pinned by
get_user_pages_fast may be less than the number requested. In that case,
get_user_pages_fast falls back to the slow path, which should be synchronized
with the mremap courtesy of the mm->mmap_sem. But please let me know the
scenario you have in mind and whether I missed something.

thanks,

- Joel
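The slow path referred to here is the fallback inside the generic
get_user_pages_fast() itself: the lockless walk pins whatever it can with
interrupts disabled, and anything left over is retried through the regular GUP
path, which takes mm->mmap_sem and therefore serializes against mremap(). A
simplified sketch of that shape, from memory rather than the exact mm/gup.c of
this kernel:

int get_user_pages_fast(unsigned long start, int nr_pages, int write,
			struct page **pages)
{
	unsigned long addr = start & PAGE_MASK;
	unsigned long end = addr + ((unsigned long)nr_pages << PAGE_SHIFT);
	int nr = 0, ret;

	/* Lockless walk: pin what can be pinned, skip what cannot. */
	local_irq_disable();
	gup_pgd_range(addr, end, write, pages, &nr);
	local_irq_enable();
	ret = nr;

	if (nr < nr_pages) {
		/*
		 * Slow path for the remainder; get_user_pages_unlocked()
		 * takes mm->mmap_sem, so it cannot run concurrently with
		 * an mremap() of the same mm.
		 */
		ret = get_user_pages_unlocked(start + ((unsigned long)nr << PAGE_SHIFT),
					      nr_pages - nr, pages + nr,
					      write ? FOLL_WRITE : 0);
		if (nr > 0)
			ret = (ret < 0) ? nr : ret + nr;
	}
	return ret;
}

In that structure, a PMD that moves mid-walk just means nr comes back short and
the remainder is re-resolved under the lock.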
> > > > > > Cc: minchan at kernel.org > > > Cc: pantin at google.com > > > Cc: hughd at google.com > > > Cc: lokeshgidra at google.com > > > Cc: dancol at google.com > > > Cc: mhocko at kernel.org > > > Cc: kirill at shutemov.name > > > Cc: akpm at linux-foundation.org > > > Signed-off-by: Joel Fernandes (Google) > > > --- > > > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 62 insertions(+) > > > > > > diff --git a/mm/mremap.c b/mm/mremap.c > > > index 9e68a02a52b1..d82c485822ef 100644 > > > --- a/mm/mremap.c > > > +++ b/mm/mremap.c > > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, > > > drop_rmap_locks(vma); > > > } > > > > > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, > > > + unsigned long new_addr, unsigned long old_end, > > > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) > > > +{ > > > + spinlock_t *old_ptl, *new_ptl; > > > + struct mm_struct *mm = vma->vm_mm; > > > + > > > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) > > > + || old_end - old_addr < PMD_SIZE) > > > + return false; > > > + > > > + /* > > > + * The destination pmd shouldn't be established, free_pgtables() > > > + * should have release it. > > > + */ > > > + if (WARN_ON(!pmd_none(*new_pmd))) > > > + return false; > > > + > > > + /* > > > + * We don't have to worry about the ordering of src and dst > > > + * ptlocks because exclusive mmap_sem prevents deadlock. > > > + */ > > > + old_ptl = pmd_lock(vma->vm_mm, old_pmd); > > > + if (old_ptl) { > > > + pmd_t pmd; > > > + > > > + new_ptl = pmd_lockptr(mm, new_pmd); > > > + if (new_ptl != old_ptl) > > > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); > > > + > > > + /* Clear the pmd */ > > > + pmd = *old_pmd; > > > + pmd_clear(old_pmd); > > > > Adding Martin Schwidefsky. > > Is this mapping maybe still in use on other CPUs? If yes, I think for > > s390 we need to flush here as well (in other word we might need to introduce > > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE > > to modify page table entries that are still in use. Otherwise you can get a > > delayed access exception which is - in contrast to page faults - not recoverable. > > Just clearing an active pmd would be broken for s390. We need the equivalent > of the ptep_get_and_clear() function for pmds. For s390 this function would > look like this: > > static inline pte_t pmdp_get_and_clear(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID)); > } > > Just like pmdp_huge_get_and_clear() in fact. I agree architecture like s390 may need additional explicit instructions to avoid any unrecoverable failure. So the good news is in my last patch I sent, I have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have to enable it with architectures that cannot handle it: https://www.spinics.net/lists/linux-mm/msg163621.html Also we are triggering this optimization only if the page is not a transparent huge page by calling pmd_trans_huge(). For regular pages, it should be safe to not do the atomic get_and_clear AIUI because Linux doesn't use any bits from the PMD like the dirty bit if THP is not in use (and the processors that I saw (not s390) should not storing anything in the bits anyway when the page is not a huge page. I have gone through various scenarios and read both arm 32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe. 
For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you? > > > + > > > + VM_BUG_ON(!pmd_none(*new_pmd)); > > > + > > > + /* Set the new pmd */ > > > + set_pmd_at(mm, new_addr, new_pmd, pmd); > > > + if (new_ptl != old_ptl) > > > + spin_unlock(new_ptl); > > > + spin_unlock(old_ptl); > > > + > > > + *need_flush = true; > > > + return true; > > > + } > > > + return false; > > > +} > > > + > > So the idea is to move the pmd entry to the new location, dragging > the whole pte table to a new location with a different address. > I wonder if that is safe in regard to get_user_pages_fast(). Could you elaborate why you feel it may not be? Are you concerned that the PMD moving interferes with the page walk? Incase the tree changes during page-walking, the number of pages pinned by get_user_pages_fast may be less than the number requested. In this case, get_user_pages_fast would fall back to the slow path which should be synchronized with the mremap by courtesy of the mm->mmap_sem. But please let me know the scenario you have in mind and if I missed something. thanks, - Joel From mboxrd@z Thu Jan 1 00:00:00 1970 From: Joel Fernandes Date: Mon, 15 Oct 2018 19:08:53 -0700 Subject: [OpenRISC] [PATCH v2 2/2] mm: speed up mremap by 500x on large regions In-Reply-To: <20181015101814.306d257c@mschwideX1> References: <20181012013756.11285-1-joel@joelfernandes.org> <20181012013756.11285-2-joel@joelfernandes.org> <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com> <20181015101814.306d257c@mschwideX1> Message-ID: <20181016020853.GA56701@joelaf.mtv.corp.google.com> List-Id: MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit To: openrisc@lists.librecores.org On Mon, Oct 15, 2018 at 10:18:14AM +0200, Martin Schwidefsky wrote: > On Mon, 15 Oct 2018 09:10:53 +0200 > Christian Borntraeger wrote: > > > On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote: > > > Android needs to mremap large regions of memory during memory management > > > related operations. The mremap system call can be really slow if THP is > > > not enabled. The bottleneck is move_page_tables, which is copying each > > > pte at a time, and can be really slow across a large map. Turning on THP > > > may not be a viable option, and is not for us. This patch speeds up the > > > performance for non-THP system by copying at the PMD level when possible. > > > > > > The speed up is three orders of magnitude. On a 1GB mremap, the mremap > > > completion times drops from 160-250 millesconds to 380-400 microseconds. > > > > > > Before: > > > Total mremap time for 1GB data: 242321014 nanoseconds. > > > Total mremap time for 1GB data: 196842467 nanoseconds. > > > Total mremap time for 1GB data: 167051162 nanoseconds. > > > > > > After: > > > Total mremap time for 1GB data: 385781 nanoseconds. > > > Total mremap time for 1GB data: 388959 nanoseconds. > > > Total mremap time for 1GB data: 402813 nanoseconds. > > > > > > Incase THP is enabled, the optimization is skipped. I also flush the > > > tlb every time we do this optimization since I couldn't find a way to > > > determine if the low-level PTEs are dirty. It is seen that the cost of > > > doing so is not much compared the improvement, on both x86-64 and arm64. 
> > > > > > Cc: minchan at kernel.org > > > Cc: pantin at google.com > > > Cc: hughd at google.com > > > Cc: lokeshgidra at google.com > > > Cc: dancol at google.com > > > Cc: mhocko at kernel.org > > > Cc: kirill at shutemov.name > > > Cc: akpm at linux-foundation.org > > > Signed-off-by: Joel Fernandes (Google) > > > --- > > > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 62 insertions(+) > > > > > > diff --git a/mm/mremap.c b/mm/mremap.c > > > index 9e68a02a52b1..d82c485822ef 100644 > > > --- a/mm/mremap.c > > > +++ b/mm/mremap.c > > > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd, > > > drop_rmap_locks(vma); > > > } > > > > > > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, > > > + unsigned long new_addr, unsigned long old_end, > > > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush) > > > +{ > > > + spinlock_t *old_ptl, *new_ptl; > > > + struct mm_struct *mm = vma->vm_mm; > > > + > > > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) > > > + || old_end - old_addr < PMD_SIZE) > > > + return false; > > > + > > > + /* > > > + * The destination pmd shouldn't be established, free_pgtables() > > > + * should have release it. > > > + */ > > > + if (WARN_ON(!pmd_none(*new_pmd))) > > > + return false; > > > + > > > + /* > > > + * We don't have to worry about the ordering of src and dst > > > + * ptlocks because exclusive mmap_sem prevents deadlock. > > > + */ > > > + old_ptl = pmd_lock(vma->vm_mm, old_pmd); > > > + if (old_ptl) { > > > + pmd_t pmd; > > > + > > > + new_ptl = pmd_lockptr(mm, new_pmd); > > > + if (new_ptl != old_ptl) > > > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); > > > + > > > + /* Clear the pmd */ > > > + pmd = *old_pmd; > > > + pmd_clear(old_pmd); > > > > Adding Martin Schwidefsky. > > Is this mapping maybe still in use on other CPUs? If yes, I think for > > s390 we need to flush here as well (in other word we might need to introduce > > pmd_clear_flush). On s390 you have to use instructions like CRDTE,IPTE or IDTE > > to modify page table entries that are still in use. Otherwise you can get a > > delayed access exception which is - in contrast to page faults - not recoverable. > > Just clearing an active pmd would be broken for s390. We need the equivalent > of the ptep_get_and_clear() function for pmds. For s390 this function would > look like this: > > static inline pte_t pmdp_get_and_clear(struct mm_struct *mm, > unsigned long addr, pmd_t *pmdp) > { > return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID)); > } > > Just like pmdp_huge_get_and_clear() in fact. I agree architecture like s390 may need additional explicit instructions to avoid any unrecoverable failure. So the good news is in my last patch I sent, I have put this behind an architecture flag (HAVE_MOVE_PMD), so we don't have to enable it with architectures that cannot handle it: https://www.spinics.net/lists/linux-mm/msg163621.html Also we are triggering this optimization only if the page is not a transparent huge page by calling pmd_trans_huge(). For regular pages, it should be safe to not do the atomic get_and_clear AIUI because Linux doesn't use any bits from the PMD like the dirty bit if THP is not in use (and the processors that I saw (not s390) should not storing anything in the bits anyway when the page is not a huge page. I have gone through various scenarios and read both arm 32-bit and 64-bit and x86 64-bit manuals, and I believe it to be safe. 
For s390, lets not set the HAVE_MOVE_PMD flag. Does that work for you? > > > + > > > + VM_BUG_ON(!pmd_none(*new_pmd)); > > > + > > > + /* Set the new pmd */ > > > + set_pmd_at(mm, new_addr, new_pmd, pmd); > > > + if (new_ptl != old_ptl) > > > + spin_unlock(new_ptl); > > > + spin_unlock(old_ptl); > > > + > > > + *need_flush = true; > > > + return true; > > > + } > > > + return false; > > > +} > > > + > > So the idea is to move the pmd entry to the new location, dragging > the whole pte table to a new location with a different address. > I wonder if that is safe in regard to get_user_pages_fast(). Could you elaborate why you feel it may not be? Are you concerned that the PMD moving interferes with the page walk? Incase the tree changes during page-walking, the number of pages pinned by get_user_pages_fast may be less than the number requested. In this case, get_user_pages_fast would fall back to the slow path which should be synchronized with the mremap by courtesy of the mm->mmap_sem. But please let me know the scenario you have in mind and if I missed something. thanks, - Joel