Date: Thu, 15 Oct 2020 11:55:43 +0100
From: Will Deacon
To: Kalesh Singh
Cc: surenb@google.com, minchan@google.com, joelaf@google.com,
	lokeshgidra@google.com, kernel-team@android.com,
	"Kirill A . Shutemov", Catalin Marinas, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
	"H. Peter Anvin", Shuah Khan, Peter Zijlstra, Kees Cook,
	"Aneesh Kumar K.V", Sami Tolvanen, Masahiro Yamada,
	Josh Poimboeuf, Frederic Weisbecker, Krzysztof Kozlowski,
	Hassan Naveed, Arnd Bergmann, Christian Brauner,
	Anshuman Khandual, Mark Brown, Gavin Shan, Mike Rapoport,
	Steven Price, Jia He, John Hubbard, Mike Kravetz,
	Greg Kroah-Hartman, Ram Pai, Mina Almasry, Ralph Campbell,
	Sandipan Das, Dave Hansen, Masami Hiramatsu, Jason Gunthorpe,
	Brian Geffon, SeongJae Park, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org
Subject: Re: [PATCH v4 2/5] arm64: mremap speedup - Enable HAVE_MOVE_PMD
Message-ID: <20201015105542.GA5110@willie-the-truck>
References: <20201014005320.2233162-1-kaleshsingh@google.com>
	<20201014005320.2233162-3-kaleshsingh@google.com>
In-Reply-To: <20201014005320.2233162-3-kaleshsingh@google.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

On Wed, Oct 14, 2020 at 12:53:07AM +0000, Kalesh Singh wrote:
> HAVE_MOVE_PMD enables remapping pages at the PMD level if both the
> source and destination addresses are PMD-aligned.
>
> HAVE_MOVE_PMD is already enabled on x86. The original patch [1] that
> introduced this config did not enable it on arm64 at the time because
> of performance issues with flushing the TLB on every PMD move.
> These issues have since been addressed in more recent releases with
> improvements to the arm64 TLB invalidation and core mmu_gather code,
> as Will Deacon mentioned in [2].
>
> From the data below, it can be inferred that there is an approximately
> 8x performance improvement when HAVE_MOVE_PMD is enabled on arm64.
>
> --------- Test Results ----------
>
> The following results were obtained on an arm64 device running a 5.4
> kernel, by remapping a PMD-aligned, 1GB-sized region to a PMD-aligned
> destination. The results from 10 iterations of the test are given
> below. All times are in nanoseconds.
>
>   Control      HAVE_MOVE_PMD
>
>   9220833      1247761
>   9002552      1219896
>   9254115      1094792
>   8725885      1227760
>   9308646      1043698
>   9001667      1101771
>   8793385      1159896
>   8774636      1143594
>   9553125      1025833
>   9374010      1078125
>
>   9100885.4    1134312.6    <-- Mean time in nanoseconds
>
> The total mremap time for a 1GB-sized, PMD-aligned region drops from
> ~9.1 milliseconds to ~1.1 milliseconds (an ~8x speedup).
>
> [1] https://lore.kernel.org/r/20181108181201.88826-3-joelaf@google.com
> [2] https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg140837.html
>
> Signed-off-by: Kalesh Singh
> Acked-by: Kirill A. Shutemov
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Andrew Morton
> ---
> Changes in v4:
>   - Add Kirill's Acked-by.

Argh, I thought we had already enabled this for PMDs back in 2018! It
looks like we forgot to actually do that after I improved the
performance of the TLB invalidation. I'll pick this one patch up for
5.10.

Will