From mboxrd@z Thu Jan  1 00:00:00 1970
From: xhao@linux.alibaba.com
Reply-To: xhao@linux.alibaba.com
Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH
To: Yicong Yang, Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Linux-MM, LAK, x86, Catalin Marinas, Will Deacon,
 Linux Doc Mailing List, Jonathan Corbet, Arnd Bergmann, LKML, Darren Hart,
 huzhanyuan@oppo.com, 李培锋(wink), 张诗明(Simon Zhang), 郭健, real mz,
 linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, yangyicong@hisilicon.com, "tiantao (H)"
References: <20220711034615.482895-1-21cnbao@gmail.com>
 <24f5e25b-3946-b92a-975b-c34688005398@linux.alibaba.com>
 <8e603deb-7023-5de5-c958-8911971aec24@huawei.com>
Message-ID: <4ee2d2ec-1c75-64af-2a75-761ea7e52c08@linux.alibaba.com>
Date: Sat, 23 Jul 2022 17:17:01 +0800
In-Reply-To: <8e603deb-7023-5de5-c958-8911971aec24@huawei.com>

On 7/18/22 9:28 PM, Yicong Yang wrote:
> On 2022/7/14 12:51, Barry Song wrote:
>> On Thu, Jul 14, 2022 at 3:29 PM Xin Hao wrote:
>>> Hi Barry,
>>>
>>> I did some tests on a Kunpeng arm64 machine using Unixbench.
>>>
>>> The test results are below.
>>>
>>> With one core, we can see a performance improvement above +30%.
>> I am really pleased to see the 30%+ improvement on unixbench on a single core.
>>
>>> ./Run -c 1 -i 1 shell1
>>> w/o
>>> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
>>> Shell Scripts (1 concurrent)                     42.4       5481.0   1292.7
>>>                                                                    ========
>>> System Benchmarks Index Score (Partial Only)                         1292.7
>>>
>>> w/
>>> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
>>> Shell Scripts (1 concurrent)                     42.4       6974.6   1645.0
>>>                                                                    ========
>>> System Benchmarks Index Score (Partial Only)                         1645.0
>>>
>>>
>>> But with all cores, there is a small performance degradation, about -5%.
>> That is sad, as we might get more concurrency between mprotect(), madvise(),
>> mremap(), zap_pte_range() and the deferred tlbi.
>>
>>> ./Run -c 96 -i 1 shell1
>>> w/o
>>> Shell Scripts (1 concurrent)                  80765.5 lpm   (60.0 s, 1 samples)
>>> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
>>> Shell Scripts (1 concurrent)                     42.4      80765.5  19048.5
>>>                                                                    ========
>>> System Benchmarks Index Score (Partial Only)                        19048.5
>>>
>>> w/
>>> Shell Scripts (1 concurrent)                  76333.6 lpm   (60.0 s, 1 samples)
>>> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
>>> Shell Scripts (1 concurrent)                     42.4      76333.6  18003.2
>>>                                                                    ========
>>> System Benchmarks Index Score (Partial Only)                        18003.2
>>>
>>> ------------------------------------------------------------------------------
>>>
>>> After discussing with you, I made some changes to the patch:
>>>
>>> index a52381a680db..1ecba81f1277 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>>>  	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>>>
>>>  	if (pending != flushed) {
>>> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
>>>  		flush_tlb_mm(mm);
>>> +#else
>>> +		dsb(ish);
>>> +#endif
>>>
>> I was guessing the problem might be flush_tlb_batched_pending(),
>> so I asked you to change this to verify my guess.
>>
> flush_tlb_batched_pending() looks like the critical path for this issue, so the
> code above can mitigate it.
>
> I cannot reproduce this on a 2P 128C Kunpeng920 server. The kernel is based on
> v5.19-rc6 and unixbench is of version 5.1.3.
> The result of `./Run -c 128 -i 1 shell1` is:
>            iter-1     iter-2     iter-3
> w/o       17708.1    17637.1    17630.1
> w         17766.0    17752.3    17861.7
>
> And flush_tlb_batched_pending() isn't the hot spot with the patch:
>    7.00%  sh        [kernel.kallsyms]  [k] ptep_clear_flush
>    4.17%  sh        [kernel.kallsyms]  [k] ptep_set_access_flags
>    2.43%  multi.sh  [kernel.kallsyms]  [k] ptep_clear_flush
>    1.98%  sh        [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
>    1.69%  sh        [kernel.kallsyms]  [k] next_uptodate_page
>    1.66%  sort      [kernel.kallsyms]  [k] ptep_clear_flush
>    1.56%  multi.sh  [kernel.kallsyms]  [k] ptep_set_access_flags
>    1.27%  sh        [kernel.kallsyms]  [k] page_counter_cancel
>    1.11%  sh        [kernel.kallsyms]  [k] page_remove_rmap
>    1.06%  sh        [kernel.kallsyms]  [k] perf_event_alloc
>
> Hi Xin Hao,
>
> I'm not sure the test setup as well as the config is the same as yours (96C vs
> 128C should not be the reason, I think). Did you check whether the 5% is a
> fluctuation or not? It would be helpful if more information were provided for
> reproducing this issue.

Yes, it is not always a 5% reduction; there is some fluctuation.

>
> Thanks.
>
>>>  		/*
>>>  		 * If the new TLB flushing is pending during flushing, leave
>>>  		 * mm->tlb_flush_batched as is, to avoid losing flushing.
>>>
>>> With all cores, there is a performance improvement, above +30%.
>> But I don't think it is a proper patch. There is no guarantee that the cpu calling
>> flush_tlb_batched_pending() is exactly the cpu sending the deferred
>> tlbi, so the solution is unsafe. But since this temporary code can bring the
>> 30%+ performance improvement back for high concurrency, we have huge
>> potential to finally make it.
>>
>> Unfortunately I don't have an arm64 server to debug this on. I only have
>> 8 cores, which are unlikely to reproduce the regression which happens in
>> high concurrency with 96 parallel tasks.
>>
>> So I'd ask if @yicong or someone else working on Kunpeng or other
>> arm64 servers is able to actually debug and figure out a proper
>> patch for this, then add the patch as 5/5 into this series?
>>
>>> ./Run -c 96 -i 1 shell1
>>> 96 CPUs in system; running 96 parallel copies of tests
>>>
>>> Shell Scripts (1 concurrent)                 109229.0 lpm   (60.0 s, 1 samples)
>>> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
>>> Shell Scripts (1 concurrent)                     42.4     109229.0  25761.6
>>>                                                                    ========
>>> System Benchmarks Index Score (Partial Only)                        25761.6
>>>
>>>
>>> Tested-by: Xin Hao
>> Thanks for your testing!
>>
>>> Looking forward to your next version of the patch.
>>>
>>> On 7/11/22 11:46 AM, Barry Song wrote:
>>>> Though ARM64 has the hardware to do tlb shootdown, the hardware
>>>> broadcasting is not free. A simple micro benchmark shows that even on
>>>> snapdragon 888 with only 8 cores, the overhead of ptep_clear_flush is
>>>> huge, even for paging out one page mapped by only one process:
>>>>
>>>> 5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush
>>>>
>>>> While pages are mapped by multiple processes, or the HW has more CPUs,
>>>> the cost should become even higher due to the bad scalability of
>>>> tlb shootdown.
>>>>
>>>> The same benchmark can result in 16.99% CPU consumption on an ARM64
>>>> server with around 100 cores, according to Yicong's test on patch
>>>> 4/4.
>>>>
>>>> This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by
>>>> 1. only sending tlbi instructions in the first stage -
>>>>    arch_tlbbatch_add_mm();
>>>> 2. waiting for the completion of tlbi by dsb while doing tlbbatch
>>>>    sync in arch_tlbbatch_flush().
>>>>
>>>> My testing on snapdragon shows the overhead of ptep_clear_flush
>>>> is removed by the patchset. The micro benchmark becomes 5% faster
>>>> even for one page mapped by a single process on snapdragon 888.
>>>>
>>>> -v2:
>>>> 1. Collected Yicong's test result on the kunpeng920 ARM64 server;
>>>> 2.
>>>> Removed the redundant vma parameter in arch_tlbbatch_add_mm(),
>>>>    according to the comments of Peter Zijlstra and Dave Hansen;
>>>> 3. Added ARCH_HAS_MM_CPUMASK rather than checking if mm_cpumask
>>>>    is empty, according to the comments of Nadav Amit.
>>>>
>>>> Thanks, Yicong, Peter, Dave and Nadav, for your testing, reviewing
>>>> and comments.
>>>>
>>>> -v1:
>>>> https://lore.kernel.org/lkml/20220707125242.425242-1-21cnbao@gmail.com/
>>>>
>>>> Barry Song (4):
>>>>   Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
>>>>     apply to ARM64"
>>>>   mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
>>>>   mm: rmap: Extend tlbbatch APIs to fit new platforms
>>>>   arm64: support batched/deferred tlb shootdown during page reclamation
>>>>
>>>>  Documentation/features/arch-support.txt       |  1 -
>>>>  .../features/vm/TLB/arch-support.txt          |  2 +-
>>>>  arch/arm/Kconfig                              |  1 +
>>>>  arch/arm64/Kconfig                            |  1 +
>>>>  arch/arm64/include/asm/tlbbatch.h             | 12 ++++++++++
>>>>  arch/arm64/include/asm/tlbflush.h             | 23 +++++++++++++++++--
>>>>  arch/loongarch/Kconfig                        |  1 +
>>>>  arch/mips/Kconfig                             |  1 +
>>>>  arch/openrisc/Kconfig                         |  1 +
>>>>  arch/powerpc/Kconfig                          |  1 +
>>>>  arch/riscv/Kconfig                            |  1 +
>>>>  arch/s390/Kconfig                             |  1 +
>>>>  arch/um/Kconfig                               |  1 +
>>>>  arch/x86/Kconfig                              |  1 +
>>>>  arch/x86/include/asm/tlbflush.h               |  3 ++-
>>>>  mm/Kconfig                                    |  3 +++
>>>>  mm/rmap.c                                     | 14 +++++++----
>>>>  17 files changed, 59 insertions(+), 9 deletions(-)
>>>>  create mode 100644 arch/arm64/include/asm/tlbbatch.h
>>>>
>>> --
>>> Best Regards!
>>> Xin Hao
>>>
>> Thanks
>> Barry
>> .

-- 
Best Regards!
Xin Hao