From: Barry Song <21cnbao@gmail.com>
To: xhao@linux.alibaba.com
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,
	LAK <linux-arm-kernel@lists.infradead.org>, x86 <x86@kernel.org>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Will Deacon" <will@kernel.org>,
	"Linux Doc Mailing List" <linux-doc@vger.kernel.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Arnd Bergmann" <arnd@arndb.de>,
	LKML <linux-kernel@vger.kernel.org>,
	"Darren Hart" <darren@os.amperecomputing.com>,
	"Yicong Yang" <yangyicong@hisilicon.com>,
	huzhanyuan@oppo.com, "李培锋(wink)" <lipeifeng@oppo.com>,
	"张诗明(Simon Zhang)" <zhangshiming@oppo.com>, 郭健 <guojian@oppo.com>,
	"real mz" <realmz6@gmail.com>,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org
Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH
Date: Thu, 14 Jul 2022 16:51:45 +1200	[thread overview]
Message-ID: <CAGsJ_4zjnmQV6LT3yo--K-qD-92=hBmgfK121=n-Y0oEFX8RnQ@mail.gmail.com> (raw)
In-Reply-To: <24f5e25b-3946-b92a-975b-c34688005398@linux.alibaba.com>

On Thu, Jul 14, 2022 at 3:29 PM Xin Hao <xhao@linux.alibaba.com> wrote:
>
> Hi Barry,
>
> I did some tests on a Kunpeng arm64 machine using UnixBench.
>
> The test results are as below.
>
> With one core, we can see a performance improvement above +30%.

I am really pleased to see the 30%+ improvement on UnixBench on a single core.

> ./Run -c 1 -i 1 shell1
> w/o
> System Benchmarks Partial Index              BASELINE RESULT INDEX
> Shell Scripts (1 concurrent)                     42.4 5481.0 1292.7
> ========
> System Benchmarks Index Score (Partial Only)                         1292.7
>
> w/
> System Benchmarks Partial Index              BASELINE RESULT INDEX
> Shell Scripts (1 concurrent)                     42.4 6974.6 1645.0
> ========
> System Benchmarks Index Score (Partial Only)                         1645.0
>
>
> But with all cores, there is a slight performance degradation of about -5%.

That is sad, as we might get more concurrency between mprotect(), madvise(),
mremap(), zap_pte_range() and the deferred tlbi.
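
All of those paths have to resolve any deferred flush before they touch
the page table again. A minimal sketch of the shape of such a caller
(simplified for illustration; not the exact mm/ code):

	/* e.g. the zap_pte_range()/change_pte_range()-style paths */
	static void pte_modify_path_sketch(struct mm_struct *mm)
	{
		/*
		 * If reclaim has deferred a TLB flush for this mm, make
		 * it complete before any PTE is modified, so no stale
		 * translation can be used after the PTE change.
		 */
		flush_tlb_batched_pending(mm);

		/* ... modify PTEs under the page table lock ... */
	}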

>
> ./Run -c 96 -i 1 shell1
> w/o
> Shell Scripts (1 concurrent)                  80765.5 lpm   (60.0 s, 1 samples)
> System Benchmarks Partial Index              BASELINE RESULT INDEX
> Shell Scripts (1 concurrent)                     42.4 80765.5 19048.5
> ========
> System Benchmarks Index Score (Partial Only)                        19048.5
>
> w/
> Shell Scripts (1 concurrent)                  76333.6 lpm   (60.0 s, 1 samples)
> System Benchmarks Partial Index              BASELINE RESULT INDEX
> Shell Scripts (1 concurrent)                     42.4 76333.6 18003.2
> ========
> System Benchmarks Index Score (Partial Only)                        18003.2
>
> ----------------------------------------------------------------------------------------------
>
>
> After discussing with you, I made some changes to the patch:
>
> index a52381a680db..1ecba81f1277 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>          int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>
>          if (pending != flushed) {
> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
>                  flush_tlb_mm(mm);
> +#else
> +               dsb(ish);
> +#endif
>

I was guessing that the problem might be flush_tlb_batched_pending(),
so I asked you to change this to verify my guess.
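
For reference, CONFIG_ARCH_HAS_MM_CPUMASK is the new symbol from patch
2/4. Judging by the cover letter and the diffstat, it presumably amounts
to something like the below, with each architecture that really maintains
mm_cpumask selecting it, so the #else branch above is what arm64 takes:

	# mm/Kconfig
	config ARCH_HAS_MM_CPUMASK
		bool

	# arch/x86/Kconfig (likewise for the other mm_cpumask arches)
	config X86
		...
		select ARCH_HAS_MM_CPUMASK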

>                  /*
>                   * If the new TLB flushing is pending during flushing, leave
>                   * mm->tlb_flush_batched as is, to avoid losing flushing.
>
> there is a performance improvement with all cores, above +30%.

But I don't think it is a proper patch. There is no guarantee that the CPU
calling flush_tlb_batched_pending() is exactly the CPU that issued the
deferred tlbi, so the solution is unsafe. But since this temporary code can
bring the 30%+ performance improvement back under high concurrency, we have
huge potential to finally make it work.
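
The interleaving I am worried about looks roughly like this (a sketch
only, not code from the series; CPU numbering is arbitrary):

	/*
	 *   CPU0 (reclaim)                 CPU1 (mprotect/madvise/...)
	 *   --------------                 ---------------------------
	 *   arch_tlbbatch_add_mm()
	 *     issues broadcast tlbi,
	 *     does not wait for it
	 *                                  flush_tlb_batched_pending()
	 *                                    dsb(ish) - a DSB only waits
	 *                                    for TLB maintenance issued
	 *                                    by the CPU executing it
	 *
	 * So CPU1's dsb(ish) cannot guarantee that CPU0's pending tlbi
	 * has completed, and a stale TLB entry may still be used.
	 */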

Unfortunately, I don't have an arm64 server to debug this on. I only have
8 cores, which is unlikely to reproduce a regression that only shows up
under high concurrency with 96 parallel tasks.

So may I ask if @yicong or someone else working on Kunpeng or other
arm64 servers is able to actually debug this and figure out a proper
patch, which we could then add as 5/5 to this series?

>
> ./Run -c 96 -i 1 shell1
> 96 CPUs in system; running 96 parallel copies of tests
>
> Shell Scripts (1 concurrent)                 109229.0 lpm   (60.0 s, 1 samples)
> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
> Shell Scripts (1 concurrent)                     42.4     109229.0  25761.6
>                                                                     ========
> System Benchmarks Index Score (Partial Only)                        25761.6
>
>
> Tested-by: Xin Hao <xhao@linux.alibaba.com>

Thanks for your testing!

>
> Looking forward to the next version of your patch.
>
> On 7/11/22 11:46 AM, Barry Song wrote:
> > Though ARM64 has the hardware to do TLB shootdown, the hardware
> > broadcasting is not free. A simple microbenchmark shows that even on
> > a Snapdragon 888 with only 8 cores, the overhead of ptep_clear_flush()
> > is huge, even for paging out one page mapped by only one process:
> > 5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush
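
The micro benchmark is essentially a loop that pages out one anonymous
page over and over. A minimal standalone sketch of that kind of test
(not the exact program I used; MADV_PAGEOUT needs a v5.4+ kernel):

	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long sz = sysconf(_SC_PAGESIZE);
		char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;

		for (int i = 0; i < 1000000; i++) {
			memset(p, 0x11, sz);          /* fault the page back in */
			madvise(p, sz, MADV_PAGEOUT); /* reclaim: unmap + TLB flush */
		}
		return 0;
	}

Running that under perf is what produces the ptep_clear_flush() line above.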
> >
> > When pages are mapped by multiple processes or the HW has more CPUs,
> > the cost should become even higher due to the bad scalability of
> > TLB shootdown.
> >
> > The same benchmark can result in 16.99% CPU consumption on an ARM64
> > server with around 100 cores, according to Yicong's test on patch
> > 4/4.
> >
> > This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by:
> > 1. only sending tlbi instructions in the first stage -
> >       arch_tlbbatch_add_mm()
> > 2. waiting for the completion of the tlbi by dsb while doing the
> >       tlbbatch sync in arch_tlbbatch_flush()
> >
> > My testing on Snapdragon shows the overhead of ptep_clear_flush()
> > is removed by the patchset. The microbenchmark becomes 5% faster
> > even for one page mapped by a single process on Snapdragon 888.
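
Schematically, the arm64 side of those two stages is tiny (a sketch of
the idea; see patch 4/4 for the real code, and the "nosync" helper name
here is only illustrative):

	/* stage 1: per page from try_to_unmap(), kick off the broadcast
	 * invalidation but do not wait for its completion */
	static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
						struct mm_struct *mm)
	{
		__flush_tlb_mm_nosync(mm);	/* tlbi without the trailing dsb */
	}

	/* stage 2: once per batch from try_to_unmap_flush(), wait for
	 * all the tlbis issued above to complete */
	static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
	{
		dsb(ish);
	}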
> >
> >
> > -v2:
> > 1. Collected Yicong's test results on the Kunpeng 920 ARM64 server;
> > 2. Removed the redundant vma parameter in arch_tlbbatch_add_mm()
> >     according to the comments of Peter Zijlstra and Dave Hansen;
> > 3. Added ARCH_HAS_MM_CPUMASK rather than checking if mm_cpumask
> >     is empty, according to the comments of Nadav Amit.
> >
> > Thanks to Yicong, Peter, Dave and Nadav for your testing, reviewing,
> > and comments.
> >
> > -v1:
> > https://lore.kernel.org/lkml/20220707125242.425242-1-21cnbao@gmail.com/
> >
> > Barry Song (4):
> >    Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
> >      apply to ARM64"
> >    mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
> >    mm: rmap: Extend tlbbatch APIs to fit new platforms
> >    arm64: support batched/deferred tlb shootdown during page reclamation
> >
> >   Documentation/features/arch-support.txt       |  1 -
> >   .../features/vm/TLB/arch-support.txt          |  2 +-
> >   arch/arm/Kconfig                              |  1 +
> >   arch/arm64/Kconfig                            |  1 +
> >   arch/arm64/include/asm/tlbbatch.h             | 12 ++++++++++
> >   arch/arm64/include/asm/tlbflush.h             | 23 +++++++++++++++++--
> >   arch/loongarch/Kconfig                        |  1 +
> >   arch/mips/Kconfig                             |  1 +
> >   arch/openrisc/Kconfig                         |  1 +
> >   arch/powerpc/Kconfig                          |  1 +
> >   arch/riscv/Kconfig                            |  1 +
> >   arch/s390/Kconfig                             |  1 +
> >   arch/um/Kconfig                               |  1 +
> >   arch/x86/Kconfig                              |  1 +
> >   arch/x86/include/asm/tlbflush.h               |  3 ++-
> >   mm/Kconfig                                    |  3 +++
> >   mm/rmap.c                                     | 14 +++++++----
> >   17 files changed, 59 insertions(+), 9 deletions(-)
> >   create mode 100644 arch/arm64/include/asm/tlbbatch.h
> >
> --
> Best Regards!
> Xin Hao
>

Thanks
Barry

Thread overview: 56+ messages

2022-07-11  3:46 [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH Barry Song
2022-07-11  3:46 ` [PATCH v2 1/4] Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't apply to ARM64" Barry Song
2022-07-11  3:46 ` [PATCH v2 2/4] mm: rmap: Allow platforms without mm_cpumask to defer TLB flush Barry Song
2022-07-11 13:35   ` Kefeng Wang
2022-07-11 22:52     ` Barry Song
2022-07-11  3:46 ` [PATCH v2 3/4] mm: rmap: Extend tlbbatch APIs to fit new platforms Barry Song
2022-07-11  3:46 ` [PATCH v2 4/4] arm64: support batched/deferred tlb shootdown during page reclamation Barry Song
2022-07-14  3:28 ` [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH Xin Hao
2022-07-14  4:51   ` Barry Song [this message]
2022-07-15  2:47     ` Yicong Yang
2022-07-18 13:28     ` Yicong Yang
2022-07-20 11:18       ` Barry Song
2022-07-23  9:22         ` xhao
2022-07-23  9:17       ` xhao
