From: Minchan Kim <minchan@kernel.org>
To: Nadav Amit <nadav.amit@gmail.com>
Cc: kernel test robot <xiaolong.ye@intel.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Ingo Molnar <mingo@redhat.com>,
	Russell King <linux@armlinux.org.uk>,
	Tony Luck <tony.luck@intel.com>,
	Martin Schwidefsky <schwidefsky@de.ibm.com>,
	"David S. Miller" <davem@davemloft.net>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Jeff Dike <jdike@addtoit.com>,
	linux-arch@vger.kernel.org, lkp@01.org
Subject: Re: [lkp-robot] [mm]  7674270022:  will-it-scale.per_process_ops -19.3% regression
Date: Tue, 8 Aug 2017 17:08:21 +0900	[thread overview]
Message-ID: <20170808080821.GA31730@bbox> (raw)
In-Reply-To: <970B5DC5-BFC2-461E-AC46-F71B3691D301@gmail.com>

On Mon, Aug 07, 2017 at 10:51:00PM -0700, Nadav Amit wrote:
> Nadav Amit <nadav.amit@gmail.com> wrote:
> 
> > Minchan Kim <minchan@kernel.org> wrote:
> > 
> >> Hi,
> >> 
> >> On Tue, Aug 08, 2017 at 09:19:23AM +0800, kernel test robot wrote:
> >>> Greeting,
> >>> 
> >>> FYI, we noticed a -19.3% regression of will-it-scale.per_process_ops due to commit:
> >>> 
> >>> 
> >>> commit: 76742700225cad9df49f05399381ac3f1ec3dc60 ("mm: fix MADV_[FREE|DONTNEED] TLB flush miss problem")
> >>> url: https://github.com/0day-ci/linux/commits/Nadav-Amit/mm-migrate-prevent-racy-access-to-tlb_flush_pending/20170802-205715
> >>> 
> >>> 
> >>> in testcase: will-it-scale
> >>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
> >>> with following parameters:
> >>> 
> >>> 	nr_task: 16
> >>> 	mode: process
> >>> 	test: brk1
> >>> 	cpufreq_governor: performance
> >>> 
> >>> test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both process-based and thread-based tests in order to see any differences between the two.
> >>> test-url: https://github.com/antonblanchard/will-it-scale
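
For reference, the brk1 testcase boils down to a tight loop that grows
and shrinks the heap by one page. A minimal sketch based on the
description above (the real code lives in the will-it-scale repository;
exact details such as the iteration accounting may differ):

	#include <assert.h>
	#include <unistd.h>

	/*
	 * Sketch: each pass extends the heap by one page and then gives
	 * it back, so every iteration unmaps a single page.
	 */
	void testcase(unsigned long *iterations)
	{
		void *base = sbrk(0);		/* current program break */
		long page = sysconf(_SC_PAGESIZE);

		for (;;) {
			assert(brk((char *)base + page) == 0);
			assert(brk(base) == 0);	/* shrink unmaps the page */
			(*iterations)++;
		}
	}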
> >> 
> >> Thanks for the report.
> >> Could you explain what kind of workload you are testing?
> >> 
> >> Does it frequently call madvise(MADV_DONTNEED) in parallel on
> >> multiple threads?
> > 
> > According to the description it is "testcase: brk increase/decrease
> > of one page". According to the mode it spawns multiple processes,
> > not threads.
> > 
> > Since a single page is unmapped each time, and the iTLB-loads increase
> > dramatically, I would suspect that for some reason a full TLB flush is
> > caused during do_munmap().
> > 
> > If I find some free time, I’ll try to profile the workload - but feel free
> > to beat me to it.
> 
> The root-cause appears to be that tlb_finish_mmu() does not call
> dec_tlb_flush_pending() - as it should. Any chance you can take care of it?

Oops, but on second look, it seems it's not my fault. ;-)
https://marc.info/?l=linux-mm&m=150156699114088&w=2

Anyway, thanks for pointing it out.
xiaolong.ye, could you retest with this fix?

From 83012114c9cd9304f0d55d899bb4b9329d0e22ac Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Tue, 8 Aug 2017 17:05:19 +0900
Subject: [PATCH] mm: decrease tlb flush pending count in tlb_finish_mmu

The TLB flush pending count increased by tlb_gather_mmu() should be
decreased in tlb_finish_mmu(). Otherwise the count never drops back to
zero, mm_tlb_flush_nested() keeps reporting a concurrent flush, and
every later tlb_finish_mmu() forces an unnecessary full TLB flush,
which causes the performance regression.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index 34b1fcb829e4..ad2617552f55 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -423,6 +423,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
 	bool force = mm_tlb_flush_nested(tlb->mm);
 
 	arch_tlb_finish_mmu(tlb, start, end, force);
+	dec_tlb_flush_pending(tlb->mm);
 }
 
 /*
-- 
2.7.4
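
For context on why the missing decrement hurts: tlb_finish_mmu()
decides whether to force a flush via mm_tlb_flush_nested(), as the hunk
above shows, and the pending count it inspects is raised in
tlb_gather_mmu(). A simplified sketch of the intended pairing (not the
exact kernel code; bodies abbreviated from what the 4.13-era
mm/memory.c is assumed to look like):

	void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
			    unsigned long start, unsigned long end)
	{
		arch_tlb_gather_mmu(tlb, mm, start, end);
		inc_tlb_flush_pending(mm);	/* raise the pending count */
	}

	void tlb_finish_mmu(struct mmu_gather *tlb,
			    unsigned long start, unsigned long end)
	{
		/*
		 * If another context also has a flush pending (the count
		 * includes our own increment), our page-table view may be
		 * stale, so flush conservatively.
		 */
		bool force = mm_tlb_flush_nested(tlb->mm);

		arch_tlb_finish_mmu(tlb, start, end, force);
		dec_tlb_flush_pending(tlb->mm);	/* the line the patch adds */
	}

Without the decrement the count only ever grows, so after the first
mmu_gather every tlb_finish_mmu() sees a seemingly concurrent flush and
pays for a full flush, consistent with the iTLB-loads blow-up noted in
the report above.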
