From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752093AbdHHII0 (ORCPT );
	Tue, 8 Aug 2017 04:08:26 -0400
Received: from LGEAMRELO11.lge.com ([156.147.23.51]:47166 "EHLO lgeamrelo11.lge.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751922AbdHHIIY
	(ORCPT ); Tue, 8 Aug 2017 04:08:24 -0400
X-Original-SENDERIP: 156.147.1.121
X-Original-MAILFROM: minchan@kernel.org
X-Original-SENDERIP: 10.177.220.163
X-Original-MAILFROM: minchan@kernel.org
Date: Tue, 8 Aug 2017 17:08:21 +0900
From: Minchan Kim 
To: Nadav Amit 
Cc: kernel test robot ,
	"open list:MEMORY MANAGEMENT" ,
	LKML ,
	Andrew Morton ,
	Ingo Molnar ,
	Russell King ,
	Tony Luck ,
	Martin Schwidefsky ,
	"David S. Miller" ,
	Heiko Carstens ,
	Yoshinori Sato ,
	Jeff Dike ,
	linux-arch@vger.kernel.org, lkp@01.org
Subject: Re: [lkp-robot] [mm] 7674270022: will-it-scale.per_process_ops -19.3% regression
Message-ID: <20170808080821.GA31730@bbox>
References: <20170802000818.4760-7-namit@vmware.com>
	<20170808011923.GE25554@yexl-desktop>
	<20170808022830.GA28570@bbox>
	<93CA4B47-95C2-43A2-8E92-B142CAB1DAF7@gmail.com>
	<970B5DC5-BFC2-461E-AC46-F71B3691D301@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <970B5DC5-BFC2-461E-AC46-F71B3691D301@gmail.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Aug 07, 2017 at 10:51:00PM -0700, Nadav Amit wrote:
> Nadav Amit  wrote:
> 
> > Minchan Kim  wrote:
> > 
> >> Hi,
> >> 
> >> On Tue, Aug 08, 2017 at 09:19:23AM +0800, kernel test robot wrote:
> >>> Greeting,
> >>> 
> >>> FYI, we noticed a -19.3% regression of will-it-scale.per_process_ops due to commit:
> >>> 
> >>> 
> >>> commit: 76742700225cad9df49f05399381ac3f1ec3dc60 ("mm: fix MADV_[FREE|DONTNEED] TLB flush miss problem")
> >>> url: https://github.com/0day-ci/linux/commits/Nadav-Amit/mm-migrate-prevent-racy-access-to-tlb_flush_pending/20170802-205715
> >>> 
> >>> 
> >>> in testcase: will-it-scale
> >>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
> >>> with following parameters:
> >>> 
> >>> 	nr_task: 16
> >>> 	mode: process
> >>> 	test: brk1
> >>> 	cpufreq_governor: performance
> >>> 
> >>> test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
> >>> test-url: https://github.com/antonblanchard/will-it-scale
> >> 
> >> Thanks for the report.
> >> Could you explain what kind of workload you are testing?
> >> 
> >> Does it call madvise(MADV_DONTNEED) frequently and in parallel on multiple
> >> threads?
> > 
> > According to the description it is "testcase:brk increase/decrease of one
> > page”. According to the mode it spawns multiple processes, not threads.
> > 
> > Since a single page is unmapped each time, and the iTLB-loads increase
> > dramatically, I would suspect that for some reason a full TLB flush is
> > caused during do_munmap().
> > 
> > If I find some free time, I’ll try to profile the workload - but feel free
> > to beat me to it.
> 
> The root cause appears to be that tlb_finish_mmu() does not call
> dec_tlb_flush_pending() - as it should. Any chance you can take care of it?

Oops, but on second look, it seems it's not my fault.
;-) https://marc.info/?l=linux-mm&m=150156699114088&w=2

Anyway, thanks for pointing it out.

xiaolong.ye, could you retest with this fix?

>>From 83012114c9cd9304f0d55d899bb4b9329d0e22ac Mon Sep 17 00:00:00 2001
From: Minchan Kim 
Date: Tue, 8 Aug 2017 17:05:19 +0900
Subject: [PATCH] mm: decrease tlb flush pending count in tlb_finish_mmu

The TLB flush pending count increased by tlb_gather_mmu() should be
decreased at tlb_finish_mmu(). Otherwise, a lot of unnecessary TLB
flushes happen, which causes a performance regression.

Signed-off-by: Minchan Kim 
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index 34b1fcb829e4..ad2617552f55 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -423,6 +423,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
 	bool force = mm_tlb_flush_nested(tlb->mm);
 
 	arch_tlb_finish_mmu(tlb, start, end, force);
+	dec_tlb_flush_pending(tlb->mm);
 }
 
 /*
-- 
2.7.4
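
For context, the pairing the patch restores is roughly the following.
This is only a simplified sketch of the mmu_gather setup/teardown, not
the exact mm/memory.c code:

	/* Simplified sketch; the real code lives in mm/memory.c. */
	void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
			    unsigned long start, unsigned long end)
	{
		arch_tlb_gather_mmu(tlb, mm, start, end);
		inc_tlb_flush_pending(mm);	/* a flush is now pending */
	}

	void tlb_finish_mmu(struct mmu_gather *tlb,
			    unsigned long start, unsigned long end)
	{
		/*
		 * If another thread started a PTE-batching operation while
		 * ours was pending, force the flush even if this gather
		 * batched no pages.
		 */
		bool force = mm_tlb_flush_nested(tlb->mm);

		arch_tlb_finish_mmu(tlb, start, end, force);
		dec_tlb_flush_pending(tlb->mm);	/* balance the increment above */
	}

If I read it right, without the dec_tlb_flush_pending() call the pending
count only ever grows, so mm_tlb_flush_nested() keeps reporting parallel
batching and every later tlb_finish_mmu() takes the forced-flush path,
which would explain the brk1 numbers.

The brk1 testcase is essentially "grow the heap by one page, shrink it
back" in a tight loop, so every iteration goes through do_munmap() and
hence tlb_gather_mmu()/tlb_finish_mmu(). A rough standalone equivalent
(not the actual will-it-scale source) would be:

	#include <unistd.h>

	int main(void)
	{
		long page_size = sysconf(_SC_PAGESIZE);
		char *cur = sbrk(0);		/* current program break */

		for (;;) {
			brk(cur + page_size);	/* map one more page */
			brk(cur);		/* unmap it again -> do_munmap() */
		}
	}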