From: Rik van Riel <riel@surriel.com>
To: "Huang, Ying" <ying.huang@intel.com>,
	Nathan Chancellor <nathan@kernel.org>
Cc: kernel test robot <yujie.liu@intel.com>,
	lkp@lists.01.org, lkp@intel.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Yang Shi <shy828301@gmail.com>,
	Matthew Wilcox <willy@infradead.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	feng.tang@intel.com, zhengjun.xing@linux.intel.com,
	fengwei.yin@intel.com
Subject: Re: [mm] f35b5d7d67: will-it-scale.per_process_ops -95.5% regression
Date: Thu, 20 Oct 2022 11:28:16 -0400	[thread overview]
Message-ID: <366045a27a96e01d0526d63fd78d4f3c5d1f530b.camel@surriel.com> (raw)
In-Reply-To: <871qr3nkw2.fsf@yhuang6-desk2.ccr.corp.intel.com>


On Thu, 2022-10-20 at 13:07 +0800, Huang, Ying wrote:
> 
> Nathan Chancellor <nathan@kernel.org> writes:
> > 
> > For what it's worth, I just bisected a massive and visible
> > performance
> > regression on my Threadripper 3990X workstation to commit
> > f35b5d7d676e
> > ("mm: align larger anonymous mappings on THP boundaries"), which
> > seems
> > directly related to this report/analysis. I initially noticed this
> > because my full set of kernel builds against mainline went from 2
> > hours
> > and 20 minutes or so to over 3 hours. Zeroing in on x86_64
> > allmodconfig,
> > which I used for the bisect:
> > 
> > @ 7b5a0b664ebe ("mm/page_ext: remove unused variable in
> > offline_page_ext"):
> > 
> > Benchmark 1: make -skj128 LLVM=1 allmodconfig all
> >   Time (mean ± σ):     318.172 s ±  0.730 s    [User: 31750.902 s,
> > System: 4564.246 s]
> >   Range (min … max):   317.332 s … 318.662 s    3 runs
> > 
> > @ f35b5d7d676e ("mm: align larger anonymous mappings on THP
> > boundaries"):
> > 
> > Benchmark 1: make -skj128 LLVM=1 allmodconfig all
> >   Time (mean ± σ):     406.688 s ±  0.676 s    [User: 31819.526 s,
> > System: 16327.022 s]
> >   Range (min … max):   405.954 s … 407.284 s    3 runs
> 
> Have you tried to build with gcc?  I want to check whether this is a
> clang-specific issue or not.

This may indeed be something LLVM specific. In previous tests,
GCC has generally seen a benefit from increased THP usage.
Many other applications also benefit from getting more THPs.
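
(For anyone who wants to check whether their own workload is actually
getting THPs, one rough way, assuming a reasonably recent kernel with
the usual /proc counters, is to watch the anonymous huge page numbers
while the build runs:)

    # Sample the THP counters once a second while the compile runs.
    # AnonHugePages in /proc/meminfo is the amount of anonymous memory
    # currently backed by huge pages; the thp_* counters in /proc/vmstat
    # show how often THPs are being allocated (or falling back).
    watch -n 1 'grep AnonHugePages /proc/meminfo; grep thp_ /proc/vmstat'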

LLVM showing 10% system time before this change, and a whopping
30% system time after that change, suggests that LLVM is behaving
quite differently from GCC in some ways.

If we can figure out what these differences are, maybe we can
just fine-tune the code to avoid this issue.
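
(Roughly, my understanding of what that commit does is make
get_unmapped_area() round the start of large anonymous mappings up to
the next 2MB boundary, so they can be THP-backed from the start. A
simplified userspace-style sketch of that rounding, not the actual
kernel code, looks like this:)

    #include <stdint.h>
    #include <stddef.h>

    #define PMD_SIZE (2UL << 20)    /* 2MB, the PMD/THP size on x86_64 */

    /* Leave small mappings alone; round PMD-sized-or-larger anonymous
     * mappings up to the next 2MB boundary so the whole range can be
     * backed by transparent huge pages. */
    static uintptr_t thp_align_hint(uintptr_t addr, size_t len)
    {
            if (len < PMD_SIZE)
                    return addr;
            return (addr + PMD_SIZE - 1) & ~(uintptr_t)(PMD_SIZE - 1);
    }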

I'll try to play around with LLVM compilation a little bit next
week, to see if I can figure out what might be going on. I wonder
if LLVM is doing lots of mremap calls or something...
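
(If anyone wants to poke at that hypothesis before I do, a rough way
to count those calls for a single compile would be something like the
following; the source file name is just a placeholder:)

    # -f follows child processes, -c prints a per-syscall summary
    # instead of logging every call.
    strace -f -c -e trace=mremap,mmap,munmap \
            clang -O2 -c some-big-file.c -o /dev/null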

-- 
All Rights Reversed.


Thread overview: 26+ messages

2022-10-18  8:44 [mm] f35b5d7d67: will-it-scale.per_process_ops -95.5% regression kernel test robot
2022-10-19  2:05 ` Huang, Ying
2022-10-20  4:23   ` Nathan Chancellor
2022-10-20  5:07     ` Huang, Ying
2022-10-20 15:28       ` Rik van Riel [this message]
2022-10-20 17:16         ` Nathan Chancellor
2022-11-28  6:40           ` Nathan Chancellor
2022-12-01 18:33             ` Thorsten Leemhuis
2022-12-01 20:29               ` Rik van Riel
2022-12-01 21:22                 ` Andrew Morton
2022-12-01 21:44                   ` Yang Shi
2022-12-02  8:46                   ` Thorsten Leemhuis
2022-12-02 18:44                     ` Andrew Morton
2022-12-02 19:37                       ` Thorsten Leemhuis
2022-12-01 21:35                 ` Nathan Chancellor
2022-12-16 11:48                 ` Yin, Fengwei
2022-10-20 16:40       ` Yujie Liu
2022-11-29  8:59     ` [mm] f35b5d7d67: will-it-scale.per_process_ops -95.5% regression #forregzbot Thorsten Leemhuis
2022-12-02  6:43       ` Thorsten Leemhuis
