From: Linus Torvalds <torvalds@linux-foundation.org>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Feng Tang <feng.tang@intel.com>, Oleg Nesterov <oleg@redhat.com>,
Jiri Olsa <jolsa@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
kernel test robot <rong.a.chen@intel.com>,
Ingo Molnar <mingo@kernel.org>,
Vince Weaver <vincent.weaver@maine.edu>,
Jiri Olsa <jolsa@kernel.org>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Arnaldo Carvalho de Melo <acme@redhat.com>,
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>,
Ravi Bangoria <ravi.bangoria@linux.ibm.com>,
Stephane Eranian <eranian@google.com>,
Thomas Gleixner <tglx@linutronix.de>,
LKML <linux-kernel@vger.kernel.org>,
lkp@lists.01.org, andi.kleen@intel.com, "Huang, Ying" <ying.huang@intel.com>
Subject: Re: [LKP] Re: [perf/x86] 81ec3f3c4c: will-it-scale.per_process_ops -5.5% regression
Date: Mon, 24 Feb 2020 14:12:51 -0800 [thread overview]
Message-ID: <CAHk-=wgXr1JcW3hyomWh8Y8Kr9wNq-+6r+CocY8EfXvuW7giHg@mail.gmail.com> (raw)
In-Reply-To: <8736azzlwq.fsf@x220.int.ebiederm.org>
On Mon, Feb 24, 2020 at 2:02 PM Eric W. Biederman <ebiederm@xmission.com> wrote:
>
> Other than scratching my head about why we are optimizing, neither do I.
You can see the background on lore
https://lore.kernel.org/lkml/20200205123216.GO12867@shao2-debian/
and the thread about the largely unexplained regression there. I had a
wild handwaving theory on what's going on in
https://lore.kernel.org/lkml/CAHk-=wjkSb1OkiCSn_fzf2v7A=K0bNsUEeQa+06XMhTO+oQUaA@mail.gmail.com/
but yes, the contention only happens once you have a lot of cores.
That said, I suspect it actually improves performance on that
microbenchmark even without the contention - just not as noticeably.
I'm running a kernel with the patch right now, but I wasn't going to
boot back into an old kernel just to test that. I was hoping that the
kernel test robot people would just check it out.
> It would help to have a comment somewhere in the code or the commit
> message that says the issue is contention under load.
Note that even without the contention, on that "send a lot of signals"
case it does avoid the second atomic op, and the profile really does
look better.
That profile improvement I can see even on my own machine, and I see
how the nasty CPU bug avoidance (the "verw" on the system call exit
path) goes from 30% to 31% of the profile cost.
And that increase in the relative cost of the "verw" on the profile
must mean that the actual real code just improved in performance (even
if I didn't actually time it).
With the contention, you get that added odd extra regression that
seems to depend on exact cacheline placement.
So I think the patch improves performance (for this "lots of queued
signals" case) in general, and I hope it will also then get rid of
that contention regression.
Linus
Thread overview: 28+ messages
2020-02-05 12:32 [perf/x86] 81ec3f3c4c: will-it-scale.per_process_ops -5.5% regression kernel test robot
2020-02-05 12:58 ` Peter Zijlstra
2020-02-06 3:04 ` [LKP] " Li, Philip
2020-02-21 8:03 ` Feng Tang
2020-02-21 10:58 ` Peter Zijlstra
2020-02-21 13:20 ` Jiri Olsa
2020-02-23 14:11 ` Feng Tang
2020-02-23 17:37 ` Linus Torvalds
2020-02-24 0:33 ` Feng Tang
2020-02-24 1:06 ` Linus Torvalds
2020-02-24 1:58 ` Huang, Ying
2020-02-24 2:19 ` Feng Tang
2020-02-24 13:20 ` Feng Tang
2020-02-24 19:24 ` Linus Torvalds
2020-02-24 19:42 ` Kleen, Andi
2020-02-24 20:09 ` Linus Torvalds
2020-02-24 20:47 ` Linus Torvalds
2020-02-24 21:20 ` Eric W. Biederman
2020-02-24 21:43 ` Linus Torvalds
2020-02-24 21:59 ` Eric W. Biederman
2020-02-24 22:12 ` Linus Torvalds [this message]
2020-02-25 2:57 ` Feng Tang
2020-02-25 3:15 ` Linus Torvalds
2020-02-25 4:53 ` Feng Tang
2020-02-23 19:36 ` Jiri Olsa
2020-02-21 18:05 ` Kleen, Andi
2020-02-22 12:43 ` Feng Tang
2020-02-22 17:08 ` Kleen, Andi