From: "Huang, Ying" <ying.huang@intel.com>
To: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
"Denys Vlasenko" <dvlasenk@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Brian Gerst" <brgerst@gmail.com>,
LKML <linux-kernel@vger.kernel.org>,
"Andy Lutomirski" <luto@amacapital.net>,
lkp@01.org, "Thomas Gleixner" <tglx@linutronix.de>,
"Linus Torvalds" <torvalds@linux-foundation.org>,
"Ingo Molnar" <mingo@kernel.org>,
"Ville Syrjälä" <ville.syrjala@linux.intel.com>
Subject: Re: [LKP] [lkp] [x86/hweight] 65ea11ec6a: will-it-scale.per_process_ops 9.3% improvement
Date: Wed, 17 Aug 2016 15:29:04 -0700 [thread overview]
Message-ID: <87r39n58sv.fsf@yhuang-mobile.sh.intel.com> (raw)
In-Reply-To: <20160817054605.GA6728@nazgul.tnic> (Borislav Petkov's message of "Wed, 17 Aug 2016 07:46:05 +0200")
Borislav Petkov <bp@suse.de> writes:
> On Tue, Aug 16, 2016 at 04:09:19PM -0700, H. Peter Anvin wrote:
>> On August 16, 2016 10:16:35 AM PDT, Borislav Petkov <bp@suse.de> wrote:
>> >On Tue, Aug 16, 2016 at 09:59:00AM -0700, H. Peter Anvin wrote:
>> >> Dang...
>> >
>> >Isn't 9.3% improvement a good thing(tm) ?
>>
>> Yes, it's huge. The only explanation I could imagine is that scrambling %rdi caused the scheduler to do completely the wrong thing.
>
> I'm questioning the validity, actually. Report says test machine was
> Sandy Bridge-EP and I'd bet good money this one has POPCNT support so
> how are we even hitting that __sw_hweight64() path, at all?
We ran 8 tests for the base and 4 tests for the head, and the results are
quite stable.
I found there is another change between the two commits,
base:
"perf-stat.branch-miss-rate": [
0.3089533646503185,
0.3099821038600304,
0.3123762964028104,
0.311511881793534,
0.31231973343587144,
0.3096478429327263,
0.31166037272389924,
0.3097364392684626
],
first bad commit:
"perf-stat.branch-miss-rate": [
0.039853905034485354,
0.0402472142423231,
0.04380682345704418,
0.04319082390667179
],
The branch-miss-rate decreased from ~0.30% to ~0.043%.
So I guess there is some code alignment change, which caused the
decreased branch miss rate.
Best Regards,
Huang, Ying
Thread overview: 15+ messages
2016-08-16 14:26 [lkp] [x86/hweight] 65ea11ec6a: will-it-scale.per_process_ops 9.3% improvement kernel test robot
2016-08-16 16:59 ` H. Peter Anvin
2016-08-16 17:16 ` Borislav Petkov
2016-08-16 23:09 ` H. Peter Anvin
2016-08-17 5:46 ` Borislav Petkov
2016-08-17 22:29 ` Huang, Ying [this message]
2016-08-18 3:45 ` [LKP] " Borislav Petkov
2016-08-18 3:54 ` Huang, Ying
2016-08-18 4:11 ` Borislav Petkov
2016-08-25 9:22 ` Borislav Petkov
2016-08-25 10:05 ` H. Peter Anvin
2016-08-25 11:45 ` Borislav Petkov
2016-08-25 20:07 ` H. Peter Anvin
2016-08-18 3:57 ` H. Peter Anvin
2016-08-17 6:48 ` Peter Zijlstra