From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Kyle Huey <me@kylehuey.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wei.liu2@citrix.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xen.org, Jan Beulich <JBeulich@suse.com>,
	Robert O'Callahan <robert@ocallahan.org>
Subject: Re: [PATCH v3 2/2] x86/Intel: virtualize support for cpuid faulting
Date: Mon, 24 Oct 2016 17:15:00 -0400	[thread overview]
Message-ID: <00879b94-00cc-ed04-85cc-e7c5d9c72d65@oracle.com> (raw)
In-Reply-To: <CAP045Aox9-wJzoT=sRVWJe9KzraHDa8OBEbLWLV-sMsy-K6qAA@mail.gmail.com>

On 10/24/2016 03:22 PM, Kyle Huey wrote:
> On Mon, Oct 24, 2016 at 8:05 AM, Boris Ostrovsky
> <boris.ostrovsky@oracle.com> wrote:
>> On 10/24/2016 12:18 AM, Kyle Huey wrote:
>>> The anomalies we see appear to be related to, or at least triggerable
>>> by, the performance monitoring interrupt.  The following program runs
>>> a loop of roughly 2^25 conditional branches.  It takes one argument,
>>> the number of conditional branches to program the PMI to trigger on.
>>> The default is 50,000, and if you run the program with that it'll
>>> produce the same value every time.  If you drop it to 5000 or so
>>> you'll probably see occasional off-by-one discrepancies.  If you drop
>>> it to 500 the performance counter values fluctuate wildly.
>> Yes, it does change but I also see the difference on baremetal (although
>> not as big as it is in an HVM guest):
>> ostr@workbase> ./pmu 500
>> Period is 500
>> Counted 5950003 conditional branches
>> ostr@workbase> ./pmu 500
>> Period is 500
>> Counted 5850003 conditional branches
>> ostr@workbase> ./pmu 500
>> Period is 500
>> Counted 7530107 conditional branches
>> ostr@workbase>
> Yeah, you're right.  I simplified the testcase too far.  I have
> included a better one.  This testcase is stable on bare metal (down to
> an interrupt every 10 branches, I didn't try below that) and more
> accurately represents what our software actually does. 

When I run this program in a loop the first iteration is always off:

ostr@workbase> while [ true ]; do taskset -c 0 ./pmu 500 | grep -v Period; done
Counted 33554556 conditional branches
Counted 33554729 conditional branches
Counted 33554729 conditional branches
...

but then it indeed is stable. Could it be "priming" the branch
predictor? Does this counter count mispredicted branches? Probably not:
a cold predictor would make the first run's count larger, not smaller.


>  rr acts as a
> ptrace supervisor to the process being recorded, and it seems that
> context switching between the supervisor and tracee processes
> stabilizes the performance counter values somehow.

Not sure I understand what you mean by this. The supervising thread is
presumably sitting in the kernel (in waitpid()), so nothing should be
counted for it.
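[The arrangement Kyle describes can be modeled in a few lines of ptrace; this illustrative sketch — not rr's actual code — shows why the supervisor's own user-mode branches shouldn't land in the tracee's counter: it spends the tracee's whole run blocked in waitpid().]

```c
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal model of the rr arrangement: the tracee asks to be traced and
 * runs a branchy workload; the supervisor blocks in waitpid() until it
 * exits, resuming it after any intermediate ptrace stop. */
static int supervise_once(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* become parent's tracee */
        volatile int sink = 0;
        for (int i = 0; i < 1000; i++)          /* stand-in workload */
            if (i & 1)
                sink++;
        _exit(42);
    }
    int status = 0;
    /* The supervisor sits here, in the kernel, while the tracee runs. */
    while (waitpid(pid, &status, 0) == pid && !WIFEXITED(status))
        ptrace(PTRACE_CONT, pid, NULL, NULL);   /* resume after a stop */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```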

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 17+ messages
2016-10-14 19:47 [PATCH v3] x86/Intel: virtualize support for cpuid faulting Kyle Huey
2016-10-14 19:47 ` [PATCH v3 1/2] x86/Intel: Expose cpuid_faulting_enabled so it can be used elsewhere Kyle Huey
2016-10-17 12:35   ` Andrew Cooper
2016-10-17 12:43   ` Wei Liu
2016-10-14 19:47 ` [PATCH v3 2/2] x86/Intel: virtualize support for cpuid faulting Kyle Huey
2016-10-17 12:32   ` Wei Liu
2016-10-20  5:10     ` Kyle Huey
2016-10-20  7:56       ` Andrew Cooper
2016-10-20 13:55         ` Kyle Huey
2016-10-20 14:11           ` Andrew Cooper
2016-10-20 14:40             ` Boris Ostrovsky
2016-10-21 15:52               ` Kyle Huey
2016-10-24  4:18                 ` Kyle Huey
2016-10-24 15:05                   ` Boris Ostrovsky
2016-10-24 19:22                     ` Kyle Huey
2016-10-24 21:15                       ` Boris Ostrovsky [this message]
2016-10-17 12:49   ` Andrew Cooper
