From: Wei Wang <wei.w.wang@intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	pbonzini@redhat.com, ak@linux.intel.com, kan.liang@intel.com,
	mingo@redhat.com, rkrcmar@redhat.com, like.xu@intel.com,
	jannh@google.com, arei.gonglei@huawei.com, jmattson@google.com
Subject: Re: [PATCH v7 08/12] KVM/x86/vPMU: Add APIs to support host save/restore the guest lbr stack
Date: Tue, 09 Jul 2019 11:04:21 +0800	[thread overview]
Message-ID: <5D240435.2040801@intel.com> (raw)
In-Reply-To: <20190708144831.GN3402@hirez.programming.kicks-ass.net>

On 07/08/2019 10:48 PM, Peter Zijlstra wrote:
> On Mon, Jul 08, 2019 at 09:23:15AM +0800, Wei Wang wrote:
>> From: Like Xu <like.xu@intel.com>
>>
>> This patch adds support to enable/disable the host side save/restore
> This patch should be disqualified on Changelog alone...
>
>    Documentation/process/submitting-patches.rst:instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy

OK, we will drop "This patch" from the description:

To enable/disable the host-side save/restore of the guest lbr stack on
vCPU switching, the host creates a perf event for the vCPU ...
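
For illustration, the attributes would look roughly like this (a
minimal sketch of what the description implies, not the exact patch
code; the flag choices here are assumptions):

	struct perf_event_attr attr = {
		.type			= PERF_TYPE_RAW,
		.size			= sizeof(attr),
		.pinned			= true,
		/*
		 * User callstack-mode lbr: this is what makes the host
		 * perf core save/restore the lbr stack on task switch.
		 */
		.sample_type		= PERF_SAMPLE_BRANCH_STACK,
		.branch_sample_type	= PERF_SAMPLE_BRANCH_CALL_STACK |
					  PERF_SAMPLE_BRANCH_USER,
	};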

>> for the guest lbr stack on vCPU switching. To enable that, the host
>> creates a perf event for the vCPU, and the event attributes are set
>> to the user callstack mode lbr so that all the conditions are met in
>> the host perf subsystem to save the lbr stack on task switching.
>>
>> The host side lbr perf event is created only for the purpose of saving
>> and restoring the lbr stack. There is no need to enable the lbr
>> functionality for this perf event, because the feature is essentially
>> used by the vCPU. So perf_event_create is invoked with need_counter=false
>> so that no counter is assigned to the perf event.
>>
>> The vcpu_lbr field is added to cpuc to indicate whether the lbr perf event
>> is used by the vCPU only for context switching. When the perf subsystem
>> handles this event (e.g. on lbr enable or reading the lbr stack on a PMI)
>> and finds vcpu_lbr is non-zero, it simply returns.
> *WHY* does the host need to save/restore? Why not make VMENTER/VMEXIT do
> this?

Because VMX transitions are much more frequent than vCPU switches.
On SKL, saving 32 LBR entries could add 3000~4000 cycles of overhead,
which would be too much for the frequent VMX transitions.
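
To make the cost concrete: saving a 32-entry LBR stack means ~64 RDMSRs
(a FROM and a TO MSR per entry), each costing tens of cycles. A sketch
modeled on the SKL MSR layout (not the actual perf save path):

	/* MSR_LBR_NHM_FROM/_TO (0x680/0x6c0, asm/msr-index.h);
	 * SKL uses the same FROM/TO layout */
	#define SKL_LBR_NR	32

	static void lbr_stack_save(u64 *from, u64 *to)
	{
		int i;

		/* ~64 RDMSRs at tens of cycles each is where the
		 * 3000~4000 cycle figure comes from */
		for (i = 0; i < SKL_LBR_NR; i++) {
			rdmsrl(MSR_LBR_NHM_FROM + i, from[i]);
			rdmsrl(MSR_LBR_NHM_TO + i, to[i]);
		}
	}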

The LBR state is saved when the vCPU is scheduled out, to ensure that
this vCPU's LBR data doesn't get lost (another vCPU or host thread
that is scheduled in may use the LBR).
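
While the vCPU is running, the cpuc->vcpu_lbr flag quoted above is what
keeps the host's own lbr paths off the LBR MSRs. Conceptually (a sketch
of the intent; the exact hook points are assumptions):

	static void __intel_pmu_lbr_enable(void)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

		/*
		 * The perf event exists only so the lbr stack gets
		 * saved/restored on task switch; the guest drives the
		 * lbr feature itself, so the host does nothing here.
		 */
		if (cpuc->vcpu_lbr)
			return;

		/* ... normal host-side lbr enabling ... */
	}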


> Many of these patches don't explain why things are done; that's a
> problem.

We'll improve the changelogs. Please point out anything that isn't
clear to you, thanks.

Best,
Wei


Thread overview: 33+ messages
2019-07-08  1:23 [PATCH v7 00/12] Guest LBR Enabling Wei Wang
2019-07-08  1:23 ` [PATCH v7 01/12] perf/x86: fix the variable type of the LBR MSRs Wei Wang
2019-07-08  1:23 ` [PATCH v7 02/12] perf/x86: add a function to get the lbr stack Wei Wang
2019-07-08  1:23 ` [PATCH v7 03/12] KVM/x86: KVM_CAP_X86_GUEST_LBR Wei Wang
2019-07-08  1:23 ` [PATCH v7 04/12] KVM/x86: intel_pmu_lbr_enable Wei Wang
2019-07-08  1:23 ` [PATCH v7 05/12] KVM/x86/vPMU: tweak kvm_pmu_get_msr Wei Wang
2019-07-08  1:23 ` [PATCH v7 06/12] KVM/x86: expose MSR_IA32_PERF_CAPABILITIES to the guest Wei Wang
2019-07-08  1:23 ` [PATCH v7 07/12] perf/x86: no counter allocation support Wei Wang
2019-07-08 14:29   ` Peter Zijlstra
2019-07-09  2:58     ` Wei Wang
2019-07-09  9:43       ` Peter Zijlstra
2019-07-09 11:36         ` Wei Wang
2019-07-08  1:23 ` [PATCH v7 08/12] KVM/x86/vPMU: Add APIs to support host save/restore the guest lbr stack Wei Wang
2019-07-08 14:48   ` Peter Zijlstra
2019-07-09  3:04     ` Wei Wang [this message]
2019-07-09  9:39       ` Peter Zijlstra
2019-07-09 11:34         ` Wei Wang
2019-07-09 12:19           ` Peter Zijlstra
2019-07-10  8:19             ` Wei Wang
2019-07-09 11:45   ` Peter Zijlstra
2019-07-10  8:21     ` Wei Wang
2019-07-08  1:23 ` [PATCH v7 09/12] perf/x86: save/restore LBR_SELECT on vCPU switching Wei Wang
2019-07-08  1:23 ` [PATCH v7 10/12] KVM/x86/lbr: lazy save the guest lbr stack Wei Wang
2019-07-08 14:53   ` Peter Zijlstra
2019-07-08 15:11     ` Andi Kleen
2019-07-09 11:39       ` Peter Zijlstra
2019-07-09  3:14     ` Wei Wang
2019-07-08  1:23 ` [PATCH v7 11/12] KVM/x86: remove the common handling of the debugctl msr Wei Wang
2019-07-08  1:23 ` [PATCH v7 12/12] KVM/VMX/vPMU: support to report GLOBAL_STATUS_LBRS_FROZEN Wei Wang
2019-07-08 15:09   ` Peter Zijlstra
2019-07-09  3:24     ` Wei Wang
2019-07-09 11:35       ` Peter Zijlstra
2019-07-10  9:23         ` Wei Wang
