From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [PATCH v7 02/19] VPMU: Mark context LOADED before registers are loaded
Date: Fri, 6 Jun 2014 18:59:32 +0100
Message-ID: <53920184.7090000@citrix.com>
References: <1402076415-26475-1-git-send-email-boris.ostrovsky@oracle.com>
 <1402076415-26475-3-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1402076415-26475-3-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Boris Ostrovsky
Cc: kevin.tian@intel.com, keir@xen.org, JBeulich@suse.com,
 jun.nakajima@intel.com, tim@xen.org, dietmar.hahn@ts.fujitsu.com,
 xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com
List-Id: xen-devel@lists.xenproject.org

On 06/06/14 18:39, Boris Ostrovsky wrote:
> Because a PMU interrupt may be generated as soon as PMU registers are
> loaded (or, more precisely, as soon as the HW PMU is "armed"), we don't
> want to delay marking the context as LOADED until after the registers
> are loaded. Otherwise VPMU_CONTEXT_LOADED may not yet be set during
> interrupt handling, which could be confusing.
>
> (Technically, only SVM needs this change right now, since VMX "arms"
> the PMU later, during VMRUN, when the global control register is
> loaded from the VMCS. However, both the AMD and Intel code will
> require this patch when we introduce PV VPMU.)
>
> Signed-off-by: Boris Ostrovsky
> Acked-by: Kevin Tian
> Reviewed-by: Dietmar Hahn
> Tested-by: Dietmar Hahn

Reviewed-by: Andrew Cooper

> ---
>  xen/arch/x86/hvm/svm/vpmu.c       | 2 ++
>  xen/arch/x86/hvm/vmx/vpmu_core2.c | 2 ++
>  xen/arch/x86/hvm/vpmu.c           | 3 +--
>  3 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 66a3815..3ac7d53 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -203,6 +203,8 @@ static void amd_vpmu_load(struct vcpu *v)
>          return;
>      }
>
> +    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
> +
>      context_load(v);
>  }
>
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index 3129ebd..ccd14d9 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -369,6 +369,8 @@ static void core2_vpmu_load(struct vcpu *v)
>      if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
>          return;
>
> +    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
> +
>      __core2_vpmu_load(v);
>  }
>
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 21fbaba..63765fa 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -211,10 +211,9 @@ void vpmu_load(struct vcpu *v)
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
>      {
>          apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
> +        /* Arch code needs to set VPMU_CONTEXT_LOADED */
>          vpmu->arch_vpmu_ops->arch_vpmu_load(v);
>      }
> -
> -    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
>  }
>
> void vpmu_initialise(struct vcpu *v)
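
A minimal sketch of the ordering the patch establishes. vpmu_set(),
vpmu_is_set(), VPMU_CONTEXT_LOADED, and context_load() are the names used
in the diff above; the interrupt-side function is a hypothetical consumer
added for illustration, not Xen's actual PMU interrupt handler:

    /* Loader side: mark the context LOADED before arming the hardware.
     * An overflow interrupt can fire as soon as the counter and control
     * registers are written, so the flag must already be visible. */
    static void example_vpmu_load(struct vcpu *v)
    {
        struct vpmu_struct *vpmu = vcpu_vpmu(v);

        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);    /* flag first ... */
        context_load(v);                        /* ... then arm the PMU */
    }

    /* Interrupt side (hypothetical): relies on the flag being set. */
    static int example_pmu_interrupt(struct vcpu *v)
    {
        struct vpmu_struct *vpmu = vcpu_vpmu(v);

        if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
            return 0;    /* no guest context armed; treat as spurious */

        /* ... handle the counter overflow for the loaded context ... */
        return 1;
    }

Setting the flag after context_load() would leave a window in which the
handler above sees an armed PMU with VPMU_CONTEXT_LOADED still clear,
which is exactly the confusion the commit message describes.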