Date: Wed, 19 Jul 2017 00:11:38 +0200
From: Jiri Olsa
To: Peter Zijlstra
Cc: Jiri Olsa, lkml, Ingo Molnar, Andi Kleen, Alexander Shishkin, Kan Liang
Subject: Re: [PATCH] perf/x86/intel: Add proper condition to run sched_task callbacks
Message-ID: <20170718221138.GC13553@krava>
References: <20170717150156.11784-1-jolsa@kernel.org> <20170718091444.afqdtgjijcztv2mn@hirez.programming.kicks-ass.net> <20170718092932.GA26157@krava> <20170718123734.a7hgi3wwduuejz7f@hirez.programming.kicks-ass.net>
In-Reply-To: <20170718123734.a7hgi3wwduuejz7f@hirez.programming.kicks-ass.net>

On Tue, Jul 18, 2017 at 02:37:34PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 18, 2017 at 11:29:32AM +0200, Jiri Olsa wrote:
>
> > because we have 2 places using the same callback
> > - PEBS drain for free running counters
> > - LBR save/store
> >
> > both of them called from intel_pmu_sched_task
> >
> > so let's say
> > PEBS drain sets up the callback for the event,
> > but in the callback itself (intel_pmu_sched_task) we will
> > also run the code for LBR save/restore, which we did not
> > ask for, but the code in intel_pmu_sched_task does not
> > check for that
>
> Ah fair enough; Changelog confused me.
>
> > I'm adding conditions to recognize the work that needs
> > to be done in the callback
> >
> > another option might be to add support for more x86_pmu::sched_task
> > callbacks, which might be cleaner
>
> Right; either that or pull the condition into the functions themselves
> to create less churn. Something like so I suppose...

ok, will send new version

jirka

>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index aa62437d1aa1..2d533d4c0e2c 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3265,10 +3265,8 @@ static void intel_pmu_cpu_dying(int cpu)
>  static void intel_pmu_sched_task(struct perf_event_context *ctx,
>  				 bool sched_in)
>  {
> -	if (x86_pmu.pebs_active)
> -		intel_pmu_pebs_sched_task(ctx, sched_in);
> -	if (x86_pmu.lbr_nr)
> -		intel_pmu_lbr_sched_task(ctx, sched_in);
> +	intel_pmu_pebs_sched_task(ctx, sched_in);
> +	intel_pmu_lbr_sched_task(ctx, sched_in);
>  }
>
>  PMU_FORMAT_ATTR(offcore_rsp, "config1:0-63");
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index c6d23ffe422d..6ee7ebdc8555 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -606,12 +606,6 @@ static inline void intel_pmu_drain_pebs_buffer(void)
>  	x86_pmu.drain_pebs(&regs);
>  }
>
> -void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in)
> -{
> -	if (!sched_in)
> -		intel_pmu_drain_pebs_buffer();
> -}
> -
>  /*
>   * PEBS
>   */
> @@ -816,6 +810,14 @@ static inline bool pebs_needs_sched_cb(struct cpu_hw_events *cpuc)
>  	return cpuc->n_pebs && (cpuc->n_pebs == cpuc->n_large_pebs);
>  }
>
> +void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +
> +	if (!sched_in && pebs_needs_sched_cb(cpuc))
> +		intel_pmu_drain_pebs_buffer();
> +}
> +
>  static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
>  {
>  	struct debug_store *ds = cpuc->ds;
> diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
> index eb261656a320..955457a30197 100644
> --- a/arch/x86/events/intel/lbr.c
> +++ b/arch/x86/events/intel/lbr.c
> @@ -380,8 +380,12 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
>
>  void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
>  {
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>  	struct x86_perf_task_context *task_ctx;
>
> +	if (!cpuc->lbr_users)
> +		return;
> +
>  	/*
>  	 * If LBR callstack feature is enabled and the stack was saved when
>  	 * the task was scheduled out, restore the stack. Otherwise flush