From: Jiri Olsa <jolsa@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@kernel.org>, lkml <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@kernel.org>, Andi Kleen <andi@firstfloor.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Kan Liang <kan.liang@intel.com>
Subject: Re: [PATCH] perf/x86/intel: Add proper condition to run sched_task callbacks
Date: Wed, 19 Jul 2017 00:11:38 +0200
Message-ID: <20170718221138.GC13553@krava>
In-Reply-To: <20170718123734.a7hgi3wwduuejz7f@hirez.programming.kicks-ass.net>

On Tue, Jul 18, 2017 at 02:37:34PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 18, 2017 at 11:29:32AM +0200, Jiri Olsa wrote:
> 
> > because we have two places using the same callback:
> >   - PEBS drain for free-running counters
> >   - LBR save/restore
> > 
> > both of them are called from intel_pmu_sched_task
> > 
> > so let's say the PEBS drain sets up the callback for the event,
> > but in the callback itself (intel_pmu_sched_task) we will
> > also run the code for LBR save/restore, which we did not
> > ask for, and the code in intel_pmu_sched_task does not
> > check for that
> 
> Ah fair enough; Changelog confused me.
> 
> > I'm adding conditions to recognize the work that needs
> > to be done in the callback
> > 
> > another option might be to add support for more x86_pmu::sched_task
> > callbacks, which might be cleaner
> 
> Right; either that or pull the condition into the functions themselves
> to create less churn. Something like so I suppose...

ok, will send a new version

jirka

> 
> 
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index aa62437d1aa1..2d533d4c0e2c 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3265,10 +3265,8 @@ static void intel_pmu_cpu_dying(int cpu)
>  static void intel_pmu_sched_task(struct perf_event_context *ctx,
>  				 bool sched_in)
>  {
> -	if (x86_pmu.pebs_active)
> -		intel_pmu_pebs_sched_task(ctx, sched_in);
> -	if (x86_pmu.lbr_nr)
> -		intel_pmu_lbr_sched_task(ctx, sched_in);
> +	intel_pmu_pebs_sched_task(ctx, sched_in);
> +	intel_pmu_lbr_sched_task(ctx, sched_in);
>  }
>  
>  PMU_FORMAT_ATTR(offcore_rsp, "config1:0-63");
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index c6d23ffe422d..6ee7ebdc8555 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -606,12 +606,6 @@ static inline void intel_pmu_drain_pebs_buffer(void)
>  	x86_pmu.drain_pebs(&regs);
>  }
>  
> -void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in)
> -{
> -	if (!sched_in)
> -		intel_pmu_drain_pebs_buffer();
> -}
> -
>  /*
>   * PEBS
>   */
> @@ -816,6 +810,14 @@ static inline bool pebs_needs_sched_cb(struct cpu_hw_events *cpuc)
>  	return cpuc->n_pebs && (cpuc->n_pebs == cpuc->n_large_pebs);
>  }
>  
> +void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +
> +	if (!sched_in && pebs_needs_sched_cb(cpuc))
> +		intel_pmu_drain_pebs_buffer();
> +}
> +
>  static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
>  {
>  	struct debug_store *ds = cpuc->ds;
> diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
> index eb261656a320..955457a30197 100644
> --- a/arch/x86/events/intel/lbr.c
> +++ b/arch/x86/events/intel/lbr.c
> @@ -380,8 +380,12 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
>  
>  void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
>  {
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>  	struct x86_perf_task_context *task_ctx;
>  
> +	if (!cpuc->lbr_users)
> +		return;
> +
>  	/*
>  	 * If LBR callstack feature is enabled and the stack was saved when
>  	 * the task was scheduled out, restore the stack. Otherwise flush

Thread overview: 7+ messages
2017-07-17 15:01 [PATCH] perf/x86/intel: Add proper condition to run sched_task callbacks Jiri Olsa
2017-07-18  9:14 ` Peter Zijlstra
2017-07-18  9:29   ` Jiri Olsa
2017-07-18 12:37     ` Peter Zijlstra
2017-07-18 22:11       ` Jiri Olsa [this message]
2017-07-19  7:52       ` Jiri Olsa
2017-07-21  9:38         ` [tip:perf/urgent] " tip-bot for Jiri Olsa
