From: Peter Zijlstra <peterz@infradead.org>
To: kan.liang@linux.intel.com
Cc: acme@kernel.org, mingo@redhat.com, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, jolsa@kernel.org, eranian@google.com,
	alexander.shishkin@linux.intel.com, ak@linux.intel.com
Subject: Re: [RESEND PATCH V3 3/8] perf/x86/intel: Support hardware TopDown metrics
Date: Wed, 28 Aug 2019 17:19:21 +0200
Message-ID: <20190828151921.GD17205@worktop.programming.kicks-ass.net>
In-Reply-To: <20190826144740.10163-4-kan.liang@linux.intel.com>

On Mon, Aug 26, 2019 at 07:47:35AM -0700, kan.liang@linux.intel.com wrote:
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 54534ff00940..1ae23db5c2d7 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -76,6 +76,8 @@ u64 x86_perf_event_update(struct perf_event *event)
>  	if (idx == INTEL_PMC_IDX_FIXED_BTS)
>  		return 0;
>  
> +	if (is_topdown_count(event) && x86_pmu.update_topdown_event)
> +		return x86_pmu.update_topdown_event(event);

If is_topdown_count() is true, it had better bloody well have that
function. But I really hate this.
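
Concretely, if is_topdown_count() implies the method exists, the whole
thing reduces to something like this (sketch only, not suggesting this
exact code):

	/*
	 * Sketch: assuming a topdown-counted event can only exist on a
	 * PMU that actually provides ->update_topdown_event(), the
	 * pointer test is dead weight and the dispatch is just:
	 */
	if (is_topdown_count(event))
		return x86_pmu.update_topdown_event(event);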

>  	/*
>  	 * Careful: an NMI might modify the previous event value.
>  	 *
> @@ -1003,6 +1005,10 @@ static int collect_events(struct cpu_hw_events *cpuc, struct perf_event *leader,
>  
>  	max_count = x86_pmu.num_counters + x86_pmu.num_counters_fixed;
>  
> +	/* There are 4 TopDown metrics events. */
> +	if (x86_pmu.intel_cap.perf_metrics)
> +		max_count += 4;

I'm thinking this is wrong; this unconditionally allows collecting 4
extra events on every part that has this metrics crud on.

>  	/* current number of events already accepted */
>  	n = cpuc->n_events;
>  
> @@ -1184,6 +1190,10 @@ int x86_perf_event_set_period(struct perf_event *event)
>  	if (idx == INTEL_PMC_IDX_FIXED_BTS)
>  		return 0;
>  
> +	if (unlikely(is_topdown_count(event)) &&
> +	    x86_pmu.set_topdown_event_period)
> +		return x86_pmu.set_topdown_event_period(event);

Same as with the other method; and I similarly hate it.

> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index f4d6335a18e2..616313d7f3d7 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c

> +static int icl_set_topdown_event_period(struct perf_event *event)
> +{
> +	struct hw_perf_event *hwc = &event->hw;
> +	s64 left = local64_read(&hwc->period_left);
> +
> +	/*
> +	 * Clear PERF_METRICS and Fixed counter 3 in initialization.
> +	 * After that, both MSRs will be cleared for each read.
> +	 * Don't need to clear them again.
> +	 */
> +	if (left == x86_pmu.max_period) {
> +		wrmsrl(MSR_CORE_PERF_FIXED_CTR3, 0);
> +		wrmsrl(MSR_PERF_METRICS, 0);
> +		local64_set(&hwc->period_left, 0);
> +	}

This really doesn't make sense to me; if you set FIXED_CTR3 := 0, you'll
never trigger the overflow there; this then seems to suggest the actual
counter value is irrelevant. Therefore you don't actually need this.

> +
> +	perf_event_update_userpage(event);
> +
> +	return 0;
> +}
> +
> +static u64 icl_get_metrics_event_value(u64 metric, u64 slots, int idx)
> +{
> +	u32 val;
> +
> +	/*
> +	 * The metric is reported as an 8bit integer percentage

s/percentage/fraction/

percentage is 1/100, 8bit is 256.

> +	 * summing up to 0xff.
> +	 * slots-in-metric = (Metric / 0xff) * slots
> +	 */
> +	val = (metric >> ((idx - INTEL_PMC_IDX_FIXED_METRIC_BASE) * 8)) & 0xff;
> +	return  mul_u64_u32_div(slots, val, 0xff);

This really utterly blows. I'd have sacrificed some range to be able to
do a power-of-two fraction here; that is, use 8 bits with a range of
[0,128].

But also, x86_64 seems to lack a sane implementation of that function,
and it currently compiles into utter crap (it can be 2 instructions).
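
Something along these lines would do (sketch, untested; see the math64
patch referenced further down this thread):

	static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 div)
	{
		u64 q;

		/*
		 * mulq computes the 128-bit product rdx:rax = a * mul,
		 * divq then divides rdx:rax by div and leaves the
		 * quotient in rax.  Assumes the quotient fits in 64bit,
		 * which holds here because mul <= div (val <= 0xff).
		 */
		asm ("mulq %2; divq %3"
		     : "=a" (q)
		     : "a" (a), "rm" ((u64)mul), "rm" ((u64)div)
		     : "rdx");

		return q;
	}

And with a power-of-two fraction the division goes away entirely:
slots-in-metric = (slots * val) >> 7 for val in [0,128].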

> +}

> +/*
> + * Update all active Topdown events.
> + * PMU has to be disabled before calling this function.

Forgets to explain why...

> + */


Thread overview: 39+ messages
2019-08-26 14:47 [RESEND PATCH V3 0/8] TopDown metrics support for Icelake kan.liang
2019-08-26 14:47 ` [RESEND PATCH V3 1/8] perf/x86/intel: Set correct mask for TOPDOWN.SLOTS kan.liang
2019-08-28  7:48   ` Peter Zijlstra
2019-08-26 14:47 ` [RESEND PATCH V3 2/8] perf/x86/intel: Basic support for metrics counters kan.liang
2019-08-28  7:48   ` Peter Zijlstra
2019-08-28  7:52   ` Peter Zijlstra
2019-08-28 13:59     ` Liang, Kan
2019-08-28  8:44   ` Peter Zijlstra
2019-08-28  9:02     ` Peter Zijlstra
2019-08-28  9:37       ` Peter Zijlstra
2019-08-28 13:51       ` Liang, Kan
2019-08-28  8:52   ` Peter Zijlstra
2019-08-26 14:47 ` [RESEND PATCH V3 3/8] perf/x86/intel: Support hardware TopDown metrics kan.liang
2019-08-28 15:02   ` Peter Zijlstra
2019-08-28 19:04     ` Andi Kleen
2019-08-31  9:19       ` Peter Zijlstra
2019-09-09 13:40         ` Liang, Kan
2019-08-28 19:35     ` Liang, Kan
2019-08-28 15:19   ` Peter Zijlstra [this message]
2019-08-28 16:11     ` [PATCH] x86/math64: Provide a sane mul_u64_u32_div() implementation for x86_64 Peter Zijlstra
2019-08-29  9:30       ` Peter Zijlstra
2019-08-28 16:17     ` [RESEND PATCH V3 3/8] perf/x86/intel: Support hardware TopDown metrics Andi Kleen
2019-08-28 16:28       ` Peter Zijlstra
2019-08-29  3:11         ` Andi Kleen
2019-08-29  9:17           ` Peter Zijlstra
2019-08-29 13:31     ` Liang, Kan
2019-08-29 13:52       ` Peter Zijlstra
2019-08-29 16:56         ` Liang, Kan
2019-08-31  9:18           ` Peter Zijlstra
2019-08-30 23:18   ` Stephane Eranian
2019-08-31  0:31     ` Andi Kleen
2019-08-31  9:13       ` Stephane Eranian
2019-08-31  9:29         ` Peter Zijlstra
2019-08-31 17:53         ` Andi Kleen
2019-08-26 14:47 ` [RESEND PATCH V3 4/8] perf/x86/intel: Support per thread RDPMC " kan.liang
2019-08-26 14:47 ` [RESEND PATCH V3 5/8] perf/x86/intel: Export TopDown events for Icelake kan.liang
2019-08-26 14:47 ` [RESEND PATCH V3 6/8] perf/x86/intel: Disable sampling read slots and topdown kan.liang
2019-08-26 14:47 ` [RESEND PATCH V3 7/8] perf, tools, stat: Support new per thread TopDown metrics kan.liang
2019-08-26 14:47 ` [RESEND PATCH V3 8/8] perf, tools: Add documentation for topdown metrics kan.liang
