From: "Leeder, Neil" <nleeder@codeaurora.org>
To: Will Deacon <will.deacon@arm.com>, Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Mark Langsdorf <mlangsdo@redhat.com>,
	Mark Salter <msalter@redhat.com>, Jon Masters <jcm@redhat.com>,
	Timur Tabi <timur@codeaurora.org>,
	cov@codeaurora.org, nleeder@codeaurora.org
Subject: Re: [PATCH v7] soc: qcom: add l2 cache perf events driver
Date: Fri, 11 Nov 2016 16:52:35 -0500
Message-ID: <e917b871-d6ee-d52b-aedc-b3ee75aed05f@codeaurora.org>
In-Reply-To: <20161109181652.GK17771@arm.com>

Hi Will,

On 11/9/2016 1:16 PM, Will Deacon wrote:
> On Wed, Nov 09, 2016 at 05:54:13PM +0000, Mark Rutland wrote:
>> On Fri, Oct 28, 2016 at 04:50:13PM -0400, Neil Leeder wrote:
>>> +	struct perf_event *events[MAX_L2_CTRS];
>>> +	struct l2cache_pmu *l2cache_pmu;
>>> +	DECLARE_BITMAP(used_counters, MAX_L2_CTRS);
>>> +	DECLARE_BITMAP(used_groups, L2_EVT_GROUP_MAX + 1);
>>> +	int group_to_counter[L2_EVT_GROUP_MAX + 1];
>>> +	int irq;
>>> +	/* The CPU that is used for collecting events on this cluster */
>>> +	int on_cpu;
>>> +	/* All the CPUs associated with this cluster */
>>> +	cpumask_t cluster_cpus;
>>
>> I'm still uncertain about aggregating all cluster PMUs into a larger
>> PMU, given the cluster PMUs are logically independent (at least in terms
>> of the programming model).
>>
>> However, from what I understand the x86 uncore PMU drivers aggregate
>> symmetric instances of uncore PMUs (and also aggregate across packages
>> to the same logical PMU).
>>
>> Whatever we do, it would be nice for the uncore drivers to align on a
>> common behaviour (and I think we're currently going the opposite route
>> with Cavium's uncore PMU). Will, thoughts?
>
> I'm not a big fan of aggregating this stuff. Ultimately, the user in the
> driving seat of perf is going to need some knowledge about the topology of
> the system in order to perform sensible profiling using an uncore PMU.
> If the kernel tries to present a single, unified PMU then we paint ourselves
> into a corner when the hardware isn't as symmetric as we want it to be
> (big/little on the CPU side is the extreme example of this). If we want
> to be consistent, then exposing each uncore unit as a separate PMU is
> the way to go. That doesn't mean we can't aggregate the components of a
> distributed PMU (e.g. the CCN or the SMMU), but we don't want to aggregate
> at the programming interface/IP block level.
>
> We could consider exposing some topology information in sysfs if that's
> seen as an issue with the non-aggregated case.
>
> Will
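
For what it's worth, the separate-PMU-per-cluster route described above,
with the topology advertised through a per-PMU sysfs "cpumask" file, would
roughly follow the pattern below. This is only a minimal sketch, not the
submitted driver: the struct, function and "l2cache_sketch_%d" names are
made up, and the event_init/add/del/start/stop/read callbacks are omitted.

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/perf_event.h>
#include <linux/slab.h>

/* One instance per L2 cluster; each cluster is registered as its own PMU. */
struct cluster_pmu_sketch {
	struct pmu pmu;
	cpumask_t cluster_cpus;		/* CPUs sharing this cluster's L2 */
};

/* "cpumask" attribute so userspace can see which CPUs a given PMU covers. */
static ssize_t cpumask_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct pmu *pmu = dev_get_drvdata(dev);
	struct cluster_pmu_sketch *cp =
		container_of(pmu, struct cluster_pmu_sketch, pmu);

	return cpumap_print_to_pagebuf(true, buf, &cp->cluster_cpus);
}
static DEVICE_ATTR_RO(cpumask);

static struct attribute *cluster_pmu_cpumask_attrs[] = {
	&dev_attr_cpumask.attr,
	NULL,
};

static const struct attribute_group cluster_pmu_cpumask_group = {
	.attrs = cluster_pmu_cpumask_attrs,
};

static const struct attribute_group *cluster_pmu_attr_groups[] = {
	&cluster_pmu_cpumask_group,
	NULL,
};

/* Register one PMU per cluster; only the registration shape is shown. */
static int register_cluster_pmu(struct cluster_pmu_sketch *cp, int idx)
{
	char *name = kasprintf(GFP_KERNEL, "l2cache_sketch_%d", idx);

	if (!name)
		return -ENOMEM;

	cp->pmu.task_ctx_nr = perf_invalid_context;	/* system-wide only */
	cp->pmu.attr_groups = cluster_pmu_attr_groups;

	return perf_pmu_register(&cp->pmu, name, -1);
}

As far as I know the perf tool already consults a PMU's cpumask file when
opening system-wide events, so userspace could discover the per-cluster
topology that way without extra plumbing.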

So is there a use-case for individual uncore PMUs when they can't be 
used in task mode or per-cpu?

The main (only?) use will be in system mode, in which case surely it 
makes sense to provide a single aggregated count?

With individual PMUs exposed, there will potentially be dozens of nodes 
for userspace to collect from, which would make perf command-line usage 
unwieldy at best.
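
For example (the PMU and event names here are hypothetical, just to show 
the shape of the command line), counting a single L2 event system-wide 
would mean naming every per-cluster PMU:

  perf stat -a -e l2cache_0/config=0x400/,l2cache_1/config=0x400/ sleep 1

and so on for each additional cluster, whereas an aggregated PMU keeps it 
to a single event specification:

  perf stat -a -e l2cache/config=0x400/ sleep 1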

Neil
-- 
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm 
Technologies Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.

Thread overview: 7+ messages
2016-10-28 20:50 [PATCH v7] soc: qcom: add l2 cache perf events driver Neil Leeder
2016-11-09 17:54 ` Mark Rutland
2016-11-09 18:16   ` Will Deacon
2016-11-11 21:52     ` Leeder, Neil [this message]
2016-11-16 10:37       ` Mark Rutland
2016-11-10 23:25   ` Leeder, Neil
2016-11-11 11:50     ` Mark Rutland
