linux-perf-users.vger.kernel.org archive mirror
From: Yicong Yang <yangyicong@huawei.com>
To: Jonathan Cameron <Jonathan.Cameron@Huawei.com>,
	Jie Zhan <zhanjie9@hisilicon.com>
Cc: <yangyicong@hisilicon.com>, <acme@kernel.org>,
	<mark.rutland@arm.com>, <peterz@infradead.org>,
	<mingo@redhat.com>, <james.clark@arm.com>,
	<alexander.shishkin@linux.intel.com>,
	<linux-perf-users@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <21cnbao@gmail.com>,
	<tim.c.chen@intel.com>, <prime.zeng@hisilicon.com>,
	<shenyang39@huawei.com>, <linuxarm@huawei.com>
Subject: Re: [PATCH] perf stat: Support per-cluster aggregation
Date: Mon, 27 Mar 2023 14:20:57 +0800
Message-ID: <4c9b7281-a4d3-5327-bc27-173af69219a4@huawei.com>
In-Reply-To: <20230324123031.0000013c@Huawei.com>

Hi Jie and Jonathan,

On 2023/3/24 20:30, Jonathan Cameron wrote:
> On Fri, 24 Mar 2023 12:24:22 +0000
> Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> 
>> On Fri, 24 Mar 2023 10:34:33 +0800
>> Jie Zhan <zhanjie9@hisilicon.com> wrote:
>>
>>> On 13/03/2023 16:59, Yicong Yang wrote:  
>>>> From: Yicong Yang <yangyicong@hisilicon.com>
>>>>
>>>> Some platforms have a 'cluster' topology where CPUs in a cluster share
>>>> resources such as the L3 cache tag (on HiSilicon Kunpeng SoC) or the L2
>>>> cache (on Intel Jacobsville). Parsing and building the cluster topology
>>>> has been supported since [1].
>>>>
>>>> perf stat already supports aggregation for other topology levels such as
>>>> die or socket. It is useful to aggregate per-cluster as well, to find
>>>> problems like L3T bandwidth contention or imbalance.
>>>>
>>>> This patch adds a "--per-cluster" option for per-cluster aggregation and
>>>> updates the documentation and the related test. The output looks like:
>>>>
>>>> [root@localhost tmp]# perf stat -a -e LLC-load --per-cluster -- sleep 5
>>>>
>>>>   Performance counter stats for 'system wide':
>>>>
>>>> S56-D0-CLS158     4      1,321,521,570      LLC-load
>>>> S56-D0-CLS594     4        794,211,453      LLC-load
>>>> S56-D0-CLS1030    4             41,623      LLC-load
>>>> S56-D0-CLS1466    4             41,646      LLC-load
>>>> S56-D0-CLS1902    4             16,863      LLC-load
>>>> S56-D0-CLS2338    4             15,721      LLC-load
>>>> S56-D0-CLS2774    4             22,671      LLC-load
>>>> [...]
>>>>
>>>> [1] commit c5e22feffdd7 ("topology: Represent clusters of CPUs within a die")
>>>>
>>>> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>    
>>>
>>> An end user may have to check sysfs to figure out which CPUs those
>>> cluster IDs correspond to.
>>>
>>> Is there a better method to show the mapping between CPUs and cluster IDs?
>>
>> The cluster code is capable of using the ACPI_PPTT_ACPI_PROCESSOR_ID field
>> if it is valid for the cluster level of the PPTT.
>>
>> The numbers in the example above look like offsets into the PPTT table,
>> so I think the PPTT table is missing that information.
>>

Yes it is. The PPTT doesn't give a valid ID on my machine, for the cluster
level as well as the other topology levels. It's not a problem with this patch.
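
As a side note, on kernels that expose the cluster topology (since the commit
referenced as [1] above), the mapping from a CPU to its cluster can be read
from sysfs. A minimal sketch of how a user could check it (the values below
are illustrative, not taken from a real machine; the cluster_id here should
match the ID that --per-cluster prints as "CLSxxx"):

  # cluster ID that CPU0 belongs to
  $ cat /sys/devices/system/cpu/cpu0/topology/cluster_id
  158
  # CPUs sharing that cluster
  $ cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list
  0-3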

>> Whilst it's not a great description anyway (it's just an index), the UID
>> that would be in there can convey more info on which cluster this is.
>>
>>
>>>
>>> Perhaps adding a conditional cluster id (when there are clusters) in the 
>>> "--per-core" output may help.  
>>
>> That's an interesting idea.  You'd want to include the other levels
>> if doing that, so whenever you do a --per-xxx it also provides the
>> cluster / die / node / socket etc. as relevant 'above' the level of xxx.
>> The fun part is that node and die can flip, which would make this tricky to do.
> 
> Ignore me on this.  I'd not looked at the patch closely when I wrote
> this.  Clearly a lot of this information is already provided - the
> suggestion was to consider adding cluster to that mix, which makes
> sense to me.
> 

In an early version of this patch I added the cluster info to the "--per-core"
output as "Sxxx-Dxxx-CLSxxx-Cxxx", but I decided to keep the output as is so as
not to break existing tools/scripts that parse the --per-core output. Maybe we
can add it later if there is a requirement.
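
For reference, with that early version a --per-core line would have looked
roughly like the sketch below (the counts are made up, just to show the
considered "Sxxx-Dxxx-CLSxxx-Cxxx" prefix):

  S56-D0-CLS158-C0    1      661,123,456      LLC-load
  S56-D0-CLS158-C1    1      660,398,114      LLC-load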

Thanks,
Yicong



Thread overview: 10+ messages
2023-03-13  8:59 [PATCH] perf stat: Support per-cluster aggregation Yicong Yang
2023-03-23 13:03 ` Yicong Yang
2023-03-24  2:34 ` Jie Zhan
2023-03-24 12:24   ` Jonathan Cameron
2023-03-24 12:30     ` Jonathan Cameron
2023-03-27  6:20       ` Yicong Yang [this message]
2023-03-24 18:05 ` Chen, Tim C
2023-03-27  4:03   ` Yicong Yang
2023-03-29  6:47   ` Namhyung Kim
2023-03-29 12:46     ` Yicong Yang
