From: Nageswara Sastry <rnsastry@linux.ibm.com>
To: kajoljain <kjain@linux.ibm.com>,
mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org,
peterz@infradead.org, dan.j.williams@intel.com,
ira.weiny@intel.com, vishal.l.verma@intel.com
Cc: santosh@fossix.org, maddy@linux.ibm.com,
aneesh.kumar@linux.ibm.com, atrajeev@linux.vnet.ibm.com,
vaibhav@linux.ibm.com, tglx@linutronix.de
Subject: Re: [PATCH v6 0/4] Add perf interface to expose nvdimm
Date: Fri, 25 Feb 2022 16:41:22 +0530 [thread overview]
Message-ID: <d7945e63-4cd6-1947-ed9f-a81203226c47@linux.ibm.com> (raw)
In-Reply-To: <ea6bc468-c7ae-c844-5111-8f0dc3207f89@linux.ibm.com>
On 25/02/22 12:08 pm, kajoljain wrote:
>
>
> On 2/25/22 11:25, Nageswara Sastry wrote:
>>
>>
>> On 17/02/22 10:03 pm, Kajol Jain wrote:
>>> This patchset adds performance stats reporting support for nvdimm.
>>> The added interface includes support for pmu register/unregister
>>> functions. A structure called nvdimm_pmu is added, to be used for
>>> adding arch/platform specific data such as the cpumask, nvdimm device
>>> pointer and pmu event functions like event_init/add/read/del.
>>> Users can access the perf events exposed via the pmu with the
>>> standard perf tool.
>>>
>>> The interface also defines the supported event list and the config
>>> fields for the event attributes and their corresponding bit values,
>>> which are exported via sysfs. Patch 3 exposes IBM pseries platform
>>> nmem* device performance stats using this interface.
>>>
>>> Result from a power9 pseries lpar with 2 nvdimm devices:
>>>
>>> Ex: List all event by perf list
>>>
>>> command:# perf list nmem
>>>
>>> nmem0/cache_rh_cnt/ [Kernel PMU event]
>>> nmem0/cache_wh_cnt/ [Kernel PMU event]
>>> nmem0/cri_res_util/ [Kernel PMU event]
>>> nmem0/ctl_res_cnt/ [Kernel PMU event]
>>> nmem0/ctl_res_tm/ [Kernel PMU event]
>>> nmem0/fast_w_cnt/ [Kernel PMU event]
>>> nmem0/host_l_cnt/ [Kernel PMU event]
>>> nmem0/host_l_dur/ [Kernel PMU event]
>>> nmem0/host_s_cnt/ [Kernel PMU event]
>>> nmem0/host_s_dur/ [Kernel PMU event]
>>> nmem0/med_r_cnt/ [Kernel PMU event]
>>> nmem0/med_r_dur/ [Kernel PMU event]
>>> nmem0/med_w_cnt/ [Kernel PMU event]
>>> nmem0/med_w_dur/ [Kernel PMU event]
>>> nmem0/mem_life/ [Kernel PMU event]
>>> nmem0/poweron_secs/ [Kernel PMU event]
>>> ...
>>> nmem1/mem_life/ [Kernel PMU event]
>>> nmem1/poweron_secs/ [Kernel PMU event]
>>>
>>> Patch1:
>>> Introduces the nvdimm_pmu structure.
>>> Patch2:
>>> Adds a common interface for arch/platform specific data, including
>>> the nvdimm device pointer and pmu data, along with the pmu event
>>> functions. It also defines the supported event list, adds attribute
>>> groups for format, events and cpumask, and adds code for cpu hotplug
>>> support.
>>> Patch3:
>>> Adds code in arch/powerpc/platform/pseries/papr_scm.c to expose
>>> nmem* pmus. It fills in the nvdimm_pmu structure with the pmu name,
>>> capabilities, cpumask and event functions, and then registers the
>>> pmu by adding callbacks to register_nvdimm_pmu.
>>> Patch4:
>>> Sysfs documentation patch.
>>>
>>> Changelog
>>
>> Tested these patches with the automated tests at
>> avocado-misc-tests/perf/perf_nmem.py
>> URL:
>> https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/perf/perf_nmem.py
>>
>>
>> 1. On systems where the target node id and the online node id differ,
>> 'cpumask' shows no value and those tests failed.
>>
>> Example:
>> Log from dmesg
>> ...
>> papr_scm ibm,persistent-memory:ibm,pmemory@44100003: Region registered
>> with target node 1 and online node 0
>> ...
>
> Hi Nageswara Sastry,
>    Thanks for testing the patch set. Yes, you are right: this issue can
> occur when the target node id and the online node id differ, which
> happens when the target node is not online. Thanks for pointing it out.
>
> The function dev_to_node returns the node id for a given nvdimm device,
> and that node can be offline in some scenarios. In that case we should
> use the numa node id returned by numa_map_to_online_node instead; when
> the given node is offline, that function looks up the closest online
> node and returns its node id.
>
> Can you try the change below and see whether you still hit this
> issue? Please let me know.
>
> diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
> index bdf2620db461..4dd513d7c029 100644
> --- a/arch/powerpc/platforms/pseries/papr_scm.c
> +++ b/arch/powerpc/platforms/pseries/papr_scm.c
> @@ -536,7 +536,7 @@ static void papr_scm_pmu_register(struct papr_scm_priv *p)
>  				PERF_PMU_CAP_NO_EXCLUDE;
>  
>  	/*updating the cpumask variable */
> -	nodeid = dev_to_node(&p->pdev->dev);
> +	nodeid = numa_map_to_online_node(dev_to_node(&p->pdev->dev));
>  	nd_pmu->arch_cpumask = *cpumask_of_node(nodeid);
>
> Thanks,
> Kajol Jain
>
With the above patch all the tests pass on the system where the
target id and online id differ. Here is the result:
(1/9) perf_nmem.py:perfNMEM.test_pmu_register_dmesg: PASS (3.47 s)
(2/9) perf_nmem.py:perfNMEM.test_sysfs: PASS (1.15 s)
(3/9) perf_nmem.py:perfNMEM.test_pmu_count: PASS (1.08 s)
(4/9) perf_nmem.py:perfNMEM.test_all_events: PASS (18.15 s)
(5/9) perf_nmem.py:perfNMEM.test_all_group_events: PASS (2.22 s)
(6/9) perf_nmem.py:perfNMEM.test_mixed_events: CANCEL: With single PMU
mixed events test is not possible. (1.18 s)
(7/9) perf_nmem.py:perfNMEM.test_pmu_cpumask: PASS (1.12 s)
(8/9) perf_nmem.py:perfNMEM.test_cpumask: PASS (1.17 s)
(9/9) perf_nmem.py:perfNMEM.test_cpumask_cpu_off: PASS (1.81 s)
Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
--
Thanks and Regards
R.Nageswara Sastry
Thread overview: 14+ messages
2022-02-17 16:33 [PATCH v6 0/4] Add perf interface to expose nvdimm Kajol Jain
2022-02-17 16:33 ` [PATCH v6 1/4] drivers/nvdimm: Add nvdimm pmu structure Kajol Jain
2022-02-17 16:33 ` [PATCH v6 2/4] drivers/nvdimm: Add perf interface to expose nvdimm performance stats Kajol Jain
2022-02-17 16:33 ` [PATCH v6 3/4] powerpc/papr_scm: Add perf interface support Kajol Jain
2022-02-17 16:33 ` [PATCH v6 4/4] docs: ABI: sysfs-bus-nvdimm: Document sysfs event format entries for nvdimm pmu Kajol Jain
2022-02-18 18:06 ` [PATCH v6 0/4] Add perf interface to expose nvdimm Dan Williams
2022-02-23 19:07 ` Dan Williams
2022-02-23 21:17 ` Dan Williams
2022-02-25 5:55 ` Nageswara Sastry
2022-02-25 6:38 ` kajoljain
2022-02-25 7:47 ` Aneesh Kumar K V
2022-02-25 8:39 ` kajoljain
2022-02-25 11:11 ` Nageswara Sastry [this message]
2022-02-25 11:23 ` kajoljain