From: Robin Murphy <robin.murphy@arm.com>
To: Andrew Murray <andrew.murray@arm.com>,
	Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Cc: "lorenzo.pieralisi@arm.com" <lorenzo.pieralisi@arm.com>,
	"mark.rutland@arm.com" <mark.rutland@arm.com>,
	"vkilari@codeaurora.org" <vkilari@codeaurora.org>,
	"neil.m.leeder@gmail.com" <neil.m.leeder@gmail.com>,
	"jean-philippe.brucker@arm.com" <jean-philippe.brucker@arm.com>,
	"pabba@codeaurora.org" <pabba@codeaurora.org>,
	John Garry <john.garry@huawei.com>,
	"will.deacon@arm.com" <will.deacon@arm.com>,
	"rruigrok@codeaurora.org" <rruigrok@codeaurora.org>,
	Linuxarm <linuxarm@huawei.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
	"Guohanjun (Hanjun Guo)" <guohanjun@huawei.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v5 2/4] perf: add arm64 smmuv3 pmu driver
Date: Thu, 24 Jan 2019 18:25:19 +0000	[thread overview]
Message-ID: <f83054eb-3fe3-0041-7a9f-6f7e58cea5c8@arm.com> (raw)
In-Reply-To: <20190123121406.GH8120@e119886-lin.cambridge.arm.com>

On 23/01/2019 12:14, Andrew Murray wrote:
[...]
>>>> +static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
>>>> +					      u32 idx, u64 value)
>>>> +{
>>>> +	if (smmu_pmu->counter_mask & BIT(32))
>>>> +		writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
>>>> +	else
>>>> +		writel(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
>>>
>>> The arm64 IO macros use __force u32 and so it's probably OK to provide a
>>> 64-bit value to writel. But you could use something like lower_32_bits
>>> for clarity.
>>
>> Yes, the macro uses __force u32. I will change it to make it clearer, though.

To be pedantic, the first cast which the value actually undergoes is to 
(__u32) ;)

Ultimately, __raw_writel() takes a u32, so in that sense it's never a 
problem to pass a u64, since unsigned truncation is well-defined in the 
C standard. The casting involved in the I/O accessors is mostly about 
going to an endian-specific type and back again.
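
For illustration only, the 32-bit path could spell that out with
lower_32_bits(), keeping the macros and fields as defined in this patch:

static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
					      u32 idx, u64 value)
{
	if (smmu_pmu->counter_mask & BIT(32)) {
		writeq(value, smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 8));
	} else {
		/*
		 * The truncation is already well-defined; lower_32_bits()
		 * only makes the intent explicit to the reader.
		 */
		writel(lower_32_bits(value),
		       smmu_pmu->reloc_base + SMMU_PMCG_EVCNTR(idx, 4));
	}
}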

[...]
>>>> +static void smmu_pmu_event_start(struct perf_event *event, int flags)
>>>> +{
>>>> +	struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
>>>> +	struct hw_perf_event *hwc = &event->hw;
>>>> +	int idx = hwc->idx;
>>>> +	u32 filter_span, filter_sid;
>>>> +	u32 evtyper;
>>>> +
>>>> +	hwc->state = 0;
>>>> +
>>>> +	smmu_pmu_set_period(smmu_pmu, hwc);
>>>> +
>>>> +	smmu_pmu_get_event_filter(event, &filter_span, &filter_sid);
>>>> +
>>>> +	evtyper = get_event(event) |
>>>> +		  filter_span << SMMU_PMCG_SID_SPAN_SHIFT;
>>>> +
>>>> +	smmu_pmu_set_evtyper(smmu_pmu, idx, evtyper);
>>>> +	smmu_pmu_set_smr(smmu_pmu, idx, filter_sid);
>>>> +	smmu_pmu_interrupt_enable(smmu_pmu, idx);
>>>> +	smmu_pmu_counter_enable(smmu_pmu, idx);
>>>> +}
>>>> +
>>>> +static void smmu_pmu_event_stop(struct perf_event *event, int flags)
>>>> +{
>>>> +	struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
>>>> +	struct hw_perf_event *hwc = &event->hw;
>>>> +	int idx = hwc->idx;
>>>> +
>>>> +	if (hwc->state & PERF_HES_STOPPED)
>>>> +		return;
>>>> +
>>>> +	smmu_pmu_counter_disable(smmu_pmu, idx);
>>>
>>> Is it intentional not to call smmu_pmu_interrupt_disable here?
>>
>> Yes, it is. An earlier patch had the interrupt toggling, and Robin pointed out that
>> it is not really needed since the counters are stopped anyway, and explicit disabling
>> would not solve the spurious interrupt case either.
>>
> 
> Ah apologies for not seeing that in earlier reviews.

Hmm, I didn't exactly mean "keep enabling it every time an event is 
restarted and never disable it anywhere", though. I was thinking more 
along the lines of enabling in event_add() and disabling in event_del() 
(i.e. effectively tying it to allocation and deallocation of the counter).
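
Roughly what I have in mind, as a sketch only (the idx allocation helper and
the events[]/used_counters bookkeeping are assumed to follow the rest of the
patch; error paths and perf_event_update_userpage() are elided):

static int smmu_pmu_event_add(struct perf_event *event, int flags)
{
	struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx;

	idx = smmu_pmu_get_event_idx(smmu_pmu);	/* assumed allocator */
	if (idx < 0)
		return idx;

	hwc->idx = idx;
	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
	smmu_pmu->events[idx] = event;

	/* Enable the overflow IRQ for as long as the counter is allocated... */
	smmu_pmu_interrupt_enable(smmu_pmu, idx);

	if (flags & PERF_EF_START)
		smmu_pmu_event_start(event, flags);

	return 0;
}

static void smmu_pmu_event_del(struct perf_event *event, int flags)
{
	struct smmu_pmu *smmu_pmu = to_smmu_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;
	int idx = hwc->idx;

	smmu_pmu_event_stop(event, flags | PERF_EF_UPDATE);
	/* ...and disable it only when the counter is freed again. */
	smmu_pmu_interrupt_disable(smmu_pmu, idx);
	smmu_pmu->events[idx] = NULL;
	clear_bit(idx, smmu_pmu->used_counters);	/* assumed bookkeeping */
}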

Robin.

Thread overview: 46+ messages
2018-11-30 15:47 [PATCH v5 0/4] arm64 SMMUv3 PMU driver with IORT support Shameer Kolothum
2018-11-30 15:47 ` [PATCH v5 1/4] acpi: arm64: add iort support for PMCG Shameer Kolothum
2019-01-24 17:31   ` Robin Murphy
2018-11-30 15:47 ` [PATCH v5 2/4] perf: add arm64 smmuv3 pmu driver Shameer Kolothum
2019-01-22 16:23   ` Andrew Murray
2019-01-23 11:02     ` Shameerali Kolothum Thodi
2019-01-23 12:14       ` Andrew Murray
2019-01-24 18:25         ` Robin Murphy [this message]
2019-01-25  9:22           ` Shameerali Kolothum Thodi
2019-01-25 15:13   ` Robin Murphy
2019-01-28  9:10     ` Shameerali Kolothum Thodi
2018-11-30 15:47 ` [PATCH v5 3/4] perf/smmuv3: Add MSI irq support Shameer Kolothum
2019-01-25 16:12   ` Robin Murphy
2018-11-30 15:47 ` [PATCH v5 4/4] perf/smmuv3_pmu: Enable HiSilicon Erratum 162001800 quirk Shameer Kolothum
2019-01-25 18:32   ` Robin Murphy
2019-01-28  9:24     ` Shameerali Kolothum Thodi
2019-01-22 14:38 ` [PATCH v5 0/4] arm64 SMMUv3 PMU driver with IORT support Shameerali Kolothum Thodi
