linux-kernel.vger.kernel.org archive mirror
From: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
To: Jordan Crouse <jcrouse@codeaurora.org>
Cc: Will Deacon <will@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Joerg Roedel <joro@8bytes.org>, Rob Clark <robdclark@gmail.com>,
	iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	Akhil P Oommen <akhilpo@codeaurora.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	freedreno@lists.freedesktop.org,
	"Kristian H . Kristensen" <hoegsberg@google.com>,
	dri-devel@lists.freedesktop.org,
	Sharat Masetty <smasetty@codeaurora.org>,
	Jonathan Marek <jonathan@marek.ca>
Subject: Re: [PATCHv5 4/6] drm/msm/a6xx: Add support for using system cache(LLC)
Date: Mon, 28 Sep 2020 17:56:55 +0530	[thread overview]
Message-ID: <800c2108606cb921fef1ffc27569ffb2@codeaurora.org> (raw)
In-Reply-To: <20200923150320.GD31425@jcrouse1-lnx.qualcomm.com>

Hi Jordan,

On 2020-09-23 20:33, Jordan Crouse wrote:
> On Tue, Sep 22, 2020 at 11:48:17AM +0530, Sai Prakash Ranjan wrote:
>> From: Sharat Masetty <smasetty@codeaurora.org>
>> 
>> The last level system cache can be partitioned into 32 different
>> slices, of which the GPU has two slices preallocated. One slice is
>> used for caching GPU buffers and the other slice is used for
>> caching the GPU SMMU pagetables. This patch talks to the core system
>> cache driver to acquire the slice handles, configures the SCIDs for
>> those slices, and activates and deactivates the slices upon GPU
>> power collapse and restore.
>> 
>> Some support from the IOMMU driver is also needed to set the right
>> TCR attributes so that the system cache can be used. The GPU can
>> then override a few cacheability parameters, which it does to change
>> write-allocate to write-no-allocate, as the GPU hardware does not
>> benefit much from it.
>> 
>> DOMAIN_ATTR_SYS_CACHE is another domain level attribute used by the
>> IOMMU driver to set the right attributes to cache the hardware
>> pagetables into the system cache.
>> 
>> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
>> [saiprakash.ranjan: fix to set attr before device attach to iommu and rebase]
>> Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
>> ---
>>  drivers/gpu/drm/msm/adreno/a6xx_gpu.c   | 83 +++++++++++++++++++++++++
>>  drivers/gpu/drm/msm/adreno/a6xx_gpu.h   |  4 ++
>>  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 17 +++++
>>  3 files changed, 104 insertions(+)
>> 
>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>> index 8915882e4444..151190ff62f7 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>> @@ -8,7 +8,9 @@
>>  #include "a6xx_gpu.h"
>>  #include "a6xx_gmu.xml.h"
>> 
>> +#include <linux/bitfield.h>
>>  #include <linux/devfreq.h>
>> +#include <linux/soc/qcom/llcc-qcom.h>
>> 
>>  #define GPU_PAS_ID 13
>> 
>> @@ -1022,6 +1024,79 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
>>  	return IRQ_HANDLED;
>>  }
>> 
>> +static void a6xx_llc_rmw(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 mask, u32 or)
>> +{
>> +	return msm_rmw(a6xx_gpu->llc_mmio + (reg << 2), mask, or);
>> +}
>> +
>> +static void a6xx_llc_write(struct a6xx_gpu *a6xx_gpu, u32 reg, u32 value)
>> +{
>> +	return msm_writel(value, a6xx_gpu->llc_mmio + (reg << 2));
>> +}
>> +
>> +static void a6xx_llc_deactivate(struct a6xx_gpu *a6xx_gpu)
>> +{
>> +	llcc_slice_deactivate(a6xx_gpu->llc_slice);
>> +	llcc_slice_deactivate(a6xx_gpu->htw_llc_slice);
>> +}
>> +
>> +static void a6xx_llc_activate(struct a6xx_gpu *a6xx_gpu)
>> +{
>> +	u32 cntl1_regval = 0;
>> +
>> +	if (IS_ERR(a6xx_gpu->llc_mmio))
>> +		return;
>> +
>> +	if (!llcc_slice_activate(a6xx_gpu->llc_slice)) {
>> +		u32 gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice);
>> +
>> +		gpu_scid &= 0x1f;
>> +		cntl1_regval = (gpu_scid << 0) | (gpu_scid << 5) | (gpu_scid << 10) |
>> +			       (gpu_scid << 15) | (gpu_scid << 20);
>> +	}
>> +
>> +	if (!llcc_slice_activate(a6xx_gpu->htw_llc_slice)) {
>> +		u32 gpuhtw_scid = llcc_get_slice_id(a6xx_gpu->htw_llc_slice);
>> +
>> +		gpuhtw_scid &= 0x1f;
>> +		cntl1_regval |= FIELD_PREP(GENMASK(29, 25), gpuhtw_scid);
>> +	}
>> +
>> +	if (cntl1_regval) {
>> +		/*
>> +		 * Program the slice IDs for the various GPU blocks and GPU MMU
>> +		 * pagetables
>> +		 */
>> +		a6xx_llc_write(a6xx_gpu, REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_1, cntl1_regval);
>> +
>> +		/*
>> +		 * Program cacheability overrides to not allocate cache lines on
>> +		 * a write miss
>> +		 */
>> +		a6xx_llc_rmw(a6xx_gpu, REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_0, 0xF, 0x03);
>> +	}
>> +}
> 
> This code has been around long enough that it pre-dates a650. On a650
> and other MMU-500 targets the htw_llc is configured by the firmware
> and the llc_slice is configured in a different register.
> 
> I don't think we need to pause everything and add support for the
> MMU-500 path, but we do need a way to disallow LLCC on affected
> targets until such time that we can get it fixed up.
> 

Thanks for taking a close look. Does something like the below look OK,
or is something else needed here?

+         /* Until we get LLCC support in place for A650 */
+         if (!(info && info->revn == 650))
+                 a6xx_llc_slices_init(pdev, a6xx_gpu);
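
For a bit more context, here is a rough sketch of how I would expect the
check to sit in a6xx_gpu_init(). The adreno_info(config->rev) lookup and
the exact placement below are only assumptions for illustration, not the
final form:

+         struct msm_drm_private *priv = dev->dev_private;
+         struct platform_device *pdev = priv->gpu_pdev;
+         struct adreno_platform_config *config = pdev->dev.platform_data;
+         const struct adreno_info *info;
...
+         /*
+          * Skip LLCC slice setup on A650 until the MMU-500 path is
+          * supported; on those targets htw_llc is set up by firmware.
+          * (Lookup via adreno_info() shown only for illustration.)
+          */
+         info = adreno_info(config->rev);
+         if (!(info && info->revn == 650))
+                 a6xx_llc_slices_init(pdev, a6xx_gpu);

The idea being that if the slices are never set up on A650, the
activate/deactivate paths end up doing nothing there.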

Thanks,
Sai

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

Thread overview: 13+ messages
2020-09-22  6:18 [PATCHv5 0/6] System Cache support for GPU and required SMMU support Sai Prakash Ranjan
2020-09-22  6:18 ` [PATCHv5 1/6] iommu/io-pgtable-arm: Add support to use system cache Sai Prakash Ranjan
2020-09-22  6:18 ` [PATCHv5 2/6] iommu/arm-smmu: Add domain attribute for " Sai Prakash Ranjan
2020-09-22  6:18 ` [PATCHv5 3/6] drm/msm: rearrange the gpu_rmw() function Sai Prakash Ranjan
2020-09-22  6:18 ` [PATCHv5 4/6] drm/msm/a6xx: Add support for using system cache(LLC) Sai Prakash Ranjan
2020-09-23 15:03   ` Jordan Crouse
2020-09-28 12:26     ` Sai Prakash Ranjan [this message]
2020-09-28 16:11       ` Jordan Crouse
2020-09-28 16:30         ` Sai Prakash Ranjan
2020-09-22  6:18 ` [PATCHv5 5/6] iommu: arm-smmu-impl: Use table to list QCOM implementations Sai Prakash Ranjan
2020-09-23 15:24   ` Robin Murphy
2020-09-28 12:28     ` Sai Prakash Ranjan
2020-09-22  6:18 ` [PATCHv5 6/6] iommu: arm-smmu-impl: Add a space before open parenthesis Sai Prakash Ranjan

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=800c2108606cb921fef1ffc27569ffb2@codeaurora.org \
    --to=saiprakash.ranjan@codeaurora.org \
    --cc=akhilpo@codeaurora.org \
    --cc=bjorn.andersson@linaro.org \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=freedreno@lists.freedesktop.org \
    --cc=hoegsberg@google.com \
    --cc=iommu@lists.linux-foundation.org \
    --cc=jcrouse@codeaurora.org \
    --cc=jonathan@marek.ca \
    --cc=joro@8bytes.org \
    --cc=linux-arm-kernel@lists.infradead.org \
    --cc=linux-arm-msm@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=robdclark@gmail.com \
    --cc=robin.murphy@arm.com \
    --cc=smasetty@codeaurora.org \
    --cc=will@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
