linux-arm-kernel.lists.infradead.org archive mirror
 help / color / mirror / Atom feed
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: Vivek Gautam <vivek.gautam@codeaurora.org>,
	 Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: pdaly@codeaurora.org,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	Will Deacon <will.deacon@arm.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	pratikp@codeaurora.org
Subject: Re: [PATCH 0/3] iommu/arm-smmu: Add support to use Last level cache
Date: Tue, 29 Jan 2019 16:02:41 +0100	[thread overview]
Message-ID: <CAKv+Gu9c2YYQdDNV_9Lgs3nb8MFXNvLj8GdeOfEnHvPAw6qAfg@mail.gmail.com> (raw)
In-Reply-To: <CAFp+6iGJLNA-sgr+rCZsf20h8Ha0aVh4zKcxYQ_nYjP9CemVpw@mail.gmail.com>

(+ Bjorn)

On Mon, 28 Jan 2019 at 12:27, Vivek Gautam <vivek.gautam@codeaurora.org> wrote:
>
> Hi Ard,
>
> On Thu, Jan 24, 2019 at 1:25 PM Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
> >
> > On Thu, 24 Jan 2019 at 07:58, Vivek Gautam <vivek.gautam@codeaurora.org> wrote:
> > >
> > > On Mon, Jan 21, 2019 at 7:55 PM Ard Biesheuvel
> > > <ard.biesheuvel@linaro.org> wrote:
> > > >
> > > > On Mon, 21 Jan 2019 at 14:56, Robin Murphy <robin.murphy@arm.com> wrote:
> > > > >
> > > > > On 21/01/2019 13:36, Ard Biesheuvel wrote:
> > > > > > On Mon, 21 Jan 2019 at 14:25, Robin Murphy <robin.murphy@arm.com> wrote:
> > > > > >>
> > > > > >> On 21/01/2019 10:50, Ard Biesheuvel wrote:
> > > > > >>> On Mon, 21 Jan 2019 at 11:17, Vivek Gautam <vivek.gautam@codeaurora.org> wrote:
> > > > > >>>>
> > > > > >>>> Hi,
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> On Mon, Jan 21, 2019 at 12:56 PM Ard Biesheuvel
> > > > > >>>> <ard.biesheuvel@linaro.org> wrote:
> > > > > >>>>>
> > > > > >>>>> On Mon, 21 Jan 2019 at 06:54, Vivek Gautam <vivek.gautam@codeaurora.org> wrote:
> > > > > >>>>>>
> > > > > >>>>>> Qualcomm SoCs have an additional level of cache called as
> > > > > >>>>>> System cache, aka. Last level cache (LLC). This cache sits right
> > > > > >>>>>> before the DDR, and is tightly coupled with the memory controller.
> > > > > >>>>>> The clients using this cache request their slices from this
> > > > > >>>>>> system cache, make it active, and can then start using it.
> > > > > >>>>>> For these clients with smmu, to start using the system cache for
> > > > > >>>>>> buffers and, related page tables [1], memory attributes need to be
> > > > > >>>>>> set accordingly. This series add the required support.
> > > > > >>>>>>
> > > > > >>>>>
> > > > > >>>>> Does this actually improve performance on reads from a device? The
> > > > > >>>>> non-cache coherent DMA routines perform an unconditional D-cache
> > > > > >>>>> invalidate by VA to the PoC before reading from the buffers filled by
> > > > > >>>>> the device, and I would expect the PoC to be defined as lying beyond
> > > > > >>>>> the LLC to still guarantee the architected behavior.
> > > > > >>>>
> > > > > >>>> We have seen performance improvements when running Manhattan
> > > > > >>>> GFXBench benchmarks.
> > > > > >>>>
> > > > > >>>
> > > > > >>> Ah ok, that makes sense, since in that case, the data flow is mostly
> > > > > >>> to the device, not from the device.
> > > > > >>>
> > > > > >>>> As for the PoC, from my knowledge on sdm845 the system cache, aka
> > > > > >>>> Last level cache (LLC) lies beyond the point of coherency.
> > > > > >>>> Non-cache coherent buffers will not be cached to system cache also, and
> > > > > >>>> no additional software cache maintenance ops are required for system cache.
> > > > > >>>> Pratik can add more if I am missing something.
> > > > > >>>>
> > > > > >>>> To take care of the memory attributes from DMA APIs side, we can add a
> > > > > >>>> DMA_ATTR definition to take care of any dma non-coherent APIs calls.
> > > > > >>>>
> > > > > >>>
> > > > > >>> So does the device use the correct inner non-cacheable, outer
> > > > > >>> writeback cacheable attributes if the SMMU is in pass-through?
> > > > > >>>
> > > > > >>> We have been looking into another use case where the fact that the
> > > > > >>> SMMU overrides memory attributes is causing issues (WC mappings used
> > > > > >>> by the radeon and amdgpu driver). So if the SMMU would honour the
> > > > > >>> existing attributes, would you still need the SMMU changes?
> > > > > >>
> > > > > >> Even if we could force a stage 2 mapping with the weakest pagetable
> > > > > >> attributes (such that combining would work), there would still need to
> > > > > >> be a way to set the TCR attributes appropriately if this behaviour is
> > > > > >> wanted for the SMMU's own table walks as well.
> > > > > >>
> > > > > >
> > > > > > Isn't that just a matter of implementing support for SMMUs that lack
> > > > > > the 'dma-coherent' attribute?
> > > > >
> > > > > Not quite - in general they need INC-ONC attributes in case there
> > > > > actually is something in the architectural outer-cacheable domain.
> > > >
> > > > But is it a problem to use INC-ONC attributes for the SMMU PTW on this
> > > > chip? AIUI, the reason for the SMMU changes is to avoid the
> > > > performance hit of snooping, which is more expensive than cache
> > > > maintenance of SMMU page tables. So are you saying the by-VA cache
> > > > maintenance is not relayed to this system cache, resulting in page
> > > > table updates to be invisible to masters using INC-ONC attributes?
> > >
> > > The reason for this SMMU changes is that the non-coherent devices
> > > can't access the inner caches at all. But they have a way to allocate
> > > and lookup in system cache.
> > >
> > > CPU will by default make use of system cache when the inner-cacheable
> > > and outer-cacheable memory attribute is set.
> > >
> > > So for SMMU page tables to be visible to PTW,
> > > -- For IO coherent clients, the CPU cache maintenance operations are not
> > > required for buffers marked Normal Cached to achieve a coherent view of
> > > memory. However, client-specific cache maintenance may still be
> > > required for devices
> > > with local caches (for example, compute DSP local L1 or L2).
> >
> > Why would devices need to access the SMMU page tables?
>
> No, the devices don't need to access the page tables, rather the PTW does.
> Sorry for mixing it up.
>
> >
> > > -- For non-IO coherent clients, the CPU cache maintenance operations (cleans
> > > and/or invalidates) are required at buffer handoff points for buffers marked as
> > > Normal Cached in any CPU page table in order to observe the latest updates.
> > >
> >
> > Indeed, and this is what your non-coherent SMMU PTW requires, and what
> > you /should/ get when you omit the 'dma-coherent' property from its DT
> > node (and if you don't, it is a bug in the SMMU driver that should get
> > fixed)
> >
> > The question is whether using inner-non-cached/outer-cacheable
> > attributes for the PTW is required for correctness, or whether it is
> > merely an optimization (since the point of this exercise was to avoid
> > snoop latency from the SMMU PTW). If it is an optimization, I would
> > like to understand whether the performance delta between SMMU page
> > tables in DRAM vs SMMU page tables in the LLC justifies these
> > intrusive changes to the SMMU driver.
>
> IIUC, SMMU uses the TCR configurations to decide how PTW should access
> the memory. TCR doesn't direct CPU whether to use cacheable or non -cacheable
> memory to allocate page tables. Is that right?

Correct

> Currently, these TCR configurations are set for inner-cacheable, and
> outer-cacheable.
> With this, is it assumed that PTW would snoop into the CPU caches for
> any updates
> of the page tables?
>
`
Yes, and if I understand the issue correctly, this snooping is costly,
which is why you want to avoid it, right?

> When we omit 'dma-coherent', CPU will allocate non-coherent memory
> for these page tables, and software has to explicitly flush CPU caches to
> make the changes visible to SMMU.

Indeed. But I would expect the TCR configuration to reflect this as
well, and that doesn't appear the case.

> The CPU will still mark this memory as Normal Cached, i.e. inner cached,
> outer cached, and the non-IO coherent SMMU PTW won't be able to snoop into
> CPU caches. Does the following code in io-pgtable-arm.c ensures that SMMU
> sees the latest page tables?
>
>    } else if (!(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA) &&
>                     !(pte & ARM_LPAE_PTE_SW_SYNC)) {
>                  __arm_lpae_sync_pte(ptep, cfg);
>    }
>

I don't know the history of why NO_DMA is implemented as a quirk (and
why it is called like that in the first place).
But it indeed appears that this is where the cache maintenance occurs
for non-coherent PTWs.

> This change is mostly to get optimized PTW. As seen in the patch [1] for GPU,
> there's a separate slice for page tables - "gpuhtw_llc_slice".
> Let me try to get the numbers for this optimization.
>

Yes, please. We'd need to compare page tables in the LLC with page
tables in system RAM, and for completeness, it would be nice to
include the cache-coherent configuration as well.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

  reply	other threads:[~2019-01-29 15:03 UTC|newest]

Thread overview: 19+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-01-21  5:53 [PATCH 0/3] iommu/arm-smmu: Add support to use Last level cache Vivek Gautam
2019-01-21  5:53 ` [PATCH 1/3] iommu/arm-smmu: Move to bitmap for arm_smmu_domain atrributes Vivek Gautam
2019-01-21 13:51   ` Robin Murphy
2019-01-22 17:06     ` Vivek Gautam
2019-01-21  5:53 ` [PATCH 2/3] iommu/io-pgtable-arm: Add support to use system cache Vivek Gautam
2019-01-21  5:53 ` [PATCH 3/3] iommu/arm-smmu: " Vivek Gautam
2019-01-21  7:26 ` [PATCH 0/3] iommu/arm-smmu: Add support to use Last level cache Ard Biesheuvel
2019-01-21 10:17   ` Vivek Gautam
2019-01-21 10:50     ` Ard Biesheuvel
2019-01-21 13:25       ` Robin Murphy
2019-01-21 13:36         ` Ard Biesheuvel
2019-01-21 13:56           ` Robin Murphy
2019-01-21 14:24             ` Ard Biesheuvel
2019-01-21 15:15               ` Robin Murphy
2019-01-24  6:58               ` Vivek Gautam
2019-01-24  7:54                 ` Ard Biesheuvel
2019-01-28 11:27                   ` Vivek Gautam
2019-01-29 15:02                     ` Ard Biesheuvel [this message]
2019-01-30  5:39                       ` Vivek Gautam

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAKv+Gu9c2YYQdDNV_9Lgs3nb8MFXNvLj8GdeOfEnHvPAw6qAfg@mail.gmail.com \
    --to=ard.biesheuvel@linaro.org \
    --cc=bjorn.andersson@linaro.org \
    --cc=iommu@lists.linux-foundation.org \
    --cc=linux-arm-kernel@lists.infradead.org \
    --cc=linux-arm-msm@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=pdaly@codeaurora.org \
    --cc=pratikp@codeaurora.org \
    --cc=robin.murphy@arm.com \
    --cc=vivek.gautam@codeaurora.org \
    --cc=will.deacon@arm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).