From: Jon Nettleton <jon@solid-run.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	iommu@lists.linux-foundation.org, Linuxarm <linuxarm@huawei.com>,
	Steven Price <steven.price@arm.com>,
	Hanjun Guo <guohanjun@huawei.com>,
	yangyicong <yangyicong@huawei.com>,
	Sami.Mujawar@arm.com, wanghuiqiang <wanghuiqiang@huawei.com>
Subject: Re: [PATCH v5 7/8] iommu/arm-smmu: Get associated RMR info and install bypass SMR
Date: Tue, 29 Jun 2021 18:25:51 +0200
Message-ID: <CABdtJHsz+ycVffJTyekau_OY6ROmoTBWAGd_guikxauT=nnuJQ@mail.gmail.com>
In-Reply-To: <b33c1525-5a74-a985-fd39-f4df8614f210@arm.com>

On Tue, Jun 29, 2021 at 3:23 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-06-29 08:03, Jon Nettleton wrote:
> > On Mon, Jun 14, 2021 at 12:06 PM Robin Murphy <robin.murphy@arm.com> wrote:
> >>
> >> On 2021-05-24 12:02, Shameer Kolothum wrote:
> >>> From: Jon Nettleton <jon@solid-run.com>
> >>>
> >>> Check if there is any RMR info associated with the devices behind
> >>> the SMMU and if any, install bypass SMRs for them. This is to
> >>> keep any ongoing traffic associated with these devices alive
> >>> when we enable/reset SMMU during probe().
> >>>
> >>> Signed-off-by: Jon Nettleton <jon@solid-run.com>
> >>> Signed-off-by: Steven Price <steven.price@arm.com>
> >>> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> >>> ---
> >>>    drivers/iommu/arm/arm-smmu/arm-smmu.c | 65 +++++++++++++++++++++++++++
> >>>    1 file changed, 65 insertions(+)
> >>>
> >>> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> >>> index 6f72c4d208ca..56db3d3238fc 100644
> >>> --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> >>> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> >>> @@ -2042,6 +2042,67 @@ err_reset_platform_ops: __maybe_unused;
> >>>        return err;
> >>>    }
> >>>
> >>> +static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu)
> >>> +{
> >>> +     struct list_head rmr_list;
> >>> +     struct iommu_resv_region *e;
> >>> +     int i, cnt = 0;
> >>> +     u32 smr;
> >>> +     u32 reg;
> >>> +
> >>> +     INIT_LIST_HEAD(&rmr_list);
> >>> +     if (iommu_dma_get_rmrs(dev_fwnode(smmu->dev), &rmr_list))
> >>> +             return;
> >>> +
> >>> +     reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0);
> >>> +
> >>> +     if ((reg & ARM_SMMU_sCR0_USFCFG) && !(reg & ARM_SMMU_sCR0_CLIENTPD)) {
> >>> +             /*
> >>> +              * SMMU is already enabled and disallowing bypass, so preserve
> >>> +              * the existing SMRs
> >>> +              */
> >>> +             for (i = 0; i < smmu->num_mapping_groups; i++) {
> >>> +                     smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));
> >>
> >> To reiterate, just because a bootloader/crashed kernel/whatever may have
> >> left some configuration behind doesn't mean that it matters (e.g. what
> >> if these SMRs are pointing at translation contexts?). All we should be
> >> doing here is setting the relevant RMR accommodations in our "clean
> >> slate" software state before the reset routine applies it to the
> >> hardware, just like patch #5 does for SMMUv3.
> >>
> >> Trying to safely reset an SMMU when we discover it with CLIENTPD=0 is
> >> really another issue entirely, and I'd say is beyond the scope of this
> >> series
> >>
> >>> +                     if (!FIELD_GET(ARM_SMMU_SMR_VALID, smr))
> >>> +                             continue;
> >>
> >> Note that that's not even necessarily correct (thanks to EXIDS).
> >>
> >>> +                     smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
> >>> +                     smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
> >>> +                     smmu->smrs[i].valid = true;
> >>> +             }
> >>> +     }
> >>> +
> >>> +     list_for_each_entry(e, &rmr_list, list) {
> >>> +             u32 sid = e->fw_data.rmr.sid;
> >>> +
> >>> +             i = arm_smmu_find_sme(smmu, sid, ~0);
> >>> +             if (i < 0)
> >>> +                     continue;
> >>> +             if (smmu->s2crs[i].count == 0) {
> >>> +                     smmu->smrs[i].id = sid;
> >>> +                     smmu->smrs[i].mask = ~0;
> >>
> >> This is very wrong (as has now already been pointed out).
> >>
> >>> +                     smmu->smrs[i].valid = true;
> >>> +             }
> >>> +             smmu->s2crs[i].count++;
> >>> +             smmu->s2crs[i].type = S2CR_TYPE_BYPASS;
> >>> +             smmu->s2crs[i].privcfg = S2CR_PRIVCFG_DEFAULT;
> >>> +             smmu->s2crs[i].cbndx = 0xff;
> >>
> >> Nit: cbndx is left uninitialised for bypass/fault entries elsewhere, so
> >> there's little point touching it here.
> >>
> >>> +
> >>> +             cnt++;
> >>> +     }
> >>> +
> >>> +     if ((reg & ARM_SMMU_sCR0_USFCFG) && !(reg & ARM_SMMU_sCR0_CLIENTPD)) {
> >>> +             /* Remove the valid bit for unused SMRs */
> >>> +             for (i = 0; i < smmu->num_mapping_groups; i++) {
> >>> +                     if (smmu->s2crs[i].count == 0)
> >>> +                             smmu->smrs[i].valid = false;
> >>> +             }
> >>
> >> If this dance is only about avoiding stream match conflicts when trying
> >> to reprogram live SMRs, simply turning the SMMU off beforehand would be
> >> a lot simpler.
> >
> > Robin,
> >
> > I am not sure what you mean here, and maybe Steve wants to jump in and
> > help clarify.
> >
> > My understanding is that "dance" is required for regions that need to
> > continue to work throughout the boot process.  I think the example I
> > have seen the most is for GOP drivers that use system memory rather
> > than dedicated VRAM.
>
> All that is required is to install bypass SMEs for the relevant streams
> before enabling the SMMU. That much is achieved by the
> list_for_each_entry(e, &rmr_list, list) loop setting up initial state
> followed by arm_smmu_device_reset(). The "dance" I'm referring to is the
> additional reading out of (possibly nonsense) SMR state beforehand to
> pre-bias the SMR allocator, then trying to clean up the remnants afterwards.
>
> If we're going to pretend to be robust against finding the SMMU already
> enabled *with* live traffic ongoing, there's frankly an awful lot more
> care we'd need to take beyond here (and for some aspects there's still
> no right answer). If on the other hand we're implicitly assuming that
> any boot-time enabled SMRs exactly match the RMR configuration anyway,
> then simply disabling the SMMU until we enable it again in the reset
> routine still preserves the necessary bypass behaviour for RMR streams
> while sidestepping any issues related to reprogramming live SMMU state.
>
> Robin.
>
> >>> +     }
> >>> +
> >>> +     dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt,
> >>> +                cnt == 1 ? "" : "s");
> >>> +     iommu_dma_put_rmrs(dev_fwnode(smmu->dev), &rmr_list);
> >>> +}
> >>> +
> >>>    static int arm_smmu_device_probe(struct platform_device *pdev)
> >>>    {
> >>>        struct resource *res;
> >>> @@ -2168,6 +2229,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
> >>>        }
> >>>
> >>>        platform_set_drvdata(pdev, smmu);
> >>> +
> >>> +     /* Check for RMRs and install bypass SMRs if any */
> >>> +     arm_smmu_rmr_install_bypass_smr(smmu);
> >>> +
> >>>        arm_smmu_device_reset(smmu);
> >>>        arm_smmu_test_smr_masks(smmu);
> >>>
> >>>

Shameer,

Sorry for the delays.  Here is a diff of the changes that should
address the issues pointed out by Robin.  I have tested that it works
as expected on my HoneyComb LX2160A.

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index ab7b9db77625..a358bd326d0b 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -2068,29 +2068,21 @@ static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu)
        struct list_head rmr_list;
        struct iommu_resv_region *e;
        int i, cnt = 0;
-       u32 smr;
        u32 reg;

        INIT_LIST_HEAD(&rmr_list);
        if (iommu_dma_get_rmrs(dev_fwnode(smmu->dev), &rmr_list))
                return;

-       reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0);
+       /* Rather than trying to look at existing mappings that
+        * are set up by the firmware and then invalidate the ones
+        * that do not have matching RMR entries, just disable the
+        * SMMU until it gets enabled again in the reset routine.
+        */

-       if ((reg & ARM_SMMU_sCR0_USFCFG) && !(reg & ARM_SMMU_sCR0_CLIENTPD)) {
-               /*
-                * SMMU is already enabled and disallowing bypass, so preserve
-                * the existing SMRs
-                */
-               for (i = 0; i < smmu->num_mapping_groups; i++) {
-                       smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));
-                       if (!FIELD_GET(ARM_SMMU_SMR_VALID, smr))
-                               continue;
-                       smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
-                       smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
-                       smmu->smrs[i].valid = true;
-               }
-       }
+       reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0);
+       reg |= ARM_SMMU_sCR0_CLIENTPD;
+       arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg);

        list_for_each_entry(e, &rmr_list, list) {
                u32 sid = e->fw_data.rmr.sid;
@@ -2100,25 +2092,16 @@ static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu)
                        continue;
                if (smmu->s2crs[i].count == 0) {
                        smmu->smrs[i].id = sid;
-                       smmu->smrs[i].mask = ~0;
+                       smmu->smrs[i].mask = 0;
                        smmu->smrs[i].valid = true;
                }
                smmu->s2crs[i].count++;
                smmu->s2crs[i].type = S2CR_TYPE_BYPASS;
                smmu->s2crs[i].privcfg = S2CR_PRIVCFG_DEFAULT;
-               smmu->s2crs[i].cbndx = 0xff;

                cnt++;
        }

-       if ((reg & ARM_SMMU_sCR0_USFCFG) && !(reg & ARM_SMMU_sCR0_CLIENTPD)) {
-               /* Remove the valid bit for unused SMRs */
-               for (i = 0; i < smmu->num_mapping_groups; i++) {
-                       if (smmu->s2crs[i].count == 0)
-                               smmu->smrs[i].valid = false;
-               }
-       }
-
        dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt,
                   cnt == 1 ? "" : "s");
        iommu_dma_put_rmrs(dev_fwnode(smmu->dev), &rmr_list);
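
For reference, this is roughly what the whole function ends up looking
like with the diff applied (only a sketch to make the change easier to
review; the inline comments are just annotations, and it relies on the
iommu_dma_get_rmrs()/iommu_dma_put_rmrs() helpers introduced earlier in
this series):

static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu)
{
        struct list_head rmr_list;
        struct iommu_resv_region *e;
        int i, cnt = 0;
        u32 reg;

        INIT_LIST_HEAD(&rmr_list);
        if (iommu_dma_get_rmrs(dev_fwnode(smmu->dev), &rmr_list))
                return;

        /* Rather than trying to look at existing mappings that
         * are set up by the firmware and then invalidate the ones
         * that do not have matching RMR entries, just disable the
         * SMMU until it gets enabled again in the reset routine.
         */
        reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0);
        reg |= ARM_SMMU_sCR0_CLIENTPD;
        arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg);

        list_for_each_entry(e, &rmr_list, list) {
                u32 sid = e->fw_data.rmr.sid;

                i = arm_smmu_find_sme(smmu, sid, ~0);
                if (i < 0)
                        continue;
                if (smmu->s2crs[i].count == 0) {
                        /* Newly claimed entry: exact match on the stream ID */
                        smmu->smrs[i].id = sid;
                        smmu->smrs[i].mask = 0;
                        smmu->smrs[i].valid = true;
                }
                smmu->s2crs[i].count++;
                smmu->s2crs[i].type = S2CR_TYPE_BYPASS;
                smmu->s2crs[i].privcfg = S2CR_PRIVCFG_DEFAULT;

                cnt++;
        }

        dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt,
                   cnt == 1 ? "" : "s");
        iommu_dma_put_rmrs(dev_fwnode(smmu->dev), &rmr_list);
}

The idea is simply to park the SMMU in bypass via CLIENTPD, record the
bypass SMR/S2CR state for the RMR stream IDs in the driver's software
state, and let arm_smmu_device_reset() program it into the hardware.
Using mask = 0 makes each SMR an exact match on its stream ID, whereas
the previous mask = ~0 would have masked out every ID bit and matched
(and conflicted with) everything.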

Please include this diff in your next patch series.  Let me know if you
want me to send you the patch directly, off the list.

Thanks,
Jon
