From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Ashish Mhetre, Robin Murphy, Krishna Reddy, Pritesh Raithatha,
	Will Deacon, Sasha Levin, thierry.reding@gmail.com, joro@8bytes.org,
	linux-tegra@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH AUTOSEL 5.17 04/21] iommu: arm-smmu: disable large page mappings for Nvidia arm-smmu
Date: Tue, 10 May 2022 11:43:23 -0400
Message-Id: <20220510154340.153400-4-sashal@kernel.org>
In-Reply-To: <20220510154340.153400-1-sashal@kernel.org>
References: <20220510154340.153400-1-sashal@kernel.org>

From: Ashish Mhetre

[ Upstream commit 4a25f2ea0e030b2fc852c4059a50181bfc5b2f57 ]

Tegra194 and Tegra234 SoCs have an erratum that causes walk cache
entries not to be invalidated correctly: the walk cache index generated
for an IOVA is not the same across translation and invalidation
requests. This leads to page faults when a PMD entry is released during
unmap and then repopulated with a new PTE table by a subsequent map
request. Disabling large page mappings avoids releasing the PMD entry
in the first place, so translations never see a stale PMD entry in the
walk cache.

Fix this by limiting page mappings to PAGE_SIZE on Tegra194 and
Tegra234 devices. This is the fix recommended by the Tegra hardware
design team.
Acked-by: Robin Murphy
Reviewed-by: Krishna Reddy
Co-developed-by: Pritesh Raithatha
Signed-off-by: Pritesh Raithatha
Signed-off-by: Ashish Mhetre
Link: https://lore.kernel.org/r/20220421081504.24678-1-amhetre@nvidia.com
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
---
 drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c | 30 ++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
index 01e9b50b10a1..87bf522b9d2e 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
@@ -258,6 +258,34 @@ static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct devi
 			dev_name(dev), err);
 }
 
+static int nvidia_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+				    struct io_pgtable_cfg *pgtbl_cfg,
+				    struct device *dev)
+{
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	const struct device_node *np = smmu->dev->of_node;
+
+	/*
+	 * Tegra194 and Tegra234 SoCs have an erratum that causes walk cache
+	 * entries not to be invalidated correctly: the walk cache index
+	 * generated for an IOVA is not the same across translation and
+	 * invalidation requests. This leads to page faults when a PMD entry
+	 * is released during unmap and then repopulated with a new PTE
+	 * table by a subsequent map request. Disabling large page mappings
+	 * avoids releasing the PMD entry, so translations never see a stale
+	 * PMD entry in the walk cache.
+	 * Fix this by limiting page mappings to PAGE_SIZE on Tegra194 and
+	 * Tegra234.
+	 */
+	if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
+	    of_device_is_compatible(np, "nvidia,tegra194-smmu")) {
+		smmu->pgsize_bitmap = PAGE_SIZE;
+		pgtbl_cfg->pgsize_bitmap = smmu->pgsize_bitmap;
+	}
+
+	return 0;
+}
+
 static const struct arm_smmu_impl nvidia_smmu_impl = {
 	.read_reg = nvidia_smmu_read_reg,
 	.write_reg = nvidia_smmu_write_reg,
@@ -268,10 +296,12 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
 	.global_fault = nvidia_smmu_global_fault,
 	.context_fault = nvidia_smmu_context_fault,
 	.probe_finalize = nvidia_smmu_probe_finalize,
+	.init_context = nvidia_smmu_init_context,
 };
 
 static const struct arm_smmu_impl nvidia_smmu_single_impl = {
 	.probe_finalize = nvidia_smmu_probe_finalize,
+	.init_context = nvidia_smmu_init_context,
 };
 
 struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
-- 
2.35.1
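For readers unfamiliar with the arm-smmu impl framework: the init_context
hook added above runs while the common driver builds a domain's page-table
configuration, after pgtbl_cfg has been populated and before the io-pgtable
ops are allocated, which is why shrinking pgsize_bitmap here takes effect
before any page tables exist. The call site lives in
arm_smmu_init_domain_context() in drivers/iommu/arm/arm-smmu/arm-smmu.c and
looks approximately like the following -- paraphrased for context, not part
of this patch:

	/* Give impls a chance to massage pgtbl_cfg before it is used. */
	if (smmu->impl && smmu->impl->init_context) {
		ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg, dev);
		if (ret)
			goto out_clear_smmu;
	}

Because every domain on the affected SoCs goes through this path, the
override applies uniformly without touching the io-pgtable code itself.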
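Why a single-bit pgsize_bitmap is enough to keep PMD entries out of the
picture: for each chunk of a map request, the IOMMU core picks the largest
page size that is present in the bitmap, fits the remaining length, and
matches the IOVA alignment (iommu_pgsize() in drivers/iommu/iommu.c). The
following is a minimal standalone sketch of that selection -- a simplified
model using a hypothetical pick_pgsize() helper, not the kernel's exact
code:

	#include <stddef.h>
	#include <stdint.h>

	/*
	 * Simplified model of the per-chunk page-size selection the IOMMU
	 * core performs during a map request. With pgsize_bitmap reduced
	 * to PAGE_SIZE, the search can only ever return the base page
	 * size, so no PMD-level block entries are created at all.
	 */
	static size_t pick_pgsize(unsigned long pgsize_bitmap,
				  uint64_t iova, size_t remaining)
	{
		size_t best = 0;
		int bit;

		for (bit = 0; bit < 64; bit++) {
			size_t size = (size_t)1 << bit;

			if (size > remaining)
				break;
			if (iova & (size - 1))
				break;	/* larger sizes misalign too */
			if (pgsize_bitmap & size)
				best = size;
		}
		return best;
	}

For an aligned 8 MiB mapping with a full bitmap of, say, 4K | 2M | 1G,
this returns 2 MiB four times (four PMD block entries); with the bitmap
forced to the base page size it returns 4 KiB 2048 times instead. The
workaround therefore trades map/unmap overhead and TLB reach for
correctness on the affected SoCs.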