From: Zhen Lei <thunder.leizhen@huawei.com>
To: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>, Robin Murphy <robin.murphy@arm.com>, Will Deacon <will.deacon@arm.com>, Joerg Roedel <joro@8bytes.org>, linux-arm-kernel <linux-arm-kernel@lists.infradead.org>, iommu <iommu@lists.linux-foundation.org>, linux-kernel <linux-kernel@vger.kernel.org>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Subject: [PATCH v3 2/6] iommu/dma: add support for non-strict mode
Date: Thu, 12 Jul 2018 14:18:28 +0800
Message-ID: <1531376312-2192-3-git-send-email-thunder.leizhen@huawei.com> (raw)
In-Reply-To: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>

1. Save the related domain pointer in struct iommu_dma_cookie, so that
   the iovad flush callback can call domain->ops->flush_iotlb_all to
   flush the TLB.
2. Add a new iommu capability: IOMMU_CAP_NON_STRICT, which is used to
   indicate that the iommu domain supports non-strict mode.
3. During the iommu domain initialization phase, call capable() to check
   whether the domain supports non-strict mode. If so, call
   init_iova_flush_queue to register the iovad->flush_cb callback.
4. All unmap APIs (including iova free) finally invoke __iommu_dma_unmap
   --> iommu_dma_free_iova. If the domain is non-strict, call queue_iova
   to defer the iova freeing.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/dma-iommu.c | 25 +++++++++++++++++++++++++
 include/linux/iommu.h     |  7 +++++++
 2 files changed, 32 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ddcbbdb..9f0c77a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -55,6 +55,7 @@ struct iommu_dma_cookie {
 	};
 	struct list_head	msi_page_list;
 	spinlock_t		msi_lock;
+	struct iommu_domain	*domain_non_strict;
 };

 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
@@ -257,6 +258,17 @@ static int iova_reserve_iommu_regions(struct device *dev,
 	return ret;
 }

+static void iommu_dma_flush_iotlb_all(struct iova_domain *iovad)
+{
+	struct iommu_dma_cookie *cookie;
+	struct iommu_domain *domain;
+
+	cookie = container_of(iovad, struct iommu_dma_cookie, iovad);
+	domain = cookie->domain_non_strict;
+
+	domain->ops->flush_iotlb_all(domain);
+}
+
 /**
  * iommu_dma_init_domain - Initialise a DMA mapping domain
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
@@ -272,6 +284,7 @@ static int iova_reserve_iommu_regions(struct device *dev,
 int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		u64 size, struct device *dev)
 {
+	const struct iommu_ops *ops = domain->ops;
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	unsigned long order, base_pfn, end_pfn;
@@ -308,6 +321,15 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	}

 	init_iova_domain(iovad, 1UL << order, base_pfn);
+
+	if ((ops->capable && ops->capable(IOMMU_CAP_NON_STRICT)) &&
+	    (IOMMU_DOMAIN_STRICT_MODE(domain) == IOMMU_NON_STRICT)) {
+		BUG_ON(!ops->flush_iotlb_all);
+
+		cookie->domain_non_strict = domain;
+		init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL);
+	}
+
 	if (!dev)
 		return 0;

@@ -390,6 +412,9 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 	/* The MSI case is only ever cleaning up its most recent allocation */
 	if (cookie->type == IOMMU_DMA_MSI_COOKIE)
 		cookie->msi_iova -= size;
+	else if (cookie->domain_non_strict)
+		queue_iova(iovad, iova_pfn(iovad, iova),
+				size >> iova_shift(iovad), 0);
 	else
 		free_iova_fast(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad));
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 19938ee..82ed979 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -86,6 +86,12 @@ struct iommu_domain_geometry {
 #define IOMMU_DOMAIN_DMA	(__IOMMU_DOMAIN_PAGING |	\
 				 __IOMMU_DOMAIN_DMA_API)

+#define IOMMU_STRICT		0
+#define IOMMU_NON_STRICT	1
+#define IOMMU_STRICT_MODE_MASK	1UL
+#define IOMMU_DOMAIN_STRICT_MODE(domain)	\
+		(domain->type != IOMMU_DOMAIN_UNMANAGED)
+
 struct iommu_domain {
 	unsigned type;
 	const struct iommu_ops *ops;
@@ -101,6 +107,7 @@ enum iommu_cap {
 					   transactions */
 	IOMMU_CAP_INTR_REMAP,		/* IOMMU supports interrupt isolation */
 	IOMMU_CAP_NOEXEC,		/* IOMMU_NOEXEC flag */
+	IOMMU_CAP_NON_STRICT,		/* IOMMU supports non-strict mode */
 };

 /*
--
1.8.3