From: Jacob Pan <jacob.jun.pan@linux.intel.com>
To: iommu@lists.linux-foundation.org,
	LKML <linux-kernel@vger.kernel.org>,
	Joerg Roedel <joro@8bytes.org>, Jason Gunthorpe <jgg@nvidia.com>,
	"Christoph Hellwig" <hch@infradead.org>
Cc: "Lu Baolu" <baolu.lu@linux.intel.com>,
	Raj Ashok <ashok.raj@intel.com>,
	"Kumar, Sanjay K" <sanjay.k.kumar@intel.com>,
	Dave Jiang <dave.jiang@intel.com>,
	Tony Luck <tony.luck@intel.com>,
	mike.campin@intel.com, Yi Liu <yi.l.liu@intel.com>,
	"Tian, Kevin" <kevin.tian@intel.com>
Subject: [RFC 6/7] iommu: Add KVA map API
Date: Tue, 21 Sep 2021 13:29:40 -0700	[thread overview]
Message-ID: <1632256181-36071-7-git-send-email-jacob.jun.pan@linux.intel.com> (raw)
In-Reply-To: <1632256181-36071-1-git-send-email-jacob.jun.pan@linux.intel.com>

Add a KVA map API that enforces kernel virtual address range checking and
leaves room for further sanity checks. Currently, only the kernel direct
map range is accepted.

For trusted devices, this API returns immediately after the sanity check,
since the KVA domain shares the CPU page table with the IOMMU. For
untrusted devices, it serves as a thin wrapper around the IOMMU map/unmap
APIs.

OPEN: Buffers must be aligned to the minimum IOMMU page size; this is not
as rich and flexible as the DMA API.
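
For illustration only (not part of this patch), a minimal driver-side
usage sketch. The helper name example_dma_to_kva() is hypothetical, a
page-aligned buffer is assumed, and iommu_get_domain_for_dev() is assumed
to return the KVA-capable domain set up earlier in this series:

  static int example_dma_to_kva(struct device *dev)
  {
  	struct iommu_domain *dom = iommu_get_domain_for_dev(dev);
  	struct page *pg;
  	void *kva;
  	int ret;

  	if (!dom)
  		return -ENODEV;

  	pg = alloc_pages(GFP_KERNEL, 0);	/* one page, page-aligned */
  	if (!pg)
  		return -ENOMEM;

  	kva = page_address(pg);			/* direct-map address */
  	ret = iommu_map_kva(dom, pg, PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
  	if (!ret) {
  		/* DMA tagged with the domain's default PASID can now
  		 * target 'kva'; unmap once the device is done.
  		 */
  		iommu_unmap_kva(dom, kva, PAGE_SIZE);
  	}
  	__free_pages(pg, 0);
  	return ret;
  }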

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
 drivers/iommu/iommu.c | 57 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h |  5 ++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index acfdcd7ebd6a..45ba55941209 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2490,6 +2490,63 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 EXPORT_SYMBOL_GPL(iommu_map);
 
+/*
+ * REVISIT: This might not be sufficient. Could also check permission match,
+ * exclude kernel text, etc.
+ */
+static inline bool is_kernel_direct_map(unsigned long start, phys_addr_t size)
+{
+	return (start >= PAGE_OFFSET) && ((start + size) <= VMALLOC_START);
+}
+
+/**
+ * iommu_map_kva() - Map a kernel virtual address range for DMA remapping
+ * @domain:	IOMMU domain that carries the default PASID
+ * @page:	First page of the physically contiguous kernel buffer
+ * @size:	Size of the mapping in bytes
+ * @prot:	IOMMU protection flags, e.g. IOMMU_READ | IOMMU_WRITE
+ *
+ * DMA requests with the domain's default PASID will target the KVA space.
+ * Return: 0 on success or a negative error code
+ */
+int iommu_map_kva(struct iommu_domain *domain, struct page *page,
+		  size_t size, int prot)
+{
+	phys_addr_t phys = page_to_phys(page);
+	void *kva = phys_to_virt(phys);
+
+	/*
+	 * TODO: Limit DMA to kernel direct mapping only, avoid dynamic range
+	 * until we have mmu_notifier for making IOTLB coherent with CPU.
+	 */
+	if (!is_kernel_direct_map((unsigned long)kva, size))
+		return -EINVAL;
+	/*
+	 * KVA domain type indicates a shared CPU page table; skip building
+	 * IOMMU page tables. Fast path: only the sanity check is performed.
+	 */
+	if (domain->type == IOMMU_DOMAIN_KVA)
+		return 0;
+
+	return iommu_map(domain, (unsigned long)kva, phys, size, prot);
+}
+EXPORT_SYMBOL_GPL(iommu_map_kva);
+
+int iommu_unmap_kva(struct iommu_domain *domain, void *kva,
+		    size_t size)
+{
+	if (!is_kernel_direct_map((unsigned long)kva, size))
+		return -EINVAL;
+
+	if (domain->type == IOMMU_DOMAIN_KVA) {
+		pr_debug_ratelimited("unmap kva skipped %px\n", kva);
+		return 0;
+	}
+	/* REVISIT: do we need a fast version? */
+	return iommu_unmap(domain, (unsigned long)kva, size);
+}
+EXPORT_SYMBOL_GPL(iommu_unmap_kva);
+
 int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
 	      phys_addr_t paddr, size_t size, int prot)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index cd8225f6bc23..c0fac050ca57 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -427,6 +427,11 @@ extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 extern size_t iommu_map_sg_atomic(struct iommu_domain *domain,
 				  unsigned long iova, struct scatterlist *sg,
 				  unsigned int nents, int prot);
+extern int iommu_map_kva(struct iommu_domain *domain,
+			 struct page *page, size_t size, int prot);
+extern int iommu_unmap_kva(struct iommu_domain *domain,
+			   void *kva, size_t size);
+
 extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
 extern void iommu_set_fault_handler(struct iommu_domain *domain,
 			iommu_fault_handler_t handler, void *token);
-- 
2.25.1

