xen-devel.lists.xenproject.org archive mirror
* [PATCH v5 00/16] Restricted DMA
@ 2021-04-22  8:14 Claire Chang
  2021-04-22  8:14 ` [PATCH v5 01/16] swiotlb: Fix the type of index Claire Chang
                   ` (16 more replies)
  0 siblings, 17 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCIe bus for Wi-Fi, and that PCIe bus is
not behind an IOMMU. As PCIe, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (see the remote Wi-Fi exploits [1a] and [1b],
which show a full chain of exploits, as well as [2] and [3]).

To mitigate these security concerns, we introduce restricted DMA.
Restricted DMA utilizes the existing swiotlb to bounce streaming DMA in
and out of a specially allocated region and does memory allocation from
the same region (a device-tree sketch of such a region follows the
references below). On its own, the feature provides a basic level of
protection against the DMA overwriting buffer contents at unexpected
times. However, to protect against general data leakage and system memory
corruption, the system needs to provide a way to restrict the DMA to a
predefined memory region (this is usually done at the firmware level,
e.g. by the MPU in ATF on some ARM platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
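
For reference, a device-tree sketch of such a pool (consistent with the
binding added in patch 15; the addresses and the wifi consumer node are
illustrative):

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		restricted_dma_reserved: restricted-dma@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x50000000 0x4000000>;
		};
	};

	wifi@0,0 {
		...
		memory-region = <&restricted_dma_reserved>;
	};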

v5:
  Rebase on latest linux-next

v4:
  - Fix spinlock bad magic
  - Use rmem->name for debugfs entry
  - Address the comments in v3

v3:
  Using only one reserved memory region for both streaming DMA and memory
  allocation.
  https://lore.kernel.org/patchwork/cover/1360992/

v2:
  Building on top of swiotlb.
  https://lore.kernel.org/patchwork/cover/1280705/

v1:
  Using dma_map_ops.
  https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (16):
  swiotlb: Fix the type of index
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Add DMA_RESTRICTED_POOL
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add a new get_io_tlb_mem getter
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Bounce data from/to restricted DMA pool if available
  swiotlb: Move alloc_size to find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add restricted DMA alloc/free support.
  dma-direct: Allocate memory from restricted DMA pool if available
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  24 ++
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  25 ++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   5 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  41 ++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  57 +++--
 kernel/dma/direct.h                           |   9 +-
 kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
 15 files changed, 347 insertions(+), 97 deletions(-)

-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 01/16] swiotlb: Fix the type of index
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-23  7:11   ` Christoph Hellwig
  2021-04-22  8:14 ` [PATCH v5 02/16] swiotlb: Refactor swiotlb init functions Claire Chang
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Fix the type of index from unsigned int to int, since find_slots() can
return -1 (a short illustration follows the patch).

Fixes: 0774983bc923 ("swiotlb: refactor swiotlb_tbl_map_single")
Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0a5b6f7e75bc..8635a57f88e9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -499,7 +499,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
-	unsigned int index, i;
+	unsigned int i;
+	int index;
 	phys_addr_t tlb_addr;
 
 	if (!mem)
-- 
2.31.1.368.gbe11c130af-goog
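
For illustration only (a userspace sketch, not part of the patch): with
an unsigned index the equality check happens to work through integer
conversion, but any sign-based test silently breaks, which is what the
type change guards against.

	#include <stdio.h>

	static int find_slots(void)	/* stub standing in for the real allocator */
	{
		return -1;		/* allocation failure */
	}

	int main(void)
	{
		unsigned int uindex = find_slots();
		int index = find_slots();

		if (uindex == -1)	/* true, but only via conversion to UINT_MAX */
			printf("equality test happens to work: %u\n", uindex);
		if (index < 0)		/* a sign test is only meaningful with int */
			printf("sign test works: %d\n", index);
		return 0;
	}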




* [PATCH v5 02/16] swiotlb: Refactor swiotlb init functions
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
  2021-04-22  8:14 ` [PATCH v5 01/16] swiotlb: Fix the type of index Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-22  8:14 ` [PATCH v5 03/16] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add a new function, swiotlb_init_io_tlb_mem, to initialize the io_tlb_mem
struct so that the code can be reused (see the reuse sketch after the
patch).

Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8635a57f88e9..3f1adee35097 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -166,9 +166,30 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+
+	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -184,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -280,7 +293,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -295,20 +307,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 

I'm not 100% sure whether set_memory_decrypted is safe to call in
swiotlb_init_with_tbl, but I didn't hit any issues booting with this.

2.31.1.368.gbe11c130af-goog
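
The payoff of the refactor is that a second pool can now be brought up
with a single call instead of duplicating the init loop; patch 05 later
does exactly this for the restricted pool (CONFIG_ARM guard omitted):

	mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
	if (!mem)
		return -ENOMEM;
	swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);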




* [PATCH v5 03/16] swiotlb: Refactor swiotlb_create_debugfs
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
  2021-04-22  8:14 ` [PATCH v5 01/16] swiotlb: Fix the type of index Claire Chang
  2021-04-22  8:14 ` [PATCH v5 02/16] swiotlb: Refactor swiotlb init functions Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-22  8:14 ` [PATCH v5 04/16] swiotlb: Add DMA_RESTRICTED_POOL Claire Chang
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Split out the debugfs creation so that the code can be reused to support
different bounce buffer pools, e.g. the restricted DMA pool (see the
sketch after the patch).

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3f1adee35097..57a9adb920bf 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -660,18 +660,24 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name,
+				   struct dentry *node)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
 	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+		return;
+
+	mem->debugfs = debugfs_create_dir(name, node);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	swiotlb_create_debugfs(io_tlb_default_mem, "swiotlb", NULL);
+
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.31.1.368.gbe11c130af-goog
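
With the split, a named per-pool directory can be parented under the
default swiotlb one; patch 05 uses the new helper like this:

	swiotlb_create_debugfs(mem, rmem->name, io_tlb_default_mem->debugfs);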




* [PATCH v5 04/16] swiotlb: Add DMA_RESTRICTED_POOL
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (2 preceding siblings ...)
  2021-04-22  8:14 ` [PATCH v5 03/16] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-22  8:14 ` [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization Claire Chang
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add a new Kconfig symbol, DMA_RESTRICTED_POOL, for the restricted DMA
pool (an example config fragment follows the patch).

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
-- 
2.31.1.368.gbe11c130af-goog
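
A minimal config fragment to enable the feature (SWIOTLB is selected
automatically; the OF symbols are the stated dependencies):

	CONFIG_OF=y
	CONFIG_OF_RESERVED_MEM=y
	CONFIG_DMA_RESTRICTED_POOL=y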




* [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (3 preceding siblings ...)
  2021-04-22  8:14 ` [PATCH v5 04/16] swiotlb: Add DMA_RESTRICTED_POOL Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-23 11:34   ` Steven Price
  2021-04-22  8:14 ` [PATCH v5 06/16] swiotlb: Add a new get_io_tlb_mem getter Claire Chang
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes (a consumer sketch follows the patch).

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/device.h  |  4 +++
 include/linux/swiotlb.h |  3 +-
 kernel/dma/swiotlb.c    | 80 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 38a2071cf776..4987608ea4ff 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -521,6 +522,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..03ad6e3b4056 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 57a9adb920bf..ffbb8724e06c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -681,3 +688,76 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	if (dev->dma_io_tlb_mem)
+		return 0;
+
+	/* Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+#ifdef CONFIG_ARM
+		if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+			kfree(mem);
+			return -EINVAL;
+		}
+#endif /* CONFIG_ARM */
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+
+		rmem->priv = mem;
+	}
+
+#ifdef CONFIG_DEBUG_FS
+	if (!io_tlb_default_mem->debugfs)
+		io_tlb_default_mem->debugfs =
+			debugfs_create_dir("swiotlb", NULL);
+
+	swiotlb_create_debugfs(mem, rmem->name, io_tlb_default_mem->debugfs);
+#endif /* CONFIG_DEBUG_FS */
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (dev)
+		dev->dma_io_tlb_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %lu MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.31.1.368.gbe11c130af-goog
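
Until the automatic plumbing added in patch 16, a consumer would attach
to the pool by hand. A sketch, assuming a hypothetical driver whose DT
node carries a memory-region phandle to a restricted-dma-pool region:

	#include <linux/of_reserved_mem.h>
	#include <linux/platform_device.h>

	static int my_wifi_probe(struct platform_device *pdev)	/* illustrative */
	{
		/* Binds the first "memory-region" entry of the device node;
		 * for a "restricted-dma-pool" region this lands in
		 * rmem_swiotlb_device_init() above, which populates
		 * pdev->dev.dma_io_tlb_mem. */
		int ret = of_reserved_mem_device_init(&pdev->dev);

		if (ret)
			return ret;

		/* ... normal setup; streaming DMA now bounces via the pool ... */
		return 0;
	}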




* [PATCH v5 06/16] swiotlb: Add a new get_io_tlb_mem getter
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (4 preceding siblings ...)
  2021-04-22  8:14 ` [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-22  8:14 ` [PATCH v5 07/16] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add a new getter, get_io_tlb_mem, to select the io_tlb_mem struct for a
given device. The device's restricted DMA pool is preferred over the
default swiotlb if available.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 03ad6e3b4056..b469f04cca26 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -102,6 +103,16 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
+static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev && dev->dma_io_tlb_mem)
+		return dev->dma_io_tlb_mem;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
+	return io_tlb_default_mem;
+}
+
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 07/16] swiotlb: Update is_swiotlb_buffer to add a struct device argument
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (5 preceding siblings ...)
  2021-04-22  8:14 ` [PATCH v5 06/16] swiotlb: Add a new get_io_tlb_mem getter Claire Chang
@ 2021-04-22  8:14 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 08/16] swiotlb: Update is_swiotlb_active " Claire Chang
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Update is_swiotlb_buffer to take a struct device argument. This will be
useful later to support the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  6 +++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7bcdd1205535..a5df35bfd150 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -504,7 +504,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -575,7 +575,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -781,7 +781,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -794,7 +794,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -832,7 +832,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index b469f04cca26..2a6cca07540b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -113,9 +113,9 @@ static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
 	return io_tlb_default_mem;
 }
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -127,7 +127,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 08/16] swiotlb: Update is_swiotlb_active to add a struct device argument
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (6 preceding siblings ...)
  2021-04-22  8:14 ` [PATCH v5 07/16] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-23 13:31   ` Robin Murphy
  2021-04-22  8:15 ` [PATCH v5 09/16] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Update is_swiotlb_active to take a struct device argument. This will be
useful later to support the restricted DMA pool (see the note on the
NULL convention after the patch).

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..7d48c433446b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(NULL)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index e8b506a6685b..2a2ae6d6cf6d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(NULL);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..6d548ce53ce7 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(NULL)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2a6cca07540b..c530c976d18b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ffbb8724e06c..1d221343f1c8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -659,9 +659,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return get_io_tlb_mem(dev) != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.31.1.368.gbe11c130af-goog
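
Note the NULL convention: get_io_tlb_mem(NULL) falls back to the default
pool, so the converted callers above keep the old global semantics:

	is_swiotlb_active(NULL);	/* is the default swiotlb initialised? */
	is_swiotlb_active(dev);		/* the device's restricted pool if set,
					   otherwise the default swiotlb */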




* [PATCH v5 09/16] swiotlb: Bounce data from/to restricted DMA pool if available
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (7 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 08/16] swiotlb: Update is_swiotlb_active " Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 10/16] swiotlb: Move alloc_size to find_slots Claire Chang
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Regardless of the swiotlb settings, the restricted DMA pool is preferred
if available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that is_dev_swiotlb_force doesn't check if
swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior
with the default swiotlb would be changed by the following patch
("dma-direct: Allocate memory from restricted DMA pool if available").

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  8 ++++----
 3 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index c530c976d18b..0c5a18d9cf89 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev->dma_io_tlb_mem)
+		return true;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+	return false;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1d221343f1c8..96ff36d8ec53 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -344,7 +344,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -426,7 +426,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -503,7 +503,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -554,7 +554,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 10/16] swiotlb: Move alloc_size to find_slots
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (8 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 09/16] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 11/16] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Move the maintenance of alloc_size into find_slots() so that the slot
bookkeeping is complete at allocation time and can be reused by the
allocation path added later in this series.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 96ff36d8ec53..b7d634d7a7eb 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -479,8 +479,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -535,11 +538,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 11/16] swiotlb: Refactor swiotlb_tbl_unmap_single
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (9 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 10/16] swiotlb: Move alloc_size to find_slots Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 12/16] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add a new function, release_slots, to make the code reusable for
supporting different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b7d634d7a7eb..af0feb8eaead 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -547,27 +547,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -602,6 +590,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 12/16] dma-direct: Add a new wrapper __dma_direct_free_pages()
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (10 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 11/16] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 13/16] swiotlb: Add restricted DMA alloc/free support Claire Chang
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..7a27f0510fcc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.31.1.368.gbe11c130af-goog




* [PATCH v5 13/16] swiotlb: Add restricted DMA alloc/free support.
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (11 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 12/16] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Add two functions, swiotlb_alloc() and swiotlb_free(), to support memory
allocation from the restricted DMA pool (a usage sketch follows the
patch).

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  4 ++++
 kernel/dma/swiotlb.c    | 35 +++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 0c5a18d9cf89..e8cf49bd90c5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -134,6 +134,10 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index af0feb8eaead..274272c79080 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -454,8 +454,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -695,6 +696,36 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.31.1.368.gbe11c130af-goog
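
A usage sketch of the pair (patch 14 wires them into dma-direct; error
handling is trimmed, and the dma_free_contiguous() fallback mirrors what
patch 14 does for pages that did not come from the pool):

	struct page *page = swiotlb_alloc(dev, size);

	if (!page)
		return NULL;			/* no pool, or pool exhausted */
	/* ... hand the page to the device ... */
	if (!swiotlb_free(dev, page, size))
		dma_free_contiguous(dev, page, size);	/* not a pool page */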




* [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (12 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 13/16] swiotlb: Add restricted DMA alloc/free support Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-23 13:46   ` Robin Murphy
  2021-04-22  8:15 ` [PATCH v5 15/16] dt-bindings: of: Add restricted DMA pool Claire Chang
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

The restricted DMA pool is preferred if available.

The restricted DMA pool provides a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a27f0510fcc..29523d2a9845 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 static void __dma_direct_free_pages(struct device *dev, struct page *page,
 				    size_t size)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (swiotlb_free(dev, page, size))
+		return;
+#endif
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	page = swiotlb_alloc(dev, size);
+	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+		__dma_direct_free_pages(dev, page, size);
+		page = NULL;
+	}
+#endif
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -161,8 +175,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
@@ -172,7 +186,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -253,15 +269,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -289,7 +305,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
-- 
2.31.1.368.gbe11c130af-goog
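
Condensed, the allocation order in __dma_direct_alloc_pages() after this
patch becomes (a sketch of the hunk above with the
CONFIG_DMA_RESTRICTED_POOL guard elided; not the literal code):

	/* 1. Prefer the device's restricted pool; NULL if it has none. */
	page = swiotlb_alloc(dev, size);
	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
		__dma_direct_free_pages(dev, page, size);
		page = NULL;
	}

	/* 2. Otherwise fall back to the usual contiguous allocation. */
	if (!page)
		page = dma_alloc_contiguous(dev, size, gfp);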



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v5 15/16] dt-bindings: of: Add restricted DMA pool
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (13 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-22  8:15 ` [PATCH v5 16/16] of: Add plumbing for " Claire Chang
  2021-05-10  9:53 ` [PATCH v5 00/16] Restricted DMA Claire Chang
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool entry in the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..fc9a12c2f679 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,20 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region is intended to be the only one accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -120,6 +134,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x400000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +157,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
-- 
2.31.1.368.gbe11c130af-goog
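
With the binding above, the driver for pcie_device needs no pool-specific
code: once the region is attached, ordinary DMA API calls are served from
it. A hypothetical consumer, with all names placeholders:

	/* Illustrative driver fragment; dev corresponds to pcie_device. */
	dma_addr_t dma_handle;
	void *buf;

	buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;	/* restricted pool exhausted or absent */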



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v5 16/16] of: Add plumbing for restricted DMA pool
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (14 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 15/16] dt-bindings: of: Add restricted DMA pool Claire Chang
@ 2021-04-22  8:15 ` Claire Chang
  2021-04-23  2:52   ` Claire Chang
  2021-04-23 13:35   ` Robin Murphy
  2021-05-10  9:53 ` [PATCH v5 00/16] Restricted DMA Claire Chang
  16 siblings, 2 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-22  8:15 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

If a device is not behind an IOMMU, we look up the device node and set
up restricted DMA when a restricted-dma-pool region is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 25 +++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 33 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 54f221dde267..fff3adfe4986 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1109,6 +1110,30 @@ bool of_dma_is_coherent(struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
 
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		/* There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(
+				dev, dev->of_node, i);
+	}
+
+	return 0;
+}
+
 /**
  * of_mmio_is_nonposted - Check if device uses non-posted MMIO
  * @np:	device node
diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..d8d865223e51 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d717efbd637d..e9237f5eff48 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_get_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.31.1.368.gbe11c130af-goog
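
Schematically, the hook sits at the tail of DMA configuration for the
no-IOMMU case (a sketch of the call flow from the device.c hunk above,
with an explanatory comment added):

	/* Tail of of_dma_configure_id() after this patch: */
	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);

	if (!iommu)
		/* No IOMMU to isolate the device: try a restricted pool. */
		return of_dma_set_restricted_buffer(dev);

	return 0;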



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 16/16] of: Add plumbing for restricted DMA pool
  2021-04-22  8:15 ` [PATCH v5 16/16] of: Add plumbing for " Claire Chang
@ 2021-04-23  2:52   ` Claire Chang
  2021-04-23 13:35   ` Robin Murphy
  1 sibling, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-23  2:52 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

On Thu, Apr 22, 2021 at 4:17 PM Claire Chang <tientzu@chromium.org> wrote:
>
> If a device is not behind an IOMMU, we look up the device node and set
> up restricted DMA when a restricted-dma-pool region is present.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  drivers/of/address.c    | 25 +++++++++++++++++++++++++
>  drivers/of/device.c     |  3 +++
>  drivers/of/of_private.h |  5 +++++
>  3 files changed, 33 insertions(+)
>
> diff --git a/drivers/of/address.c b/drivers/of/address.c
> index 54f221dde267..fff3adfe4986 100644
> --- a/drivers/of/address.c
> +++ b/drivers/of/address.c
> @@ -8,6 +8,7 @@
>  #include <linux/logic_pio.h>
>  #include <linux/module.h>
>  #include <linux/of_address.h>
> +#include <linux/of_reserved_mem.h>
>  #include <linux/pci.h>
>  #include <linux/pci_regs.h>
>  #include <linux/sizes.h>
> @@ -1109,6 +1110,30 @@ bool of_dma_is_coherent(struct device_node *np)
>  }
>  EXPORT_SYMBOL_GPL(of_dma_is_coherent);
>
> +int of_dma_set_restricted_buffer(struct device *dev)
> +{
> +       struct device_node *node;
> +       int count, i;
> +
> +       if (!dev->of_node)
> +               return 0;
> +
> +       count = of_property_count_elems_of_size(dev->of_node, "memory-region",
> +                                               sizeof(phandle));
> +       for (i = 0; i < count; i++) {
> +               node = of_parse_phandle(dev->of_node, "memory-region", i);
> +               /* There might be multiple memory regions, but only one
> +                * restricted-dma-pool region is allowed.
> +                */
> +               if (of_device_is_compatible(node, "restricted-dma-pool") &&
> +                   of_device_is_available(node))
> +                       return of_reserved_mem_device_init_by_idx(
> +                               dev, dev->of_node, i);
> +       }
> +
> +       return 0;
> +}
> +
>  /**
>   * of_mmio_is_nonposted - Check if device uses non-posted MMIO
>   * @np:        device node
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index c5a9473a5fb1..d8d865223e51 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
>
>         arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
>
> +       if (!iommu)
> +               return of_dma_set_restricted_buffer(dev);
> +
>         return 0;
>  }
>  EXPORT_SYMBOL_GPL(of_dma_configure_id);
> diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
> index d717efbd637d..e9237f5eff48 100644
> --- a/drivers/of/of_private.h
> +++ b/drivers/of/of_private.h
> @@ -163,12 +163,17 @@ struct bus_dma_region;
>  #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
>  int of_dma_get_range(struct device_node *np,
>                 const struct bus_dma_region **map);
> +int of_dma_set_restricted_buffer(struct device *dev);
>  #else
>  static inline int of_dma_get_range(struct device_node *np,
>                 const struct bus_dma_region **map)
>  {
>         return -ENODEV;
>  }
> +static inline int of_dma_get_restricted_buffer(struct device *dev)

This one should be of_dma_set_restricted_buffer. Sorry for the typo.

> +{
> +       return -ENODEV;
> +}
>  #endif
>
>  #endif /* _LINUX_OF_PRIVATE_H */
> --
> 2.31.1.368.gbe11c130af-goog
>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 01/16] swiotlb: Fix the type of index
  2021-04-22  8:14 ` [PATCH v5 01/16] swiotlb: Fix the type of index Claire Chang
@ 2021-04-23  7:11   ` Christoph Hellwig
  0 siblings, 0 replies; 29+ messages in thread
From: Christoph Hellwig @ 2021-04-23  7:11 UTC (permalink / raw)
  To: Claire Chang
  Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

On Thu, Apr 22, 2021 at 04:14:53PM +0800, Claire Chang wrote:
> Fix the type of index from unsigned int to int since find_slots() might
> return -1.
> 
> Fixes: 0774983bc923 ("swiotlb: refactor swiotlb_tbl_map_single")
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

it really should go into 5.12.  I'm not sure if Konrad is going to
be able to queue this up due to his vacation, so I'm tempted to just
queue it up in the dma-mapping tree.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization
  2021-04-22  8:14 ` [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization Claire Chang
@ 2021-04-23 11:34   ` Steven Price
  2021-04-26 16:37     ` Claire Chang
  0 siblings, 1 reply; 29+ messages in thread
From: Steven Price @ 2021-04-23 11:34 UTC (permalink / raw)
  To: Claire Chang, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

On 22/04/2021 09:14, Claire Chang wrote:
> Add the initialization function to create restricted DMA pools from
> matching reserved-memory nodes.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>   include/linux/device.h  |  4 +++
>   include/linux/swiotlb.h |  3 +-
>   kernel/dma/swiotlb.c    | 80 +++++++++++++++++++++++++++++++++++++++++
>   3 files changed, 86 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/device.h b/include/linux/device.h
> index 38a2071cf776..4987608ea4ff 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -416,6 +416,7 @@ struct dev_links_info {
>    * @dma_pools:	Dma pools (if dma'ble device).
>    * @dma_mem:	Internal for coherent mem override.
>    * @cma_area:	Contiguous memory area for dma allocations
> + * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
>    * @archdata:	For arch-specific additions.
>    * @of_node:	Associated device tree node.
>    * @fwnode:	Associated device node supplied by platform firmware.
> @@ -521,6 +522,9 @@ struct device {
>   #ifdef CONFIG_DMA_CMA
>   	struct cma *cma_area;		/* contiguous memory area for dma
>   					   allocations */
> +#endif
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	struct io_tlb_mem *dma_io_tlb_mem;
>   #endif
>   	/* arch specific additions */
>   	struct dev_archdata	archdata;
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 216854a5e513..03ad6e3b4056 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
>    *		range check to see if the memory was in fact allocated by this
>    *		API.
>    * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
> - *		@end. This is command line adjustable via setup_io_tlb_npages.
> + *		@end. For default swiotlb, this is command line adjustable via
> + *		setup_io_tlb_npages.
>    * @used:	The number of used IO TLB block.
>    * @list:	The free list describing the number of free entries available
>    *		from each index.
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 57a9adb920bf..ffbb8724e06c 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -39,6 +39,13 @@
>   #ifdef CONFIG_DEBUG_FS
>   #include <linux/debugfs.h>
>   #endif
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +#include <linux/io.h>
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/slab.h>
> +#endif
>   
>   #include <asm/io.h>
>   #include <asm/dma.h>
> @@ -681,3 +688,76 @@ static int __init swiotlb_create_default_debugfs(void)
>   late_initcall(swiotlb_create_default_debugfs);
>   
>   #endif
> +
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> +				    struct device *dev)
> +{
> +	struct io_tlb_mem *mem = rmem->priv;
> +	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
> +
> +	if (dev->dma_io_tlb_mem)
> +		return 0;
> +
> +	/* Since multiple devices can share the same pool, the private data,
> +	 * io_tlb_mem struct, will be initialized by the first device attached
> +	 * to it.
> +	 */
> +	if (!mem) {
> +		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
> +		if (!mem)
> +			return -ENOMEM;
> +#ifdef CONFIG_ARM
> +		if (!PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
> +			kfree(mem);
> +			return -EINVAL;
> +		}
> +#endif /* CONFIG_ARM */
> +		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
> +
> +		rmem->priv = mem;
> +	}
> +
> +#ifdef CONFIG_DEBUG_FS
> +	if (!io_tlb_default_mem->debugfs)
> +		io_tlb_default_mem->debugfs =
> +			debugfs_create_dir("swiotlb", NULL);

At this point it's possible for io_tlb_default_mem to be NULL, leading 
to a splat.

But even if it's not, in the case where debugfs == NULL the 
debugfs_create_dir() here will cause a subsequent attempt in 
swiotlb_create_debugfs() to fail (directory already exists), leaving 
mem->debugfs assigned an error value. I suspect the creation of the 
debugfs directory needs to be separated from io_tlb_default_mem being 
set.
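
For illustration only, a NULL check would avoid the immediate splat,
though it wouldn't address the directory-separation issue:

	if (io_tlb_default_mem && !io_tlb_default_mem->debugfs)
		io_tlb_default_mem->debugfs =
			debugfs_create_dir("swiotlb", NULL);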

Other than that I gave this series a go with our prototype of Arm's 
Confidential Computer Architecture[1] - since the majority of the 
guest's memory is protected from the host the restricted DMA pool allows 
(only) a small area to be shared with the host.

After fixing (well hacking round) the above it all seems to be working 
fine with virtio drivers.

Thanks,

Steve

[1] 
https://www.arm.com/why-arm/architecture/security-features/arm-confidential-compute-architecture


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 08/16] swiotlb: Update is_swiotlb_active to add a struct device argument
  2021-04-22  8:15 ` [PATCH v5 08/16] swiotlb: Update is_swiotlb_active " Claire Chang
@ 2021-04-23 13:31   ` Robin Murphy
  2021-04-26 16:37     ` Claire Chang
  0 siblings, 1 reply; 29+ messages in thread
From: Robin Murphy @ 2021-04-23 13:31 UTC (permalink / raw)
  To: Claire Chang, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

On 2021-04-22 09:15, Claire Chang wrote:
> Update is_swiotlb_active to add a struct device argument. This will be
> useful later to allow for restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
>   drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
>   drivers/pci/xen-pcifront.c                   | 2 +-
>   include/linux/swiotlb.h                      | 4 ++--
>   kernel/dma/direct.c                          | 2 +-
>   kernel/dma/swiotlb.c                         | 4 ++--
>   6 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> index ce6b664b10aa..7d48c433446b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
>   
>   	max_order = MAX_ORDER;
>   #ifdef CONFIG_SWIOTLB
> -	if (is_swiotlb_active()) {
> +	if (is_swiotlb_active(NULL)) {
>   		unsigned int max_segment;
>   
>   		max_segment = swiotlb_max_segment();
> diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> index e8b506a6685b..2a2ae6d6cf6d 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> @@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
>   	}
>   
>   #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> -	need_swiotlb = is_swiotlb_active();
> +	need_swiotlb = is_swiotlb_active(NULL);
>   #endif
>   
>   	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index b7a8f3a1921f..6d548ce53ce7 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
>   
>   	spin_unlock(&pcifront_dev_lock);
>   
> -	if (!err && !is_swiotlb_active()) {
> +	if (!err && !is_swiotlb_active(NULL)) {
>   		err = pci_xen_swiotlb_init_late();
>   		if (err)
>   			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 2a6cca07540b..c530c976d18b 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>   void __init swiotlb_exit(void);
>   unsigned int swiotlb_max_segment(void);
>   size_t swiotlb_max_mapping_size(struct device *dev);
> -bool is_swiotlb_active(void);
> +bool is_swiotlb_active(struct device *dev);
>   void __init swiotlb_adjust_size(unsigned long size);
>   #else
>   #define swiotlb_force SWIOTLB_NO_FORCE
> @@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
>   	return SIZE_MAX;
>   }
>   
> -static inline bool is_swiotlb_active(void)
> +static inline bool is_swiotlb_active(struct device *dev)
>   {
>   	return false;
>   }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 84c9feb5474a..7a88c34d0867 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
>   size_t dma_direct_max_mapping_size(struct device *dev)
>   {
>   	/* If SWIOTLB is active, use its maximum mapping size */
> -	if (is_swiotlb_active() &&
> +	if (is_swiotlb_active(dev) &&
>   	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))

I wonder if it's worth trying to fold these other conditions into 
is_swiotlb_active() itself? I'm not entirely sure what matters for Xen, 
but for the other cases it seems like they probably only care about 
whether bouncing may occur for their particular device or not (possibly 
they want to be using dma_max_mapping_size() now anyway - TBH I'm 
struggling to make sense of what the swiotlb_max_segment business is 
supposed to mean).
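
Illustratively, the folded helper might look something like this (the
name is hypothetical, and the right set of conditions is exactly the
open question):

	bool dma_should_bounce(struct device *dev)
	{
		return is_swiotlb_active(dev) &&
		       (dma_addressing_limited(dev) ||
			swiotlb_force == SWIOTLB_FORCE);
	}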

Otherwise, patch #9 will need to touch here as well to make sure that 
per-device forced bouncing is reflected correctly.

Robin.

>   		return swiotlb_max_mapping_size(dev);
>   	return SIZE_MAX;
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index ffbb8724e06c..1d221343f1c8 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -659,9 +659,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
>   	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
>   }
>   
> -bool is_swiotlb_active(void)
> +bool is_swiotlb_active(struct device *dev)
>   {
> -	return io_tlb_default_mem != NULL;
> +	return get_io_tlb_mem(dev) != NULL;
>   }
>   EXPORT_SYMBOL_GPL(is_swiotlb_active);
>   
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 16/16] of: Add plumbing for restricted DMA pool
  2021-04-22  8:15 ` [PATCH v5 16/16] of: Add plumbing for " Claire Chang
  2021-04-23  2:52   ` Claire Chang
@ 2021-04-23 13:35   ` Robin Murphy
  2021-04-26 16:38     ` Claire Chang
  1 sibling, 1 reply; 29+ messages in thread
From: Robin Murphy @ 2021-04-23 13:35 UTC (permalink / raw)
  To: Claire Chang, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

On 2021-04-22 09:15, Claire Chang wrote:
> If a device is not behind an IOMMU, we look up the device node and set
> up restricted DMA when a restricted-dma-pool region is present.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>   drivers/of/address.c    | 25 +++++++++++++++++++++++++
>   drivers/of/device.c     |  3 +++
>   drivers/of/of_private.h |  5 +++++
>   3 files changed, 33 insertions(+)
> 
> diff --git a/drivers/of/address.c b/drivers/of/address.c
> index 54f221dde267..fff3adfe4986 100644
> --- a/drivers/of/address.c
> +++ b/drivers/of/address.c
> @@ -8,6 +8,7 @@
>   #include <linux/logic_pio.h>
>   #include <linux/module.h>
>   #include <linux/of_address.h>
> +#include <linux/of_reserved_mem.h>
>   #include <linux/pci.h>
>   #include <linux/pci_regs.h>
>   #include <linux/sizes.h>
> @@ -1109,6 +1110,30 @@ bool of_dma_is_coherent(struct device_node *np)
>   }
>   EXPORT_SYMBOL_GPL(of_dma_is_coherent);
>   
> +int of_dma_set_restricted_buffer(struct device *dev)
> +{
> +	struct device_node *node;
> +	int count, i;
> +
> +	if (!dev->of_node)
> +		return 0;
> +
> +	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
> +						sizeof(phandle));
> +	for (i = 0; i < count; i++) {
> +		node = of_parse_phandle(dev->of_node, "memory-region", i);
> +		/* There might be multiple memory regions, but only one
> +		 * restricted-dma-pool region is allowed.
> +		 */

What's the use-case for having multiple regions if the restricted pool 
is by definition the only one accessible?

Robin.

> +		if (of_device_is_compatible(node, "restricted-dma-pool") &&
> +		    of_device_is_available(node))
> +			return of_reserved_mem_device_init_by_idx(
> +				dev, dev->of_node, i);
> +	}
> +
> +	return 0;
> +}
> +
>   /**
>    * of_mmio_is_nonposted - Check if device uses non-posted MMIO
>    * @np:	device node
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index c5a9473a5fb1..d8d865223e51 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
>   
>   	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
>   
> +	if (!iommu)
> +		return of_dma_set_restricted_buffer(dev);
> +
>   	return 0;
>   }
>   EXPORT_SYMBOL_GPL(of_dma_configure_id);
> diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
> index d717efbd637d..e9237f5eff48 100644
> --- a/drivers/of/of_private.h
> +++ b/drivers/of/of_private.h
> @@ -163,12 +163,17 @@ struct bus_dma_region;
>   #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
>   int of_dma_get_range(struct device_node *np,
>   		const struct bus_dma_region **map);
> +int of_dma_set_restricted_buffer(struct device *dev);
>   #else
>   static inline int of_dma_get_range(struct device_node *np,
>   		const struct bus_dma_region **map)
>   {
>   	return -ENODEV;
>   }
> +static inline int of_dma_get_restricted_buffer(struct device *dev)
> +{
> +	return -ENODEV;
> +}
>   #endif
>   
>   #endif /* _LINUX_OF_PRIVATE_H */
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available
  2021-04-22  8:15 ` [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
@ 2021-04-23 13:46   ` Robin Murphy
  2021-05-03 14:26     ` Claire Chang
  0 siblings, 1 reply; 29+ messages in thread
From: Robin Murphy @ 2021-04-23 13:46 UTC (permalink / raw)
  To: Claire Chang, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, nouveau, rodrigo.vivi,
	thomas.hellstrom

On 2021-04-22 09:15, Claire Chang wrote:
> The restricted DMA pool is preferred if available.
> 
> The restricted DMA pool provides a basic level of protection against the
> DMA overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system
> needs to provide a way to lock down the memory access, e.g., MPU.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>   kernel/dma/direct.c | 35 ++++++++++++++++++++++++++---------
>   1 file changed, 26 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a27f0510fcc..29523d2a9845 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
>   static void __dma_direct_free_pages(struct device *dev, struct page *page,
>   				    size_t size)
>   {
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	if (swiotlb_free(dev, page, size))
> +		return;
> +#endif
>   	dma_free_contiguous(dev, page, size);
>   }
>   
> @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>   
>   	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
>   					   &phys_limit);
> -	page = dma_alloc_contiguous(dev, size, gfp);
> +
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	page = swiotlb_alloc(dev, size);
> +	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> +		__dma_direct_free_pages(dev, page, size);
> +		page = NULL;
> +	}
> +#endif
> +
> +	if (!page)
> +		page = dma_alloc_contiguous(dev, size, gfp);
>   	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
>   		dma_free_contiguous(dev, page, size);
>   		page = NULL;
> @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>   		gfp |= __GFP_NOWARN;
>   
>   	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -	    !force_dma_unencrypted(dev)) {
> +	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
>   		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
>   		if (!page)
>   			return NULL;
> @@ -161,8 +175,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>   	}
>   
>   	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -	    !dev_is_dma_coherent(dev))
> +	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +	    !is_dev_swiotlb_force(dev))
>   		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
>   
>   	/*
> @@ -172,7 +186,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>   	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
>   	    !gfpflags_allow_blocking(gfp) &&
>   	    (force_dma_unencrypted(dev) ||
> -	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> +	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> +	      !dev_is_dma_coherent(dev))) &&
> +	    !is_dev_swiotlb_force(dev))
>   		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>   
>   	/* we always manually zero the memory once we are done */
> @@ -253,15 +269,15 @@ void dma_direct_free(struct device *dev, size_t size,
>   	unsigned int page_order = get_order(size);
>   
>   	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -	    !force_dma_unencrypted(dev)) {
> +	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
>   		/* cpu_addr is a struct page cookie, not a kernel address */
>   		dma_free_contiguous(dev, cpu_addr, size);
>   		return;
>   	}
>   
>   	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -	    !dev_is_dma_coherent(dev)) {
> +	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +	    !is_dev_swiotlb_force(dev)) {
>   		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
>   		return;
>   	}
> @@ -289,7 +305,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>   	void *ret;
>   
>   	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> -	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> +	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> +	    !is_dev_swiotlb_force(dev))
>   		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

Wait, this seems broken for non-coherent devices - in that case we need 
to return a non-cacheable address, but we can't simply fall through into 
the remapping path below in GFP_ATOMIC context. That's why we need the 
atomic pool concept in the first place :/

Unless I've overlooked something, we're still using the regular 
cacheable linear map address of the dma_io_tlb_mem buffer, no?

Robin.

>   
>   	page = __dma_direct_alloc_pages(dev, size, gfp);
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization
  2021-04-23 11:34   ` Steven Price
@ 2021-04-26 16:37     ` Claire Chang
  2021-04-28  9:50       ` Steven Price
  0 siblings, 1 reply; 29+ messages in thread
From: Claire Chang @ 2021-04-26 16:37 UTC (permalink / raw)
  To: Steven Price
  Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, nouveau,
	rodrigo.vivi, thomas.hellstrom

On Fri, Apr 23, 2021 at 7:34 PM Steven Price <steven.price@arm.com> wrote:
>
> On 22/04/2021 09:14, Claire Chang wrote:
> > Add the initialization function to create restricted DMA pools from
> > matching reserved-memory nodes.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >   include/linux/device.h  |  4 +++
> >   include/linux/swiotlb.h |  3 +-
> >   kernel/dma/swiotlb.c    | 80 +++++++++++++++++++++++++++++++++++++++++
> >   3 files changed, 86 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/device.h b/include/linux/device.h
> > index 38a2071cf776..4987608ea4ff 100644
> > --- a/include/linux/device.h
> > +++ b/include/linux/device.h
> > @@ -416,6 +416,7 @@ struct dev_links_info {
> >    * @dma_pools:      Dma pools (if dma'ble device).
> >    * @dma_mem:        Internal for coherent mem override.
> >    * @cma_area:       Contiguous memory area for dma allocations
> > + * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
> >    * @archdata:       For arch-specific additions.
> >    * @of_node:        Associated device tree node.
> >    * @fwnode: Associated device node supplied by platform firmware.
> > @@ -521,6 +522,9 @@ struct device {
> >   #ifdef CONFIG_DMA_CMA
> >       struct cma *cma_area;           /* contiguous memory area for dma
> >                                          allocations */
> > +#endif
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     struct io_tlb_mem *dma_io_tlb_mem;
> >   #endif
> >       /* arch specific additions */
> >       struct dev_archdata     archdata;
> > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > index 216854a5e513..03ad6e3b4056 100644
> > --- a/include/linux/swiotlb.h
> > +++ b/include/linux/swiotlb.h
> > @@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
> >    *          range check to see if the memory was in fact allocated by this
> >    *          API.
> >    * @nslabs: The number of IO TLB blocks (in groups of 64) between @start and
> > - *           @end. This is command line adjustable via setup_io_tlb_npages.
> > + *           @end. For default swiotlb, this is command line adjustable via
> > + *           setup_io_tlb_npages.
> >    * @used:   The number of used IO TLB block.
> >    * @list:   The free list describing the number of free entries available
> >    *          from each index.
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 57a9adb920bf..ffbb8724e06c 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -39,6 +39,13 @@
> >   #ifdef CONFIG_DEBUG_FS
> >   #include <linux/debugfs.h>
> >   #endif
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +#include <linux/io.h>
> > +#include <linux/of.h>
> > +#include <linux/of_fdt.h>
> > +#include <linux/of_reserved_mem.h>
> > +#include <linux/slab.h>
> > +#endif
> >
> >   #include <asm/io.h>
> >   #include <asm/dma.h>
> > @@ -681,3 +688,76 @@ static int __init swiotlb_create_default_debugfs(void)
> >   late_initcall(swiotlb_create_default_debugfs);
> >
> >   #endif
> > +
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> > +                                 struct device *dev)
> > +{
> > +     struct io_tlb_mem *mem = rmem->priv;
> > +     unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
> > +
> > +     if (dev->dma_io_tlb_mem)
> > +             return 0;
> > +
> > +     /* Since multiple devices can share the same pool, the private data,
> > +      * io_tlb_mem struct, will be initialized by the first device attached
> > +      * to it.
> > +      */
> > +     if (!mem) {
> > +             mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
> > +             if (!mem)
> > +                     return -ENOMEM;
> > +#ifdef CONFIG_ARM
> > +             if (!PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
> > +                     kfree(mem);
> > +                     return -EINVAL;
> > +             }
> > +#endif /* CONFIG_ARM */
> > +             swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
> > +
> > +             rmem->priv = mem;
> > +     }
> > +
> > +#ifdef CONFIG_DEBUG_FS
> > +     if (!io_tlb_default_mem->debugfs)
> > +             io_tlb_default_mem->debugfs =
> > +                     debugfs_create_dir("swiotlb", NULL);
>
> At this point it's possible for io_tlb_default_mem to be NULL, leading
> to a splat.

Thanks for pointing this out.

>
> But even if it's not, in the case where debugfs == NULL the
> debugfs_create_dir() here will cause a subsequent attempt in
> swiotlb_create_debugfs() to fail (directory already exists), leaving
> mem->debugfs assigned an error value. I suspect the creation of the
> debugfs directory needs to be separated from io_tlb_default_mem being
> set.

debugfs creation should move into the if (!mem) {...} above to avoid
duplication.
I think having a separate struct dentry pointer for the default
debugfs should be enough?

if (!debugfs)
    debugfs = debugfs_create_dir("swiotlb", NULL);
swiotlb_create_debugfs(mem, rmem->name, debugfs);

>
> Other than that I gave this series a go with our prototype of Arm's
> Confidential Computer Architecture[1] - since the majority of the
> guest's memory is protected from the host the restricted DMA pool allows
> (only) a small area to be shared with the host.
>
> After fixing (well hacking round) the above it all seems to be working
> fine with virtio drivers.
>
> Thanks,
>
> Steve
>
> [1]
> https://www.arm.com/why-arm/architecture/security-features/arm-confidential-compute-architecture


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 08/16] swiotlb: Update is_swiotlb_active to add a struct device argument
  2021-04-23 13:31   ` Robin Murphy
@ 2021-04-26 16:37     ` Claire Chang
  0 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-26 16:37 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, nouveau,
	rodrigo.vivi, thomas.hellstrom

On Fri, Apr 23, 2021 at 9:31 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-04-22 09:15, Claire Chang wrote:
> > Update is_swiotlb_active to add a struct device argument. This will be
> > useful later to allow for restricted DMA pool.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >   drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
> >   drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
> >   drivers/pci/xen-pcifront.c                   | 2 +-
> >   include/linux/swiotlb.h                      | 4 ++--
> >   kernel/dma/direct.c                          | 2 +-
> >   kernel/dma/swiotlb.c                         | 4 ++--
> >   6 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> > index ce6b664b10aa..7d48c433446b 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> > @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
> >
> >       max_order = MAX_ORDER;
> >   #ifdef CONFIG_SWIOTLB
> > -     if (is_swiotlb_active()) {
> > +     if (is_swiotlb_active(NULL)) {
> >               unsigned int max_segment;
> >
> >               max_segment = swiotlb_max_segment();
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> > index e8b506a6685b..2a2ae6d6cf6d 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> > @@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
> >       }
> >
> >   #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> > -     need_swiotlb = is_swiotlb_active();
> > +     need_swiotlb = is_swiotlb_active(NULL);
> >   #endif
> >
> >       ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
> > diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> > index b7a8f3a1921f..6d548ce53ce7 100644
> > --- a/drivers/pci/xen-pcifront.c
> > +++ b/drivers/pci/xen-pcifront.c
> > @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
> >
> >       spin_unlock(&pcifront_dev_lock);
> >
> > -     if (!err && !is_swiotlb_active()) {
> > +     if (!err && !is_swiotlb_active(NULL)) {
> >               err = pci_xen_swiotlb_init_late();
> >               if (err)
> >                       dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > index 2a6cca07540b..c530c976d18b 100644
> > --- a/include/linux/swiotlb.h
> > +++ b/include/linux/swiotlb.h
> > @@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> >   void __init swiotlb_exit(void);
> >   unsigned int swiotlb_max_segment(void);
> >   size_t swiotlb_max_mapping_size(struct device *dev);
> > -bool is_swiotlb_active(void);
> > +bool is_swiotlb_active(struct device *dev);
> >   void __init swiotlb_adjust_size(unsigned long size);
> >   #else
> >   #define swiotlb_force SWIOTLB_NO_FORCE
> > @@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
> >       return SIZE_MAX;
> >   }
> >
> > -static inline bool is_swiotlb_active(void)
> > +static inline bool is_swiotlb_active(struct device *dev)
> >   {
> >       return false;
> >   }
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 84c9feb5474a..7a88c34d0867 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
> >   size_t dma_direct_max_mapping_size(struct device *dev)
> >   {
> >       /* If SWIOTLB is active, use its maximum mapping size */
> > -     if (is_swiotlb_active() &&
> > +     if (is_swiotlb_active(dev) &&
> >           (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
>
> I wonder if it's worth trying to fold these other conditions into
> is_swiotlb_active() itself? I'm not entirely sure what matters for Xen,
> but for the other cases it seems like they probably only care about
> whether bouncing may occur for their particular device or not (possibly
> they want to be using dma_max_mapping_size() now anyway - TBH I'm
> struggling to make sense of what the swiotlb_max_segment business is
> supposed to mean).

I think leaving those conditions outside of is_swiotlb_active() might
help avoid confusion with is_dev_swiotlb_force() in patch #9? We need
is_dev_swiotlb_force() only because the restricted DMA pool supports
memory allocation, but the default swiotlb doesn't.
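
To summarize the distinction (illustrative comment only; patch 9
introduces the helper and its exact semantics):

	/*
	 * is_swiotlb_active(dev)    - some io_tlb_mem exists that dev can
	 *                             bounce through (default or per-device).
	 * is_dev_swiotlb_force(dev) - dev has a restricted pool and all of
	 *                             its DMA memory must come from it.
	 */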

>
> Otherwise, patch #9 will need to touch here as well to make sure that
> per-device forced bouncing is reflected correctly.

You're right. Otherwise, is_dev_swiotlb_force is needed here.


>
> Robin.
>
> >               return swiotlb_max_mapping_size(dev);
> >       return SIZE_MAX;
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index ffbb8724e06c..1d221343f1c8 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -659,9 +659,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
> >       return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
> >   }
> >
> > -bool is_swiotlb_active(void)
> > +bool is_swiotlb_active(struct device *dev)
> >   {
> > -     return io_tlb_default_mem != NULL;
> > +     return get_io_tlb_mem(dev) != NULL;
> >   }
> >   EXPORT_SYMBOL_GPL(is_swiotlb_active);
> >
> >


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 16/16] of: Add plumbing for restricted DMA pool
  2021-04-23 13:35   ` Robin Murphy
@ 2021-04-26 16:38     ` Claire Chang
  0 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-04-26 16:38 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, nouveau,
	rodrigo.vivi, thomas.hellstrom

On Fri, Apr 23, 2021 at 9:35 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-04-22 09:15, Claire Chang wrote:
> > If a device is not behind an IOMMU, we look up the device node and set
> > up restricted DMA when a restricted-dma-pool region is present.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >   drivers/of/address.c    | 25 +++++++++++++++++++++++++
> >   drivers/of/device.c     |  3 +++
> >   drivers/of/of_private.h |  5 +++++
> >   3 files changed, 33 insertions(+)
> >
> > diff --git a/drivers/of/address.c b/drivers/of/address.c
> > index 54f221dde267..fff3adfe4986 100644
> > --- a/drivers/of/address.c
> > +++ b/drivers/of/address.c
> > @@ -8,6 +8,7 @@
> >   #include <linux/logic_pio.h>
> >   #include <linux/module.h>
> >   #include <linux/of_address.h>
> > +#include <linux/of_reserved_mem.h>
> >   #include <linux/pci.h>
> >   #include <linux/pci_regs.h>
> >   #include <linux/sizes.h>
> > @@ -1109,6 +1110,30 @@ bool of_dma_is_coherent(struct device_node *np)
> >   }
> >   EXPORT_SYMBOL_GPL(of_dma_is_coherent);
> >
> > +int of_dma_set_restricted_buffer(struct device *dev)
> > +{
> > +     struct device_node *node;
> > +     int count, i;
> > +
> > +     if (!dev->of_node)
> > +             return 0;
> > +
> > +     count = of_property_count_elems_of_size(dev->of_node, "memory-region",
> > +                                             sizeof(phandle));
> > +     for (i = 0; i < count; i++) {
> > +             node = of_parse_phandle(dev->of_node, "memory-region", i);
> > +             /* There might be multiple memory regions, but only one
> > +              * restricted-dma-pool region is allowed.
> > +              */
>
> What's the use-case for having multiple regions if the restricted pool
> is by definition the only one accessible?

There might be a device coherent pool (shared-dma-pool), and
dma_alloc_attrs() might allocate memory from that pool [1]. I'm not
sure it makes sense to have another device coherent pool while using
a restricted DMA pool, though.

[1] https://elixir.bootlin.com/linux/v5.12/source/kernel/dma/mapping.c#L435
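
For reference, the ordering in [1] (lightly abridged here from
kernel/dma/mapping.c in v5.12) shows that a per-device coherent pool,
when present, is tried before any of the other allocation paths:

void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
		gfp_t flag, unsigned long attrs)
{
	void *cpu_addr;

	/* A device-specific coherent pool takes precedence. */
	if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
		return cpu_addr;
	...
}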


>
> Robin.
>
> > +             if (of_device_is_compatible(node, "restricted-dma-pool") &&
> > +                 of_device_is_available(node))
> > +                     return of_reserved_mem_device_init_by_idx(
> > +                             dev, dev->of_node, i);
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> >   /**
> >    * of_mmio_is_nonposted - Check if device uses non-posted MMIO
> >    * @np:     device node
> > diff --git a/drivers/of/device.c b/drivers/of/device.c
> > index c5a9473a5fb1..d8d865223e51 100644
> > --- a/drivers/of/device.c
> > +++ b/drivers/of/device.c
> > @@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
> >
> >       arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
> >
> > +     if (!iommu)
> > +             return of_dma_set_restricted_buffer(dev);
> > +
> >       return 0;
> >   }
> >   EXPORT_SYMBOL_GPL(of_dma_configure_id);
> > diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
> > index d717efbd637d..e9237f5eff48 100644
> > --- a/drivers/of/of_private.h
> > +++ b/drivers/of/of_private.h
> > @@ -163,12 +163,17 @@ struct bus_dma_region;
> >   #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
> >   int of_dma_get_range(struct device_node *np,
> >               const struct bus_dma_region **map);
> > +int of_dma_set_restricted_buffer(struct device *dev);
> >   #else
> >   static inline int of_dma_get_range(struct device_node *np,
> >               const struct bus_dma_region **map)
> >   {
> >       return -ENODEV;
> >   }
> > +static inline int of_dma_set_restricted_buffer(struct device *dev)
> > +{
> > +     return -ENODEV;
> > +}
> >   #endif
> >
> >   #endif /* _LINUX_OF_PRIVATE_H */
> >


* Re: [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization
  2021-04-26 16:37     ` Claire Chang
@ 2021-04-28  9:50       ` Steven Price
  0 siblings, 0 replies; 29+ messages in thread
From: Steven Price @ 2021-04-28  9:50 UTC (permalink / raw)
  To: Claire Chang
  Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, nouveau,
	rodrigo.vivi, thomas.hellstrom

On 26/04/2021 17:37, Claire Chang wrote:
> On Fri, Apr 23, 2021 at 7:34 PM Steven Price <steven.price@arm.com> wrote:
[...]
>>
>> But even then if it's not and we have the situation where debugfs==NULL
>> then the debugfs_create_dir() here will cause a subsequent attempt in
>> swiotlb_create_debugfs() to fail (directory already exists) leading to
>> mem->debugfs being assigned an error value. I suspect the creation of
>> the debugfs directory needs to be separated from io_tlb_default_mem
>> being set.
> 
> The debugfs creation should move into the if (!mem) {...} branch above
> to avoid duplication.
> I think having a separate struct dentry pointer for the default
> debugfs directory should be enough:
> 
> if (!debugfs)
>      debugfs = debugfs_create_dir("swiotlb", NULL);
> swiotlb_create_debugfs(mem, rmem->name, debugfs);

Yes, that looks like a good solution to me, although I'd name the
variable something a bit more descriptive than just "debugfs", e.g.
"debugfs_dir" or "debugfs_root".

Thanks,

Steve


* Re: [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available
  2021-04-23 13:46   ` Robin Murphy
@ 2021-05-03 14:26     ` Claire Chang
  0 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-05-03 14:26 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, nouveau,
	rodrigo.vivi, thomas.hellstrom

On Fri, Apr 23, 2021 at 9:46 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-04-22 09:15, Claire Chang wrote:
> > The restricted DMA pool is preferred if available.
> >
> > The restricted DMA pools provide a basic level of protection against the
> > DMA overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system
> > needs to provide a way to lock down the memory access, e.g., MPU.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >   kernel/dma/direct.c | 35 ++++++++++++++++++++++++++---------
> >   1 file changed, 26 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 7a27f0510fcc..29523d2a9845 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
> >   static void __dma_direct_free_pages(struct device *dev, struct page *page,
> >                                   size_t size)
> >   {
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     if (swiotlb_free(dev, page, size))
> > +             return;
> > +#endif
> >       dma_free_contiguous(dev, page, size);
> >   }
> >
> > @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
> >
> >       gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
> >                                          &phys_limit);
> > -     page = dma_alloc_contiguous(dev, size, gfp);
> > +
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     page = swiotlb_alloc(dev, size);
> > +     if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> > +             __dma_direct_free_pages(dev, page, size);
> > +             page = NULL;
> > +     }
> > +#endif
> > +
> > +     if (!page)
> > +             page = dma_alloc_contiguous(dev, size, gfp);
> >       if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> >               dma_free_contiguous(dev, page, size);
> >               page = NULL;
> > @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >               gfp |= __GFP_NOWARN;
> >
> >       if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > -         !force_dma_unencrypted(dev)) {
> > +         !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
> >               page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
> >               if (!page)
> >                       return NULL;
> > @@ -161,8 +175,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >       }
> >
> >       if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> > -         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > -         !dev_is_dma_coherent(dev))
> > +         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> > +         !is_dev_swiotlb_force(dev))
> >               return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
> >
> >       /*
> > @@ -172,7 +186,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >       if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> >           !gfpflags_allow_blocking(gfp) &&
> >           (force_dma_unencrypted(dev) ||
> > -          (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> > +          (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > +           !dev_is_dma_coherent(dev))) &&
> > +         !is_dev_swiotlb_force(dev))
> >               return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
> >
> >       /* we always manually zero the memory once we are done */
> > @@ -253,15 +269,15 @@ void dma_direct_free(struct device *dev, size_t size,
> >       unsigned int page_order = get_order(size);
> >
> >       if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > -         !force_dma_unencrypted(dev)) {
> > +         !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
> >               /* cpu_addr is a struct page cookie, not a kernel address */
> >               dma_free_contiguous(dev, cpu_addr, size);
> >               return;
> >       }
> >
> >       if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> > -         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > -         !dev_is_dma_coherent(dev)) {
> > +         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> > +         !is_dev_swiotlb_force(dev)) {
> >               arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
> >               return;
> >       }
> > @@ -289,7 +305,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> >       void *ret;
> >
> >       if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> > -         force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> > +         force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> > +         !is_dev_swiotlb_force(dev))
> >               return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> Wait, this seems broken for non-coherent devices - in that case we need
> to return a non-cacheable address, but we can't simply fall through into
> the remapping path below in GFP_ATOMIC context. That's why we need the
> atomic pool concept in the first place :/

Sorry for the late reply. I'm not very familiar with this path. Does
the memory returned here have to be coherent? If so, could we say that
for this case one must set up a separate device coherent pool
(shared-dma-pool) and allocate via dma_alloc_from_dev_coherent() [1]?

[1] https://elixir.bootlin.com/linux/v5.12/source/kernel/dma/mapping.c#L435
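
If I understand the constraint correctly, it is roughly this (a sketch
only; needs_atomic_pool() is a name I'm making up for illustration, not
something in the series):

static bool needs_atomic_pool(struct device *dev, gfp_t gfp)
{
	/* Non-coherent devices need a non-cacheable mapping, but
	 * remapping may sleep, so it is not allowed in atomic
	 * context; pre-remapped pool memory must be used instead. */
	return !dev_is_dma_coherent(dev) && !gfpflags_allow_blocking(gfp);
}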

>
> Unless I've overlooked something, we're still using the regular
> cacheable linear map address of the dma_io_tlb_mem buffer, no?
>
> Robin.
>
> >
> >       page = __dma_direct_alloc_pages(dev, size, gfp);
> >


* Re: [PATCH v5 00/16] Restricted DMA
  2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
                   ` (15 preceding siblings ...)
  2021-04-22  8:15 ` [PATCH v5 16/16] of: Add plumbing for " Claire Chang
@ 2021-05-10  9:53 ` Claire Chang
  16 siblings, 0 replies; 29+ messages in thread
From: Claire Chang @ 2021-05-10  9:53 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
	boris.ostrovsky, jgross, Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, nouveau,
	rodrigo.vivi, thomas.hellstrom

v6: https://lore.kernel.org/patchwork/cover/1423201/


Thread overview: 29+ messages
2021-04-22  8:14 [PATCH v5 00/16] Restricted DMA Claire Chang
2021-04-22  8:14 ` [PATCH v5 01/16] swiotlb: Fix the type of index Claire Chang
2021-04-23  7:11   ` Christoph Hellwig
2021-04-22  8:14 ` [PATCH v5 02/16] swiotlb: Refactor swiotlb init functions Claire Chang
2021-04-22  8:14 ` [PATCH v5 03/16] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
2021-04-22  8:14 ` [PATCH v5 04/16] swiotlb: Add DMA_RESTRICTED_POOL Claire Chang
2021-04-22  8:14 ` [PATCH v5 05/16] swiotlb: Add restricted DMA pool initialization Claire Chang
2021-04-23 11:34   ` Steven Price
2021-04-26 16:37     ` Claire Chang
2021-04-28  9:50       ` Steven Price
2021-04-22  8:14 ` [PATCH v5 06/16] swiotlb: Add a new get_io_tlb_mem getter Claire Chang
2021-04-22  8:14 ` [PATCH v5 07/16] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
2021-04-22  8:15 ` [PATCH v5 08/16] swiotlb: Update is_swiotlb_active " Claire Chang
2021-04-23 13:31   ` Robin Murphy
2021-04-26 16:37     ` Claire Chang
2021-04-22  8:15 ` [PATCH v5 09/16] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
2021-04-22  8:15 ` [PATCH v5 10/16] swiotlb: Move alloc_size to find_slots Claire Chang
2021-04-22  8:15 ` [PATCH v5 11/16] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
2021-04-22  8:15 ` [PATCH v5 12/16] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
2021-04-22  8:15 ` [PATCH v5 13/16] swiotlb: Add restricted DMA alloc/free support Claire Chang
2021-04-22  8:15 ` [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
2021-04-23 13:46   ` Robin Murphy
2021-05-03 14:26     ` Claire Chang
2021-04-22  8:15 ` [PATCH v5 15/16] dt-bindings: of: Add restricted DMA pool Claire Chang
2021-04-22  8:15 ` [PATCH v5 16/16] of: Add plumbing for " Claire Chang
2021-04-23  2:52   ` Claire Chang
2021-04-23 13:35   ` Robin Murphy
2021-04-26 16:38     ` Claire Chang
2021-05-10  9:53 ` [PATCH v5 00/16] Restricted DMA Claire Chang
