* [PATCH v9 00/14] Restricted DMA
@ 2021-06-11 15:26 Claire Chang
  2021-06-11 15:26 ` [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions Claire Chang
                   ` (14 more replies)
  0 siblings, 15 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

This series implements mitigations for the lack of DMA access control on
systems without an IOMMU, where a device could otherwise access system
memory at unexpected times and/or at unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCIe bus for Wi-Fi, and that PCIe bus is
not behind an IOMMU. As PCIe, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a] and [1b] show a full
chain of exploits; see also [2] and [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
On its own, the feature provides a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at firmware level, e.g. the MPU in ATF on some ARM platforms
[4]). A minimal device-tree sketch of the intended usage follows the
references below.

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
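
For reference, a minimal device-tree sketch of the intended usage. The
node names, base address, and size here are purely illustrative (not
taken from a real platform); the binding itself is added in patch 13:

	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Illustrative pool; base and size are placeholders. */
		wifi_pool: restricted-dma@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0x50000000 0x0 0x400000>;
		};
	};

	/* Hypothetical device whose streaming DMA and memory
	 * allocations are then confined to the pool above. */
	wifi: wifi@0,0 {
		memory-region = <&wifi_pool>;
	};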

v9:
Address the comments in v8 to
  - set swiotlb active pool to dev->dma_io_tlb_mem
  - get rid of get_io_tlb_mem
  - dig out the device struct for is_swiotlb_active
  - move debugfs_create_dir out of swiotlb_create_debugfs
  - do set_memory_decrypted conditionally in swiotlb_init_io_tlb_mem
  - use IS_ENABLED in kernel/dma/direct.c
  - fix redefinition of 'of_dma_set_restricted_buffer'

v8:
- Fix reserved-memory.txt and add the reg property in the example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.
https://lore.kernel.org/patchwork/cover/1437112/

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/


Claire Chang (14):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Bounce data from/to restricted DMA pool if available
  swiotlb: Move alloc_size to find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add restricted DMA alloc/free support.
  dma-direct: Allocate memory from restricted DMA pool if available
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  33 +++
 drivers/of/device.c                           |   6 +
 drivers/of/of_private.h                       |   6 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  45 +++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  62 +++--
 kernel/dma/direct.h                           |   9 +-
 kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
 15 files changed, 376 insertions(+), 101 deletions(-)

-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:16   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Add a new function, swiotlb_init_io_tlb_mem, to initialize the io_tlb_mem
struct, so that the initialization code can be reused.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 53 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..1a1208c81e85 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,32 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc,
+				    bool memory_decrypted)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+
+	if (memory_decrypted)
+		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +209,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,7 +297,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -297,20 +311,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
  2021-06-11 15:26 ` [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:17   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used Claire Chang
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Split out the debugfs file creation to make the code reusable for
supporting different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1a1208c81e85..8a3e2b3b246d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -64,6 +64,9 @@
 enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+#ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_dir;
+#endif
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -664,18 +667,24 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
-	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	if (mem) {
+		mem->debugfs = debugfs_dir;
+		swiotlb_create_debugfs_files(mem);
+	}
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
  2021-06-11 15:26 ` [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions Claire Chang
  2021-06-11 15:26 ` [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-11 15:33   ` Claire Chang
  2021-06-11 15:26 ` [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization Claire Chang
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Always keep a pointer to the swiotlb pool in use in struct device. This
helps simplify the code once other pools are introduced.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/device.c     | 3 +++
 include/linux/device.h  | 4 ++++
 include/linux/swiotlb.h | 8 ++++++++
 kernel/dma/swiotlb.c    | 8 ++++----
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..1defdf15ba95 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (IS_ENABLED(CONFIG_SWIOTLB))
+		swiotlb_set_io_tlb_default_mem(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/include/linux/device.h b/include/linux/device.h
index 4443e12238a0..2e9a378c9100 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -432,6 +432,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -540,6 +541,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..008125ccd509 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -108,6 +108,11 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -119,6 +124,9 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	return false;
 }
+static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
+{
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8a3e2b3b246d..29b950ab1351 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -344,7 +344,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -426,7 +426,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -503,7 +503,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -554,7 +554,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (2 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:21   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.
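
For example, a reserved-memory node like the following would match. The
base address and size are illustrative only; note that, per
rmem_swiotlb_setup below, the node must not carry "no-map" or "reusable",
since the pool has to stay within the linear mapping:

	restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};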

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  3 +-
 kernel/dma/Kconfig      | 14 ++++++++
 kernel/dma/swiotlb.c    | 75 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 008125ccd509..ec0c01796c8a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 29b950ab1351..c4a071d6a63f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -688,3 +695,71 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false, true);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+			mem->debugfs =
+				debugfs_create_dir(rmem->name, debugfs_dir);
+			swiotlb_create_debugfs_files(mem);
+		}
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (3 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:21   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 06/14] swiotlb: Update is_swiotlb_active " Claire Chang
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Update is_swiotlb_buffer to add a struct device argument. This will be
useful later for supporting the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  7 ++++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5d96fcc45fec..1a6a08908245 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -506,7 +506,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -577,7 +577,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -783,7 +783,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -796,7 +796,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -817,7 +817,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -834,7 +834,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 24d11861ac7d..0c4fb34f11ab 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index ec0c01796c8a..921b469c6ad2 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -102,9 +103,9 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -121,7 +122,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 06/14] swiotlb: Update is_swiotlb_active to add a struct device argument
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (4 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-11 15:34   ` Claire Chang
  2021-06-14  6:23   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
                   ` (8 subsequent siblings)
  14 siblings, 2 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Update is_swiotlb_active to add a struct device argument. This will be
useful later for supporting the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..89a894354263 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(obj->base.dev->dev)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index f4c2e46b6fe1..2ca9d9a9e5d5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -276,7 +276,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(dev->dev);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..0d56985bfe81 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 921b469c6ad2..06cf17a80f5c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -118,7 +118,7 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -141,7 +141,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c4a071d6a63f..21e99907edd6 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -666,9 +666,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return dev->dma_io_tlb_mem != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (5 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 06/14] swiotlb: Update is_swiotlb_active " Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:25   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots Claire Chang
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Regardless of swiotlb setting, the restricted DMA pool is preferred if
available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.

Note that is_dev_swiotlb_force doesn't check if
swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior
with the default swiotlb would be changed by the following patch
("dma-direct: Allocate memory from restricted DMA pool if available").

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 10 +++++++++-
 kernel/dma/direct.c     |  3 ++-
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  1 +
 4 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 06cf17a80f5c..8200c100fe10 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
  *		unmap calls.
  * @debugfs:	The dentry to debugfs.
  * @late_alloc:	%true if allocated using the page allocator
+ * @force_swiotlb: %true if swiotlb is forced
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -95,6 +96,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_swiotlb;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
 	dev->dma_io_tlb_mem = io_tlb_default_mem;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_swiotlb;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
-static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
+static inline bool is_dev_swiotlb_force(struct device *dev)
 {
+	return false;
 }
 static inline void swiotlb_exit(void)
 {
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..078f7087e466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
+	     is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 21e99907edd6..e5ccc198d0a7 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -714,6 +714,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 			return -ENOMEM;
 
 		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false, true);
+		mem->force_swiotlb = true;
 
 		rmem->priv = mem;
 
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (6 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:25   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Move the setting of alloc_size into find_slots for better code
reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e5ccc198d0a7..364c6c822063 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -486,8 +486,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -542,11 +545,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (7 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:26   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Add a new function, release_slots, to make the code reusable for supporting
different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 364c6c822063..a6562573f090 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -554,27 +554,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -609,6 +597,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages()
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (8 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-11 15:26 ` [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support Claire Chang
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 078f7087e466..eb4098323bbc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.32.0.272.g935e593368-goog



* [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support.
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (9 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-14  6:28   ` Christoph Hellwig
  2021-06-11 15:26 ` [PATCH v9 12/14] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Add the functions swiotlb_{alloc,free} to support memory allocation from
the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 15 +++++++++++++++
 kernel/dma/swiotlb.c    | 35 +++++++++++++++++++++++++++++++++--
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8200c100fe10..d3374497a4f8 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -162,4 +162,19 @@ static inline void swiotlb_adjust_size(unsigned long size)
 extern void swiotlb_print_info(void);
 extern void swiotlb_set_max_segment(unsigned int);
 
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#else
+static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	return NULL;
+}
+static inline bool swiotlb_free(struct device *dev, struct page *page,
+				size_t size)
+{
+	return false;
+}
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a6562573f090..0a19858da5b8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -461,8 +461,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -702,6 +703,36 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.32.0.272.g935e593368-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v9 12/14] dma-direct: Allocate memory from restricted DMA pool if available
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (10 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-11 15:26 ` [PATCH v9 13/14] dt-bindings: of: Add restricted DMA pool Claire Chang
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

The restricted DMA pool is preferred if available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., MPU.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool via shared-dma-pool and use
dma_alloc_from_dev_coherent() instead for atomic coherent allocation.
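
To make the atomic-allocation caveat concrete, a hedged driver-side
sketch (standard DMA API calls; the comments describe the behavior
expected under this series, not something verified on hardware):

	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	/*
	 * Sketch: coherent allocations on a device bound to a
	 * restricted-dma-pool region.
	 */
	static int restricted_alloc_demo(struct device *dev)
	{
		dma_addr_t dma;
		void *buf;

		/* May block: served from the restricted pool. */
		buf = dma_alloc_coherent(dev, SZ_4K, &dma, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;
		dma_free_coherent(dev, SZ_4K, buf, dma);

		/*
		 * Atomic context: the restricted pool cannot remap here, so
		 * a separate shared-dma-pool coherent pool must exist;
		 * dma_alloc_coherent() then consults it internally via
		 * dma_alloc_from_dev_coherent().
		 */
		buf = dma_alloc_coherent(dev, SZ_4K, &dma, GFP_ATOMIC);
		if (!buf)
			return -ENOMEM;
		dma_free_coherent(dev, SZ_4K, buf, dma);
		return 0;
	}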

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index eb4098323bbc..73fc4c659ba7 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -78,6 +78,9 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 static void __dma_direct_free_pages(struct device *dev, struct page *page,
 				    size_t size)
 {
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
+	    swiotlb_free(dev, page, size))
+		return;
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -92,7 +95,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL)) {
+		page = swiotlb_alloc(dev, size);
+		if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+			__dma_direct_free_pages(dev, page, size);
+			page = NULL;
+		}
+		return page;
+	}
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -148,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -161,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -253,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -289,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
-- 
2.32.0.272.g935e593368-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v9 13/14] dt-bindings: of: Add restricted DMA pool
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (11 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 12/14] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-11 15:26 ` [PATCH v9 14/14] of: Add plumbing for " Claire Chang
  2021-06-15 13:30 ` [PATCH v9 00/14] Restricted DMA Claire Chang
  14 siblings, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. One can specify the address and length of the restricted DMA memory
region with a restricted-dma-pool child node of the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..46804f24df05 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock
+          down the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.32.0.272.g935e593368-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v9 14/14] of: Add plumbing for restricted DMA pool
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (12 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 13/14] dt-bindings: of: Add restricted DMA pool Claire Chang
@ 2021-06-11 15:26 ` Claire Chang
  2021-06-15 13:30 ` [PATCH v9 00/14] Restricted DMA Claire Chang
  14 siblings, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:26 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, tientzu, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

If a device is not behind an IOMMU, look up its device node and set up
restricted DMA when a restricted-dma-pool reserved-memory region is
present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 3b2acca7e363..c8066d95ff0e 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1001,6 +1002,38 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	of_node_put(node);
 	return ret;
 }
+
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 /**
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1defdf15ba95..ba4656e77502 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -168,6 +168,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	if (IS_ENABLED(CONFIG_SWIOTLB))
 		swiotlb_set_io_tlb_default_mem(dev);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 631489f7f8c0..376462798f7e 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 void fdt_init_reserved_mem(void);
-- 
2.32.0.272.g935e593368-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  2021-06-11 15:26 ` [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used Claire Chang
@ 2021-06-11 15:33   ` Claire Chang
  2021-06-14  6:20     ` Christoph Hellwig
  0 siblings, 1 reply; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:33 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, rodrigo.vivi,
	thomas.hellstrom

I'm not sure if this would break arch/x86/pci/sta2x11-fixup.c;
swiotlb_late_init_with_default_size is called here:
https://elixir.bootlin.com/linux/v5.13-rc5/source/arch/x86/pci/sta2x11-fixup.c#L60

On Fri, Jun 11, 2021 at 11:27 PM Claire Chang <tientzu@chromium.org> wrote:
>
> Always have the pointer to the swiotlb pool used in struct device. This
> could help simplify the code for other pools.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  drivers/of/device.c     | 3 +++
>  include/linux/device.h  | 4 ++++
>  include/linux/swiotlb.h | 8 ++++++++
>  kernel/dma/swiotlb.c    | 8 ++++----
>  4 files changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index c5a9473a5fb1..1defdf15ba95 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
>
>         arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
>
> +       if (IS_ENABLED(CONFIG_SWIOTLB))
> +               swiotlb_set_io_tlb_default_mem(dev);
> +
>         return 0;
>  }
>  EXPORT_SYMBOL_GPL(of_dma_configure_id);
> diff --git a/include/linux/device.h b/include/linux/device.h
> index 4443e12238a0..2e9a378c9100 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -432,6 +432,7 @@ struct dev_links_info {
>   * @dma_pools: Dma pools (if dma'ble device).
>   * @dma_mem:   Internal for coherent mem override.
>   * @cma_area:  Contiguous memory area for dma allocations
> + * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
>   * @archdata:  For arch-specific additions.
>   * @of_node:   Associated device tree node.
>   * @fwnode:    Associated device node supplied by platform firmware.
> @@ -540,6 +541,9 @@ struct device {
>  #ifdef CONFIG_DMA_CMA
>         struct cma *cma_area;           /* contiguous memory area for dma
>                                            allocations */
> +#endif
> +#ifdef CONFIG_SWIOTLB
> +       struct io_tlb_mem *dma_io_tlb_mem;
>  #endif
>         /* arch specific additions */
>         struct dev_archdata     archdata;
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 216854a5e513..008125ccd509 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -108,6 +108,11 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
>         return mem && paddr >= mem->start && paddr < mem->end;
>  }
>
> +static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +{
> +       dev->dma_io_tlb_mem = io_tlb_default_mem;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -119,6 +124,9 @@ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
>  {
>         return false;
>  }
> +static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +{
> +}
>  static inline void swiotlb_exit(void)
>  {
>  }
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 8a3e2b3b246d..29b950ab1351 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -344,7 +344,7 @@ void __init swiotlb_exit(void)
>  static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
>                            enum dma_data_direction dir)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>         int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
>         phys_addr_t orig_addr = mem->slots[index].orig_addr;
>         size_t alloc_size = mem->slots[index].alloc_size;
> @@ -426,7 +426,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
>  static int find_slots(struct device *dev, phys_addr_t orig_addr,
>                 size_t alloc_size)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>         unsigned long boundary_mask = dma_get_seg_boundary(dev);
>         dma_addr_t tbl_dma_addr =
>                 phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
> @@ -503,7 +503,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>                 size_t mapping_size, size_t alloc_size,
>                 enum dma_data_direction dir, unsigned long attrs)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>         unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>         unsigned int i;
>         int index;
> @@ -554,7 +554,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
>                               size_t mapping_size, enum dma_data_direction dir,
>                               unsigned long attrs)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
>         unsigned long flags;
>         unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
>         int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
> --
> 2.32.0.272.g935e593368-goog
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 06/14] swiotlb: Update is_swiotlb_active to add a struct device argument
  2021-06-11 15:26 ` [PATCH v9 06/14] swiotlb: Update is_swiotlb_active " Claire Chang
@ 2021-06-11 15:34   ` Claire Chang
  2021-06-14  6:23   ` Christoph Hellwig
  1 sibling, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-11 15:34 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, rodrigo.vivi,
	thomas.hellstrom

I don't have the HW to verify the change. Hopefully I used the right
device struct for is_swiotlb_active.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions
  2021-06-11 15:26 ` [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions Claire Chang
@ 2021-06-14  6:16   ` Christoph Hellwig
  2021-06-15  4:06     ` Claire Chang
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:16 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Fri, Jun 11, 2021 at 11:26:46PM +0800, Claire Chang wrote:
> +	spin_lock_init(&mem->lock);
> +	for (i = 0; i < mem->nslabs; i++) {
> +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> +		mem->slots[i].alloc_size = 0;
> +	}
> +
> +	if (memory_decrypted)
> +		set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> +	memset(vaddr, 0, bytes);

We don't really need to do this call before the memset, which means we
can just move it to the callers that care instead of having a bool
argument.
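
Something like this, perhaps (a sketch of the suggested shape; the
swiotlb_init_io_tlb_mem() argument list here is hypothetical):

	/* In the one caller that actually needs decryption: */
	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
	swiotlb_init_io_tlb_mem(mem, start, nslabs, late_alloc);

	/*
	 * ...while swiotlb_init_io_tlb_mem() keeps the memset and drops
	 * the memory_decrypted argument entirely.
	 */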

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs
  2021-06-11 15:26 ` [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
@ 2021-06-14  6:17   ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:17 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Fri, Jun 11, 2021 at 11:26:47PM +0800, Claire Chang wrote:
> Split the debugfs creation to make the code reusable for supporting
> different bounce buffer pools, e.g. the restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  kernel/dma/swiotlb.c | 23 ++++++++++++++++-------
>  1 file changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 1a1208c81e85..8a3e2b3b246d 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -64,6 +64,9 @@
>  enum swiotlb_force swiotlb_force;
>  
>  struct io_tlb_mem *io_tlb_default_mem;
> +#ifdef CONFIG_DEBUG_FS
> +static struct dentry *debugfs_dir;
> +#endif

What about moving this declaration into the main CONFIG_DEBUG_FS block
near the functions using it?
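
E.g. roughly like this (sketch; the swiotlb_create_debugfs() arguments
are hypothetical):

	#ifdef CONFIG_DEBUG_FS
	static struct dentry *debugfs_dir;

	static int __init swiotlb_create_default_debugfs(void)
	{
		debugfs_dir = debugfs_create_dir("swiotlb", NULL);
		swiotlb_create_debugfs(io_tlb_default_mem, debugfs_dir);
		return 0;
	}
	late_initcall(swiotlb_create_default_debugfs);
	#endif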

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used
  2021-06-11 15:33   ` Claire Chang
@ 2021-06-14  6:20     ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:20 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, rodrigo.vivi,
	thomas.hellstrom

On Fri, Jun 11, 2021 at 11:33:15PM +0800, Claire Chang wrote:
> I'm not sure if this would break arch/x86/pci/sta2x11-fixup.c;
> swiotlb_late_init_with_default_size is called here:
> https://elixir.bootlin.com/linux/v5.13-rc5/source/arch/x86/pci/sta2x11-fixup.c#L60

It will.  It will also break all non-OF devices.  I think you need to
initialize the initial pool in device_initialize, which covers all devices.
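
Something along these lines, maybe (untested sketch in diff form; the
exact placement inside device_initialize() is left open):

	--- a/drivers/base/core.c
	+++ b/drivers/base/core.c
	@@ void device_initialize(struct device *dev)
	+#ifdef CONFIG_SWIOTLB
	+	dev->dma_io_tlb_mem = io_tlb_default_mem;
	+#endif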

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization
  2021-06-11 15:26 ` [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization Claire Chang
@ 2021-06-14  6:21   ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:21 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Fri, Jun 11, 2021 at 11:26:49PM +0800, Claire Chang wrote:
> Add the initialization function to create restricted DMA pools from
> matching reserved-memory nodes.

Bisection hazard:  we should only add the new config option when the
code is actually ready to be used.  So this patch should move to the
end of the series.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument
  2021-06-11 15:26 ` [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
@ 2021-06-14  6:21   ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:21 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 06/14] swiotlb: Update is_swiotlb_active to add a struct device argument
  2021-06-11 15:26 ` [PATCH v9 06/14] swiotlb: Update is_swiotlb_active " Claire Chang
  2021-06-11 15:34   ` Claire Chang
@ 2021-06-14  6:23   ` Christoph Hellwig
  1 sibling, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:23 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

>  kernel/dma/direct.c                          | 2 +-
>  kernel/dma/swiotlb.c                         | 4 ++--
>  6 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> index ce6b664b10aa..89a894354263 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> @@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
>  
>  	max_order = MAX_ORDER;
>  #ifdef CONFIG_SWIOTLB
> -	if (is_swiotlb_active()) {
> +	if (is_swiotlb_active(obj->base.dev->dev)) {

This is the same device used for DMA mapping in
i915_gem_gtt_prepare_pages, so this looks good.

> index f4c2e46b6fe1..2ca9d9a9e5d5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> @@ -276,7 +276,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
>  	}
>  
>  #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> -	need_swiotlb = is_swiotlb_active();
> +	need_swiotlb = is_swiotlb_active(dev->dev);
>  #endif

This looks good, too.

> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index b7a8f3a1921f..0d56985bfe81 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
>  
>  	spin_unlock(&pcifront_dev_lock);
>  
> -	if (!err && !is_swiotlb_active()) {
> +	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {

This looks good as well.

So I think the devices are all good.

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available
  2021-06-11 15:26 ` [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
@ 2021-06-14  6:25   ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:25 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Fri, Jun 11, 2021 at 11:26:52PM +0800, Claire Chang wrote:
> Regardless of swiotlb setting, the restricted DMA pool is preferred if
> available.
> 
> The restricted DMA pools provide a basic level of protection against the
> DMA overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system
> needs to provide a way to lock down the memory access, e.g., MPU.
> 
> Note that is_dev_swiotlb_force doesn't check if
> swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior
> with default swiotlb will be changed by the following patche
> ("dma-direct: Allocate memory from restricted DMA pool if available").
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  include/linux/swiotlb.h | 10 +++++++++-
>  kernel/dma/direct.c     |  3 ++-
>  kernel/dma/direct.h     |  3 ++-
>  kernel/dma/swiotlb.c    |  1 +
>  4 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 06cf17a80f5c..8200c100fe10 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_swiotlb: %true if swiotlb is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -95,6 +96,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_swiotlb;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
>  	dev->dma_io_tlb_mem = io_tlb_default_mem;
>  }
>  
> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_swiotlb;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> -static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +static inline bool is_dev_swiotlb_force(struct device *dev)
>  {
> +	return false;
>  }
>  static inline void swiotlb_exit(void)
>  {
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..078f7087e466 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +	     is_dev_swiotlb_force(dev)))

I think we can remove the extra swiotlb_force check here if the
swiotlb_force setting is propagated into io_tlb_default_mem->force_swiotlb
when that pool is initialized. This avoids an extra check in the fast path.

> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
> +	    is_dev_swiotlb_force(dev))

Same here.
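
I.e. roughly (sketch; the exact init spot is hypothetical):

	/* Once, when the default pool is created: */
	mem->force_swiotlb = (swiotlb_force == SWIOTLB_FORCE);

	/* The fast path above then reduces to: */
	if (is_swiotlb_active(dev) &&
	    (dma_addressing_limited(dev) || is_dev_swiotlb_force(dev)))
		return swiotlb_max_mapping_size(dev);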

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots
  2021-06-11 15:26 ` [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots Claire Chang
@ 2021-06-14  6:25   ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:25 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Fri, Jun 11, 2021 at 11:26:53PM +0800, Claire Chang wrote:
> Move the maintenance of alloc_size to find_slots for better code
> reusability later.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single
  2021-06-11 15:26 ` [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
@ 2021-06-14  6:26   ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:26 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Fri, Jun 11, 2021 at 11:26:54PM +0800, Claire Chang wrote:
> Add a new function, release_slots, to make the code reusable for supporting
> different bounce buffer pools, e.g. the restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
>  1 file changed, 20 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 364c6c822063..a6562573f090 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -554,27 +554,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  	return tlb_addr;
>  }
>  
> -/*
> - * tlb_addr is the physical address of the bounce buffer to unmap.
> - */
> -void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
> -			      size_t mapping_size, enum dma_data_direction dir,
> -			      unsigned long attrs)
> +static void release_slots(struct device *dev, phys_addr_t tlb_addr)
>  {
> -	struct io_tlb_mem *mem = hwdev->dma_io_tlb_mem;
> +	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	unsigned long flags;
> -	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
> +	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
>  	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
>  	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
>  	int count, i;
>  
> -	/*
> -	 * First, sync the memory before unmapping the entry
> -	 */
> -	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
> -	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> -		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
> -
>  	/*
>  	 * Return the buffer to the free list by setting the corresponding
>  	 * entries to indicate the number of contiguous entries available.
> @@ -609,6 +597,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
>  	spin_unlock_irqrestore(&mem->lock, flags);
>  }
>  
> +/*
> + * tlb_addr is the physical address of the bounce buffer to unmap.
> + */
> +void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
> +			      size_t mapping_size, enum dma_data_direction dir,
> +			      unsigned long attrs)
> +{
> +	/*
> +	 * First, sync the memory before unmapping the entry
> +	 */
> +	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
> +	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> +		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
> +
> +	release_slots(dev, tlb_addr);

Can you give this a swiotlb_ prefix?
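
I.e. something like:

	static void swiotlb_release_slots(struct device *dev,
					  phys_addr_t tlb_addr);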

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support.
  2021-06-11 15:26 ` [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support Claire Chang
@ 2021-06-14  6:28   ` Christoph Hellwig
  2021-06-14  6:28     ` Christoph Hellwig
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:28 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

I think merging this with the next two patches would be a little more
clear.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support.
  2021-06-14  6:28   ` Christoph Hellwig
@ 2021-06-14  6:28     ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2021-06-14  6:28 UTC (permalink / raw)
  To: Claire Chang
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski, benh, paulus,
	list@263.net:IOMMU DRIVERS, sstabellini, Robin Murphy,
	grant.likely, xypron.glpk, Thierry Reding, mingo, bauerman,
	peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, tfiga, bskeggs,
	bhelgaas, chris, daniel, airlied, dri-devel, intel-gfx,
	jani.nikula, jxgao, joonas.lahtinen, linux-pci,
	maarten.lankhorst, matthew.auld, rodrigo.vivi, thomas.hellstrom

On Mon, Jun 14, 2021 at 08:28:01AM +0200, Christoph Hellwig wrote:
> I think merging this with the next two patches would be a little more
> clear.

Sorry, I meant the next patch and the previous one.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions
  2021-06-14  6:16   ` Christoph Hellwig
@ 2021-06-15  4:06     ` Claire Chang
  0 siblings, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-15  4:06 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross, Marek Szyprowski,
	benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, rodrigo.vivi,
	thomas.hellstrom

On Mon, Jun 14, 2021 at 2:16 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Fri, Jun 11, 2021 at 11:26:46PM +0800, Claire Chang wrote:
> > +     spin_lock_init(&mem->lock);
> > +     for (i = 0; i < mem->nslabs; i++) {
> > +             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > +             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > +             mem->slots[i].alloc_size = 0;
> > +     }
> > +
> > +     if (memory_decrypted)
> > +             set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> > +     memset(vaddr, 0, bytes);
>
> We don't really need to do this call before the memset.  Which means we
> can just move it to the callers that care instead of having a bool
> argument.
>
> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks for the review. Will wait a few more days for other reviews and
then send v10 to address the comments on this and the other patches.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v9 00/14] Restricted DMA
  2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
                   ` (13 preceding siblings ...)
  2021-06-11 15:26 ` [PATCH v9 14/14] of: Add plumbing for " Claire Chang
@ 2021-06-15 13:30 ` Claire Chang
  14 siblings, 0 replies; 30+ messages in thread
From: Claire Chang @ 2021-06-15 13:30 UTC (permalink / raw)
  To: Rob Herring, mpe, Joerg Roedel, Will Deacon, Frank Rowand,
	Konrad Rzeszutek Wilk, boris.ostrovsky, jgross,
	Christoph Hellwig, Marek Szyprowski
  Cc: benh, paulus, list@263.net:IOMMU DRIVERS, sstabellini,
	Robin Murphy, grant.likely, xypron.glpk, Thierry Reding, mingo,
	bauerman, peterz, Greg KH, Saravana Kannan, Rafael J . Wysocki,
	heikki.krogerus, Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml, linuxppc-dev,
	xen-devel, Nicolas Boichat, Jim Quinlan, Tomasz Figa, bskeggs,
	Bjorn Helgaas, chris, Daniel Vetter, airlied, dri-devel,
	intel-gfx, jani.nikula, Jianxiong Gao, joonas.lahtinen,
	linux-pci, maarten.lankhorst, matthew.auld, rodrigo.vivi,
	thomas.hellstrom

v10 here: https://lore.kernel.org/patchwork/cover/1446882/

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2021-06-15 13:31 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
2021-06-11 15:26 ` [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions Claire Chang
2021-06-14  6:16   ` Christoph Hellwig
2021-06-15  4:06     ` Claire Chang
2021-06-11 15:26 ` [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
2021-06-14  6:17   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used Claire Chang
2021-06-11 15:33   ` Claire Chang
2021-06-14  6:20     ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization Claire Chang
2021-06-14  6:21   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
2021-06-14  6:21   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 06/14] swiotlb: Update is_swiotlb_active " Claire Chang
2021-06-11 15:34   ` Claire Chang
2021-06-14  6:23   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
2021-06-14  6:25   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots Claire Chang
2021-06-14  6:25   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
2021-06-14  6:26   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
2021-06-11 15:26 ` [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support Claire Chang
2021-06-14  6:28   ` Christoph Hellwig
2021-06-14  6:28     ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 12/14] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
2021-06-11 15:26 ` [PATCH v9 13/14] dt-bindings: of: Add restricted DMA pool Claire Chang
2021-06-11 15:26 ` [PATCH v9 14/14] of: Add plumbing for " Claire Chang
2021-06-15 13:30 ` [PATCH v9 00/14] Restricted DMA Claire Chang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).