linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] Fix restricted DMA vs swiotlb_exit()
@ 2021-07-19 12:30 Will Deacon
  2021-07-19 12:30 ` [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS Will Deacon
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Will Deacon @ 2021-07-19 12:30 UTC (permalink / raw)
  To: iommu
  Cc: linux-kernel, Will Deacon, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

Hi all,

This series fixes the issues which have been reported against the
Restricted DMA series in -next:

  * Fix the build for Sparc as reported by Guenter [1].

  * Rework the lifetime of 'io_tlb_default_mem' so that devices
    can retain valid references to it even after swiotlb_exit(). This,
    in turn, fixes the x86/AMD IOMMU regressions reported by Nathan [2].

I then added a diagnostic to swiotlb_exit(), as suggested by Konrad
[3], and the final patch frees the underlying buffer memory during
teardown, although I must confess that I don't know why this wasn't
being done already.
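
The 'io_tlb_default_mem' lifetime rework described above can be sketched
in miniature. This is a hypothetical userspace model, not kernel code:
'struct pool', 'pool_init()', 'pool_exit()' and friends are invented
names standing in for 'struct io_tlb_mem' and the swiotlb init/exit
paths. The point is that the descriptor lives in static storage, only
the dynamic part is freed, so pointers cached by devices stay valid:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of the lifetime fix: the pool descriptor is
 * static and never freed; only the 'slots' array is dynamic. Cached
 * pointers to the descriptor therefore survive pool_exit(). */
struct pool {
	unsigned long nslabs;	/* doubles as the "initialised" flag */
	int *slots;		/* dynamic, freed on exit */
};

static struct pool default_pool;	/* static storage: never freed */

static int pool_init(unsigned long nslabs)
{
	default_pool.slots = calloc(nslabs, sizeof(*default_pool.slots));
	if (!default_pool.slots)
		return -1;
	default_pool.nslabs = nslabs;
	return 0;
}

static void pool_exit(void)
{
	free(default_pool.slots);
	memset(&default_pool, 0, sizeof(default_pool));
}

/* Analogue of device_initialize(): cache a pointer to the pool. */
static struct pool *device_pool(void)
{
	return &default_pool;
}

static int pool_active(const struct pool *p)
{
	return p && p->nslabs;	/* no dangling-pointer dereference */
}
```

After pool_exit(), a device's cached pointer still points at valid
(static) memory; the pool merely reads as inactive instead of
triggering a use-after-free.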

A massive thank you to Nathan for helping to debug this and also for
testing these patches to confirm that they address the issue on his
machine.

Patches are based against swiotlb devel/for-linus-5.15.

Cheers,

Will

[1] https://lore.kernel.org/r/20210702030807.GA2685166@roeck-us.net
[2] https://lore.kernel.org/r/YNvMDFWKXSm4LRfZ@Ryzen-9-3900X.localdomain
[3] https://lore.kernel.org/r/YORsr0h7u5l9DZwh@char.us.oracle.com

Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Nathan Chancellor <nathan@kernel.org>

--->8

Will Deacon (5):
  of: Return success from of_dma_set_restricted_buffer() when
    !OF_ADDRESS
  swiotlb: Point io_default_tlb_mem at static allocation
  swiotlb: Remove io_tlb_default_mem indirection
  swiotlb: Emit diagnostic in swiotlb_exit()
  swiotlb: Free tbl memory in swiotlb_exit()

 drivers/base/core.c       |  2 +-
 drivers/of/of_private.h   |  3 +-
 drivers/xen/swiotlb-xen.c |  4 +-
 include/linux/swiotlb.h   |  4 +-
 kernel/dma/swiotlb.c      | 82 +++++++++++++++++++++++----------------
 5 files changed, 56 insertions(+), 39 deletions(-)

-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS
  2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
@ 2021-07-19 12:30 ` Will Deacon
  2021-07-20  3:35   ` Claire Chang
  2021-07-19 12:30 ` [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation Will Deacon
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Will Deacon @ 2021-07-19 12:30 UTC (permalink / raw)
  To: iommu
  Cc: linux-kernel, Will Deacon, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

When CONFIG_OF_ADDRESS=n, of_dma_set_restricted_buffer() returns -ENODEV
and breaks the boot for sparc[64] machines. Return 0 instead, since the
function is essentially a glorified NOP in this configuration.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20210702030807.GA2685166@roeck-us.net
Fixes: fec9b625095f ("of: Add plumbing for restricted DMA pool")
Signed-off-by: Will Deacon <will@kernel.org>
---
 drivers/of/of_private.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 376462798f7e..f557bd22b0cf 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -173,7 +173,8 @@ static inline int of_dma_get_range(struct device_node *np,
 static inline int of_dma_set_restricted_buffer(struct device *dev,
 					       struct device_node *np)
 {
-	return -ENODEV;
+	/* Do nothing, successfully. */
+	return 0;
 }
 #endif
 
-- 
2.32.0.402.g57bb445576-goog



* [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation
  2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
  2021-07-19 12:30 ` [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS Will Deacon
@ 2021-07-19 12:30 ` Will Deacon
  2021-07-20  3:35   ` Claire Chang
  2021-07-20  7:51   ` Christoph Hellwig
  2021-07-19 12:30 ` [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection Will Deacon
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 15+ messages in thread
From: Will Deacon @ 2021-07-19 12:30 UTC (permalink / raw)
  To: iommu
  Cc: linux-kernel, Will Deacon, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

Since commit 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the
swiotlb pool used"), 'struct device' may hold a copy of the global
'io_default_tlb_mem' pointer if the device is using swiotlb for DMA. A
subsequent call to swiotlb_exit() will therefore leave dangling pointers
behind in these device structures, resulting in KASAN splats such as:

  |  BUG: KASAN: use-after-free in __iommu_dma_unmap_swiotlb+0x64/0xb0
  |  Read of size 8 at addr ffff8881d7830000 by task swapper/0/0
  |
  |  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3-debug #1
  |  Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
  |  Call Trace:
  |   <IRQ>
  |   dump_stack+0x9c/0xcf
  |   print_address_description.constprop.0+0x18/0x130
  |   kasan_report.cold+0x7f/0x111
  |   __iommu_dma_unmap_swiotlb+0x64/0xb0
  |   nvme_pci_complete_rq+0x73/0x130
  |   blk_complete_reqs+0x6f/0x80
  |   __do_softirq+0xfc/0x3be

Point 'io_default_tlb_mem' at a static structure, so that the per-device
pointers remain valid after swiotlb_exit() has been invoked. The 'slots'
array is still allocated dynamically and referenced via a pointer rather
than a flexible array member.
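
The layout change can be illustrated in isolation. The structs and
helpers below are invented for the sketch (they are not the kernel's
'struct io_tlb_mem' or the real struct_size()/array_size() macros, and
the overflow checking of the real helpers is omitted):

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* Before: a flexible array member ties slot storage to the struct, so
 * struct_size(mem, slots, n) sizes one allocation holding both, and
 * freeing the slots necessarily frees the metadata too. */
struct mem_fam {
	unsigned long nslabs;
	int slots[];
};

/* After: a pointer member, so the struct itself can live in static
 * storage while only the slots array is allocated dynamically. */
struct mem_ptr {
	unsigned long nslabs;
	int *slots;
};

/* Roughly what struct_size() computes for the old layout. */
static size_t fam_alloc_size(size_t n)
{
	return sizeof(struct mem_fam) + n * sizeof(int);
}

/* Roughly what array_size(sizeof(*mem->slots), n) computes now. */
static size_t slots_alloc_size(size_t n)
{
	return n * sizeof(int);
}
```

With the pointer layout, the slots allocation no longer carries the
struct header, which is why the patch switches the sizing calls from
struct_size() to array_size().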

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Fixes: 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/linux/swiotlb.h |  2 +-
 kernel/dma/swiotlb.c    | 34 +++++++++++++++++++++-------------
 2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 39284ff2a6cd..d3b617c19045 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -103,7 +103,7 @@ struct io_tlb_mem {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
 		unsigned int list;
-	} slots[];
+	} *slots;
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f1a9ae7fad8f..992d73cdc944 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -71,6 +71,7 @@
 enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+static struct io_tlb_mem _io_tlb_default_mem;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -201,7 +202,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
-	struct io_tlb_mem *mem;
+	struct io_tlb_mem *mem = &_io_tlb_default_mem;
 	size_t alloc_size;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -211,9 +212,9 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (WARN_ON_ONCE(io_tlb_default_mem))
 		return -ENOMEM;
 
-	alloc_size = PAGE_ALIGN(struct_size(mem, slots, nslabs));
-	mem = memblock_alloc(alloc_size, PAGE_SIZE);
-	if (!mem)
+	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
+	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
+	if (!mem->slots)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
@@ -304,7 +305,7 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	struct io_tlb_mem *mem;
+	struct io_tlb_mem *mem = &_io_tlb_default_mem;
 	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -314,12 +315,11 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (WARN_ON_ONCE(io_tlb_default_mem))
 		return -ENOMEM;
 
-	mem = (void *)__get_free_pages(GFP_KERNEL,
-		get_order(struct_size(mem, slots, nslabs)));
-	if (!mem)
+	mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+		get_order(array_size(sizeof(*mem->slots), nslabs)));
+	if (!mem->slots)
 		return -ENOMEM;
 
-	memset(mem, 0, sizeof(*mem));
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
@@ -337,12 +337,13 @@ void __init swiotlb_exit(void)
 	if (!mem)
 		return;
 
-	size = struct_size(mem, slots, mem->nslabs);
+	size = array_size(sizeof(*mem->slots), mem->nslabs);
 	if (mem->late_alloc)
-		free_pages((unsigned long)mem, get_order(size));
+		free_pages((unsigned long)mem->slots, get_order(size));
 	else
-		memblock_free_late(__pa(mem), PAGE_ALIGN(size));
+		memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
 	io_tlb_default_mem = NULL;
+	memset(mem, 0, sizeof(*mem));
 }
 
 /*
@@ -783,10 +784,17 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 	 * to it.
 	 */
 	if (!mem) {
-		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 		if (!mem)
 			return -ENOMEM;
 
+		mem->slots = kzalloc(array_size(sizeof(*mem->slots), nslabs),
+				     GFP_KERNEL);
+		if (!mem->slots) {
+			kfree(mem);
+			return -ENOMEM;
+		}
+
 		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
 				     rmem->size >> PAGE_SHIFT);
 		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
-- 
2.32.0.402.g57bb445576-goog



* [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection
  2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
  2021-07-19 12:30 ` [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS Will Deacon
  2021-07-19 12:30 ` [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation Will Deacon
@ 2021-07-19 12:30 ` Will Deacon
  2021-07-20  3:35   ` Claire Chang
  2021-07-19 12:30 ` [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit() Will Deacon
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Will Deacon @ 2021-07-19 12:30 UTC (permalink / raw)
  To: iommu
  Cc: linux-kernel, Will Deacon, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

The indirection from the global 'io_tlb_default_mem' pointer to the
static '_io_tlb_default_mem' structure is ugly and unnecessary.

Convert all users to reference the static structure directly, using the
'nslabs' field to determine whether swiotlb has been initialised.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 drivers/base/core.c       |  2 +-
 drivers/xen/swiotlb-xen.c |  4 ++--
 include/linux/swiotlb.h   |  2 +-
 kernel/dma/swiotlb.c      | 38 ++++++++++++++++++--------------------
 4 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index ea5b85354526..b49824001cfa 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -2848,7 +2848,7 @@ void device_initialize(struct device *dev)
 	dev->dma_coherent = dma_default_coherent;
 #endif
 #ifdef CONFIG_SWIOTLB
-	dev->dma_io_tlb_mem = io_tlb_default_mem;
+	dev->dma_io_tlb_mem = &io_tlb_default_mem;
 #endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 785ec7e8be01..f06d9b4f1e0f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -164,7 +164,7 @@ int __ref xen_swiotlb_init(void)
 	int rc = -ENOMEM;
 	char *start;
 
-	if (io_tlb_default_mem != NULL) {
+	if (io_tlb_default_mem.nslabs) {
 		pr_warn("swiotlb buffer already initialized\n");
 		return -EEXIST;
 	}
@@ -547,7 +547,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_phys_to_dma(hwdev, io_tlb_default_mem->end - 1) <= mask;
+	return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d3b617c19045..b0cb2a9973f4 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -105,7 +105,7 @@ struct io_tlb_mem {
 		unsigned int list;
 	} *slots;
 };
-extern struct io_tlb_mem *io_tlb_default_mem;
+extern struct io_tlb_mem io_tlb_default_mem;
 
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 992d73cdc944..7948f274f9bb 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -70,8 +70,7 @@
 
 enum swiotlb_force swiotlb_force;
 
-struct io_tlb_mem *io_tlb_default_mem;
-static struct io_tlb_mem _io_tlb_default_mem;
+struct io_tlb_mem io_tlb_default_mem;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -102,7 +101,7 @@ early_param("swiotlb", setup_io_tlb_npages);
 
 unsigned int swiotlb_max_segment(void)
 {
-	return io_tlb_default_mem ? max_segment : 0;
+	return io_tlb_default_mem.nslabs ? max_segment : 0;
 }
 EXPORT_SYMBOL_GPL(swiotlb_max_segment);
 
@@ -135,9 +134,9 @@ void __init swiotlb_adjust_size(unsigned long size)
 
 void swiotlb_print_info(void)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
 
-	if (!mem) {
+	if (!mem->nslabs) {
 		pr_warn("No low mem\n");
 		return;
 	}
@@ -164,11 +163,11 @@ static inline unsigned long nr_slots(u64 val)
  */
 void __init swiotlb_update_mem_attributes(void)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	void *vaddr;
 	unsigned long bytes;
 
-	if (!mem || mem->late_alloc)
+	if (!mem->nslabs || mem->late_alloc)
 		return;
 	vaddr = phys_to_virt(mem->start);
 	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
@@ -202,14 +201,14 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 {
-	struct io_tlb_mem *mem = &_io_tlb_default_mem;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	size_t alloc_size;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
 
 	/* protect against double initialization */
-	if (WARN_ON_ONCE(io_tlb_default_mem))
+	if (WARN_ON_ONCE(mem->nslabs))
 		return -ENOMEM;
 
 	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
@@ -220,7 +219,6 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 
 	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
-	io_tlb_default_mem = mem;
 	if (verbose)
 		swiotlb_print_info();
 	swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
@@ -305,14 +303,14 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	struct io_tlb_mem *mem = &_io_tlb_default_mem;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	unsigned long bytes = nslabs << IO_TLB_SHIFT;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return 0;
 
 	/* protect against double initialization */
-	if (WARN_ON_ONCE(io_tlb_default_mem))
+	if (WARN_ON_ONCE(mem->nslabs))
 		return -ENOMEM;
 
 	mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
@@ -323,7 +321,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
-	io_tlb_default_mem = mem;
 	swiotlb_print_info();
 	swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
 	return 0;
@@ -331,10 +328,10 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 void __init swiotlb_exit(void)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
 	size_t size;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
 
-	if (!mem)
+	if (!mem->nslabs)
 		return;
 
 	size = array_size(sizeof(*mem->slots), mem->nslabs);
@@ -342,7 +339,6 @@ void __init swiotlb_exit(void)
 		free_pages((unsigned long)mem->slots, get_order(size));
 	else
 		memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
-	io_tlb_default_mem = NULL;
 	memset(mem, 0, sizeof(*mem));
 }
 
@@ -697,7 +693,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 
 bool is_swiotlb_active(struct device *dev)
 {
-	return dev->dma_io_tlb_mem != NULL;
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+
+	return mem && mem->nslabs;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
@@ -712,10 +710,10 @@ static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
 
 static int __init swiotlb_create_default_debugfs(void)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = &io_tlb_default_mem;
 
 	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
-	if (mem) {
+	if (mem->nslabs) {
 		mem->debugfs = debugfs_dir;
 		swiotlb_create_debugfs_files(mem);
 	}
@@ -814,7 +812,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
 					struct device *dev)
 {
-	dev->dma_io_tlb_mem = io_tlb_default_mem;
+	dev->dma_io_tlb_mem = &io_tlb_default_mem;
 }
 
 static const struct reserved_mem_ops rmem_swiotlb_ops = {
-- 
2.32.0.402.g57bb445576-goog



* [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit()
  2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
                   ` (2 preceding siblings ...)
  2021-07-19 12:30 ` [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection Will Deacon
@ 2021-07-19 12:30 ` Will Deacon
  2021-07-20  3:36   ` Claire Chang
  2021-07-19 12:30 ` [PATCH 5/5] swiotlb: Free tbl memory " Will Deacon
  2021-07-20  7:51 ` [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Christoph Hellwig
  5 siblings, 1 reply; 15+ messages in thread
From: Will Deacon @ 2021-07-19 12:30 UTC (permalink / raw)
  To: iommu
  Cc: linux-kernel, Will Deacon, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

A recent debugging session would have been made a little bit easier if
we had noticed sooner that swiotlb_exit() was being called during boot.

Add a simple diagnostic message to swiotlb_exit() to complement the one
from swiotlb_print_info() during initialisation.

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210705190352.GA19461@willie-the-truck
Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7948f274f9bb..b3c793ed9e64 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -334,6 +334,7 @@ void __init swiotlb_exit(void)
 	if (!mem->nslabs)
 		return;
 
+	pr_info("tearing down default memory pool\n");
 	size = array_size(sizeof(*mem->slots), mem->nslabs);
 	if (mem->late_alloc)
 		free_pages((unsigned long)mem->slots, get_order(size));
-- 
2.32.0.402.g57bb445576-goog



* [PATCH 5/5] swiotlb: Free tbl memory in swiotlb_exit()
  2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
                   ` (3 preceding siblings ...)
  2021-07-19 12:30 ` [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit() Will Deacon
@ 2021-07-19 12:30 ` Will Deacon
  2021-07-20  3:36   ` Claire Chang
  2021-07-20  7:51 ` [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Christoph Hellwig
  5 siblings, 1 reply; 15+ messages in thread
From: Will Deacon @ 2021-07-19 12:30 UTC (permalink / raw)
  To: iommu
  Cc: linux-kernel, Will Deacon, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

Although swiotlb_exit() frees the 'slots' metadata array referenced by
'io_tlb_default_mem', it leaves the underlying buffer pages allocated
despite no longer being usable.

Extend swiotlb_exit() to free the buffer pages as well as the slots
array.
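
The extended teardown can be sketched as follows. Again a hypothetical
userspace model with invented names ('struct pool', 'tbl', etc.), not
the kernel implementation; the real patch additionally re-encrypts the
buffer with set_memory_encrypted() and distinguishes late (page
allocator) from early (memblock) allocations:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: teardown now frees the bounce buffer itself as
 * well as the 'slots' metadata, then wipes the static descriptor so it
 * reads as uninitialised (nslabs == 0). */
struct pool {
	unsigned long nslabs;
	char *tbl;	/* the bounce buffer pages */
	int *slots;	/* per-slot metadata */
};

static struct pool default_pool;

static int pool_init(unsigned long nslabs, size_t slab_bytes)
{
	default_pool.tbl = calloc(nslabs, slab_bytes);
	default_pool.slots = calloc(nslabs, sizeof(*default_pool.slots));
	if (!default_pool.tbl || !default_pool.slots) {
		free(default_pool.tbl);
		free(default_pool.slots);
		default_pool.tbl = NULL;
		default_pool.slots = NULL;
		return -1;
	}
	default_pool.nslabs = nslabs;
	return 0;
}

static void pool_exit(void)
{
	if (!default_pool.nslabs)	/* nothing to tear down */
		return;
	free(default_pool.tbl);		/* buffer pages: the new part */
	free(default_pool.slots);	/* metadata: was freed before too */
	memset(&default_pool, 0, sizeof(default_pool));
}
```

Zeroing the descriptor last means a second pool_exit() call is a
harmless no-op, mirroring the '!mem->nslabs' early return in
swiotlb_exit().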

Cc: Claire Chang <tientzu@chromium.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b3c793ed9e64..87c40517e822 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -328,18 +328,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 
 void __init swiotlb_exit(void)
 {
-	size_t size;
 	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	unsigned long tbl_vaddr;
+	size_t tbl_size, slots_size;
 
 	if (!mem->nslabs)
 		return;
 
 	pr_info("tearing down default memory pool\n");
-	size = array_size(sizeof(*mem->slots), mem->nslabs);
-	if (mem->late_alloc)
-		free_pages((unsigned long)mem->slots, get_order(size));
-	else
-		memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
+	tbl_vaddr = (unsigned long)phys_to_virt(mem->start);
+	tbl_size = PAGE_ALIGN(mem->end - mem->start);
+	slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
+
+	set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
+	if (mem->late_alloc) {
+		free_pages(tbl_vaddr, get_order(tbl_size));
+		free_pages((unsigned long)mem->slots, get_order(slots_size));
+	} else {
+		memblock_free_late(mem->start, tbl_size);
+		memblock_free_late(__pa(mem->slots), slots_size);
+	}
+
 	memset(mem, 0, sizeof(*mem));
 }
 
-- 
2.32.0.402.g57bb445576-goog



* Re: [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS
  2021-07-19 12:30 ` [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS Will Deacon
@ 2021-07-20  3:35   ` Claire Chang
  0 siblings, 0 replies; 15+ messages in thread
From: Claire Chang @ 2021-07-20  3:35 UTC (permalink / raw)
  To: Will Deacon
  Cc: list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,,
	lkml, Guenter Roeck, Christoph Hellwig, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <will@kernel.org> wrote:
>
> When CONFIG_OF_ADDRESS=n, of_dma_set_restricted_buffer() returns -ENODEV
> and breaks the boot for sparc[64] machines. Return 0 instead, since the
> function is essentially a glorified NOP in this configuration.
>
> Cc: Claire Chang <tientzu@chromium.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reported-by: Guenter Roeck <linux@roeck-us.net>
> Suggested-by: Robin Murphy <robin.murphy@arm.com>
> Tested-by: Guenter Roeck <linux@roeck-us.net>

Tested-by: Claire Chang <tientzu@chromium.org>

> Link: https://lore.kernel.org/r/20210702030807.GA2685166@roeck-us.net
> Fixes: fec9b625095f ("of: Add plumbing for restricted DMA pool")
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  drivers/of/of_private.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
> index 376462798f7e..f557bd22b0cf 100644
> --- a/drivers/of/of_private.h
> +++ b/drivers/of/of_private.h
> @@ -173,7 +173,8 @@ static inline int of_dma_get_range(struct device_node *np,
>  static inline int of_dma_set_restricted_buffer(struct device *dev,
>                                                struct device_node *np)
>  {
> -       return -ENODEV;
> +       /* Do nothing, successfully. */
> +       return 0;
>  }
>  #endif
>
> --
> 2.32.0.402.g57bb445576-goog
>


* Re: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation
  2021-07-19 12:30 ` [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation Will Deacon
@ 2021-07-20  3:35   ` Claire Chang
  2021-07-20  7:51   ` Christoph Hellwig
  1 sibling, 0 replies; 15+ messages in thread
From: Claire Chang @ 2021-07-20  3:35 UTC (permalink / raw)
  To: Will Deacon
  Cc: list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,,
	lkml, Guenter Roeck, Christoph Hellwig, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <will@kernel.org> wrote:
>
> Since commit 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the
> swiotlb pool used"), 'struct device' may hold a copy of the global
> 'io_default_tlb_mem' pointer if the device is using swiotlb for DMA. A
> subsequent call to swiotlb_exit() will therefore leave dangling pointers
> behind in these device structures, resulting in KASAN splats such as:
>
>   |  BUG: KASAN: use-after-free in __iommu_dma_unmap_swiotlb+0x64/0xb0
>   |  Read of size 8 at addr ffff8881d7830000 by task swapper/0/0
>   |
>   |  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3-debug #1
>   |  Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
>   |  Call Trace:
>   |   <IRQ>
>   |   dump_stack+0x9c/0xcf
>   |   print_address_description.constprop.0+0x18/0x130
>   |   kasan_report.cold+0x7f/0x111
>   |   __iommu_dma_unmap_swiotlb+0x64/0xb0
>   |   nvme_pci_complete_rq+0x73/0x130
>   |   blk_complete_reqs+0x6f/0x80
>   |   __do_softirq+0xfc/0x3be
>
> Point 'io_default_tlb_mem' at a static structure, so that the per-device
> pointers remain valid after swiotlb_exit() has been invoked. The 'slots'
> array is still allocated dynamically and referenced via a pointer rather
> than a flexible array member.
>
> Cc: Claire Chang <tientzu@chromium.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Fixes: 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used")
> Reported-by: Nathan Chancellor <nathan@kernel.org>
> Tested-by: Nathan Chancellor <nathan@kernel.org>

Tested-by: Claire Chang <tientzu@chromium.org>

> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  include/linux/swiotlb.h |  2 +-
>  kernel/dma/swiotlb.c    | 34 +++++++++++++++++++++-------------
>  2 files changed, 22 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 39284ff2a6cd..d3b617c19045 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -103,7 +103,7 @@ struct io_tlb_mem {
>                 phys_addr_t orig_addr;
>                 size_t alloc_size;
>                 unsigned int list;
> -       } slots[];
> +       } *slots;
>  };
>  extern struct io_tlb_mem *io_tlb_default_mem;
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index f1a9ae7fad8f..992d73cdc944 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -71,6 +71,7 @@
>  enum swiotlb_force swiotlb_force;
>
>  struct io_tlb_mem *io_tlb_default_mem;
> +static struct io_tlb_mem _io_tlb_default_mem;
>
>  /*
>   * Max segment that we can provide which (if pages are contingous) will
> @@ -201,7 +202,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>
>  int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  {
> -       struct io_tlb_mem *mem;
> +       struct io_tlb_mem *mem = &_io_tlb_default_mem;
>         size_t alloc_size;
>
>         if (swiotlb_force == SWIOTLB_NO_FORCE)
> @@ -211,9 +212,9 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>         if (WARN_ON_ONCE(io_tlb_default_mem))
>                 return -ENOMEM;
>
> -       alloc_size = PAGE_ALIGN(struct_size(mem, slots, nslabs));
> -       mem = memblock_alloc(alloc_size, PAGE_SIZE);
> -       if (!mem)
> +       alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
> +       mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
> +       if (!mem->slots)
>                 panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>                       __func__, alloc_size, PAGE_SIZE);
>
> @@ -304,7 +305,7 @@ swiotlb_late_init_with_default_size(size_t default_size)
>  int
>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  {
> -       struct io_tlb_mem *mem;
> +       struct io_tlb_mem *mem = &_io_tlb_default_mem;
>         unsigned long bytes = nslabs << IO_TLB_SHIFT;
>
>         if (swiotlb_force == SWIOTLB_NO_FORCE)
> @@ -314,12 +315,11 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>         if (WARN_ON_ONCE(io_tlb_default_mem))
>                 return -ENOMEM;
>
> -       mem = (void *)__get_free_pages(GFP_KERNEL,
> -               get_order(struct_size(mem, slots, nslabs)));
> -       if (!mem)
> +       mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
> +               get_order(array_size(sizeof(*mem->slots), nslabs)));
> +       if (!mem->slots)
>                 return -ENOMEM;
>
> -       memset(mem, 0, sizeof(*mem));
>         set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
>         swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>
> @@ -337,12 +337,13 @@ void __init swiotlb_exit(void)
>         if (!mem)
>                 return;
>
> -       size = struct_size(mem, slots, mem->nslabs);
> +       size = array_size(sizeof(*mem->slots), mem->nslabs);
>         if (mem->late_alloc)
> -               free_pages((unsigned long)mem, get_order(size));
> +               free_pages((unsigned long)mem->slots, get_order(size));
>         else
> -               memblock_free_late(__pa(mem), PAGE_ALIGN(size));
> +               memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
>         io_tlb_default_mem = NULL;
> +       memset(mem, 0, sizeof(*mem));
>  }
>
>  /*
> @@ -783,10 +784,17 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
>          * to it.
>          */
>         if (!mem) {
> -               mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
> +               mem = kzalloc(sizeof(*mem), GFP_KERNEL);
>                 if (!mem)
>                         return -ENOMEM;
>
> +               mem->slots = kzalloc(array_size(sizeof(*mem->slots), nslabs),
> +                                    GFP_KERNEL);
> +               if (!mem->slots) {
> +                       kfree(mem);
> +                       return -ENOMEM;
> +               }
> +
>                 set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
>                                      rmem->size >> PAGE_SHIFT);
>                 swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
> --
> 2.32.0.402.g57bb445576-goog
>


* Re: [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection
  2021-07-19 12:30 ` [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection Will Deacon
@ 2021-07-20  3:35   ` Claire Chang
  0 siblings, 0 replies; 15+ messages in thread
From: Claire Chang @ 2021-07-20  3:35 UTC (permalink / raw)
  To: Will Deacon
  Cc: IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,
	lkml, Guenter Roeck, Christoph Hellwig, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <will@kernel.org> wrote:
>
> The indirection from the global 'io_tlb_default_mem' pointer to the
> static '_io_tlb_default_mem' structure is ugly and unnecessary.
>
> Convert all users to reference the static structure directly, using the
> 'nslabs' field to determine whether swiotlb has been initialised.
>
> Cc: Claire Chang <tientzu@chromium.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Tested-by: Nathan Chancellor <nathan@kernel.org>

Tested-by: Claire Chang <tientzu@chromium.org>

> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  drivers/base/core.c       |  2 +-
>  drivers/xen/swiotlb-xen.c |  4 ++--
>  include/linux/swiotlb.h   |  2 +-
>  kernel/dma/swiotlb.c      | 38 ++++++++++++++++++--------------------
>  4 files changed, 22 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/base/core.c b/drivers/base/core.c
> index ea5b85354526..b49824001cfa 100644
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -2848,7 +2848,7 @@ void device_initialize(struct device *dev)
>         dev->dma_coherent = dma_default_coherent;
>  #endif
>  #ifdef CONFIG_SWIOTLB
> -       dev->dma_io_tlb_mem = io_tlb_default_mem;
> +       dev->dma_io_tlb_mem = &io_tlb_default_mem;
>  #endif
>  }
>  EXPORT_SYMBOL_GPL(device_initialize);
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 785ec7e8be01..f06d9b4f1e0f 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -164,7 +164,7 @@ int __ref xen_swiotlb_init(void)
>         int rc = -ENOMEM;
>         char *start;
>
> -       if (io_tlb_default_mem != NULL) {
> +       if (io_tlb_default_mem.nslabs) {
>                 pr_warn("swiotlb buffer already initialized\n");
>                 return -EEXIST;
>         }
> @@ -547,7 +547,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
>  static int
>  xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
>  {
> -       return xen_phys_to_dma(hwdev, io_tlb_default_mem->end - 1) <= mask;
> +       return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask;
>  }
>
>  const struct dma_map_ops xen_swiotlb_dma_ops = {
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index d3b617c19045..b0cb2a9973f4 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -105,7 +105,7 @@ struct io_tlb_mem {
>                 unsigned int list;
>         } *slots;
>  };
> -extern struct io_tlb_mem *io_tlb_default_mem;
> +extern struct io_tlb_mem io_tlb_default_mem;
>
>  static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 992d73cdc944..7948f274f9bb 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -70,8 +70,7 @@
>
>  enum swiotlb_force swiotlb_force;
>
> -struct io_tlb_mem *io_tlb_default_mem;
> -static struct io_tlb_mem _io_tlb_default_mem;
> +struct io_tlb_mem io_tlb_default_mem;
>
>  /*
>   * Max segment that we can provide which (if pages are contingous) will
> @@ -102,7 +101,7 @@ early_param("swiotlb", setup_io_tlb_npages);
>
>  unsigned int swiotlb_max_segment(void)
>  {
> -       return io_tlb_default_mem ? max_segment : 0;
> +       return io_tlb_default_mem.nslabs ? max_segment : 0;
>  }
>  EXPORT_SYMBOL_GPL(swiotlb_max_segment);
>
> @@ -135,9 +134,9 @@ void __init swiotlb_adjust_size(unsigned long size)
>
>  void swiotlb_print_info(void)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = &io_tlb_default_mem;
>
> -       if (!mem) {
> +       if (!mem->nslabs) {
>                 pr_warn("No low mem\n");
>                 return;
>         }
> @@ -164,11 +163,11 @@ static inline unsigned long nr_slots(u64 val)
>   */
>  void __init swiotlb_update_mem_attributes(void)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = &io_tlb_default_mem;
>         void *vaddr;
>         unsigned long bytes;
>
> -       if (!mem || mem->late_alloc)
> +       if (!mem->nslabs || mem->late_alloc)
>                 return;
>         vaddr = phys_to_virt(mem->start);
>         bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
> @@ -202,14 +201,14 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>
>  int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  {
> -       struct io_tlb_mem *mem = &_io_tlb_default_mem;
> +       struct io_tlb_mem *mem = &io_tlb_default_mem;
>         size_t alloc_size;
>
>         if (swiotlb_force == SWIOTLB_NO_FORCE)
>                 return 0;
>
>         /* protect against double initialization */
> -       if (WARN_ON_ONCE(io_tlb_default_mem))
> +       if (WARN_ON_ONCE(mem->nslabs))
>                 return -ENOMEM;
>
>         alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
> @@ -220,7 +219,6 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>
>         swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>
> -       io_tlb_default_mem = mem;
>         if (verbose)
>                 swiotlb_print_info();
>         swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
> @@ -305,14 +303,14 @@ swiotlb_late_init_with_default_size(size_t default_size)
>  int
>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  {
> -       struct io_tlb_mem *mem = &_io_tlb_default_mem;
> +       struct io_tlb_mem *mem = &io_tlb_default_mem;
>         unsigned long bytes = nslabs << IO_TLB_SHIFT;
>
>         if (swiotlb_force == SWIOTLB_NO_FORCE)
>                 return 0;
>
>         /* protect against double initialization */
> -       if (WARN_ON_ONCE(io_tlb_default_mem))
> +       if (WARN_ON_ONCE(mem->nslabs))
>                 return -ENOMEM;
>
>         mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
> @@ -323,7 +321,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>         set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
>         swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>
> -       io_tlb_default_mem = mem;
>         swiotlb_print_info();
>         swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
>         return 0;
> @@ -331,10 +328,10 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>
>  void __init swiotlb_exit(void)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
>         size_t size;
> +       struct io_tlb_mem *mem = &io_tlb_default_mem;
>
> -       if (!mem)
> +       if (!mem->nslabs)
>                 return;
>
>         size = array_size(sizeof(*mem->slots), mem->nslabs);
> @@ -342,7 +339,6 @@ void __init swiotlb_exit(void)
>                 free_pages((unsigned long)mem->slots, get_order(size));
>         else
>                 memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
> -       io_tlb_default_mem = NULL;
>         memset(mem, 0, sizeof(*mem));
>  }
>
> @@ -697,7 +693,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
>
>  bool is_swiotlb_active(struct device *dev)
>  {
> -       return dev->dma_io_tlb_mem != NULL;
> +       struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +
> +       return mem && mem->nslabs;
>  }
>  EXPORT_SYMBOL_GPL(is_swiotlb_active);
>
> @@ -712,10 +710,10 @@ static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
>
>  static int __init swiotlb_create_default_debugfs(void)
>  {
> -       struct io_tlb_mem *mem = io_tlb_default_mem;
> +       struct io_tlb_mem *mem = &io_tlb_default_mem;
>
>         debugfs_dir = debugfs_create_dir("swiotlb", NULL);
> -       if (mem) {
> +       if (mem->nslabs) {
>                 mem->debugfs = debugfs_dir;
>                 swiotlb_create_debugfs_files(mem);
>         }
> @@ -814,7 +812,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
>  static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
>                                         struct device *dev)
>  {
> -       dev->dma_io_tlb_mem = io_tlb_default_mem;
> +       dev->dma_io_tlb_mem = &io_tlb_default_mem;
>  }
>
>  static const struct reserved_mem_ops rmem_swiotlb_ops = {
> --
> 2.32.0.402.g57bb445576-goog
>

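Boiled down to a userspace sketch (hypothetical names, not kernel API), the conversion in the diff above replaces a NULL-able global pointer with a static instance whose `nslabs` field doubles as the "initialised" flag:

```c
#include <string.h>

struct pool { unsigned long nslabs; };

/* Before the conversion this would have been `struct pool *default_pool`,
 * with NULL meaning "not initialised". */
static struct pool default_pool;

/* A non-zero nslabs now plays the role the NULL check used to play. */
static int pool_active(void)
{
	return default_pool.nslabs != 0;
}

static int pool_init(unsigned long nslabs)
{
	if (pool_active())	/* protect against double initialisation */
		return -1;
	default_pool.nslabs = nslabs;
	return 0;
}

/* Exit zeroes the struct instead of NULLing a pointer, so devices
 * holding a reference to it keep a valid (if empty) structure. */
static void pool_exit(void)
{
	if (!pool_active())
		return;
	memset(&default_pool, 0, sizeof(default_pool));
}
```

This is why the diff can drop the `io_tlb_default_mem = NULL` assignments: stale references held in `dev->dma_io_tlb_mem` stay dereferenceable after teardown.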
* Re: [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit()
  2021-07-19 12:30 ` [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit() Will Deacon
@ 2021-07-20  3:36   ` Claire Chang
  0 siblings, 0 replies; 15+ messages in thread
From: Claire Chang @ 2021-07-20  3:36 UTC (permalink / raw)
  To: Will Deacon
  Cc: IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,
	lkml, Guenter Roeck, Christoph Hellwig, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <will@kernel.org> wrote:
>
> A recent debugging session would have been made a little bit easier if
> we had noticed sooner that swiotlb_exit() was being called during boot.
>
> Add a simple diagnostic message to swiotlb_exit() to complement the one
> from swiotlb_print_info() during initialisation.
>
> Cc: Claire Chang <tientzu@chromium.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Link: https://lore.kernel.org/r/20210705190352.GA19461@willie-the-truck
> Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Tested-by: Nathan Chancellor <nathan@kernel.org>

Tested-by: Claire Chang <tientzu@chromium.org>

> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  kernel/dma/swiotlb.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 7948f274f9bb..b3c793ed9e64 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -334,6 +334,7 @@ void __init swiotlb_exit(void)
>         if (!mem->nslabs)
>                 return;
>
> +       pr_info("tearing down default memory pool\n");
>         size = array_size(sizeof(*mem->slots), mem->nslabs);
>         if (mem->late_alloc)
>                 free_pages((unsigned long)mem->slots, get_order(size));
> --
> 2.32.0.402.g57bb445576-goog
>

* Re: [PATCH 5/5] swiotlb: Free tbl memory in swiotlb_exit()
  2021-07-19 12:30 ` [PATCH 5/5] swiotlb: Free tbl memory " Will Deacon
@ 2021-07-20  3:36   ` Claire Chang
  2021-07-20  8:35     ` Will Deacon
  0 siblings, 1 reply; 15+ messages in thread
From: Claire Chang @ 2021-07-20  3:36 UTC (permalink / raw)
  To: Will Deacon
  Cc: IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,
	lkml, Guenter Roeck, Christoph Hellwig, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <will@kernel.org> wrote:
>
> Although swiotlb_exit() frees the 'slots' metadata array referenced by
> 'io_tlb_default_mem', it leaves the underlying buffer pages allocated
> despite no longer being usable.
>
> Extend swiotlb_exit() to free the buffer pages as well as the slots
> array.
>
> Cc: Claire Chang <tientzu@chromium.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Tested-by: Nathan Chancellor <nathan@kernel.org>

Tested-by: Claire Chang <tientzu@chromium.org>

> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  kernel/dma/swiotlb.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index b3c793ed9e64..87c40517e822 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -328,18 +328,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>
>  void __init swiotlb_exit(void)
>  {
> -       size_t size;
>         struct io_tlb_mem *mem = &io_tlb_default_mem;
> +       unsigned long tbl_vaddr;
> +       size_t tbl_size, slots_size;
>
>         if (!mem->nslabs)
>                 return;
>
>         pr_info("tearing down default memory pool\n");
> -       size = array_size(sizeof(*mem->slots), mem->nslabs);
> -       if (mem->late_alloc)
> -               free_pages((unsigned long)mem->slots, get_order(size));
> -       else
> -               memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
> +       tbl_vaddr = (unsigned long)phys_to_virt(mem->start);
> +       tbl_size = PAGE_ALIGN(mem->end - mem->start);
> +       slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
> +
> +       set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
> +       if (mem->late_alloc) {
> +               free_pages(tbl_vaddr, get_order(tbl_size));
> +               free_pages((unsigned long)mem->slots, get_order(slots_size));
> +       } else {
> +               memblock_free_late(mem->start, tbl_size);
> +               memblock_free_late(__pa(mem->slots), slots_size);
> +       }
> +
>         memset(mem, 0, sizeof(*mem));
>  }
>
> --
> 2.32.0.402.g57bb445576-goog
>

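A minimal userspace analog of the extended teardown above (illustrative names, with plain `malloc`/`free` standing in for the kernel's page and memblock allocators): both the bounce buffer and the slots metadata are released, then the struct is zeroed so it reads as uninitialised.

```c
#include <stdlib.h>
#include <string.h>

struct slot { unsigned int list; };

struct pool {
	char *tbl;		/* stands in for the bounce buffer pages */
	struct slot *slots;	/* stands in for the metadata array */
	unsigned long nslabs;
};

static int pool_init(struct pool *p, unsigned long nslabs, size_t slab_size)
{
	p->tbl = malloc(nslabs * slab_size);
	p->slots = calloc(nslabs, sizeof(*p->slots));
	if (!p->tbl || !p->slots) {
		free(p->tbl);	/* free(NULL) is a no-op */
		free(p->slots);
		return -1;
	}
	p->nslabs = nslabs;
	return 0;
}

/* Mirrors the extended swiotlb_exit(): free the buffer *and* the
 * slots array, then zero the struct so a second call is a no-op. */
static void pool_exit(struct pool *p)
{
	if (!p->nslabs)
		return;
	free(p->tbl);
	free(p->slots);
	memset(p, 0, sizeof(*p));
}
```

The kernel version additionally re-encrypts the buffer with `set_memory_encrypted()` before freeing, since the pages were decrypted at init time; that step has no userspace analog here.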
* Re: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation
  2021-07-19 12:30 ` [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation Will Deacon
  2021-07-20  3:35   ` Claire Chang
@ 2021-07-20  7:51   ` Christoph Hellwig
  2021-07-20  8:49     ` Will Deacon
  1 sibling, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2021-07-20  7:51 UTC (permalink / raw)
  To: Will Deacon
  Cc: iommu, linux-kernel, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

I'd prefer if the next patch were merged into this one, to avoid the
confusing state in between entirely.

* Re: [PATCH 0/5] Fix restricted DMA vs swiotlb_exit()
  2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
                   ` (4 preceding siblings ...)
  2021-07-19 12:30 ` [PATCH 5/5] swiotlb: Free tbl memory " Will Deacon
@ 2021-07-20  7:51 ` Christoph Hellwig
  5 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2021-07-20  7:51 UTC (permalink / raw)
  To: Will Deacon
  Cc: iommu, linux-kernel, Guenter Roeck, Claire Chang,
	Christoph Hellwig, Robin Murphy, Konrad Rzeszutek Wilk,
	Nathan Chancellor

Looks fine except for the patch split nitpick mentioned:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 5/5] swiotlb: Free tbl memory in swiotlb_exit()
  2021-07-20  3:36   ` Claire Chang
@ 2021-07-20  8:35     ` Will Deacon
  0 siblings, 0 replies; 15+ messages in thread
From: Will Deacon @ 2021-07-20  8:35 UTC (permalink / raw)
  To: Claire Chang
  Cc: IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,
	lkml, Guenter Roeck, Christoph Hellwig, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Tue, Jul 20, 2021 at 11:36:19AM +0800, Claire Chang wrote:
> On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <will@kernel.org> wrote:
> >
> > Although swiotlb_exit() frees the 'slots' metadata array referenced by
> > 'io_tlb_default_mem', it leaves the underlying buffer pages allocated
> > despite no longer being usable.
> >
> > Extend swiotlb_exit() to free the buffer pages as well as the slots
> > array.
> >
> > Cc: Claire Chang <tientzu@chromium.org>
> > Cc: Christoph Hellwig <hch@lst.de>
> > Cc: Robin Murphy <robin.murphy@arm.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Tested-by: Nathan Chancellor <nathan@kernel.org>
> 
> Tested-by: Claire Chang <tientzu@chromium.org>

Thanks, Claire!

Will

* Re: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation
  2021-07-20  7:51   ` Christoph Hellwig
@ 2021-07-20  8:49     ` Will Deacon
  0 siblings, 0 replies; 15+ messages in thread
From: Will Deacon @ 2021-07-20  8:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: iommu, linux-kernel, Guenter Roeck, Claire Chang, Robin Murphy,
	Konrad Rzeszutek Wilk, Nathan Chancellor

On Tue, Jul 20, 2021 at 09:51:12AM +0200, Christoph Hellwig wrote:
> I'd prefer if the next patch were merged into this one, to avoid the
> confusing state in between entirely.

Of course. For some reason, I half thought this patch would need to go to
stable, but the restricted DMA stuff didn't land so there's no backporting
to worry about.

I'll merge 'em for v2.

Thanks for having a look!

Will

end of thread, other threads:[~2021-07-20  8:57 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-19 12:30 [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Will Deacon
2021-07-19 12:30 ` [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS Will Deacon
2021-07-20  3:35   ` Claire Chang
2021-07-19 12:30 ` [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation Will Deacon
2021-07-20  3:35   ` Claire Chang
2021-07-20  7:51   ` Christoph Hellwig
2021-07-20  8:49     ` Will Deacon
2021-07-19 12:30 ` [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection Will Deacon
2021-07-20  3:35   ` Claire Chang
2021-07-19 12:30 ` [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit() Will Deacon
2021-07-20  3:36   ` Claire Chang
2021-07-19 12:30 ` [PATCH 5/5] swiotlb: Free tbl memory " Will Deacon
2021-07-20  3:36   ` Claire Chang
2021-07-20  8:35     ` Will Deacon
2021-07-20  7:51 ` [PATCH 0/5] Fix restricted DMA vs swiotlb_exit() Christoph Hellwig
