* [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support
@ 2022-05-02 12:54 Tianyu Lan
  2022-05-02 12:54 ` [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB " Tianyu Lan
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Tianyu Lan @ 2022-05-02 12:54 UTC (permalink / raw)
  To: hch, m.szyprowski, robin.murphy, michael.h.kelley, kys
  Cc: Tianyu Lan, iommu, linux-kernel, vkuznets, brijesh.singh,
	konrad.wilk, hch, wei.liu, parri.andrea, thomas.lendacky,
	linux-hyperv, andi.kleen, kirill.shutemov

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
significant lock contention on the swiotlb lock.

This patch set adds child IO TLB mem support to reduce spinlock
contention among a device's queues. Each device may allocate its own
IO TLB mem and set up child IO TLB mems according to its queue count.
The number of child IO TLB mems may be set equal to the device's queue
number, which helps resolve swiotlb spinlock contention among devices
and queues.

Patch 2 introduces the IO TLB block concept and the
swiotlb_device_allocate() API to allocate a per-device swiotlb bounce
buffer. The new API accepts a queue number, which is used as the number
of child IO TLB mems when setting up the device's IO TLB mem.
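
As an illustration only (no caller is included in this series;
foo_setup_bounce, foo_teardown_bounce and nr_queues are hypothetical
names), a multi-queue driver would be expected to use the new API
roughly as follows:

#include <linux/swiotlb.h>

/* Hypothetical probe-time setup; not part of this series. */
static int foo_setup_bounce(struct device *dev, unsigned int nr_queues)
{
	/*
	 * One child IO TLB mem per queue.  The size should be a
	 * multiple of IO_TLB_BLOCK_UNIT (2MB); request 8MB here.
	 */
	return swiotlb_device_allocate(dev, nr_queues,
				       4 * IO_TLB_BLOCK_UNIT);
}

static void foo_teardown_bounce(struct device *dev)
{
	/* Return the per-device bounce buffer to the default pool. */
	swiotlb_device_free(dev);
}

As posted, a failed swiotlb_device_allocate() leaves dev->dma_io_tlb_mem
pointing at the default pool, so a caller can simply fall back to the
shared swiotlb.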

Tianyu Lan (2):
  swiotlb: Add Child IO TLB mem support
  Swiotlb: Add device bounce buffer allocation interface

 include/linux/swiotlb.h |  40 ++++++
 kernel/dma/swiotlb.c    | 290 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 317 insertions(+), 13 deletions(-)

-- 
2.25.1



* [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB mem support
  2022-05-02 12:54 [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support Tianyu Lan
@ 2022-05-02 12:54 ` Tianyu Lan
  2022-05-16  7:34   ` Christoph Hellwig
  2022-05-02 12:54 ` [RFC PATCH V2 2/2] Swiotlb: Add device bounce buffer allocation interface Tianyu Lan
  2022-05-09 11:49 ` [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support Tianyu Lan
  2 siblings, 1 reply; 7+ messages in thread
From: Tianyu Lan @ 2022-05-02 12:54 UTC (permalink / raw)
  To: hch, m.szyprowski, robin.murphy, michael.h.kelley, kys
  Cc: Tianyu Lan, iommu, linux-kernel, vkuznets, brijesh.singh,
	konrad.wilk, hch, wei.liu, parri.andrea, thomas.lendacky,
	linux-hyperv, andi.kleen, kirill.shutemov

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
significant lock contention on the swiotlb lock.

This patch adds child IO TLB mem support to reduce spinlock contention
among a device's queues. Each device may allocate its own IO TLB mem
and set up child IO TLB mems according to its queue count. The swiotlb
code then allocates bounce buffers from the child IO TLB mems in a
round-robin fashion.
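
For example (numbers for illustration only), a pool of 32768 slots (the
default 64MB with 2KB IO_TLB_SIZE slots) split among eight children
gives each child IO TLB mem child_nslot = 32768 / 8 = 4096 slots (8MB),
each protected by its own spinlock. The slot index returned to callers
stays global and is computed as child_index * child_nslot + index.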

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 include/linux/swiotlb.h |  7 +++
 kernel/dma/swiotlb.c    | 97 ++++++++++++++++++++++++++++++++++++-----
 2 files changed, 94 insertions(+), 10 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 7ed35dd3de6e..4a3f6a7b4b7e 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -89,6 +89,9 @@ extern enum swiotlb_force swiotlb_force;
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
  * @for_alloc:  %true if the pool is used for memory allocation
+ * @child_nslot:The number of IO TLB slot in the child IO TLB mem.
+ * @num_child:  The child io tlb mem number in the pool.
+ * @child_start:The child index to start searching in the next round.
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -102,6 +105,10 @@ struct io_tlb_mem {
 	bool late_alloc;
 	bool force_bounce;
 	bool for_alloc;
+	unsigned int num_child;
+	unsigned int child_nslot;
+	unsigned int child_start;
+	struct io_tlb_mem *child;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e2ef0864eb1e..32e8f42530b6 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -207,6 +207,26 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 		mem->force_bounce = true;
 
 	spin_lock_init(&mem->lock);
+
+	if (mem->num_child) {
+		mem->child_nslot = nslabs / mem->num_child;
+		mem->child_start = 0;
+
+		/*
+		 * Initialize child IO TLB mem, divide IO TLB pool
+		 * into child number. Reuse parent mem->slot in the
+		 * child mem->slot.
+		 */
+		for (i = 0; i < mem->num_child; i++) {
+			mem->child[i].slots = mem->slots + i * mem->child_nslot;
+			mem->child[i].num_child = 0;
+
+			swiotlb_init_io_tlb_mem(&mem->child[i],
+				start + ((i * mem->child_nslot) << IO_TLB_SHIFT),
+				mem->child_nslot, late_alloc);
+		}
+	}
+
 	for (i = 0; i < mem->nslabs; i++) {
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
 		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
@@ -336,16 +356,18 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 
 	mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
 		get_order(array_size(sizeof(*mem->slots), nslabs)));
-	if (!mem->slots) {
-		free_pages((unsigned long)vstart, order);
-		return -ENOMEM;
-	}
+	if (!mem->slots)
+		goto error_slots;
 
 	set_memory_decrypted((unsigned long)vstart, bytes >> PAGE_SHIFT);
 	swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true);
 
 	swiotlb_print_info();
 	return 0;
+
+error_slots:
+	free_pages((unsigned long)vstart, order);
+	return -ENOMEM;
 }
 
 void __init swiotlb_exit(void)
@@ -483,10 +505,11 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
-			      size_t alloc_size, unsigned int alloc_align_mask)
+static int swiotlb_do_find_slots(struct io_tlb_mem *mem,
+				 struct device *dev, phys_addr_t orig_addr,
+				 size_t alloc_size,
+				 unsigned int alloc_align_mask)
 {
-	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -565,6 +588,46 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 	return index;
 }
 
+static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
+			      size_t alloc_size, unsigned int alloc_align_mask)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	struct io_tlb_mem *child_mem = mem;
+	int start = 0, i = 0, index;
+
+	if (mem->num_child) {
+		i = start = mem->child_start;
+		mem->child_start = (mem->child_start + 1) % mem->num_child;
+		child_mem = mem->child;
+	}
+
+	do {
+		index = swiotlb_do_find_slots(child_mem + i, dev, orig_addr,
+					      alloc_size, alloc_align_mask);
+		if (index >= 0)
+			return i * mem->child_nslot + index;
+		if (++i >= mem->num_child)
+			i = 0;
+	} while (i != start);
+
+	return -1;
+}
+
+static unsigned long mem_used(struct io_tlb_mem *mem)
+{
+	int i;
+	unsigned long used = 0;
+
+	if (mem->num_child) {
+		for (i = 0; i < mem->num_child; i++)
+			used += mem->child[i].used;
+	} else {
+		used = mem->used;
+	}
+
+	return used;
+}
+
 phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		unsigned int alloc_align_mask, enum dma_data_direction dir,
@@ -594,7 +657,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
 	"swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-				 alloc_size, mem->nslabs, mem->used);
+				     alloc_size, mem->nslabs, mem_used(mem));
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
@@ -617,9 +680,9 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
+static void swiotlb_do_release_slots(struct io_tlb_mem *mem,
+				     struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
@@ -660,6 +723,20 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	int index, offset;
+
+	if (mem->num_child) {
+		offset = swiotlb_align_offset(dev, tlb_addr);	
+		index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
+		mem = &mem->child[index / mem->child_nslot];
+	}
+
+	swiotlb_do_release_slots(mem, dev, tlb_addr);
+}
+
 /*
  * tlb_addr is the physical address of the bounce buffer to unmap.
  */
-- 
2.25.1



* [RFC PATCH V2 2/2] Swiotlb: Add device bounce buffer allocation interface
  2022-05-02 12:54 [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support Tianyu Lan
  2022-05-02 12:54 ` [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB " Tianyu Lan
@ 2022-05-02 12:54 ` Tianyu Lan
  2022-05-09 11:49 ` [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support Tianyu Lan
  2 siblings, 0 replies; 7+ messages in thread
From: Tianyu Lan @ 2022-05-02 12:54 UTC (permalink / raw)
  To: hch, m.szyprowski, robin.murphy, michael.h.kelley, kys
  Cc: Tianyu Lan, iommu, linux-kernel, vkuznets, brijesh.singh,
	konrad.wilk, hch, wei.liu, parri.andrea, thomas.lendacky,
	linux-hyperv, andi.kleen, kirill.shutemov

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In SEV/TDX confidential VMs, device DMA transactions need to use
swiotlb bounce buffers to share data with the host/hypervisor. The
swiotlb spinlock introduces overhead among devices if they share one
io tlb mem. To avoid this issue, introduce swiotlb_device_allocate()
to allocate a per-device bounce buffer from the default io tlb pool
and set up child IO TLB mems for per-queue bounce buffer allocation
according to the input queue number. A device may have multiple IO
queues, and setting up the same number of child IO TLB mems helps
resolve spinlock contention among those queues.

Introduce the IO TLB block unit (2MB) to allocate large bounce buffers
from the default pool for devices; the existing IO TLB segment (256KB)
is too small for this purpose.
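
For reference, the numbers work out as follows (IO_TLB_SEGSIZE is 128
slots): IO_TLB_BLOCKSIZE = 8 * IO_TLB_SEGSIZE = 1024 slots, so
IO_TLB_BLOCK_UNIT = 1024 << IO_TLB_SHIFT = 1024 * 2KB = 2MB, compared
with a 256KB segment (128 * 2KB).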

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 include/linux/swiotlb.h |  35 +++++++-
 kernel/dma/swiotlb.c    | 195 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 225 insertions(+), 5 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 4a3f6a7b4b7e..efd29e884fd7 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -31,6 +31,14 @@ struct scatterlist;
 #define IO_TLB_SHIFT 11
 #define IO_TLB_SIZE (1 << IO_TLB_SHIFT)
 
+/*
+ * IO TLB block unit, used as the device bounce buffer allocation unit.
+ * This allows a device to allocate bounce buffers from the default io
+ * tlb pool.
+ */
+#define IO_TLB_BLOCKSIZE   (8 * IO_TLB_SEGSIZE)
+#define IO_TLB_BLOCK_UNIT  (IO_TLB_BLOCKSIZE << IO_TLB_SHIFT)
+
 /* default to 64MB */
 #define IO_TLB_DEFAULT_SIZE (64UL<<20)
 
@@ -89,9 +97,11 @@ extern enum swiotlb_force swiotlb_force;
  * @late_alloc:	%true if allocated using the page allocator
  * @force_bounce: %true if swiotlb bouncing is forced
  * @for_alloc:  %true if the pool is used for memory allocation
- * @child_nslot:The number of IO TLB slot in the child IO TLB mem.
  * @num_child:  The child io tlb mem number in the pool.
+ * @child_nslot:The number of IO TLB slot in the child IO TLB mem.
+ * @child_nblock:The number of IO TLB block in the child IO TLB mem.
  * @child_start:The child index to start searching in the next round.
+ * @block_start:The block index to start searching in the next round.
  */
 struct io_tlb_mem {
 	phys_addr_t start;
@@ -107,8 +117,16 @@ struct io_tlb_mem {
 	bool for_alloc;
 	unsigned int num_child;
 	unsigned int child_nslot;
+	unsigned int child_nblock;
 	unsigned int child_start;
+	unsigned int block_index;
 	struct io_tlb_mem *child;
+	struct io_tlb_mem *parent;
+	struct io_tlb_block {
+		size_t alloc_size;
+		unsigned long start_slot;
+		unsigned int list;
+	} *block;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -137,6 +155,10 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
+int swiotlb_device_allocate(struct device *dev,
+			    unsigned int area_num,
+			    unsigned long size);
+void swiotlb_device_free(struct device *dev);
 #else
 static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
 {
@@ -169,6 +191,17 @@ static inline bool is_swiotlb_active(struct device *dev)
 static inline void swiotlb_adjust_size(unsigned long size)
 {
 }
+
+void swiotlb_device_free(struct device *dev)
+{
+}
+
+int swiotlb_device_allocate(struct device *dev,
+			    unsigned int area_num,
+			    unsigned long size)
+{
+	return -ENOMEM;
+}
 #endif /* CONFIG_SWIOTLB */
 
 extern void swiotlb_print_info(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 32e8f42530b6..f8a0711cd9de 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -195,7 +195,8 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 				    unsigned long nslabs, bool late_alloc)
 {
 	void *vaddr = phys_to_virt(start);
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+	unsigned long bytes = nslabs << IO_TLB_SHIFT, i, j;
+	unsigned int block_num = nslabs / IO_TLB_BLOCKSIZE;
 
 	mem->nslabs = nslabs;
 	mem->start = start;
@@ -210,6 +211,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 
 	if (mem->num_child) {
 		mem->child_nslot = nslabs / mem->num_child;
+		mem->child_nblock = block_num / mem->num_child;
 		mem->child_start = 0;
 
 		/*
@@ -219,15 +221,24 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 		 */
 		for (i = 0; i < mem->num_child; i++) {
 			mem->child[i].slots = mem->slots + i * mem->child_nslot;
-			mem->child[i].num_child = 0;
+			mem->child[i].block = mem->block + i * mem->child_nblock;
+			mem->child[i].num_child = 0;			
 
 			swiotlb_init_io_tlb_mem(&mem->child[i],
 				start + ((i * mem->child_nslot) << IO_TLB_SHIFT),
 				mem->child_nslot, late_alloc);
 		}
+
+		return;
 	}
 
-	for (i = 0; i < mem->nslabs; i++) {
+	for (i = 0, j = 0; i < mem->nslabs; i++) {
+		if (!(i % IO_TLB_BLOCKSIZE)) {
+			mem->block[j].alloc_size = 0;
+			mem->block[j].list = block_num--;
+			j++;
+		}
+
 		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
 		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
 		mem->slots[i].alloc_size = 0;
@@ -292,6 +303,13 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
+	mem->num_child = 0;
+	mem->block = memblock_alloc(sizeof(struct io_tlb_block) *
+				    (default_nslabs / IO_TLB_BLOCKSIZE),
+				     SMP_CACHE_BYTES);
+	if (!mem->block)
+		panic("%s: Failed to allocate mem->block.\n", __func__);
+
 	swiotlb_init_io_tlb_mem(mem, __pa(tlb), default_nslabs, false);
 	mem->force_bounce = flags & SWIOTLB_FORCE;
 
@@ -316,7 +334,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
 	unsigned long bytes;
 	unsigned char *vstart = NULL;
-	unsigned int order;
+	unsigned int order, block_order;
 	int rc = 0;
 
 	if (swiotlb_force_disable)
@@ -354,6 +372,13 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 		goto retry;
 	}
 
+	block_order = get_order(array_size(sizeof(*mem->block),
+		nslabs / IO_TLB_BLOCKSIZE));
+	mem->block = (struct io_tlb_block *)
+		__get_free_pages(GFP_KERNEL | __GFP_ZERO, block_order);
+	if (!mem->block)
+		goto error_block;
+
 	mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
 		get_order(array_size(sizeof(*mem->slots), nslabs)));
 	if (!mem->slots)
@@ -366,6 +391,8 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	return 0;
 
 error_slots:
+	free_pages((unsigned long)mem->block, block_order);
+error_block:
 	free_pages((unsigned long)vstart, order);
 	return -ENOMEM;
 }
@@ -375,6 +402,7 @@ void __init swiotlb_exit(void)
 	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	unsigned long tbl_vaddr;
 	size_t tbl_size, slots_size;
+	unsigned int block_array_size, block_order;
 
 	if (swiotlb_force_bounce)
 		return;
@@ -386,12 +414,16 @@ void __init swiotlb_exit(void)
 	tbl_vaddr = (unsigned long)phys_to_virt(mem->start);
 	tbl_size = PAGE_ALIGN(mem->end - mem->start);
 	slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
+	block_array_size = array_size(sizeof(*mem->block), mem->nslabs / IO_TLB_BLOCKSIZE);
 
 	set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
 	if (mem->late_alloc) {
+		block_order = get_order(block_array_size);
+		free_pages((unsigned long)mem->block, block_order);
 		free_pages(tbl_vaddr, get_order(tbl_size));
 		free_pages((unsigned long)mem->slots, get_order(slots_size));
 	} else {
+		memblock_free_late(__pa(mem->block), block_array_size);
 		memblock_free_late(mem->start, tbl_size);
 		memblock_free_late(__pa(mem->slots), slots_size);
 	}
@@ -839,6 +871,161 @@ static int __init __maybe_unused swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 #endif
 
+static void swiotlb_do_free_block(struct io_tlb_mem *mem,
+		phys_addr_t start, unsigned int block_num)
+{
+
+	unsigned int start_slot = (start - mem->start) >> IO_TLB_SHIFT;
+	unsigned int block_index = start_slot / IO_TLB_BLOCKSIZE;
+	unsigned int mem_block_num = mem->nslabs / IO_TLB_BLOCKSIZE;
+	unsigned long flags;
+	int count, i, num;
+
+	spin_lock_irqsave(&mem->lock, flags);
+	if (block_index + block_num < mem_block_num)
+		count = mem->block[block_index + block_num].list;
+	else
+		count = 0;
+
+
+	for (i = block_index + block_num; i >= block_index; i--) {
+		mem->block[i].list = ++count;
+		/* Todo: recover slot->list and alloc_size here. */
+	}
+
+	for (i = block_index - 1, num = block_index % mem_block_num;
+	    i < num && mem->block[i].list; i--)
+		mem->block[i].list = ++count;
+
+	spin_unlock_irqrestore(&mem->lock, flags);
+}
+
+static void swiotlb_free_block(struct io_tlb_mem *mem,
+			       phys_addr_t start, unsigned int block_num)
+{
+	unsigned int slot_index, child_index;
+
+	if (mem->num_child) {
+		slot_index = (start - mem->start) >> IO_TLB_SHIFT;
+		child_index = slot_index / mem->child_nslot;
+
+		swiotlb_do_free_block(&mem->child[child_index],
+				      start, block_num);
+	} else {
+		swiotlb_do_free_block(mem, start, block_num);
+	}
+}
+
+void swiotlb_device_free(struct device *dev)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	struct io_tlb_mem *parent_mem = dev->dma_io_tlb_mem->parent;
+
+	swiotlb_free_block(parent_mem, mem->start, mem->nslabs / IO_TLB_BLOCKSIZE);
+}
+
+
+static struct page *swiotlb_alloc_block(struct io_tlb_mem *mem, unsigned int block_num)
+{
+	unsigned int block_index, nslot;
+	phys_addr_t tlb_addr;
+	unsigned long flags;
+	int i, j;
+
+	if (!mem || !mem->block)
+		return NULL;
+
+	spin_lock_irqsave(&mem->lock, flags);
+	block_index = mem->block_index;
+
+	/* Todo: Search more blocks. */
+	if (mem->block[block_index].list < block_num) {
+		spin_unlock_irqrestore(&mem->lock, flags);
+		return NULL;
+	}
+
+	/* Update block and slot list. */
+	for (i = block_index; i < block_index + block_num; i++) {
+		mem->block[i].list = 0;
+		mem->block[i].alloc_size = IO_TLB_BLOCKSIZE;
+		for (j = 0; j < IO_TLB_BLOCKSIZE; j++) {
+			nslot = i * IO_TLB_BLOCKSIZE + j;
+			mem->slots[nslot].list = 0;
+			mem->slots[nslot].alloc_size = IO_TLB_SIZE;
+		}
+	}
+
+	mem->index = nslot + 1;
+	mem->block_index += block_num;
+	mem->used += block_num * IO_TLB_BLOCKSIZE;
+	spin_unlock_irqrestore(&mem->lock, flags);
+
+	tlb_addr = slot_addr(mem->start, block_index * IO_TLB_BLOCKSIZE);
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+/*
+ * swiotlb_device_allocate - Allocate bounce buffer for a device from
+ * default io tlb pool. The allocation size should be aligned with
+ * IO_TLB_BLOCK_UNIT.
+ */
+int swiotlb_device_allocate(struct device *dev,
+			    unsigned int queue_num,
+			    unsigned long size)
+{
+	struct io_tlb_mem *mem, *parent_mem = dev->dma_io_tlb_mem;
+	unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_BLOCKSIZE);
+	struct page *page;
+	int ret = -ENOMEM;
+
+	page = swiotlb_alloc_block(parent_mem, nslabs / IO_TLB_BLOCKSIZE);
+	if (!page)
+		return -ENOMEM;
+
+	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+	if (!mem)
+		goto error_mem;
+
+	mem->slots = kzalloc(array_size(sizeof(*mem->slots), nslabs),
+			     GFP_KERNEL);
+	if (!mem->slots)
+		goto error_slots;
+
+	mem->block = kcalloc(nslabs / IO_TLB_BLOCKSIZE,
+				sizeof(struct io_tlb_block),
+				GFP_KERNEL);
+	if (!mem->block)
+		goto error_block;
+
+	mem->num_child = queue_num;
+	mem->child = kcalloc(queue_num,
+				sizeof(struct io_tlb_mem),
+				GFP_KERNEL);
+	if (!mem->child)
+		goto error_child;
+
+
+	swiotlb_init_io_tlb_mem(mem, page_to_phys(page), nslabs, true);
+	mem->force_bounce = true;
+	mem->for_alloc = true;
+
+	mem->vaddr = parent_mem->vaddr + page_to_phys(page) -  parent_mem->start;
+	dev->dma_io_tlb_mem->parent = parent_mem;
+	dev->dma_io_tlb_mem = mem;
+	return 0;
+
+error_child:
+	kfree(mem->block);
+error_block:
+	kfree(mem->slots);
+error_slots:
+	kfree(mem);
+error_mem:
+	swiotlb_free_block(mem, page_to_phys(page), nslabs / IO_TLB_BLOCKSIZE);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(swiotlb_device_allocate);
+
 #ifdef CONFIG_DMA_RESTRICTED_POOL
 
 struct page *swiotlb_alloc(struct device *dev, size_t size)
-- 
2.25.1



* Re: [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support
  2022-05-02 12:54 [RFC PATCH V2 0/2] swiotlb: Add child io tlb mem support Tianyu Lan
  2022-05-02 12:54 ` [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB " Tianyu Lan
  2022-05-02 12:54 ` [RFC PATCH V2 2/2] Swiotlb: Add device bounce buffer allocation interface Tianyu Lan
@ 2022-05-09 11:49 ` Tianyu Lan
  2 siblings, 0 replies; 7+ messages in thread
From: Tianyu Lan @ 2022-05-09 11:49 UTC (permalink / raw)
  To: hch, robin.murphy
  Cc: Tianyu Lan, iommu, linux-kernel, vkuznets, brijesh.singh,
	konrad.wilk, hch, wei.liu, parri.andrea, thomas.lendacky,
	linux-hyperv, andi.kleen, kirill.shutemov, m.szyprowski,
	michael.h.kelley, kys

On 5/2/2022 8:54 PM, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> 
> Traditionally swiotlb was not performance critical because it was only
> used for slow devices. But in some setups, like TDX/SEV confidential
> guests, all IO has to go through swiotlb. Currently swiotlb only has a
> single lock. Under high IO load with multiple CPUs this can lead to
> significant lock contention on the swiotlb lock.
> 
> This patch set adds child IO TLB mem support to reduce spinlock
> contention among a device's queues. Each device may allocate its own
> IO TLB mem and set up child IO TLB mems according to its queue count.
> The number of child IO TLB mems may be set equal to the device's queue
> number, which helps resolve swiotlb spinlock contention among devices
> and queues.
> 
> Patch 2 introduces the IO TLB block concept and the
> swiotlb_device_allocate() API to allocate a per-device swiotlb bounce
> buffer. The new API accepts a queue number, which is used as the number
> of child IO TLB mems when setting up the device's IO TLB mem.

Gentle ping...

Thanks.
> 
> Tianyu Lan (2):
>    swiotlb: Add Child IO TLB mem support
>    Swiotlb: Add device bounce buffer allocation interface
> 
>   include/linux/swiotlb.h |  40 ++++++
>   kernel/dma/swiotlb.c    | 290 ++++++++++++++++++++++++++++++++++++++--
>   2 files changed, 317 insertions(+), 13 deletions(-)
> 


* Re: [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB mem support
  2022-05-02 12:54 ` [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB " Tianyu Lan
@ 2022-05-16  7:34   ` Christoph Hellwig
  2022-05-16 13:08     ` Tianyu Lan
       [not found]     ` <PH0PR21MB30258D2B3B727A9BCEE039FCD7DD9@PH0PR21MB3025.namprd21.prod.outlook.com>
  0 siblings, 2 replies; 7+ messages in thread
From: Christoph Hellwig @ 2022-05-16  7:34 UTC (permalink / raw)
  To: Tianyu Lan
  Cc: hch, m.szyprowski, robin.murphy, michael.h.kelley, kys,
	Tianyu Lan, iommu, linux-kernel, vkuznets, brijesh.singh,
	konrad.wilk, hch, wei.liu, parri.andrea, thomas.lendacky,
	linux-hyperv, andi.kleen, kirill.shutemov

I don't really understand how 'childs' fit in here.  The code also
doesn't seem to be usable without patch 2 and a caller of the
new functions added in patch 2, so it is rather impossible to review.

Also:

 1) why is SEV/TDX so different from other cases that need bounce
    buffering to treat it different and we can't work on a general
    scalability improvement
 2) per previous discussions at how swiotlb itself works, it is
    clear that another option is to just make pages we DMA to
    shared with the hypervisor.  Why don't we try that at least
    for larger I/O?


* Re: [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB mem support
  2022-05-16  7:34   ` Christoph Hellwig
@ 2022-05-16 13:08     ` Tianyu Lan
       [not found]     ` <PH0PR21MB30258D2B3B727A9BCEE039FCD7DD9@PH0PR21MB3025.namprd21.prod.outlook.com>
  1 sibling, 0 replies; 7+ messages in thread
From: Tianyu Lan @ 2022-05-16 13:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: m.szyprowski, robin.murphy, michael.h.kelley, kys, Tianyu Lan,
	iommu, linux-kernel, vkuznets, brijesh.singh, konrad.wilk, hch,
	wei.liu, parri.andrea, thomas.lendacky, linux-hyperv, andi.kleen,
	kirill.shutemov

On 5/16/2022 3:34 PM, Christoph Hellwig wrote:
> I don't really understand how 'childs' fit in here.  The code also
> doesn't seem to be usable without patch 2 and a caller of the
> new functions added in patch 2, so it is rather impossible to review.

Hi Christoph:
      OK. I will merge the two patches and add a caller patch. The
motivation is to avoid the global spin lock when devices use swiotlb
bounce buffers, since it introduces overhead in high-throughput cases.
In my test environment, the current code achieves about 24Gb/s network
throughput with SWIOTLB forced on and about 40Gb/s without it. Storage
has the same issue.
      A per-device IO TLB mem may resolve the global spin lock
contention among devices, but a device may still have multiple queues
that all share one spin lock. This is why the previous patches
introduce child IO TLB mems: each device queue gets its own child IO
TLB mem with its own spin lock to manage its IO TLB buffers.
      Otherwise, the global spin lock still burns CPU time under high
throughput, on top of the performance regression, because each device
queue spins on a different CPU to acquire the global lock. Child IO TLB
mems also help resolve that CPU usage issue.

> 
> Also:
> 
>   1) why is SEV/TDX so different from other cases that need bounce
>      buffering to treat it different and we can't work on a general
>      scalability improvement

	Other cases also have the global spin lock issue, but it depends
	on whether they hit the bottleneck. The CPU usage issue may be
	ignored there.

>   2) per previous discussions at how swiotlb itself works, it is
>      clear that another option is to just make pages we DMA to
>      shared with the hypervisor.  Why don't we try that at least
>      for larger I/O?

	For confidential VMs (both TDX and SEV), we need to use a bounce
	buffer to copy between private memory, which the hypervisor can't
	access directly, and shared memory. For security reasons, a
	confidential VM should not share its IO stack's DMA pages with
	the hypervisor directly, to avoid attacks from the hypervisor
	while the IO stack handles the DMA data.


* Re: [RFC PATCH V2 1/2] swiotlb: Add Child IO TLB mem support
       [not found]     ` <PH0PR21MB30258D2B3B727A9BCEE039FCD7DD9@PH0PR21MB3025.namprd21.prod.outlook.com>
@ 2022-05-31  7:16       ` hch
  0 siblings, 0 replies; 7+ messages in thread
From: hch @ 2022-05-31  7:16 UTC (permalink / raw)
  To: Michael Kelley (LINUX)
  Cc: Christoph Hellwig, Tianyu Lan, robin.murphy, andi.kleen,
	m.szyprowski, KY Srinivasan, Tianyu Lan, iommu, linux-kernel,
	vkuznets, brijesh.singh, konrad.wilk, hch, wei.liu, parri.andrea,
	thomas.lendacky, linux-hyperv, kirill.shutemov

On Mon, May 30, 2022 at 01:52:37AM +0000, Michael Kelley (LINUX) wrote:
> B) The contents of the memory buffer must transition between
> encrypted and not encrypted.  The hardware doesn't provide
> any mechanism to do such a transition "in place".  The only
> way to transition is for the CPU to copy the contents between
> an encrypted area and an unencrypted area of memory.
> 
> Because of (B), we're stuck needing bounce buffers.  There's no
> way to avoid them with the current hardware.  Tianyu also pointed
> out not wanting to expose uninitialized guest memory to the host,
> so, for example, sharing a read buffer with the host requires that
> it first be initialized to zero.

Ok, B is a deal breaker.  I just brought this in because I've received
review comments that state bouncing is just the easiest option for
now and we could map it into the hypervisor in the future.  But at
least for SEV that does not seem like an option without hardware
changes.

> We should reset and make sure we agree on the top-level approach.
> 1) We want general scalability improvements to swiotlb.  These
>     improvements should scale to high CPUs counts (> 100) and for
>     multiple NUMA nodes.
> 2) Drivers should not require any special knowledge of swiotlb to
>     benefit from the improvements.  No new swiotlb APIs should be
>     need to be used by drivers -- the swiotlb scalability improvements
>     should be transparent.
> 3) The scalability improvements should not be based on device
>     boundaries since a single device may have multiple channels
>     doing bounce buffering on multiple CPUs in parallel.

Agreed to all counts.

> The patch from Andi Kleen [3] (not submitted upstream, but referenced
> by Tianyu as the basis for his patches) seems like a good starting point
> for meeting the top-level approach.

Yes, I think doing per-cpu and/or per-node scaling sounds like the
right general approach. Why was this never sent out?

> Andi and Robin had some
> back-and-forth about Andi's patch that I haven't delved into yet, but
> getting that worked out seems like a better overall approach.  I had
> an offline chat with Tianyu, and he would agree as well.

Where was this discussion?

