linux-scsi.vger.kernel.org archive mirror
* [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part)
@ 2021-11-23 14:30 Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM Tianyu Lan
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-Based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.

The memory of these VMs is encrypted and the host can't access guest
memory directly. Hyper-V provides a new host-visibility hvcall, and
the guest needs to call it to mark memory visible to the host before
sharing that memory with the host. For security, network/storage
stack memory should not be shared with the host, so bounce buffers
are required.

The VMBus channel ring buffer already plays the bounce buffer role,
because all data from/to the host has to be copied between the ring
buffer and IO stack memory. So mark the VMBus channel ring buffer visible.

For an SNP Isolation VM, the guest needs to access shared memory via
an extra address space, which is specified by the Hyper-V CPUID leaf
HYPERV_CPUID_ISOLATION_CONFIG. The access physical address of the
shared memory is the bounce buffer memory GPA plus the
shared_gpa_boundary reported by CPUID.

This patchset enables the swiotlb bounce buffer for netvsc/storvsc
in Isolation VMs. It adds Hyper-V dma ops and provides
dma_alloc/free_noncontiguous and vmap/vunmap_noncontiguous callbacks.
The rx/tx rings are allocated via dma_alloc_noncontiguous() and mapped
into the extra address space via dma_vmap_noncontiguous().

Changes since v1:
     * Add a Hyper-V Isolation support check in cc_platform_has()
       and return true for the guest memory encryption attribute.
     * Remove the hv isolation check in sev_setup_arch()

Tianyu Lan (6):
  Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM
  dma-mapping: Add vmap/vunmap_noncontiguous() callback in dma ops
  x86/hyper-v: Add hyperv Isolation VM check in the cc_platform_has()
  hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM
  net: netvsc: Add Isolation VM support for netvsc driver
  scsi: storvsc: Add Isolation VM support for storvsc driver

 arch/x86/kernel/cc_platform.c     |  15 +++
 arch/x86/mm/mem_encrypt.c         |   1 +
 arch/x86/xen/pci-swiotlb-xen.c    |   3 +-
 drivers/hv/Kconfig                |   1 +
 drivers/hv/vmbus_drv.c            |   6 +
 drivers/iommu/hyperv-iommu.c      | 164 +++++++++++++++++++++++++
 drivers/net/hyperv/hyperv_net.h   |   5 +
 drivers/net/hyperv/netvsc.c       | 192 +++++++++++++++++++++++++++---
 drivers/net/hyperv/rndis_filter.c |   2 +
 drivers/scsi/storvsc_drv.c        |  37 +++---
 include/linux/dma-map-ops.h       |   3 +
 include/linux/hyperv.h            |  17 +++
 include/linux/swiotlb.h           |   6 +
 kernel/dma/mapping.c              |  18 ++-
 kernel/dma/swiotlb.c              |  53 ++++++++-
 15 files changed, 482 insertions(+), 41 deletions(-)

-- 
2.25.1



* [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM
  2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
@ 2021-11-23 14:30 ` Tianyu Lan
  2021-11-23 17:15   ` Michael Kelley (LINUX)
  2021-11-23 14:30 ` [PATCH V2 2/6] dma-mapping: Add vmap/vunmap_noncontiguous() callback in dma ops Tianyu Lan
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed
via an extra address space above the shared_gpa_boundary (e.g. bit 39
of the address) reported by the Hyper-V CPUID leaf ISOLATION_CONFIG.
The access physical address is the original physical address plus the
shared_gpa_boundary. In the AMD SEV-SNP spec, the shared_gpa_boundary
is called the virtual top of memory (vTOM). Memory addresses below vTOM
are automatically treated as private while memory above vTOM is treated
as shared.

Expose swiotlb_unencrypted_base so a platform can set the unencrypted
memory base offset; the platform then calls swiotlb_update_mem_attributes()
to remap the swiotlb memory to the unencrypted address space. memremap()
cannot be called at the early stage, so the remapping code is placed in
swiotlb_update_mem_attributes(). Store the remap address and use it to
copy data from/to the swiotlb bounce buffer.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
Changes since v1:
	* Reworked the comment in swiotlb_init_io_tlb_mem()
	* Made swiotlb_init_io_tlb_mem() return void again.
---
 include/linux/swiotlb.h |  6 +++++
 kernel/dma/swiotlb.c    | 53 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 54 insertions(+), 5 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 569272871375..f6c3638255d5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,6 +73,9 @@ extern enum swiotlb_force swiotlb_force;
  * @end:	The end address of the swiotlb memory pool. Used to do a quick
  *		range check to see if the memory was in fact allocated by this
  *		API.
+ * @vaddr:	The vaddr of the swiotlb memory pool. The memory pool may be
+ *		remapped when memory encryption is in use, and this holds the
+ *		virtual address used for bounce buffer operations.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
  *		@end. For default swiotlb, this is command line adjustable via
  *		setup_io_tlb_npages.
@@ -92,6 +95,7 @@ extern enum swiotlb_force swiotlb_force;
 struct io_tlb_mem {
 	phys_addr_t start;
 	phys_addr_t end;
+	void *vaddr;
 	unsigned long nslabs;
 	unsigned long used;
 	unsigned int index;
@@ -186,4 +190,6 @@ static inline bool is_swiotlb_for_alloc(struct device *dev)
 }
 #endif /* CONFIG_DMA_RESTRICTED_POOL */
 
+extern phys_addr_t swiotlb_unencrypted_base;
+
 #endif /* __LINUX_SWIOTLB_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8e840fbbed7c..c303fdeba82f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -50,6 +50,7 @@
 #include <asm/io.h>
 #include <asm/dma.h>
 
+#include <linux/io.h>
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/iommu-helper.h>
@@ -72,6 +73,8 @@ enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem io_tlb_default_mem;
 
+phys_addr_t swiotlb_unencrypted_base;
+
 /*
  * Max segment that we can provide which (if pages are contingous) will
  * not be bounced (unless SWIOTLB_FORCE is set).
@@ -155,6 +158,31 @@ static inline unsigned long nr_slots(u64 val)
 	return DIV_ROUND_UP(val, IO_TLB_SIZE);
 }
 
+/*
+ * Remap swiotlb memory in the unencrypted physical address space
+ * when swiotlb_unencrypted_base is set. (e.g. for Hyper-V AMD SEV-SNP
+ * Isolation VMs).
+ */
+void *swiotlb_mem_remap(struct io_tlb_mem *mem, unsigned long bytes)
+{
+	void *vaddr;
+
+	if (swiotlb_unencrypted_base) {
+		phys_addr_t paddr = mem->start + swiotlb_unencrypted_base;
+
+		vaddr = memremap(paddr, bytes, MEMREMAP_WB);
+		if (!vaddr) {
+			pr_err("Failed to map the unencrypted memory %llx size %lx.\n",
+			       paddr, bytes);
+			return NULL;
+		}
+
+		return vaddr;
+	}
+
+	return phys_to_virt(mem->start);
+}
+
 /*
  * Early SWIOTLB allocation may be too early to allow an architecture to
  * perform the desired operations.  This function allows the architecture to
@@ -172,7 +200,14 @@ void __init swiotlb_update_mem_attributes(void)
 	vaddr = phys_to_virt(mem->start);
 	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
-	memset(vaddr, 0, bytes);
+
+	mem->vaddr = swiotlb_mem_remap(mem, bytes);
+	if (!mem->vaddr) {
+		pr_err("Failed to remap swiotlb mem.\n");
+		return;
+	}
+
+	memset(mem->vaddr, 0, bytes);
 }
 
 static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
@@ -196,7 +231,18 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
 		mem->slots[i].alloc_size = 0;
 	}
+
+	/*
+	 * If swiotlb_unencrypted_base is set, the bounce buffer memory will
+	 * be remapped and cleared in swiotlb_update_mem_attributes.
+	 */
+	if (swiotlb_unencrypted_base)
+		return;
+
+	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
 	memset(vaddr, 0, bytes);
+	mem->vaddr = vaddr;
+	return;
 }
 
 int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
@@ -318,7 +364,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem->slots)
 		return -ENOMEM;
 
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
 	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	swiotlb_print_info();
@@ -371,7 +416,7 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
 	unsigned long pfn = PFN_DOWN(orig_addr);
-	unsigned char *vaddr = phys_to_virt(tlb_addr);
+	unsigned char *vaddr = mem->vaddr + tlb_addr - mem->start;
 	unsigned int tlb_offset, orig_addr_offset;
 
 	if (orig_addr == INVALID_PHYS_ADDR)
@@ -806,8 +851,6 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 			return -ENOMEM;
 		}
 
-		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
-				     rmem->size >> PAGE_SHIFT);
 		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
 		mem->force_bounce = true;
 		mem->for_alloc = true;
-- 
2.25.1



* [PATCH V2 2/6] dma-mapping: Add vmap/vunmap_noncontiguous() callback in dma ops
  2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM Tianyu Lan
@ 2021-11-23 14:30 ` Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 3/6] x86/hyper-v: Add hyperv Isolation VM check in the cc_platform_has() Tianyu Lan
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

The Hyper-V netvsc driver needs to allocate noncontiguous DMA memory and
remap it into the unencrypted address space before sharing it with the
host. Add vmap/vunmap_noncontiguous() callbacks and handle the remap in
the Hyper-V dma ops callback.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 include/linux/dma-map-ops.h |  3 +++
 kernel/dma/mapping.c        | 18 ++++++++++++++----
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d5b06b3a4a6..f7b9958ca20a 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -27,6 +27,9 @@ struct dma_map_ops {
 			unsigned long attrs);
 	void (*free_noncontiguous)(struct device *dev, size_t size,
 			struct sg_table *sgt, enum dma_data_direction dir);
+	void *(*vmap_noncontiguous)(struct device *dev, size_t size,
+			struct sg_table *sgt);
+	void (*vunmap_noncontiguous)(struct device *dev, void *addr);
 	int (*mmap)(struct device *, struct vm_area_struct *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
 
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 9478eccd1c8e..7fd751d866cc 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -674,8 +674,14 @@ void *dma_vmap_noncontiguous(struct device *dev, size_t size,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 
-	if (ops && ops->alloc_noncontiguous)
-		return vmap(sgt_handle(sgt)->pages, count, VM_MAP, PAGE_KERNEL);
+	if (ops) {
+		if (ops->vmap_noncontiguous)
+			return ops->vmap_noncontiguous(dev, size, sgt);
+		else if (ops->alloc_noncontiguous)
+			return vmap(sgt_handle(sgt)->pages, count, VM_MAP,
+				    PAGE_KERNEL);
+	}
+
 	return page_address(sg_page(sgt->sgl));
 }
 EXPORT_SYMBOL_GPL(dma_vmap_noncontiguous);
@@ -684,8 +690,12 @@ void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
-	if (ops && ops->alloc_noncontiguous)
-		vunmap(vaddr);
+	if (ops) {
+		if (ops->vunmap_noncontiguous)
+			ops->vunmap_noncontiguous(dev, vaddr);
+		else if (ops->alloc_noncontiguous)
+			vunmap(vaddr);
+	}
 }
 EXPORT_SYMBOL_GPL(dma_vunmap_noncontiguous);
 
-- 
2.25.1



* [PATCH V2 3/6] x86/hyper-v: Add hyperv Isolation VM check in the cc_platform_has()
  2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 2/6] dma-mapping: Add vmap/vunmap_noncontiguous() callback in dma ops Tianyu Lan
@ 2021-11-23 14:30 ` Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 4/6] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM Tianyu Lan
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V provides Isolation VMs, which support memory encryption. Add
hyperv_cc_platform_has() and return true for checks of the
GUEST_MEM_ENCRYPT attribute.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/kernel/cc_platform.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
index 03bb2f343ddb..f3bb0431f5c5 100644
--- a/arch/x86/kernel/cc_platform.c
+++ b/arch/x86/kernel/cc_platform.c
@@ -11,6 +11,7 @@
 #include <linux/cc_platform.h>
 #include <linux/mem_encrypt.h>
 
+#include <asm/mshyperv.h>
 #include <asm/processor.h>
 
 static bool __maybe_unused intel_cc_platform_has(enum cc_attr attr)
@@ -58,9 +59,23 @@ static bool amd_cc_platform_has(enum cc_attr attr)
 #endif
 }
 
+static bool hyperv_cc_platform_has(enum cc_attr attr)
+{
+#ifdef CONFIG_HYPERV
+	if (attr == CC_ATTR_GUEST_MEM_ENCRYPT)
+		return true;
+	else
+		return false;
+#else
+	return false;
+#endif
+}
 
 bool cc_platform_has(enum cc_attr attr)
 {
+	if (hv_is_isolation_supported())
+		return hyperv_cc_platform_has(attr);
+
 	if (sme_me_mask)
 		return amd_cc_platform_has(attr);
 
-- 
2.25.1



* [PATCH V2 4/6] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM
  2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
                   ` (2 preceding siblings ...)
  2021-11-23 14:30 ` [PATCH V2 3/6] x86/hyper-v: Add hyperv Isolation VM check in the cc_platform_has() Tianyu Lan
@ 2021-11-23 14:30 ` Tianyu Lan
  2021-11-23 17:44   ` Michael Kelley (LINUX)
  2021-11-23 14:30 ` [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver Tianyu Lan
  2021-11-23 14:30 ` [PATCH V2 6/6] scsi: storvsc: Add Isolation VM support for storvsc driver Tianyu Lan
  5 siblings, 1 reply; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

A Hyper-V Isolation VM requires bounce buffer support to copy
data from/to encrypted memory, so enable swiotlb force mode
to use the swiotlb bounce buffer for DMA transactions.

In an Isolation VM with AMD SEV, the bounce buffer needs to be
accessed via an extra address space above the shared_gpa_boundary
(e.g. bit 39 of the address) reported by the Hyper-V CPUID leaf
ISOLATION_CONFIG. The access physical address is the original
physical address plus the shared_gpa_boundary. In the AMD SEV-SNP
spec, the shared_gpa_boundary is called the virtual top of
memory (vTOM). Memory addresses below vTOM are automatically
treated as private while memory above vTOM is treated as shared.

Hyper-V initializes its own swiotlb bounce buffer, so the default
swiotlb needs to be disabled. pci_swiotlb_detect_override() and
pci_swiotlb_detect_4gb() enable the default one. To override
that setting, hyperv_swiotlb_detect() needs to run before
these detect functions; make pci_xen_swiotlb_init() depend on
hyperv_swiotlb_detect() to keep the order.

The swiotlb bounce buffer code calls set_memory_decrypted()
to mark the bounce buffer visible to the host and maps it into
the extra address space via memremap(). Populate the
shared_gpa_boundary (vTOM) via the swiotlb_unencrypted_base variable.

memremap() can't work at the early stage where
hyperv_iommu_swiotlb_init() runs, so call
swiotlb_update_mem_attributes() in hyperv_iommu_swiotlb_later_init().

Add Hyper-V dma ops and provide alloc/free and vmap/vunmap noncontiguous
callbacks to handle requests to allocate and map noncontiguous dma
memory in vmbus device drivers. The netvsc driver will use this. Set the
dma_ops_bypass flag for hv devices so dma direct functions are used when
mapping/unmapping dma pages.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
Changes since v1:
	* Remove the hv isolation check in sev_setup_arch()

 arch/x86/mm/mem_encrypt.c      |   1 +
 arch/x86/xen/pci-swiotlb-xen.c |   3 +-
 drivers/hv/Kconfig             |   1 +
 drivers/hv/vmbus_drv.c         |   6 ++
 drivers/iommu/hyperv-iommu.c   | 164 +++++++++++++++++++++++++++++++++
 include/linux/hyperv.h         |  10 ++
 6 files changed, 184 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 35487305d8af..e48c73b3dd41 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -31,6 +31,7 @@
 #include <asm/processor-flags.h>
 #include <asm/msr.h>
 #include <asm/cmdline.h>
+#include <asm/mshyperv.h>
 
 #include "mm_internal.h"
 
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 46df59aeaa06..30fd0600b008 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -4,6 +4,7 @@
 
 #include <linux/dma-map-ops.h>
 #include <linux/pci.h>
+#include <linux/hyperv.h>
 #include <xen/swiotlb-xen.h>
 
 #include <asm/xen/hypervisor.h>
@@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
 EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
 
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
-		  NULL,
+		  hyperv_swiotlb_detect,
 		  pci_xen_swiotlb_init,
 		  NULL);
diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index dd12af20e467..d43b4cd88f57 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -9,6 +9,7 @@ config HYPERV
 	select PARAVIRT
 	select X86_HV_CALLBACK_VECTOR if X86
 	select VMAP_PFN
+	select DMA_OPS_BYPASS
 	help
 	  Select this option to run Linux as a Hyper-V client operating
 	  system.
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 392c1ac4f819..32dc193e31cd 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -33,6 +33,7 @@
 #include <linux/random.h>
 #include <linux/kernel.h>
 #include <linux/syscore_ops.h>
+#include <linux/dma-map-ops.h>
 #include <clocksource/hyperv_timer.h>
 #include "hyperv_vmbus.h"
 
@@ -2078,6 +2079,7 @@ struct hv_device *vmbus_device_create(const guid_t *type,
 	return child_device_obj;
 }
 
+static u64 vmbus_dma_mask = DMA_BIT_MASK(64);
 /*
  * vmbus_device_register - Register the child device
  */
@@ -2118,6 +2120,10 @@ int vmbus_device_register(struct hv_device *child_device_obj)
 	}
 	hv_debug_add_dev_dir(child_device_obj);
 
+	child_device_obj->device.dma_ops_bypass = true;
+	child_device_obj->device.dma_ops = &hyperv_iommu_dma_ops;
+	child_device_obj->device.dma_mask = &vmbus_dma_mask;
+	child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
 	return 0;
 
 err_kset_unregister:
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
index e285a220c913..ebcb628e7e8f 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -13,14 +13,21 @@
 #include <linux/irq.h>
 #include <linux/iommu.h>
 #include <linux/module.h>
+#include <linux/hyperv.h>
+#include <linux/io.h>
 
 #include <asm/apic.h>
 #include <asm/cpu.h>
 #include <asm/hw_irq.h>
 #include <asm/io_apic.h>
+#include <asm/iommu.h>
+#include <asm/iommu_table.h>
 #include <asm/irq_remapping.h>
 #include <asm/hypervisor.h>
 #include <asm/mshyperv.h>
+#include <asm/swiotlb.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-direct.h>
 
 #include "irq_remapping.h"
 
@@ -337,4 +344,161 @@ static const struct irq_domain_ops hyperv_root_ir_domain_ops = {
 	.free = hyperv_root_irq_remapping_free,
 };
 
+static void __init hyperv_iommu_swiotlb_init(void)
+{
+	unsigned long hyperv_io_tlb_size;
+	void *hyperv_io_tlb_start;
+
+	/*
+	 * Allocate the Hyper-V swiotlb bounce buffer early so that a
+	 * large contiguous memory region can be reserved.
+	 */
+	hyperv_io_tlb_size = swiotlb_size_or_default();
+	hyperv_io_tlb_start = memblock_alloc(hyperv_io_tlb_size, PAGE_SIZE);
+
+	if (!hyperv_io_tlb_start)
+		pr_warn("Failed to allocate Hyper-V swiotlb buffer.\n");
+
+	swiotlb_init_with_tbl(hyperv_io_tlb_start,
+			      hyperv_io_tlb_size >> IO_TLB_SHIFT, true);
+}
+
+int __init hyperv_swiotlb_detect(void)
+{
+	if (!hypervisor_is_type(X86_HYPER_MS_HYPERV))
+		return 0;
+
+	if (!hv_is_isolation_supported())
+		return 0;
+
+	/*
+	 * Enable swiotlb force mode in Isolation VM to
+	 * use swiotlb bounce buffer for dma transaction.
+	 */
+	if (hv_isolation_type_snp())
+		swiotlb_unencrypted_base = ms_hyperv.shared_gpa_boundary;
+	swiotlb_force = SWIOTLB_FORCE;
+	return 1;
+}
+
+static void __init hyperv_iommu_swiotlb_later_init(void)
+{
+	/*
+	 * Swiotlb bounce buffer needs to be mapped in extra address
+	 * space. memremap() does not work during early boot, so call
+	 * swiotlb_update_mem_attributes() here.
+	 */
+	swiotlb_update_mem_attributes();
+}
+
+IOMMU_INIT_FINISH(hyperv_swiotlb_detect,
+		  NULL, hyperv_iommu_swiotlb_init,
+		  hyperv_iommu_swiotlb_later_init);
+
+static struct sg_table *hyperv_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, enum dma_data_direction dir, gfp_t gfp,
+		unsigned long attrs)
+{
+	struct dma_sgt_handle *sh;
+	struct page **pages;
+	int num_pages = size >> PAGE_SHIFT;
+	void *vaddr, *ptr;
+	int rc, i;
+
+	if (!hv_isolation_type_snp())
+		return NULL;
+
+	sh = kmalloc(sizeof(*sh), gfp);
+	if (!sh)
+		return NULL;
+
+	vaddr = vmalloc(size);
+	if (!vaddr)
+		goto free_sgt;
+
+	pages = kvmalloc_array(num_pages, sizeof(struct page *),
+				    GFP_KERNEL | __GFP_ZERO);
+	if (!pages)
+		goto free_mem;
+
+	for (i = 0, ptr = vaddr; i < num_pages; ++i, ptr += PAGE_SIZE)
+		pages[i] = vmalloc_to_page(ptr);
+
+	rc = sg_alloc_table_from_pages(&sh->sgt, pages, num_pages, 0, size, GFP_KERNEL);
+	if (rc)
+		goto free_pages;
+
+	sh->sgt.sgl->dma_address = (dma_addr_t)vaddr;
+	sh->sgt.sgl->dma_length = size;
+	sh->pages = pages;
+
+	return &sh->sgt;
+
+free_pages:
+	kvfree(pages);
+free_mem:
+	vfree(vaddr);
+free_sgt:
+	kfree(sh);
+	return NULL;
+}
+
+static void hyperv_dma_free_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+	struct dma_sgt_handle *sh = sgt_handle(sgt);
+
+	if (!hv_isolation_type_snp())
+		return;
+
+	vfree((void *)sh->sgt.sgl->dma_address);
+	sg_free_table(&sh->sgt);
+	kvfree(sh->pages);
+	kfree(sh);
+}
+
+static void *hyperv_dma_vmap_noncontiguous(struct device *dev, size_t size,
+			struct sg_table *sgt)
+{
+	int pg_count = size >> PAGE_SHIFT;
+	unsigned long *pfns;
+	struct page **pages = sgt_handle(sgt)->pages;
+	void *vaddr = NULL;
+	int i;
+
+	if (!hv_isolation_type_snp())
+		return NULL;
+
+	if (!pages)
+		return NULL;
+
+	pfns = kcalloc(pg_count, sizeof(*pfns), GFP_KERNEL);
+	if (!pfns)
+		return NULL;
+
+	for (i = 0; i < pg_count; i++)
+		pfns[i] = page_to_pfn(pages[i]) +
+			(ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);
+
+	vaddr = vmap_pfn(pfns, pg_count, PAGE_KERNEL);
+	kfree(pfns);
+	return vaddr;
+
+}
+
+static void hyperv_dma_vunmap_noncontiguous(struct device *dev, void *addr)
+{
+	if (!hv_isolation_type_snp())
+		return;
+	vunmap(addr);
+}
+
+const struct dma_map_ops hyperv_iommu_dma_ops = {
+		.alloc_noncontiguous = hyperv_dma_alloc_noncontiguous,
+		.free_noncontiguous = hyperv_dma_free_noncontiguous,
+		.vmap_noncontiguous = hyperv_dma_vmap_noncontiguous,
+		.vunmap_noncontiguous = hyperv_dma_vunmap_noncontiguous,
+};
+EXPORT_SYMBOL_GPL(hyperv_iommu_dma_ops);
+
 #endif
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index b823311eac79..4d44fb3b3f1c 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1726,6 +1726,16 @@ int hyperv_write_cfg_blk(struct pci_dev *dev, void *buf, unsigned int len,
 int hyperv_reg_block_invalidate(struct pci_dev *dev, void *context,
 				void (*block_invalidate)(void *context,
 							 u64 block_mask));
+#ifdef CONFIG_HYPERV
+int __init hyperv_swiotlb_detect(void);
+#else
+static inline int __init hyperv_swiotlb_detect(void)
+{
+	return 0;
+}
+#endif
+
+extern const struct dma_map_ops hyperv_iommu_dma_ops;
 
 struct hyperv_pci_block_ops {
 	int (*read_block)(struct pci_dev *dev, void *buf, unsigned int buf_len,
-- 
2.25.1



* [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver
  2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
                   ` (3 preceding siblings ...)
  2021-11-23 14:30 ` [PATCH V2 4/6] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM Tianyu Lan
@ 2021-11-23 14:30 ` Tianyu Lan
  2021-11-23 17:55   ` Michael Kelley (LINUX)
  2021-11-24 17:03   ` Michael Kelley (LINUX)
  2021-11-23 14:30 ` [PATCH V2 6/6] scsi: storvsc: Add Isolation VM support for storvsc driver Tianyu Lan
  5 siblings, 2 replies; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via hvcall. vmbus_establish_gpadl() has already done
this for the netvsc rx/tx ring buffers, but the page buffers used by
vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
to map/unmap this memory when sending/receiving packets; the Hyper-V
swiotlb bounce buffer dma address will be returned. The swiotlb bounce
buffer has already been marked visible to the host during boot.

Allocate the rx/tx ring buffers via dma_alloc_noncontiguous() in
Isolation VMs. After calling vmbus_establish_gpadl(), which marks these
pages visible to the host, map them into the unencrypted address space
via dma_vmap_noncontiguous().

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/net/hyperv/hyperv_net.h   |   5 +
 drivers/net/hyperv/netvsc.c       | 192 +++++++++++++++++++++++++++---
 drivers/net/hyperv/rndis_filter.c |   2 +
 include/linux/hyperv.h            |   6 +
 4 files changed, 190 insertions(+), 15 deletions(-)

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index 315278a7cf88..31c77a00d01e 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -164,6 +164,7 @@ struct hv_netvsc_packet {
 	u32 total_bytes;
 	u32 send_buf_index;
 	u32 total_data_buflen;
+	struct hv_dma_range *dma_range;
 };
 
 #define NETVSC_HASH_KEYLEN 40
@@ -1074,6 +1075,7 @@ struct netvsc_device {
 
 	/* Receive buffer allocated by us but manages by NetVSP */
 	void *recv_buf;
+	struct sg_table *recv_sgt;
 	u32 recv_buf_size; /* allocated bytes */
 	struct vmbus_gpadl recv_buf_gpadl_handle;
 	u32 recv_section_cnt;
@@ -1082,6 +1084,7 @@ struct netvsc_device {
 
 	/* Send buffer allocated by us */
 	void *send_buf;
+	struct sg_table *send_sgt;
 	u32 send_buf_size;
 	struct vmbus_gpadl send_buf_gpadl_handle;
 	u32 send_section_cnt;
@@ -1731,4 +1734,6 @@ struct rndis_message {
 #define RETRY_US_HI	10000
 #define RETRY_MAX	2000	/* >10 sec */
 
+void netvsc_dma_unmap(struct hv_device *hv_dev,
+		      struct hv_netvsc_packet *packet);
 #endif /* _HYPERV_NET_H */
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 396bc1c204e6..9cdc71930830 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -20,6 +20,7 @@
 #include <linux/vmalloc.h>
 #include <linux/rtnetlink.h>
 #include <linux/prefetch.h>
+#include <linux/gfp.h>
 
 #include <asm/sync_bitops.h>
 #include <asm/mshyperv.h>
@@ -146,15 +147,39 @@ static struct netvsc_device *alloc_net_device(void)
 	return net_device;
 }
 
+static struct hv_device *netvsc_channel_to_device(struct vmbus_channel *channel)
+{
+	struct vmbus_channel *primary = channel->primary_channel;
+
+	return primary ? primary->device_obj : channel->device_obj;
+}
+
 static void free_netvsc_device(struct rcu_head *head)
 {
 	struct netvsc_device *nvdev
 		= container_of(head, struct netvsc_device, rcu);
+	struct hv_device *dev =
+		netvsc_channel_to_device(nvdev->chan_table[0].channel);
 	int i;
 
 	kfree(nvdev->extension);
-	vfree(nvdev->recv_buf);
-	vfree(nvdev->send_buf);
+
+	if (nvdev->recv_sgt) {
+		dma_vunmap_noncontiguous(&dev->device, nvdev->recv_buf);
+		dma_free_noncontiguous(&dev->device, nvdev->recv_buf_size,
+				       nvdev->recv_sgt, DMA_FROM_DEVICE);
+	} else {
+		vfree(nvdev->recv_buf);
+	}
+
+	if (nvdev->send_sgt) {
+		dma_vunmap_noncontiguous(&dev->device, nvdev->send_buf);
+		dma_free_noncontiguous(&dev->device, nvdev->send_buf_size,
+				       nvdev->send_sgt, DMA_TO_DEVICE);
+	} else {
+		vfree(nvdev->send_buf);
+	}
+
 	kfree(nvdev->send_section_map);
 
 	for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
@@ -348,7 +373,21 @@ static int netvsc_init_buf(struct hv_device *device,
 		buf_size = min_t(unsigned int, buf_size,
 				 NETVSC_RECEIVE_BUFFER_SIZE_LEGACY);
 
-	net_device->recv_buf = vzalloc(buf_size);
+	if (hv_isolation_type_snp()) {
+		net_device->recv_sgt =
+			dma_alloc_noncontiguous(&device->device, buf_size,
+						DMA_FROM_DEVICE, GFP_KERNEL, 0);
+		if (!net_device->recv_sgt) {
+			pr_err("Failed to allocate recv buffer, buf_size %u.\n", buf_size);
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		net_device->recv_buf = (void *)net_device->recv_sgt->sgl->dma_address;
+	} else {
+		net_device->recv_buf = vzalloc(buf_size);
+	}
+
 	if (!net_device->recv_buf) {
 		netdev_err(ndev,
 			   "unable to allocate receive buffer of size %u\n",
@@ -357,8 +396,6 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
-	net_device->recv_buf_size = buf_size;
-
 	/*
 	 * Establish the gpadl handle for this buffer on this
 	 * channel.  Note: This call uses the vmbus connection rather
@@ -373,6 +410,19 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
+	if (net_device->recv_sgt) {
+		net_device->recv_buf =
+			dma_vmap_noncontiguous(&device->device, buf_size,
+					       net_device->recv_sgt);
+		if (!net_device->recv_buf) {
+			pr_err("Failed to vmap recv buffer.\n");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+	}
+
+	net_device->recv_buf_size = buf_size;
+
 	/* Notify the NetVsp of the gpadl handle */
 	init_packet = &net_device->channel_init_pkt;
 	memset(init_packet, 0, sizeof(struct nvsp_message));
@@ -454,14 +504,27 @@ static int netvsc_init_buf(struct hv_device *device,
 	buf_size = device_info->send_sections * device_info->send_section_size;
 	buf_size = round_up(buf_size, PAGE_SIZE);
 
-	net_device->send_buf = vzalloc(buf_size);
+	if (hv_isolation_type_snp()) {
+		net_device->send_sgt =
+			dma_alloc_noncontiguous(&device->device, buf_size,
+						DMA_TO_DEVICE, GFP_KERNEL, 0);
+		if (!net_device->send_sgt) {
+			pr_err("Failed to allocate send buffer, buf_size %u.\n", buf_size);
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+
+		net_device->send_buf = (void *)net_device->send_sgt->sgl->dma_address;
+	} else {
+		net_device->send_buf = vzalloc(buf_size);
+	}
+
 	if (!net_device->send_buf) {
 		netdev_err(ndev, "unable to allocate send buffer of size %u\n",
 			   buf_size);
 		ret = -ENOMEM;
 		goto cleanup;
 	}
-	net_device->send_buf_size = buf_size;
 
 	/* Establish the gpadl handle for this buffer on this
 	 * channel.  Note: This call uses the vmbus connection rather
@@ -476,6 +539,19 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
+	if (net_device->send_sgt) {
+		net_device->send_buf =
+			dma_vmap_noncontiguous(&device->device, buf_size,
+					       net_device->send_sgt);
+		if (!net_device->send_buf) {
+			pr_err("Failed to vmap send buffer.\n");
+			ret = -ENOMEM;
+			goto cleanup;
+		}
+	}
+
+	net_device->send_buf_size = buf_size;
+
 	/* Notify the NetVsp of the gpadl handle */
 	init_packet = &net_device->channel_init_pkt;
 	memset(init_packet, 0, sizeof(struct nvsp_message));
@@ -766,7 +842,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
 
 	/* Notify the layer above us */
 	if (likely(skb)) {
-		const struct hv_netvsc_packet *packet
+		struct hv_netvsc_packet *packet
 			= (struct hv_netvsc_packet *)skb->cb;
 		u32 send_index = packet->send_buf_index;
 		struct netvsc_stats *tx_stats;
@@ -782,6 +858,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
 		tx_stats->bytes += packet->total_bytes;
 		u64_stats_update_end(&tx_stats->syncp);
 
+		netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 		napi_consume_skb(skb, budget);
 	}
 
@@ -946,6 +1023,87 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
 		memset(dest, 0, padding);
 }
 
+void netvsc_dma_unmap(struct hv_device *hv_dev,
+		      struct hv_netvsc_packet *packet)
+{
+	u32 page_count = packet->cp_partial ?
+		packet->page_buf_cnt - packet->rmsg_pgcnt :
+		packet->page_buf_cnt;
+	int i;
+
+	if (!hv_is_isolation_supported())
+		return;
+
+	if (!packet->dma_range)
+		return;
+
+	for (i = 0; i < page_count; i++)
+		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
+				 packet->dma_range[i].mapping_size,
+				 DMA_TO_DEVICE);
+
+	kfree(packet->dma_range);
+}
+
+/* netvsc_dma_map - Map the data pages of a packet sent by
+ * vmbus_sendpacket_pagebuffer() into the swiotlb bounce buffer
+ * in an Isolation VM.
+ *
+ * In an Isolation VM, the netvsc send buffer has already been
+ * marked visible to the host, so data copied into the send buffer
+ * does not need a bounce buffer. But the data pages handled by
+ * vmbus_sendpacket_pagebuffer() may not have been copied into the
+ * send buffer, so those pages must be mapped through the swiotlb
+ * bounce buffer; netvsc_dma_map() does that. The pfns in the
+ * struct hv_page_buffer entries are converted to bounce buffer
+ * pfns. The loop here is necessary because the entries in the
+ * page buffer array are not necessarily full pages of data. Each
+ * entry in the array has a separate offset and len that may be
+ * non-zero, even for entries in the middle of the array. And the
+ * entries are not physically contiguous. So each entry must be
+ * mapped individually rather than as one contiguous unit, which
+ * is why dma_map_sg() is not used here.
+ */
+static int netvsc_dma_map(struct hv_device *hv_dev,
+			  struct hv_netvsc_packet *packet,
+			  struct hv_page_buffer *pb)
+{
+	u32 page_count = packet->cp_partial ?
+		packet->page_buf_cnt - packet->rmsg_pgcnt :
+		packet->page_buf_cnt;
+	dma_addr_t dma;
+	int i;
+
+	if (!hv_is_isolation_supported())
+		return 0;
+
+	packet->dma_range = kcalloc(page_count,
+				    sizeof(*packet->dma_range),
+				    GFP_KERNEL);
+	if (!packet->dma_range)
+		return -ENOMEM;
+
+	for (i = 0; i < page_count; i++) {
+		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
+					 + pb[i].offset);
+		u32 len = pb[i].len;
+
+		dma = dma_map_single(&hv_dev->device, src, len,
+				     DMA_TO_DEVICE);
+		if (dma_mapping_error(&hv_dev->device, dma)) {
+			kfree(packet->dma_range);
+			return -ENOMEM;
+		}
+
+		packet->dma_range[i].dma = dma;
+		packet->dma_range[i].mapping_size = len;
+		pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
+		pb[i].offset = offset_in_hvpage(dma);
+		pb[i].len = len;
+	}
+
+	return 0;
+}
+
 static inline int netvsc_send_pkt(
 	struct hv_device *device,
 	struct hv_netvsc_packet *packet,
@@ -986,14 +1144,24 @@ static inline int netvsc_send_pkt(
 
 	trace_nvsp_send_pkt(ndev, out_channel, rpkt);
 
+	packet->dma_range = NULL;
 	if (packet->page_buf_cnt) {
 		if (packet->cp_partial)
 			pb += packet->rmsg_pgcnt;
 
+		ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
+		if (ret) {
+			ret = -EAGAIN;
+			goto exit;
+		}
+
 		ret = vmbus_sendpacket_pagebuffer(out_channel,
 						  pb, packet->page_buf_cnt,
 						  &nvmsg, sizeof(nvmsg),
 						  req_id);
+
+		if (ret)
+			netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 	} else {
 		ret = vmbus_sendpacket(out_channel,
 				       &nvmsg, sizeof(nvmsg),
@@ -1001,6 +1169,7 @@ static inline int netvsc_send_pkt(
 				       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
 	}
 
+exit:
 	if (ret == 0) {
 		atomic_inc_return(&nvchan->queue_sends);
 
@@ -1515,13 +1684,6 @@ static int netvsc_process_raw_pkt(struct hv_device *device,
 	return 0;
 }
 
-static struct hv_device *netvsc_channel_to_device(struct vmbus_channel *channel)
-{
-	struct vmbus_channel *primary = channel->primary_channel;
-
-	return primary ? primary->device_obj : channel->device_obj;
-}
-
 /* Network processing softirq
  * Process data in incoming ring buffer from host
  * Stops when ring is empty or budget is met or exceeded.
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index f6c9c2a670f9..448fcc325ed7 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -361,6 +361,8 @@ static void rndis_filter_receive_response(struct net_device *ndev,
 			}
 		}
 
+		netvsc_dma_unmap(((struct net_device_context *)
+			netdev_priv(ndev))->device_ctx, &request->pkt);
 		complete(&request->wait_event);
 	} else {
 		netdev_err(ndev,
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 4d44fb3b3f1c..8882e46d1070 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -25,6 +25,7 @@
 #include <linux/interrupt.h>
 #include <linux/reciprocal_div.h>
 #include <asm/hyperv-tlfs.h>
+#include <linux/dma-map-ops.h>
 
 #define MAX_PAGE_BUFFER_COUNT				32
 #define MAX_MULTIPAGE_BUFFER_COUNT			32 /* 128K */
@@ -1583,6 +1584,11 @@ struct hyperv_service_callback {
 	void (*callback)(void *context);
 };
 
+struct hv_dma_range {
+	dma_addr_t dma;
+	u32 mapping_size;
+};
+
 #define MAX_SRV_VER	0x7ffffff
 extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, u8 *buf, u32 buflen,
 				const int *fw_version, int fw_vercnt,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH V2 6/6] scsi: storvsc: Add Isolation VM support for storvsc driver
  2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
                   ` (4 preceding siblings ...)
  2021-11-23 14:30 ` [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver Tianyu Lan
@ 2021-11-23 14:30 ` Tianyu Lan
  5 siblings, 0 replies; 13+ messages in thread
From: Tianyu Lan @ 2021-11-23 14:30 UTC (permalink / raw)
  To: tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, kys, haiyangz, sthemmin, wei.liu,
	decui, joro, will, davem, kuba, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy, Tianyu.Lan, thomas.lendacky,
	xen-devel, michael.h.kelley
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In an Isolation VM, all memory shared with the host must be marked
visible to the host via a hypercall. vmbus_establish_gpadl() already
does this for the storvsc rx/tx ring buffers. The page buffers used by
vmbus_sendpacket_mpb_desc() still need to be handled. Use the DMA API
(scsi_dma_map/unmap) to map this memory when sending/receiving packets;
it returns the swiotlb bounce buffer dma address. In an Isolation VM,
the swiotlb bounce buffer is marked visible to the host and swiotlb
force mode is enabled.

Set the device's dma min align mask to HV_HYP_PAGE_SIZE - 1 so that
the original data offset is preserved in the bounce buffer.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/scsi/storvsc_drv.c | 37 +++++++++++++++++++++----------------
 include/linux/hyperv.h     |  1 +
 2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 20595c0ba0ae..ae293600d799 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -21,6 +21,8 @@
 #include <linux/device.h>
 #include <linux/hyperv.h>
 #include <linux/blkdev.h>
+#include <linux/dma-mapping.h>
+
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_host.h>
@@ -1336,6 +1338,7 @@ static void storvsc_on_channel_callback(void *context)
 					continue;
 				}
 				request = (struct storvsc_cmd_request *)scsi_cmd_priv(scmnd);
+				scsi_dma_unmap(scmnd);
 			}
 
 			storvsc_on_receive(stor_device, packet, request);
@@ -1749,7 +1752,6 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	struct hv_host_device *host_dev = shost_priv(host);
 	struct hv_device *dev = host_dev->dev;
 	struct storvsc_cmd_request *cmd_request = scsi_cmd_priv(scmnd);
-	int i;
 	struct scatterlist *sgl;
 	unsigned int sg_count;
 	struct vmscsi_request *vm_srb;
@@ -1831,10 +1833,11 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	payload_sz = sizeof(cmd_request->mpb);
 
 	if (sg_count) {
-		unsigned int hvpgoff, hvpfns_to_add;
 		unsigned long offset_in_hvpg = offset_in_hvpage(sgl->offset);
 		unsigned int hvpg_count = HVPFN_UP(offset_in_hvpg + length);
-		u64 hvpfn;
+		struct scatterlist *sg;
+		unsigned long hvpfn, hvpfns_to_add;
+		int j, i = 0;
 
 		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
 
@@ -1848,21 +1851,22 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 		payload->range.len = length;
 		payload->range.offset = offset_in_hvpg;
 
+		sg_count = scsi_dma_map(scmnd);
+		if (sg_count < 0)
+			return SCSI_MLQUEUE_DEVICE_BUSY;
 
-		for (i = 0; sgl != NULL; sgl = sg_next(sgl)) {
+		for_each_sg(sgl, sg, sg_count, j) {
 			/*
-			 * Init values for the current sgl entry. hvpgoff
-			 * and hvpfns_to_add are in units of Hyper-V size
-			 * pages. Handling the PAGE_SIZE != HV_HYP_PAGE_SIZE
-			 * case also handles values of sgl->offset that are
-			 * larger than PAGE_SIZE. Such offsets are handled
-			 * even on other than the first sgl entry, provided
-			 * they are a multiple of PAGE_SIZE.
+			 * Init values for the current sgl entry. hvpfns_to_add
+			 * is in units of Hyper-V size pages. Handling the
+			 * PAGE_SIZE != HV_HYP_PAGE_SIZE case also handles
+			 * values of sgl->offset that are larger than PAGE_SIZE.
+			 * Such offsets are handled even on other than the first
+			 * sgl entry, provided they are a multiple of PAGE_SIZE.
 			 */
-			hvpgoff = HVPFN_DOWN(sgl->offset);
-			hvpfn = page_to_hvpfn(sg_page(sgl)) + hvpgoff;
-			hvpfns_to_add =	HVPFN_UP(sgl->offset + sgl->length) -
-						hvpgoff;
+			hvpfn = HVPFN_DOWN(sg_dma_address(sg));
+			hvpfns_to_add = HVPFN_UP(sg_dma_address(sg) +
+						 sg_dma_len(sg)) - hvpfn;
 
 			/*
 			 * Fill the next portion of the PFN array with
@@ -1872,7 +1876,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 			 * the PFN array is filled.
 			 */
 			while (hvpfns_to_add--)
-				payload->range.pfn_array[i++] =	hvpfn++;
+				payload->range.pfn_array[i++] = hvpfn++;
 		}
 	}
 
@@ -2016,6 +2020,7 @@ static int storvsc_probe(struct hv_device *device,
 	stor_device->vmscsi_size_delta = sizeof(struct vmscsi_win8_extension);
 	spin_lock_init(&stor_device->lock);
 	hv_set_drvdata(device, stor_device);
+	dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1);
 
 	stor_device->port_number = host->host_no;
 	ret = storvsc_connect_to_vsp(device, storvsc_ringbuffer_size, is_fc);
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 8882e46d1070..2840e51ee5c5 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1262,6 +1262,7 @@ struct hv_device {
 
 	struct vmbus_channel *channel;
 	struct kset	     *channels_kset;
+	struct device_dma_parameters dma_parms;
 
 	/* place holder to keep track of the dir for hv device in debugfs */
 	struct dentry *debug_dir;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* RE: [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM
  2021-11-23 14:30 ` [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM Tianyu Lan
@ 2021-11-23 17:15   ` Michael Kelley (LINUX)
  2021-11-24 14:07     ` Tianyu Lan
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Kelley (LINUX) @ 2021-11-23 17:15 UTC (permalink / raw)
  To: Tianyu Lan, tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz,
	jgross, sstabellini, boris.ostrovsky, KY Srinivasan,
	Haiyang Zhang, Stephen Hemminger, wei.liu, Dexuan Cui, joro,
	will, davem, kuba, jejb, martin.petersen, hch, m.szyprowski,
	robin.murphy, Tianyu Lan, thomas.lendacky, xen-devel
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <ltykernel@gmail.com> Sent: Tuesday, November 23, 2021 6:31 AM
> 
> In an Isolation VM with AMD SEV, the bounce buffer needs to be
> accessed via an extra address space above shared_gpa_boundary
> (e.g. the 39-bit address line) reported by the Hyper-V
> ISOLATION_CONFIG CPUID leaf. The physical address used for access is
> the original physical address plus shared_gpa_boundary. In the AMD
> SEV-SNP spec, shared_gpa_boundary is called the virtual top of
> memory (vTOM). Memory addresses below vTOM are automatically treated
> as private, while memory above vTOM is treated as shared.
> 
> Expose swiotlb_unencrypted_base so that platforms can set the
> unencrypted memory base offset; the platform then calls
> swiotlb_update_mem_attributes() to remap the swiotlb memory into the
> unencrypted address space. memremap() cannot be called in the early
> boot stage, so the remapping code is placed in
> swiotlb_update_mem_attributes(). Store the remapped address and use
> it to copy data from/to the swiotlb bounce buffer.
> 
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
> Change since v1:
> 	* Rework comment in the swiotlb_init_io_tlb_mem()
> 	* Make swiotlb_init_io_tlb_mem() back to return void.
> ---
>  include/linux/swiotlb.h |  6 +++++
>  kernel/dma/swiotlb.c    | 53 +++++++++++++++++++++++++++++++++++++----
>  2 files changed, 54 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 569272871375..f6c3638255d5 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -73,6 +73,9 @@ extern enum swiotlb_force swiotlb_force;
>   * @end:	The end address of the swiotlb memory pool. Used to do a quick
>   *		range check to see if the memory was in fact allocated by this
>   *		API.
> + * @vaddr:	The vaddr of the swiotlb memory pool. The swiotlb memory pool
> + *		may be remapped in the memory encrypted case and store virtual
> + *		address for bounce buffer operation.
>   * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
>   *		@end. For default swiotlb, this is command line adjustable via
>   *		setup_io_tlb_npages.
> @@ -92,6 +95,7 @@ extern enum swiotlb_force swiotlb_force;
>  struct io_tlb_mem {
>  	phys_addr_t start;
>  	phys_addr_t end;
> +	void *vaddr;
>  	unsigned long nslabs;
>  	unsigned long used;
>  	unsigned int index;
> @@ -186,4 +190,6 @@ static inline bool is_swiotlb_for_alloc(struct device *dev)
>  }
>  #endif /* CONFIG_DMA_RESTRICTED_POOL */
> 
> +extern phys_addr_t swiotlb_unencrypted_base;
> +
>  #endif /* __LINUX_SWIOTLB_H */
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 8e840fbbed7c..c303fdeba82f 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -50,6 +50,7 @@
>  #include <asm/io.h>
>  #include <asm/dma.h>
> 
> +#include <linux/io.h>
>  #include <linux/init.h>
>  #include <linux/memblock.h>
>  #include <linux/iommu-helper.h>
> @@ -72,6 +73,8 @@ enum swiotlb_force swiotlb_force;
> 
>  struct io_tlb_mem io_tlb_default_mem;
> 
> +phys_addr_t swiotlb_unencrypted_base;
> +
>  /*
>   * Max segment that we can provide which (if pages are contingous) will
>   * not be bounced (unless SWIOTLB_FORCE is set).
> @@ -155,6 +158,31 @@ static inline unsigned long nr_slots(u64 val)
>  	return DIV_ROUND_UP(val, IO_TLB_SIZE);
>  }
> 
> +/*
> + * Remap swiotlb memory in the unencrypted physical address space
> + * when swiotlb_unencrypted_base is set. (e.g. for Hyper-V AMD SEV-SNP
> + * Isolation VMs).
> + */
> +void *swiotlb_mem_remap(struct io_tlb_mem *mem, unsigned long bytes)
> +{
> +	void *vaddr;
> +
> +	if (swiotlb_unencrypted_base) {
> +		phys_addr_t paddr = mem->start + swiotlb_unencrypted_base;
> +
> +		vaddr = memremap(paddr, bytes, MEMREMAP_WB);
> +		if (!vaddr) {
> +			pr_err("Failed to map the unencrypted memory %llx size %lx.\n",
> +			       paddr, bytes);
> +			return NULL;
> +		}
> +
> +		return vaddr;
> +	}
> +
> +	return phys_to_virt(mem->start);
> +}
> +
>  /*
>   * Early SWIOTLB allocation may be too early to allow an architecture to
>   * perform the desired operations.  This function allows the architecture to
> @@ -172,7 +200,14 @@ void __init swiotlb_update_mem_attributes(void)
>  	vaddr = phys_to_virt(mem->start);
>  	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
>  	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> -	memset(vaddr, 0, bytes);
> +
> +	mem->vaddr = swiotlb_mem_remap(mem, bytes);
> +	if (!mem->vaddr) {
> +		pr_err("Fail to remap swiotlb mem.\n");
> +		return;
> +	}
> +
> +	memset(mem->vaddr, 0, bytes);
>  }

In the error case, do you want to leave mem->vaddr as NULL?  Or is it
better to leave it as the virtual address of mem->start?  Your code leaves it
as NULL.

The interaction between swiotlb_update_mem_attributes() and the helper
function swiotlb_mem_remap() seems kind of clunky.  phys_to_virt() gets called
twice, for example, and two error messages are printed.  The code would be
more straightforward with the helper function inlined:

mem->vaddr = phys_to_virt(mem->start);
bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
set_memory_decrypted((unsigned long)(mem->vaddr), bytes >> PAGE_SHIFT);

if (swiotlb_unencrypted_base) {
	phys_addr_t paddr = mem->start + swiotlb_unencrypted_base;

	mem->vaddr = memremap(paddr, bytes, MEMREMAP_WB);
	if (!mem->vaddr) {
		pr_err("Failed to map the unencrypted memory %llx size %lx.\n",
			       paddr, bytes);
		return;
	}
}

memset(mem->vaddr, 0, bytes);

(This version also leaves mem->vaddr as NULL in the error case.)

> 
>  static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> @@ -196,7 +231,18 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>  		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>  		mem->slots[i].alloc_size = 0;
>  	}
> +
> +	/*
> +	 * If swiotlb_unencrypted_base is set, the bounce buffer memory will
> +	 * be remapped and cleared in swiotlb_update_mem_attributes.
> +	 */
> +	if (swiotlb_unencrypted_base)
> +		return;
> +
> +	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);

Prior to this patch, and here in the new version as well, the return value from
set_memory_decrypted() is ignored in several places in this file.  As previously
discussed, swiotlb_init_io_tlb_mem() is a void function, so there's no place to
return an error. Is that OK?

>  	memset(vaddr, 0, bytes);
> +	mem->vaddr = vaddr;
> +	return;
>  }
> 
>  int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> @@ -318,7 +364,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  	if (!mem->slots)
>  		return -ENOMEM;
> 
> -	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
>  	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
> 
>  	swiotlb_print_info();
> @@ -371,7 +416,7 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
>  	phys_addr_t orig_addr = mem->slots[index].orig_addr;
>  	size_t alloc_size = mem->slots[index].alloc_size;
>  	unsigned long pfn = PFN_DOWN(orig_addr);
> -	unsigned char *vaddr = phys_to_virt(tlb_addr);
> +	unsigned char *vaddr = mem->vaddr + tlb_addr - mem->start;
>  	unsigned int tlb_offset, orig_addr_offset;
> 
>  	if (orig_addr == INVALID_PHYS_ADDR)
> @@ -806,8 +851,6 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
>  			return -ENOMEM;
>  		}
> 
> -		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
> -				     rmem->size >> PAGE_SHIFT);
>  		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
>  		mem->force_bounce = true;
>  		mem->for_alloc = true;
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH V2 4/6] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM
  2021-11-23 14:30 ` [PATCH V2 4/6] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM Tianyu Lan
@ 2021-11-23 17:44   ` Michael Kelley (LINUX)
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Kelley (LINUX) @ 2021-11-23 17:44 UTC (permalink / raw)
  To: Tianyu Lan, tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz,
	jgross, sstabellini, boris.ostrovsky, KY Srinivasan,
	Haiyang Zhang, Stephen Hemminger, wei.liu, Dexuan Cui, joro,
	will, davem, kuba, jejb, martin.petersen, hch, m.szyprowski,
	robin.murphy, Tianyu Lan, thomas.lendacky, xen-devel
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <ltykernel@gmail.com> Sent: Tuesday, November 23, 2021 6:31 AM
> 
> A Hyper-V Isolation VM requires bounce buffer support to copy data
> from/to encrypted memory, so enable swiotlb force mode to use the
> swiotlb bounce buffer for DMA transactions.
> 
> In an Isolation VM with AMD SEV, the bounce buffer needs to be
> accessed via an extra address space above shared_gpa_boundary
> (e.g. the 39-bit address line) reported by the Hyper-V
> ISOLATION_CONFIG CPUID leaf. The physical address used for access is
> the original physical address plus shared_gpa_boundary. In the AMD
> SEV-SNP spec, shared_gpa_boundary is called the virtual top of
> memory (vTOM). Memory addresses below vTOM are automatically treated
> as private, while memory above vTOM is treated as shared.
> 
> Hyper-V initializes the swiotlb bounce buffer, and the default
> swiotlb needs to be disabled. pci_swiotlb_detect_override() and
> pci_swiotlb_detect_4gb() enable the default one. To override that
> setting, hyperv_swiotlb_detect() needs to run before these detect
> functions, which depend on pci_xen_swiotlb_init(). Make
> pci_xen_swiotlb_init() depend on hyperv_swiotlb_detect() to keep
> the order.
> 
> The swiotlb bounce buffer code calls set_memory_decrypted() to mark
> the bounce buffer visible to the host and maps it into the extra
> address space via memremap(). Populate the shared_gpa_boundary
> (vTOM) via the swiotlb_unencrypted_base variable.
> 
> memremap() cannot be called as early as hyperv_iommu_swiotlb_init()
> runs, so call swiotlb_update_mem_attributes() in
> hyperv_iommu_swiotlb_later_init() instead.
> 
> Add Hyper-V dma ops and provide alloc/free and vmap/vunmap
> noncontiguous callbacks to handle requests to allocate and map
> noncontiguous dma memory in vmbus device drivers. The netvsc driver
> will use this. Set the dma_ops_bypass flag for hv devices so that
> dma direct functions are used when mapping/unmapping dma pages.
> 
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
> Change since v1:
> 	* Remove hv isolation check in the sev_setup_arch()
> 
>  arch/x86/mm/mem_encrypt.c      |   1 +
>  arch/x86/xen/pci-swiotlb-xen.c |   3 +-
>  drivers/hv/Kconfig             |   1 +
>  drivers/hv/vmbus_drv.c         |   6 ++
>  drivers/iommu/hyperv-iommu.c   | 164 +++++++++++++++++++++++++++++++++
>  include/linux/hyperv.h         |  10 ++
>  6 files changed, 184 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 35487305d8af..e48c73b3dd41 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -31,6 +31,7 @@
>  #include <asm/processor-flags.h>
>  #include <asm/msr.h>
>  #include <asm/cmdline.h>
> +#include <asm/mshyperv.h>

There is no longer any need to add this #include since code changes to this
file in a previous version of the patch are now gone.

> 
>  #include "mm_internal.h"
> 
> diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> index 46df59aeaa06..30fd0600b008 100644
> --- a/arch/x86/xen/pci-swiotlb-xen.c
> +++ b/arch/x86/xen/pci-swiotlb-xen.c
> @@ -4,6 +4,7 @@
> 
>  #include <linux/dma-map-ops.h>
>  #include <linux/pci.h>
> +#include <linux/hyperv.h>
>  #include <xen/swiotlb-xen.h>
> 
>  #include <asm/xen/hypervisor.h>
> @@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
>  EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
> 
>  IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
> -		  NULL,
> +		  hyperv_swiotlb_detect,
>  		  pci_xen_swiotlb_init,
>  		  NULL);
> diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
> index dd12af20e467..d43b4cd88f57 100644
> --- a/drivers/hv/Kconfig
> +++ b/drivers/hv/Kconfig
> @@ -9,6 +9,7 @@ config HYPERV
>  	select PARAVIRT
>  	select X86_HV_CALLBACK_VECTOR if X86
>  	select VMAP_PFN
> +	select DMA_OPS_BYPASS
>  	help
>  	  Select this option to run Linux as a Hyper-V client operating
>  	  system.
> diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
> index 392c1ac4f819..32dc193e31cd 100644
> --- a/drivers/hv/vmbus_drv.c
> +++ b/drivers/hv/vmbus_drv.c
> @@ -33,6 +33,7 @@
>  #include <linux/random.h>
>  #include <linux/kernel.h>
>  #include <linux/syscore_ops.h>
> +#include <linux/dma-map-ops.h>
>  #include <clocksource/hyperv_timer.h>
>  #include "hyperv_vmbus.h"
> 
> @@ -2078,6 +2079,7 @@ struct hv_device *vmbus_device_create(const guid_t *type,
>  	return child_device_obj;
>  }
> 
> +static u64 vmbus_dma_mask = DMA_BIT_MASK(64);
>  /*
>   * vmbus_device_register - Register the child device
>   */
> @@ -2118,6 +2120,10 @@ int vmbus_device_register(struct hv_device *child_device_obj)
>  	}
>  	hv_debug_add_dev_dir(child_device_obj);
> 
> +	child_device_obj->device.dma_ops_bypass = true;
> +	child_device_obj->device.dma_ops = &hyperv_iommu_dma_ops;
> +	child_device_obj->device.dma_mask = &vmbus_dma_mask;
> +	child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
>  	return 0;
> 
>  err_kset_unregister:
> diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
> index e285a220c913..ebcb628e7e8f 100644
> --- a/drivers/iommu/hyperv-iommu.c
> +++ b/drivers/iommu/hyperv-iommu.c
> @@ -13,14 +13,21 @@
>  #include <linux/irq.h>
>  #include <linux/iommu.h>
>  #include <linux/module.h>
> +#include <linux/hyperv.h>
> +#include <linux/io.h>
> 
>  #include <asm/apic.h>
>  #include <asm/cpu.h>
>  #include <asm/hw_irq.h>
>  #include <asm/io_apic.h>
> +#include <asm/iommu.h>
> +#include <asm/iommu_table.h>
>  #include <asm/irq_remapping.h>
>  #include <asm/hypervisor.h>
>  #include <asm/mshyperv.h>
> +#include <asm/swiotlb.h>
> +#include <linux/dma-map-ops.h>
> +#include <linux/dma-direct.h>
> 
>  #include "irq_remapping.h"
> 
> @@ -337,4 +344,161 @@ static const struct irq_domain_ops hyperv_root_ir_domain_ops = {
>  	.free = hyperv_root_irq_remapping_free,
>  };
> 
> +static void __init hyperv_iommu_swiotlb_init(void)
> +{
> +	unsigned long hyperv_io_tlb_size;
> +	void *hyperv_io_tlb_start;
> +
> +	/*
> +	 * Allocate Hyper-V swiotlb bounce buffer at early place
> +	 * to reserve large contiguous memory.
> +	 */
> +	hyperv_io_tlb_size = swiotlb_size_or_default();
> +	hyperv_io_tlb_start = memblock_alloc(hyperv_io_tlb_size, PAGE_SIZE);
> +
> +	if (!hyperv_io_tlb_start)
> +		pr_warn("Fail to allocate Hyper-V swiotlb buffer.\n");
> +
> +	swiotlb_init_with_tbl(hyperv_io_tlb_start,
> +			      hyperv_io_tlb_size >> IO_TLB_SHIFT, true);
> +}
> +
> +int __init hyperv_swiotlb_detect(void)
> +{
> +	if (!hypervisor_is_type(X86_HYPER_MS_HYPERV))
> +		return 0;
> +
> +	if (!hv_is_isolation_supported())
> +		return 0;
> +
> +	/*
> +	 * Enable swiotlb force mode in Isolation VM to
> +	 * use swiotlb bounce buffer for dma transaction.
> +	 */
> +	if (hv_isolation_type_snp())
> +		swiotlb_unencrypted_base = ms_hyperv.shared_gpa_boundary;
> +	swiotlb_force = SWIOTLB_FORCE;
> +	return 1;
> +}
> +
> +static void __init hyperv_iommu_swiotlb_later_init(void)
> +{
> +	/*
> +	 * Swiotlb bounce buffer needs to be mapped in extra address
> +	 * space. Map function doesn't work in the early place and so
> +	 * call swiotlb_update_mem_attributes() here.
> +	 */
> +	swiotlb_update_mem_attributes();
> +}
> +
> +IOMMU_INIT_FINISH(hyperv_swiotlb_detect,
> +		  NULL, hyperv_iommu_swiotlb_init,
> +		  hyperv_iommu_swiotlb_later_init);
> +
> +static struct sg_table *hyperv_dma_alloc_noncontiguous(struct device *dev,
> +		size_t size, enum dma_data_direction dir, gfp_t gfp,
> +		unsigned long attrs)
> +{
> +	struct dma_sgt_handle *sh;
> +	struct page **pages;
> +	int num_pages = size >> PAGE_SHIFT;

This assumes "size" is a multiple of PAGE_SIZE.  Probably should round
up for safety.

> +	void *vaddr, *ptr;
> +	int rc, i;
> +
> +	if (!hv_isolation_type_snp())
> +		return NULL;
> +
> +	sh = kmalloc(sizeof(*sh), gfp);
> +	if (!sh)
> +		return NULL;
> +
> +	vaddr = vmalloc(size);
> +	if (!vaddr)
> +		goto free_sgt;
> +
> +	pages = kvmalloc_array(num_pages, sizeof(struct page *),
> +				    GFP_KERNEL | __GFP_ZERO);
> +	if (!pages)
> +		goto free_mem;
> +
> +	for (i = 0, ptr = vaddr; i < num_pages; ++i, ptr += PAGE_SIZE)
> +		pages[i] = vmalloc_to_page(ptr);
> +
> +	rc = sg_alloc_table_from_pages(&sh->sgt, pages, num_pages, 0, size, GFP_KERNEL);
> +	if (rc)
> +		goto free_pages;
> +
> +	sh->sgt.sgl->dma_address = (dma_addr_t)vaddr;
> +	sh->sgt.sgl->dma_length = size;

include/linux/scatterlist.h defines macros sg_dma_address() and
sg_dma_len() for accessing these two fields.   It's probably best to use them.

> +	sh->pages = pages;
> +
> +	return &sh->sgt;
> +
> +free_pages:
> +	kvfree(pages);
> +free_mem:
> +	vfree(vaddr);
> +free_sgt:
> +	kfree(sh);
> +	return NULL;
> +}
> +
> +static void hyperv_dma_free_noncontiguous(struct device *dev, size_t size,
> +		struct sg_table *sgt, enum dma_data_direction dir)
> +{
> +	struct dma_sgt_handle *sh = sgt_handle(sgt);
> +
> +	if (!hv_isolation_type_snp())
> +		return;
> +
> +	vfree((void *)sh->sgt.sgl->dma_address);

Use sg_dma_address()

> +	sg_free_table(&sh->sgt);
> +	kvfree(sh->pages);
> +	kfree(sh);
> +}
> +
> +static void *hyperv_dma_vmap_noncontiguous(struct device *dev, size_t size,
> +			struct sg_table *sgt)
> +{
> +	int pg_count = size >> PAGE_SHIFT;

Round up so we don't assume size is a multiple of PAGE_SIZE?

> +	unsigned long *pfns;
> +	struct page **pages = sgt_handle(sgt)->pages;
> +	void *vaddr = NULL;
> +	int i;
> +
> +	if (!hv_isolation_type_snp())
> +		return NULL;
> +
> +	if (!pages)
> +		return NULL;
> +
> +	pfns = kcalloc(pg_count, sizeof(*pfns), GFP_KERNEL);
> +	if (!pfns)
> +		return NULL;
> +
> +	for (i = 0; i < pg_count; i++)
> +		pfns[i] = page_to_pfn(pages[i]) +
> +			(ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);
> +
> +	vaddr = vmap_pfn(pfns, pg_count, PAGE_KERNEL);
> +	kfree(pfns);
> +	return vaddr;
> +
> +}
> +
> +static void hyperv_dma_vunmap_noncontiguous(struct device *dev, void *addr)
> +{
> +	if (!hv_isolation_type_snp())
> +		return;
> +	vunmap(addr);
> +}
> +
> +const struct dma_map_ops hyperv_iommu_dma_ops = {
> +		.alloc_noncontiguous = hyperv_dma_alloc_noncontiguous,
> +		.free_noncontiguous = hyperv_dma_free_noncontiguous,
> +		.vmap_noncontiguous = hyperv_dma_vmap_noncontiguous,
> +		.vunmap_noncontiguous = hyperv_dma_vunmap_noncontiguous,
> +};
> +EXPORT_SYMBOL_GPL(hyperv_iommu_dma_ops);
> +
>  #endif
> diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
> index b823311eac79..4d44fb3b3f1c 100644
> --- a/include/linux/hyperv.h
> +++ b/include/linux/hyperv.h
> @@ -1726,6 +1726,16 @@ int hyperv_write_cfg_blk(struct pci_dev *dev, void *buf, unsigned int len,
>  int hyperv_reg_block_invalidate(struct pci_dev *dev, void *context,
>  				void (*block_invalidate)(void *context,
>  							 u64 block_mask));
> +#ifdef CONFIG_HYPERV
> +int __init hyperv_swiotlb_detect(void);
> +#else
> +static inline int __init hyperv_swiotlb_detect(void)
> +{
> +	return 0;
> +}
> +#endif
> +
> +extern const struct dma_map_ops hyperv_iommu_dma_ops;
> 
>  struct hyperv_pci_block_ops {
>  	int (*read_block)(struct pci_dev *dev, void *buf, unsigned int buf_len,
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver
  2021-11-23 14:30 ` [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver Tianyu Lan
@ 2021-11-23 17:55   ` Michael Kelley (LINUX)
  2021-11-24 17:03   ` Michael Kelley (LINUX)
  1 sibling, 0 replies; 13+ messages in thread
From: Michael Kelley (LINUX) @ 2021-11-23 17:55 UTC (permalink / raw)
  To: Tianyu Lan, tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz,
	jgross, sstabellini, boris.ostrovsky, KY Srinivasan,
	Haiyang Zhang, Stephen Hemminger, wei.liu, Dexuan Cui, joro,
	will, davem, kuba, jejb, martin.petersen, hch, m.szyprowski,
	robin.murphy, Tianyu Lan, thomas.lendacky, xen-devel
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <ltykernel@gmail.com> Sent: Tuesday, November 23, 2021 6:31 AM
> 
> In an Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hvcall. vmbus_establish_gpadl() has already done
> this for the netvsc rx/tx ring buffers. The page buffers used by
> vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
> to map/unmap this memory when sending/receiving packets; the Hyper-V
> swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
> buffer has been marked visible to the host during boot.
> 
> Allocate the rx/tx ring buffers via dma_alloc_noncontiguous() in an
> Isolation VM. After calling vmbus_establish_gpadl(), which marks these
> pages visible to the host, map the pages into the unencrypted address
> space via dma_vmap_noncontiguous().
> 
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
>  drivers/net/hyperv/hyperv_net.h   |   5 +
>  drivers/net/hyperv/netvsc.c       | 192 +++++++++++++++++++++++++++---
>  drivers/net/hyperv/rndis_filter.c |   2 +
>  include/linux/hyperv.h            |   6 +
>  4 files changed, 190 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
> index 315278a7cf88..31c77a00d01e 100644
> --- a/drivers/net/hyperv/hyperv_net.h
> +++ b/drivers/net/hyperv/hyperv_net.h
> @@ -164,6 +164,7 @@ struct hv_netvsc_packet {
>  	u32 total_bytes;
>  	u32 send_buf_index;
>  	u32 total_data_buflen;
> +	struct hv_dma_range *dma_range;
>  };
> 
>  #define NETVSC_HASH_KEYLEN 40
> @@ -1074,6 +1075,7 @@ struct netvsc_device {
> 
>  	/* Receive buffer allocated by us but manages by NetVSP */
>  	void *recv_buf;
> +	struct sg_table *recv_sgt;
>  	u32 recv_buf_size; /* allocated bytes */
>  	struct vmbus_gpadl recv_buf_gpadl_handle;
>  	u32 recv_section_cnt;
> @@ -1082,6 +1084,7 @@ struct netvsc_device {
> 
>  	/* Send buffer allocated by us */
>  	void *send_buf;
> +	struct sg_table *send_sgt;
>  	u32 send_buf_size;
>  	struct vmbus_gpadl send_buf_gpadl_handle;
>  	u32 send_section_cnt;
> @@ -1731,4 +1734,6 @@ struct rndis_message {
>  #define RETRY_US_HI	10000
>  #define RETRY_MAX	2000	/* >10 sec */
> 
> +void netvsc_dma_unmap(struct hv_device *hv_dev,
> +		      struct hv_netvsc_packet *packet);
>  #endif /* _HYPERV_NET_H */
> diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
> index 396bc1c204e6..9cdc71930830 100644
> --- a/drivers/net/hyperv/netvsc.c
> +++ b/drivers/net/hyperv/netvsc.c
> @@ -20,6 +20,7 @@
>  #include <linux/vmalloc.h>
>  #include <linux/rtnetlink.h>
>  #include <linux/prefetch.h>
> +#include <linux/gfp.h>
> 
>  #include <asm/sync_bitops.h>
>  #include <asm/mshyperv.h>
> @@ -146,15 +147,39 @@ static struct netvsc_device *alloc_net_device(void)
>  	return net_device;
>  }
> 
> +static struct hv_device *netvsc_channel_to_device(struct vmbus_channel *channel)
> +{
> +	struct vmbus_channel *primary = channel->primary_channel;
> +
> +	return primary ? primary->device_obj : channel->device_obj;
> +}
> +
>  static void free_netvsc_device(struct rcu_head *head)
>  {
>  	struct netvsc_device *nvdev
>  		= container_of(head, struct netvsc_device, rcu);
> +	struct hv_device *dev =
> +		netvsc_channel_to_device(nvdev->chan_table[0].channel);
>  	int i;
> 
>  	kfree(nvdev->extension);
> -	vfree(nvdev->recv_buf);
> -	vfree(nvdev->send_buf);
> +
> +	if (nvdev->recv_sgt) {
> +		dma_vunmap_noncontiguous(&dev->device, nvdev->recv_buf);
> +		dma_free_noncontiguous(&dev->device, nvdev->recv_buf_size,
> +				       nvdev->recv_sgt, DMA_FROM_DEVICE);
> +	} else {
> +		vfree(nvdev->recv_buf);
> +	}
> +
> +	if (nvdev->send_sgt) {
> +		dma_vunmap_noncontiguous(&dev->device, nvdev->send_buf);
> +		dma_free_noncontiguous(&dev->device, nvdev->send_buf_size,
> +				       nvdev->send_sgt, DMA_TO_DEVICE);
> +	} else {
> +		vfree(nvdev->send_buf);
> +	}
> +
>  	kfree(nvdev->send_section_map);
> 
>  	for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
> @@ -348,7 +373,21 @@ static int netvsc_init_buf(struct hv_device *device,
>  		buf_size = min_t(unsigned int, buf_size,
>  				 NETVSC_RECEIVE_BUFFER_SIZE_LEGACY);
> 
> -	net_device->recv_buf = vzalloc(buf_size);
> +	if (hv_isolation_type_snp()) {
> +		net_device->recv_sgt =
> +			dma_alloc_noncontiguous(&device->device, buf_size,
> +						DMA_FROM_DEVICE, GFP_KERNEL, 0);
> +		if (!net_device->recv_sgt) {
> +			pr_err("Fail to allocate recv buffer buf_size %d.\n.", buf_size);
> +			ret = -ENOMEM;
> +			goto cleanup;
> +		}
> +
> +		net_device->recv_buf = (void *)net_device->recv_sgt->sgl->dma_address;

Use sg_dma_address() macro.

> +	} else {
> +		net_device->recv_buf = vzalloc(buf_size);
> +	}
> +
>  	if (!net_device->recv_buf) {
>  		netdev_err(ndev,
>  			   "unable to allocate receive buffer of size %u\n",
> @@ -357,8 +396,6 @@ static int netvsc_init_buf(struct hv_device *device,
>  		goto cleanup;
>  	}
> 
> -	net_device->recv_buf_size = buf_size;
> -
>  	/*
>  	 * Establish the gpadl handle for this buffer on this
>  	 * channel.  Note: This call uses the vmbus connection rather
> @@ -373,6 +410,19 @@ static int netvsc_init_buf(struct hv_device *device,
>  		goto cleanup;
>  	}
> 
> +	if (net_device->recv_sgt) {
> +		net_device->recv_buf =
> +			dma_vmap_noncontiguous(&device->device, buf_size,
> +					       net_device->recv_sgt);
> +		if (!net_device->recv_buf) {
> +			pr_err("Fail to vmap recv buffer.\n");
> +			ret = -ENOMEM;
> +			goto cleanup;
> +		}
> +	}
> +
> +	net_device->recv_buf_size = buf_size;
> +
>  	/* Notify the NetVsp of the gpadl handle */
>  	init_packet = &net_device->channel_init_pkt;
>  	memset(init_packet, 0, sizeof(struct nvsp_message));
> @@ -454,14 +504,27 @@ static int netvsc_init_buf(struct hv_device *device,
>  	buf_size = device_info->send_sections * device_info->send_section_size;
>  	buf_size = round_up(buf_size, PAGE_SIZE);
> 
> -	net_device->send_buf = vzalloc(buf_size);
> +	if (hv_isolation_type_snp()) {
> +		net_device->send_sgt =
> +			dma_alloc_noncontiguous(&device->device, buf_size,
> +						DMA_TO_DEVICE, GFP_KERNEL, 0);
> +		if (!net_device->send_sgt) {
> +			pr_err("Fail to allocate send buffer buf_size %d.\n.", buf_size);
> +			ret = -ENOMEM;
> +			goto cleanup;
> +		}
> +
> +		net_device->send_buf = (void *)net_device->send_sgt->sgl->dma_address;

Use sg_dma_address() macro.

> +	} else {
> +		net_device->send_buf = vzalloc(buf_size);
> +	}
> +
>  	if (!net_device->send_buf) {
>  		netdev_err(ndev, "unable to allocate send buffer of size %u\n",
>  			   buf_size);
>  		ret = -ENOMEM;
>  		goto cleanup;
>  	}
> -	net_device->send_buf_size = buf_size;
> 
>  	/* Establish the gpadl handle for this buffer on this
>  	 * channel.  Note: This call uses the vmbus connection rather
> @@ -476,6 +539,19 @@ static int netvsc_init_buf(struct hv_device *device,
>  		goto cleanup;
>  	}
> 
> +	if (net_device->send_sgt) {
> +		net_device->send_buf =
> +			dma_vmap_noncontiguous(&device->device, buf_size,
> +					       net_device->send_sgt);
> +		if (!net_device->send_buf) {
> +			pr_err("Fail to vmap send buffer.\n");
> +			ret = -ENOMEM;
> +			goto cleanup;
> +		}
> +	}
> +
> +	net_device->send_buf_size = buf_size;
> +
>  	/* Notify the NetVsp of the gpadl handle */
>  	init_packet = &net_device->channel_init_pkt;
>  	memset(init_packet, 0, sizeof(struct nvsp_message));
> @@ -766,7 +842,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
> 
>  	/* Notify the layer above us */
>  	if (likely(skb)) {
> -		const struct hv_netvsc_packet *packet
> +		struct hv_netvsc_packet *packet
>  			= (struct hv_netvsc_packet *)skb->cb;
>  		u32 send_index = packet->send_buf_index;
>  		struct netvsc_stats *tx_stats;
> @@ -782,6 +858,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
>  		tx_stats->bytes += packet->total_bytes;
>  		u64_stats_update_end(&tx_stats->syncp);
> 
> +		netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
>  		napi_consume_skb(skb, budget);
>  	}
> 
> @@ -946,6 +1023,87 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
>  		memset(dest, 0, padding);
>  }
> 
> +void netvsc_dma_unmap(struct hv_device *hv_dev,
> +		      struct hv_netvsc_packet *packet)
> +{
> +	u32 page_count = packet->cp_partial ?
> +		packet->page_buf_cnt - packet->rmsg_pgcnt :
> +		packet->page_buf_cnt;
> +	int i;
> +
> +	if (!hv_is_isolation_supported())
> +		return;
> +
> +	if (!packet->dma_range)
> +		return;
> +
> +	for (i = 0; i < page_count; i++)
> +		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
> +				 packet->dma_range[i].mapping_size,
> +				 DMA_TO_DEVICE);
> +
> +	kfree(packet->dma_range);
> +}
> +
> +/* netvsc_dma_map - Map swiotlb bounce buffer with data page of
> + * packet sent by vmbus_sendpacket_pagebuffer() in the Isolation
> + * VM.
> + *
> + * In isolation VM, netvsc send buffer has been marked visible to
> + * host and so the data copied to send buffer doesn't need to use
> + * bounce buffer. The data pages handled by vmbus_sendpacket_pagebuffer()
> + * may not be copied to send buffer and so these pages need to be
> + * mapped with swiotlb bounce buffer. netvsc_dma_map() is to do
> + * that. The pfns in the struct hv_page_buffer need to be converted
> + * to bounce buffer's pfn. The loop here is necessary because the
> + * entries in the page buffer array are not necessarily full
> + * pages of data.  Each entry in the array has a separate offset and
> + * len that may be non-zero, even for entries in the middle of the
> + * array.  And the entries are not physically contiguous.  So each
> + * entry must be individually mapped rather than as a contiguous unit.
> + * So not use dma_map_sg() here.
> + */
> +static int netvsc_dma_map(struct hv_device *hv_dev,
> +			  struct hv_netvsc_packet *packet,
> +			  struct hv_page_buffer *pb)
> +{
> +	u32 page_count =  packet->cp_partial ?
> +		packet->page_buf_cnt - packet->rmsg_pgcnt :
> +		packet->page_buf_cnt;
> +	dma_addr_t dma;
> +	int i;
> +
> +	if (!hv_is_isolation_supported())
> +		return 0;
> +
> +	packet->dma_range = kcalloc(page_count,
> +				    sizeof(*packet->dma_range),
> +				    GFP_KERNEL);
> +	if (!packet->dma_range)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < page_count; i++) {
> +		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
> +					 + pb[i].offset);
> +		u32 len = pb[i].len;
> +
> +		dma = dma_map_single(&hv_dev->device, src, len,
> +				     DMA_TO_DEVICE);
> +		if (dma_mapping_error(&hv_dev->device, dma)) {
> +			kfree(packet->dma_range);
> +			return -ENOMEM;
> +		}
> +
> +		packet->dma_range[i].dma = dma;
> +		packet->dma_range[i].mapping_size = len;
> +		pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
> +		pb[i].offset = offset_in_hvpage(dma);
> +		pb[i].len = len;

As noted in comments on an earlier version of this patch, the
pb[i].len and .offset fields should not be changed by doing
dma_map_single().  So there's no need to set them again here.  Adding
a comment to that effect might be good.

> +	}
> +
> +	return 0;
> +}
> +
>  static inline int netvsc_send_pkt(
>  	struct hv_device *device,
>  	struct hv_netvsc_packet *packet,
> @@ -986,14 +1144,24 @@ static inline int netvsc_send_pkt(
> 
>  	trace_nvsp_send_pkt(ndev, out_channel, rpkt);
> 
> +	packet->dma_range = NULL;
>  	if (packet->page_buf_cnt) {
>  		if (packet->cp_partial)
>  			pb += packet->rmsg_pgcnt;
> 
> +		ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
> +		if (ret) {
> +			ret = -EAGAIN;
> +			goto exit;
> +		}
> +
>  		ret = vmbus_sendpacket_pagebuffer(out_channel,
>  						  pb, packet->page_buf_cnt,
>  						  &nvmsg, sizeof(nvmsg),
>  						  req_id);
> +
> +		if (ret)
> +			netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
>  	} else {
>  		ret = vmbus_sendpacket(out_channel,
>  				       &nvmsg, sizeof(nvmsg),
> @@ -1001,6 +1169,7 @@ static inline int netvsc_send_pkt(
>  				       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
>  	}
> 
> +exit:
>  	if (ret == 0) {
>  		atomic_inc_return(&nvchan->queue_sends);
> 
> @@ -1515,13 +1684,6 @@ static int netvsc_process_raw_pkt(struct hv_device *device,
>  	return 0;
>  }
> 
> -static struct hv_device *netvsc_channel_to_device(struct vmbus_channel *channel)
> -{
> -	struct vmbus_channel *primary = channel->primary_channel;
> -
> -	return primary ? primary->device_obj : channel->device_obj;
> -}
> -
>  /* Network processing softirq
>   * Process data in incoming ring buffer from host
>   * Stops when ring is empty or budget is met or exceeded.
> diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
> index f6c9c2a670f9..448fcc325ed7 100644
> --- a/drivers/net/hyperv/rndis_filter.c
> +++ b/drivers/net/hyperv/rndis_filter.c
> @@ -361,6 +361,8 @@ static void rndis_filter_receive_response(struct net_device *ndev,
>  			}
>  		}
> 
> +		netvsc_dma_unmap(((struct net_device_context *)
> +			netdev_priv(ndev))->device_ctx, &request->pkt);
>  		complete(&request->wait_event);
>  	} else {
>  		netdev_err(ndev,
> diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
> index 4d44fb3b3f1c..8882e46d1070 100644
> --- a/include/linux/hyperv.h
> +++ b/include/linux/hyperv.h
> @@ -25,6 +25,7 @@
>  #include <linux/interrupt.h>
>  #include <linux/reciprocal_div.h>
>  #include <asm/hyperv-tlfs.h>
> +#include <linux/dma-map-ops.h>
> 
>  #define MAX_PAGE_BUFFER_COUNT				32
>  #define MAX_MULTIPAGE_BUFFER_COUNT			32 /* 128K */
> @@ -1583,6 +1584,11 @@ struct hyperv_service_callback {
>  	void (*callback)(void *context);
>  };
> 
> +struct hv_dma_range {
> +	dma_addr_t dma;
> +	u32 mapping_size;
> +};
> +
>  #define MAX_SRV_VER	0x7ffffff
>  extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, u8 *buf, u32 buflen,
>  				const int *fw_version, int fw_vercnt,
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM
  2021-11-23 17:15   ` Michael Kelley (LINUX)
@ 2021-11-24 14:07     ` Tianyu Lan
  0 siblings, 0 replies; 13+ messages in thread
From: Tianyu Lan @ 2021-11-24 14:07 UTC (permalink / raw)
  To: Michael Kelley (LINUX), Christoph Hellwig
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen, tglx,
	mingo, bp, dave.hansen, x86, hpa, luto, peterz, jgross,
	sstabellini, boris.ostrovsky, KY Srinivasan, Haiyang Zhang,
	Stephen Hemminger, wei.liu, Dexuan Cui, joro, will, davem, kuba,
	jejb, martin.petersen, hch, m.szyprowski, robin.murphy,
	Tianyu Lan, thomas.lendacky, xen-devel

Hi Michael:
	Thanks for your review.

On 11/24/2021 1:15 AM, Michael Kelley (LINUX) wrote:
>> @@ -172,7 +200,14 @@ void __init swiotlb_update_mem_attributes(void)
>>   	vaddr = phys_to_virt(mem->start);
>>   	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
>>   	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
>> -	memset(vaddr, 0, bytes);
>> +
>> +	mem->vaddr = swiotlb_mem_remap(mem, bytes);
>> +	if (!mem->vaddr) {
>> +		pr_err("Fail to remap swiotlb mem.\n");
>> +		return;
>> +	}
>> +
>> +	memset(mem->vaddr, 0, bytes);
>>   }


> In the error case, do you want to leave mem->vaddr as NULL?  Or is it
> better to leave it as the virtual address of mem-start?  Your code leaves it
> as NULL.
> 
> The interaction between swiotlb_update_mem_attributes() and the helper
> function swiotlb_mem_remap() seems kind of clunky.  phys_to_virt() gets called
> twice, for example, and two error messages are printed.  The code would be
> more straightforward by just putting the helper function inline:
> 
> mem->vaddr = phys_to_virt(mem->start);
> bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
> set_memory_decrypted((unsigned long)(mem->vaddr), bytes >> PAGE_SHIFT);
> 
> if (swiotlb_unencrypted_base) {
> 	phys_addr_t paddr = mem->start + swiotlb_unencrypted_base;
> 
> 	mem->vaddr = memremap(paddr, bytes, MEMREMAP_WB);
> 	if (!mem->vaddr) {
> 		pr_err("Failed to map the unencrypted memory %llx size %lx.\n",
> 			       paddr, bytes);
> 		return;
> 	}
> }
> 
> memset(mem->vaddr, 0, bytes);
> 
> (This version also leaves mem->vaddr as NULL in the error case.)

Per Christoph's previous suggestion, there should be a well-documented
wrapper to explain the remap option, so I split the code out. Leaving the
virtual address of mem->start is better.

https://lkml.org/lkml/2021/9/28/51

> 
>>   static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>> @@ -196,7 +231,18 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>>   		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>   		mem->slots[i].alloc_size = 0;
>>   	}
>> +
>> +	/*
>> +	 * If swiotlb_unencrypted_base is set, the bounce buffer memory will
>> +	 * be remapped and cleared in swiotlb_update_mem_attributes.
>> +	 */
>> +	if (swiotlb_unencrypted_base)
>> +		return;
>> +
>> +	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> Prior to this patch, and here in the new version as well, the return value from
> set_memory_decrypted() is ignored in several places in this file.  As previously
> discussed, swiotlb_init_io_tlb_mem() is a void function, so there's no place to
> return an error. Is that OK?

Yes, the original code doesn't check the return value, so I kept that
convention.

Christoph, could you check which way you prefer?



^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver
  2021-11-23 14:30 ` [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver Tianyu Lan
  2021-11-23 17:55   ` Michael Kelley (LINUX)
@ 2021-11-24 17:03   ` Michael Kelley (LINUX)
  2021-11-25 21:58     ` Haiyang Zhang
  1 sibling, 1 reply; 13+ messages in thread
From: Michael Kelley (LINUX) @ 2021-11-24 17:03 UTC (permalink / raw)
  To: Tianyu Lan, tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz,
	jgross, sstabellini, boris.ostrovsky, KY Srinivasan,
	Haiyang Zhang, Stephen Hemminger, wei.liu, Dexuan Cui, joro,
	will, davem, kuba, jejb, martin.petersen, hch, m.szyprowski,
	robin.murphy, Tianyu Lan, thomas.lendacky, xen-devel
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen

From: Tianyu Lan <ltykernel@gmail.com> Sent: Tuesday, November 23, 2021 6:31 AM
> 
> In an Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hvcall. vmbus_establish_gpadl() has already done
> this for the netvsc rx/tx ring buffers. The page buffers used by
> vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
> to map/unmap this memory when sending/receiving packets; the Hyper-V
> swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
> buffer has been marked visible to the host during boot.
> 
> Allocate the rx/tx ring buffers via dma_alloc_noncontiguous() in an
> Isolation VM. After calling vmbus_establish_gpadl(), which marks these
> pages visible to the host, map the pages into the unencrypted address
> space via dma_vmap_noncontiguous().
> 

The big unresolved topic is how best to do the allocation and mapping of the big
netvsc send and receive buffers.  Let me summarize and make a recommendation.

Background
==========
1.  Each Hyper-V synthetic network device requires a large pre-allocated receive
     buffer (defaults to 16 Mbytes) and a similar send buffer (defaults to 1 Mbyte).
2.  The buffers are allocated in guest memory and shared with the Hyper-V host.
     As such, in the Hyper-V SNP environment, the memory must be unencrypted
     and accessed in the Hyper-V guest with shared_gpa_boundary (i.e., VTOM)
     added to the physical memory address.
3.  The buffers need *not* be contiguous in guest physical memory, but must be
     contiguously mapped in guest kernel virtual space.
4.  Network devices may come and go during the life of the VM, so allocation of
     these buffers and their mappings may be done after Linux has been running for
     a long time.
5.  Performance of the allocation and mapping process is not an issue since it is
     done only on synthetic network device add/remove.
6.  So the primary goals are an appropriate logical abstraction, code that is
     simple and straightforward, and efficient memory usage.

Approaches
==========
During the development of these patches, four approaches have been
implemented:

1.  Two virtual mappings:  One from vmalloc() to allocate the guest memory, and
     the second from vmap_pfns() after adding the shared_gpa_boundary.   This is
     implemented in Hyper-V or netvsc specific code, with no use of DMA APIs.
     No separate list of physical pages is maintained, so for creating the second
     mapping, the PFN list is assembled temporarily by doing virt-to-phys()
     page-by-page on the vmalloc mapping, and then discarded because it is no
     longer needed.  [v4 of the original patch series.]

2.  Two virtual mappings as in (1) above, but implemented via new DMA calls
     dma_map_decrypted() and dma_unmap_encrypted().  [v3 of the original
     patch series.]

3.  Two virtual mappings as in (1) above, but implemented via DMA noncontiguous
      allocation and mapping calls, as enhanced to allow for custom map/unmap
      implementations.  A list of physical pages is maintained in the dma_sgt_handle
      as expected by the DMA noncontiguous API.  [New split-off patch series v1 & v2]

4.   Single virtual mapping from vmap_pfns().  The netvsc driver allocates physical
      memory via alloc_pages() with as much contiguity as possible, and maintains a
      list of physical pages and ranges.   Single virtual map is setup with vmap_pfns()
      after adding shared_gpa_boundary.  [v5 of the original patch series.]

Both implementations using DMA APIs use very little of the existing DMA
machinery.  Both require extensions to the DMA APIs, and custom ops functions.
While in some sense the netvsc send and receive buffers involve DMA, they
do not require any DMA actions on a per-I/O basis.  It seems better to me to
not try to fit these two buffers into the DMA model as a one-off.  Let's just
use Hyper-V specific code to allocate and map them, as is done with the
Hyper-V VMbus channel ring buffers.

That leaves approaches (1) and (4) above.  Between those two, (1) is
simpler even though there are two virtual mappings.  Using alloc_pages() as
in (4) is messy and there's no real benefit to using higher order allocations.
(4) also requires maintaining a separate list of PFNs and ranges, which offsets
some of the benefits to having only one virtual mapping active at any point in
time.

I don't think there's a clear "right" answer, so it's a judgment call.  We've
explored what other approaches would look like, and I'd say let's go with
(1) as the simpler approach.  Thoughts?

Michael

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver
  2021-11-24 17:03   ` Michael Kelley (LINUX)
@ 2021-11-25 21:58     ` Haiyang Zhang
  0 siblings, 0 replies; 13+ messages in thread
From: Haiyang Zhang @ 2021-11-25 21:58 UTC (permalink / raw)
  To: Michael Kelley (LINUX),
	Tianyu Lan, tglx, mingo, bp, dave.hansen, x86, hpa, luto, peterz,
	jgross, sstabellini, boris.ostrovsky, KY Srinivasan,
	Stephen Hemminger, wei.liu, Dexuan Cui, joro, will, davem, kuba,
	jejb, martin.petersen, hch, m.szyprowski, robin.murphy,
	Tianyu Lan, thomas.lendacky, xen-devel
  Cc: iommu, linux-hyperv, linux-kernel, linux-scsi, netdev, vkuznets,
	brijesh.singh, konrad.wilk, parri.andrea, dave.hansen



> From: Tianyu Lan <ltykernel@gmail.com> Sent: Tuesday, November 23, 2021 6:31 AM
> >
> > In an Isolation VM, all memory shared with the host needs to be marked
> > visible to the host via hvcall. vmbus_establish_gpadl() has already
> > done this for the netvsc rx/tx ring buffers. The page buffers used by
> > vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA
> > API to map/unmap this memory when sending/receiving packets; the
> > Hyper-V swiotlb bounce buffer DMA address will be returned. The
> > swiotlb bounce buffer has been marked visible to the host during boot.
> >
> > Allocate the rx/tx ring buffers via dma_alloc_noncontiguous() in an
> > Isolation VM. After calling vmbus_establish_gpadl(), which marks these
> > pages visible to the host, map the pages into the unencrypted address
> > space via dma_vmap_noncontiguous().
> >
> 
> The big unresolved topic is how best to do the allocation and mapping of the big netvsc
> send and receive buffers.  Let me summarize and make a recommendation.
> 
> Background
> ==========
> 1.  Each Hyper-V synthetic network device requires a large pre-allocated receive
>      buffer (defaults to 16 Mbytes) and a similar send buffer (defaults to 1 Mbyte).
> 2.  The buffers are allocated in guest memory and shared with the Hyper-V host.
>      As such, in the Hyper-V SNP environment, the memory must be unencrypted
>      and accessed in the Hyper-V guest with shared_gpa_boundary (i.e., VTOM)
>      added to the physical memory address.
> 3.  The buffers need *not* be contiguous in guest physical memory, but must be
>      contiguously mapped in guest kernel virtual space.
> 4.  Network devices may come and go during the life of the VM, so allocation of
>      these buffers and their mappings may be done after Linux has been running for
>      a long time.
> 5.  Performance of the allocation and mapping process is not an issue since it is
>      done only on synthetic network device add/remove.
> 6.  So the primary goals are an appropriate logical abstraction, code that is
>      simple and straightforward, and efficient memory usage.
> 
> Approaches
> ==========
> During the development of these patches, four approaches have been
> implemented:
> 
> 1.  Two virtual mappings:  One from vmalloc() to allocate the guest memory, and
>      the second from vmap_pfns() after adding the shared_gpa_boundary.   This is
>      implemented in Hyper-V or netvsc specific code, with no use of DMA APIs.
>      No separate list of physical pages is maintained, so for creating the second
>      mapping, the PFN list is assembled temporarily by doing virt-to-phys()
>      page-by-page on the vmalloc mapping, and then discarded because it is no
>      longer needed.  [v4 of the original patch series.]
> 
> 2.  Two virtual mappings as in (1) above, but implemented via new DMA calls
>      dma_map_decrypted() and dma_unmap_encrypted().  [v3 of the original
>      patch series.]
> 
> 3.  Two virtual mappings as in (1) above, but implemented via DMA noncontiguous
>       allocation and mapping calls, as enhanced to allow for custom map/unmap
>       implementations.  A list of physical pages is maintained in the dma_sgt_handle
>       as expected by the DMA noncontiguous API.  [New split-off patch series v1 & v2]
> 
> 4.   Single virtual mapping from vmap_pfn().  The netvsc driver allocates physical
>       memory via alloc_pages() with as much contiguity as possible, and maintains a
>       list of physical pages and ranges.  A single virtual mapping is set up with
>       vmap_pfn() after adding shared_gpa_boundary.  [v5 of the original patch series.]
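
In code, the second mapping in approach (1) would look roughly like the
following.  This is a non-compilable sketch: helper and field names such as
virt_to_hvpfn() and ms_hyperv.shared_gpa_boundary follow the patch series, but
the exact details may differ from what is ultimately merged.

```
/* Sketch only: build a PFN list above the VTOM boundary and create the
 * second virtual mapping with vmap_pfn().  Error handling abbreviated.
 */
void *netvsc_remap_buf(void *buf, unsigned long size)
{
	unsigned long *pfns;
	void *vaddr;
	int i;

	/* One PFN per page, offset by shared_gpa_boundary (VTOM) */
	pfns = kcalloc(size / PAGE_SIZE, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < size / PAGE_SIZE; i++)
		pfns[i] = virt_to_hvpfn(buf + i * PAGE_SIZE) +
			  (ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);

	/* Second mapping targets GPA + shared_gpa_boundary; the PFN list
	 * is temporary and can be discarded once the mapping exists.
	 */
	vaddr = vmap_pfn(pfns, size / PAGE_SIZE, PAGE_KERNEL_IO);
	kfree(pfns);

	return vaddr;
}
```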
> 
> Both implementations using DMA APIs use very little of the existing DMA machinery.  Both
> require extensions to the DMA APIs, and custom ops functions.
> While in some sense the netvsc send and receive buffers involve DMA, they do not require
> any DMA actions on a per-I/O basis.  It seems better to me to not try to fit these two
> buffers into the DMA model as a one-off.  Let's just use Hyper-V specific code to allocate
> and map them, as is done with the Hyper-V VMbus channel ring buffers.
> 
> That leaves approaches (1) and (4) above.  Between those two, (1) is simpler even though
> there are two virtual mappings.  Using alloc_pages() as in (4) is messy and there's no
> real benefit to using higher order allocations.
> (4) also requires maintaining a separate list of PFNs and ranges, which offsets some of
> the benefit of having only one virtual mapping active at any point in time.
> 
> I don't think there's a clear "right" answer, so it's a judgment call.  We've explored
> what other approaches would look like, and I'd say let's go with
> (1) as the simpler approach.  Thoughts?
> 
I agree with the following goal:
"So the primary goals are an appropriate logical abstraction, code that is
simple and straightforward, and efficient memory usage."

And Approach #1 looks better to me as well.

Thanks,
- Haiyang



end of thread, other threads:[~2021-11-25 22:00 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-23 14:30 [PATCH V2 0/6] x86/Hyper-V: Add Hyper-V Isolation VM support(Second part) Tianyu Lan
2021-11-23 14:30 ` [PATCH V2 1/6] Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM Tianyu Lan
2021-11-23 17:15   ` Michael Kelley (LINUX)
2021-11-24 14:07     ` Tianyu Lan
2021-11-23 14:30 ` [PATCH V2 2/6] dma-mapping: Add vmap/vunmap_noncontiguous() callback in dma ops Tianyu Lan
2021-11-23 14:30 ` [PATCH V2 3/6] x86/hyper-v: Add hyperv Isolation VM check in the cc_platform_has() Tianyu Lan
2021-11-23 14:30 ` [PATCH V2 4/6] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM Tianyu Lan
2021-11-23 17:44   ` Michael Kelley (LINUX)
2021-11-23 14:30 ` [PATCH V2 5/6] net: netvsc: Add Isolation VM support for netvsc driver Tianyu Lan
2021-11-23 17:55   ` Michael Kelley (LINUX)
2021-11-24 17:03   ` Michael Kelley (LINUX)
2021-11-25 21:58     ` Haiyang Zhang
2021-11-23 14:30 ` [PATCH V2 6/6] scsi: storvsc: Add Isolation VM support for storvsc driver Tianyu Lan
