* [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
@ 2019-11-22  3:04 Lu Baolu
  2019-11-22  3:04 ` [PATCH 1/5] iommu/vt-d: Extend iommu_flush for scalable mode Lu Baolu
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Lu Baolu @ 2019-11-22  3:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Intel VT-d 3.0 introduces more caches and interfaces for software to
flush when it runs in scalable mode. Currently, the various cache flush
helpers are scattered around the driver. This series consolidates them
by putting them in the existing iommu_flush structure.

/* struct iommu_flush - Intel IOMMU cache invalidation ops
 *
 * @cc_inv: invalidate context cache
 * @iotlb_inv: invalidate IOTLB and paging structure caches when software
 *             has changed second-level tables
 * @p_iotlb_inv: invalidate IOTLB and paging structure caches when software
 *               has changed first-level tables
 * @pc_inv: invalidate pasid cache
 * @dev_tlb_inv: invalidate cached mappings used by requests-without-PASID
 *               from the Device-TLB on an endpoint device
 * @p_dev_tlb_inv: invalidate cached mappings used by requests-with-PASID
 *                 from the Device-TLB on an endpoint device
 */
struct iommu_flush {
        void (*cc_inv)(struct intel_iommu *iommu, u16 did,
                       u16 sid, u8 fm, u64 type);
        void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64 addr,
                          unsigned int size_order, u64 type);
        void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
                            u64 addr, unsigned long npages, bool ih);
        void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
                       u64 granu);
        void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
                            u16 qdep, u64 addr, unsigned int mask);
        void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
                              u32 pasid, u16 qdep, u64 addr,
                              unsigned long npages);
};

The name of each cache flush op is taken from section 6.5 of the spec so
that it is easy to look the ops up there.
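
For example (lifted from patch 2 below), the PASID cache flush moves
from a private helper in intel-pasid.c into the shared structure:

	/* before: static helper, only visible in intel-pasid.c */
	pasid_cache_invalidation_with_pasid(iommu, did, pasid);

	/* after: one op in the per-IOMMU flush structure */
	iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);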

Best regards,
Lu Baolu

Lu Baolu (5):
  iommu/vt-d: Extend iommu_flush for scalable mode
  iommu/vt-d: Consolidate pasid cache invalidation
  iommu/vt-d: Consolidate device tlb invalidation
  iommu/vt-d: Consolidate pasid-based tlb invalidation
  iommu/vt-d: Consolidate pasid-based device tlb invalidation

 drivers/iommu/dmar.c        |  61 ---------
 drivers/iommu/intel-iommu.c | 246 +++++++++++++++++++++++++++++-------
 drivers/iommu/intel-pasid.c |  39 +-----
 drivers/iommu/intel-svm.c   |  60 ++-------
 include/linux/intel-iommu.h |  39 ++++--
 5 files changed, 244 insertions(+), 201 deletions(-)

-- 
2.17.1


* [PATCH 1/5] iommu/vt-d: Extend iommu_flush for scalable mode
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
@ 2019-11-22  3:04 ` Lu Baolu
  2019-11-22  3:04 ` [PATCH 2/5] iommu/vt-d: Consolidate pasid cache invalidation Lu Baolu
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-11-22  3:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Intel VT-d 3.0 introduces more cache flush interfaces when it
runs in scalable mode. Currently the various cache flush helpers
are scattered around the driver. This consolidates them by putting
them in the existing iommu_flush structure. The name of each cache
flush operation is taken from the spec (section 6.5) so that it is
easy to look up there.
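
With this, intel_iommu_init_qi() installs either backend behind the
same ops. A simplified sketch of the result (the selection condition
is paraphrased; the exact code is in the hunk below):

	if (!iommu->qi) {
		/* Queued invalidation unavailable: register based */
		iommu->flush.cc_inv = __iommu_flush_context;
		iommu->flush.iotlb_inv = __iommu_flush_iotlb;
	} else {
		/* Queued invalidation */
		iommu->flush.cc_inv = qi_flush_context;
		iommu->flush.iotlb_inv = qi_flush_iotlb;
	}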

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/dmar.c        |  38 ------------
 drivers/iommu/intel-iommu.c | 118 ++++++++++++++++++++++--------------
 include/linux/intel-iommu.h |  34 ++++++++---
 3 files changed, 98 insertions(+), 92 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index eecd6a421667..4b6090493f6d 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1307,44 +1307,6 @@ void qi_global_iec(struct intel_iommu *iommu)
 	qi_submit_sync(&desc, iommu);
 }
 
-void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
-		      u64 type)
-{
-	struct qi_desc desc;
-
-	desc.qw0 = QI_CC_FM(fm) | QI_CC_SID(sid) | QI_CC_DID(did)
-			| QI_CC_GRAN(type) | QI_CC_TYPE;
-	desc.qw1 = 0;
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-
-	qi_submit_sync(&desc, iommu);
-}
-
-void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
-		    unsigned int size_order, u64 type)
-{
-	u8 dw = 0, dr = 0;
-
-	struct qi_desc desc;
-	int ih = 0;
-
-	if (cap_write_drain(iommu->cap))
-		dw = 1;
-
-	if (cap_read_drain(iommu->cap))
-		dr = 1;
-
-	desc.qw0 = QI_IOTLB_DID(did) | QI_IOTLB_DR(dr) | QI_IOTLB_DW(dw)
-		| QI_IOTLB_GRAN(type) | QI_IOTLB_TYPE;
-	desc.qw1 = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(ih)
-		| QI_IOTLB_AM(size_order);
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-
-	qi_submit_sync(&desc, iommu);
-}
-
 void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 			u16 qdep, u64 addr, unsigned mask)
 {
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index b9d11f2e3194..59e4130161eb 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1503,11 +1503,10 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 	 * aligned to the size
 	 */
 	if (!cap_pgsel_inv(iommu->cap) || mask > cap_max_amask_val(iommu->cap))
-		iommu->flush.flush_iotlb(iommu, did, 0, 0,
-						DMA_TLB_DSI_FLUSH);
+		iommu->flush.iotlb_inv(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 	else
-		iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
-						DMA_TLB_PSI_FLUSH);
+		iommu->flush.iotlb_inv(iommu, did, addr | ih,
+				       mask, DMA_TLB_PSI_FLUSH);
 
 	/*
 	 * In caching mode, changes of pages from non-present to present require
@@ -1540,7 +1539,7 @@ static void iommu_flush_iova(struct iova_domain *iovad)
 		struct intel_iommu *iommu = g_iommus[idx];
 		u16 did = domain->iommu_did[iommu->seq_id];
 
-		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+		iommu->flush.iotlb_inv(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 
 		if (!cap_caching_mode(iommu->cap))
 			iommu_flush_dev_iotlb(get_iommu_domain(iommu, did),
@@ -2017,12 +2016,12 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
 		u16 did_old = context_domain_id(context);
 
 		if (did_old < cap_ndoms(iommu->cap)) {
-			iommu->flush.flush_context(iommu, did_old,
-						   (((u16)bus) << 8) | devfn,
-						   DMA_CCMD_MASK_NOBIT,
-						   DMA_CCMD_DEVICE_INVL);
-			iommu->flush.flush_iotlb(iommu, did_old, 0, 0,
-						 DMA_TLB_DSI_FLUSH);
+			iommu->flush.cc_inv(iommu, did_old,
+					    (((u16)bus) << 8) | devfn,
+					    DMA_CCMD_MASK_NOBIT,
+					    DMA_CCMD_DEVICE_INVL);
+			iommu->flush.iotlb_inv(iommu, did_old, 0, 0,
+					       DMA_TLB_DSI_FLUSH);
 		}
 	}
 
@@ -2099,11 +2098,11 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
 	 * domain #0, which we have to flush:
 	 */
 	if (cap_caching_mode(iommu->cap)) {
-		iommu->flush.flush_context(iommu, 0,
-					   (((u16)bus) << 8) | devfn,
-					   DMA_CCMD_MASK_NOBIT,
-					   DMA_CCMD_DEVICE_INVL);
-		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+		iommu->flush.cc_inv(iommu, 0,
+				    (((u16)bus) << 8) | devfn,
+				    DMA_CCMD_MASK_NOBIT,
+				    DMA_CCMD_DEVICE_INVL);
+		iommu->flush.iotlb_inv(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 	} else {
 		iommu_flush_write_buffer(iommu);
 	}
@@ -2388,16 +2387,9 @@ static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn
 	context_clear_entry(context);
 	__iommu_flush_cache(iommu, context, sizeof(*context));
 	spin_unlock_irqrestore(&iommu->lock, flags);
-	iommu->flush.flush_context(iommu,
-				   did_old,
-				   (((u16)bus) << 8) | devfn,
-				   DMA_CCMD_MASK_NOBIT,
-				   DMA_CCMD_DEVICE_INVL);
-	iommu->flush.flush_iotlb(iommu,
-				 did_old,
-				 0,
-				 0,
-				 DMA_TLB_DSI_FLUSH);
+	iommu->flush.cc_inv(iommu, did_old, (((u16)bus) << 8) | devfn,
+			    DMA_CCMD_MASK_NOBIT, DMA_CCMD_DEVICE_INVL);
+	iommu->flush.iotlb_inv(iommu, did_old, 0, 0, DMA_TLB_DSI_FLUSH);
 }
 
 static inline void unlink_domain_info(struct device_domain_info *info)
@@ -2963,6 +2955,45 @@ static int device_def_domain_type(struct device *dev)
 			IOMMU_DOMAIN_IDENTITY : 0;
 }
 
+static void
+qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm, u64 type)
+{
+	struct qi_desc desc;
+
+	desc.qw0 = QI_CC_FM(fm) | QI_CC_SID(sid) | QI_CC_DID(did)
+			| QI_CC_GRAN(type) | QI_CC_TYPE;
+	desc.qw1 = 0;
+	desc.qw2 = 0;
+	desc.qw3 = 0;
+
+	qi_submit_sync(&desc, iommu);
+}
+
+static void
+qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+	       unsigned int size_order, u64 type)
+{
+	u8 dw = 0, dr = 0;
+
+	struct qi_desc desc;
+	int ih = 0;
+
+	if (cap_write_drain(iommu->cap))
+		dw = 1;
+
+	if (cap_read_drain(iommu->cap))
+		dr = 1;
+
+	desc.qw0 = QI_IOTLB_DID(did) | QI_IOTLB_DR(dr) | QI_IOTLB_DW(dw)
+		| QI_IOTLB_GRAN(type) | QI_IOTLB_TYPE;
+	desc.qw1 = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(ih)
+		| QI_IOTLB_AM(size_order);
+	desc.qw2 = 0;
+	desc.qw3 = 0;
+
+	qi_submit_sync(&desc, iommu);
+}
+
 static void intel_iommu_init_qi(struct intel_iommu *iommu)
 {
 	/*
@@ -2987,13 +3018,13 @@ static void intel_iommu_init_qi(struct intel_iommu *iommu)
 		/*
 		 * Queued Invalidate not enabled, use Register Based Invalidate
 		 */
-		iommu->flush.flush_context = __iommu_flush_context;
-		iommu->flush.flush_iotlb = __iommu_flush_iotlb;
+		iommu->flush.cc_inv = __iommu_flush_context;
+		iommu->flush.iotlb_inv = __iommu_flush_iotlb;
 		pr_info("%s: Using Register based invalidation\n",
 			iommu->name);
 	} else {
-		iommu->flush.flush_context = qi_flush_context;
-		iommu->flush.flush_iotlb = qi_flush_iotlb;
+		iommu->flush.cc_inv = qi_flush_context;
+		iommu->flush.iotlb_inv = qi_flush_iotlb;
 		pr_info("%s: Using Queued invalidation\n", iommu->name);
 	}
 }
@@ -3300,8 +3331,8 @@ static int __init init_dmars(void)
 	for_each_active_iommu(iommu, drhd) {
 		iommu_flush_write_buffer(iommu);
 		iommu_set_root_entry(iommu);
-		iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
-		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+		iommu->flush.cc_inv(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+		iommu->flush.iotlb_inv(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
 	}
 
 	if (iommu_default_passthrough())
@@ -4196,9 +4227,8 @@ static int init_iommu_hw(void)
 
 		iommu_set_root_entry(iommu);
 
-		iommu->flush.flush_context(iommu, 0, 0, 0,
-					   DMA_CCMD_GLOBAL_INVL);
-		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+		iommu->flush.cc_inv(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+		iommu->flush.iotlb_inv(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
 		iommu_enable_translation(iommu);
 		iommu_disable_protect_mem_regions(iommu);
 	}
@@ -4212,10 +4242,8 @@ static void iommu_flush_all(void)
 	struct intel_iommu *iommu;
 
 	for_each_active_iommu(iommu, drhd) {
-		iommu->flush.flush_context(iommu, 0, 0, 0,
-					   DMA_CCMD_GLOBAL_INVL);
-		iommu->flush.flush_iotlb(iommu, 0, 0, 0,
-					 DMA_TLB_GLOBAL_FLUSH);
+		iommu->flush.cc_inv(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+		iommu->flush.iotlb_inv(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
 	}
 }
 
@@ -4502,8 +4530,8 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
 		goto disable_iommu;
 
 	iommu_set_root_entry(iommu);
-	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
-	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+	iommu->flush.cc_inv(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+	iommu->flush.iotlb_inv(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
 	iommu_enable_translation(iommu);
 
 	iommu_disable_protect_mem_regions(iommu);
@@ -5756,11 +5784,9 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
 		ctx_lo |= CONTEXT_PASIDE;
 		context[0].lo = ctx_lo;
 		wmb();
-		iommu->flush.flush_context(iommu,
-					   domain->iommu_did[iommu->seq_id],
-					   PCI_DEVID(info->bus, info->devfn),
-					   DMA_CCMD_MASK_NOBIT,
-					   DMA_CCMD_DEVICE_INVL);
+		iommu->flush.cc_inv(iommu, domain->iommu_did[iommu->seq_id],
+				    PCI_DEVID(info->bus, info->devfn),
+				    DMA_CCMD_MASK_NOBIT, DMA_CCMD_DEVICE_INVL);
 	}
 
 	/* Enable PASID support in the device, if it wasn't already */
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index aaece25c055f..ac725a4ce1c1 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -418,11 +418,33 @@ struct ir_table {
 };
 #endif
 
+/* struct iommu_flush - Intel IOMMU cache invalidation ops
+ *
+ * @cc_inv: invalidate context cache
+ * @iotlb_inv: invalidate IOTLB and paging structure caches when software
+ *             has changed second-level tables
+ * @p_iotlb_inv: invalidate IOTLB and paging structure caches when software
+ *               has changed first-level tables
+ * @pc_inv: invalidate pasid cache
+ * @dev_tlb_inv: invalidate cached mappings used by requests-without-PASID
+ *               from the Device-TLB on an endpoint device
+ * @p_dev_tlb_inv: invalidate cached mappings used by requests-with-PASID
+ *                 from the Device-TLB on an endpoint device
+ */
 struct iommu_flush {
-	void (*flush_context)(struct intel_iommu *iommu, u16 did, u16 sid,
-			      u8 fm, u64 type);
-	void (*flush_iotlb)(struct intel_iommu *iommu, u16 did, u64 addr,
-			    unsigned int size_order, u64 type);
+	void (*cc_inv)(struct intel_iommu *iommu, u16 did,
+		       u16 sid, u8 fm, u64 type);
+	void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64 addr,
+			  unsigned int size_order, u64 type);
+	void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
+			    u64 addr, unsigned long npages, bool ih);
+	void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
+		       u64 granu);
+	void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+			    u16 qdep, u64 addr, unsigned int mask);
+	void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+			      u32 pasid, u16 qdep, u64 addr,
+			      unsigned long npages);
 };
 
 enum {
@@ -640,10 +662,6 @@ extern void dmar_disable_qi(struct intel_iommu *iommu);
 extern int dmar_reenable_qi(struct intel_iommu *iommu);
 extern void qi_global_iec(struct intel_iommu *iommu);
 
-extern void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid,
-			     u8 fm, u64 type);
-extern void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
-			  unsigned int size_order, u64 type);
 extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 			u16 qdep, u64 addr, unsigned mask);
 extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
-- 
2.17.1


* [PATCH 2/5] iommu/vt-d: Consolidate pasid cache invalidation
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
  2019-11-22  3:04 ` [PATCH 1/5] iommu/vt-d: Extend iommu_flush for scalable mode Lu Baolu
@ 2019-11-22  3:04 ` Lu Baolu
  2019-11-22  3:04 ` [PATCH 3/5] iommu/vt-d: Consolidate device tlb invalidation Lu Baolu
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-11-22  3:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Merge pasid cache invalidation into iommu->flush.pc_inv.
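
The QI_PC_GRAN_* values added below encode the granularities of the
PASID-cache invalidate descriptor. A hedged sketch of the three forms
(did and pasid are placeholders):

	/* all PASIDs of one domain */
	iommu->flush.pc_inv(iommu, did, 0, QI_PC_GRAN_DS);

	/* one PASID within one domain, as the callers below do */
	iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);

	/* every PASID-cache entry */
	iommu->flush.pc_inv(iommu, 0, 0, QI_PC_GRAN_GLOBAL);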

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 13 +++++++++++++
 drivers/iommu/intel-pasid.c | 18 ++----------------
 include/linux/intel-iommu.h |  3 +++
 3 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 59e4130161eb..283382584453 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2994,6 +2994,18 @@ qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
 	qi_submit_sync(&desc, iommu);
 }
 
+/* PASID cache invalidation */
+static void
+qi_flush_pasid(struct intel_iommu *iommu, u16 did, u32 pasid, u64 granu)
+{
+	struct qi_desc desc = {.qw1 = 0, .qw2 = 0, .qw3 = 0};
+
+	desc.qw0 = QI_PC_PASID(pasid) | QI_PC_DID(did) |
+			QI_PC_GRAN(granu) | QI_PC_TYPE;
+
+	qi_submit_sync(&desc, iommu);
+}
+
 static void intel_iommu_init_qi(struct intel_iommu *iommu)
 {
 	/*
@@ -3025,6 +3037,7 @@ static void intel_iommu_init_qi(struct intel_iommu *iommu)
 	} else {
 		iommu->flush.cc_inv = qi_flush_context;
 		iommu->flush.iotlb_inv = qi_flush_iotlb;
+		iommu->flush.pc_inv = qi_flush_pasid;
 		pr_info("%s: Using Queued invalidation\n", iommu->name);
 	}
 }
diff --git a/drivers/iommu/intel-pasid.c b/drivers/iommu/intel-pasid.c
index 3cb569e76642..dd736f673603 100644
--- a/drivers/iommu/intel-pasid.c
+++ b/drivers/iommu/intel-pasid.c
@@ -359,20 +359,6 @@ pasid_set_flpm(struct pasid_entry *pe, u64 value)
 	pasid_set_bits(&pe->val[2], GENMASK_ULL(3, 2), value << 2);
 }
 
-static void
-pasid_cache_invalidation_with_pasid(struct intel_iommu *iommu,
-				    u16 did, int pasid)
-{
-	struct qi_desc desc;
-
-	desc.qw0 = QI_PC_DID(did) | QI_PC_PASID_SEL | QI_PC_PASID(pasid);
-	desc.qw1 = 0;
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-
-	qi_submit_sync(&desc, iommu);
-}
-
 static void
 iotlb_invalidation_with_pasid(struct intel_iommu *iommu, u16 did, u32 pasid)
 {
@@ -421,7 +407,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
 	if (!ecap_coherent(iommu->ecap))
 		clflush_cache_range(pte, sizeof(*pte));
 
-	pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+	iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);
 	iotlb_invalidation_with_pasid(iommu, did, pasid);
 
 	/* Device IOTLB doesn't need to be flushed in caching mode. */
@@ -437,7 +423,7 @@ static void pasid_flush_caches(struct intel_iommu *iommu,
 		clflush_cache_range(pte, sizeof(*pte));
 
 	if (cap_caching_mode(iommu->cap)) {
-		pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+		iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);
 		iotlb_invalidation_with_pasid(iommu, did, pasid);
 	} else {
 		iommu_flush_write_buffer(iommu);
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index ac725a4ce1c1..c32ff2a7d958 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -344,6 +344,9 @@ enum {
 #define QI_PC_PASID(pasid)	(((u64)pasid) << 32)
 #define QI_PC_DID(did)		(((u64)did) << 16)
 #define QI_PC_GRAN(gran)	(((u64)gran) << 4)
+#define QI_PC_GRAN_DS		0
+#define QI_PC_GRAN_PSWD		1
+#define QI_PC_GRAN_GLOBAL	3
 
 #define QI_PC_ALL_PASIDS	(QI_PC_TYPE | QI_PC_GRAN(0))
 #define QI_PC_PASID_SEL		(QI_PC_TYPE | QI_PC_GRAN(1))
-- 
2.17.1


* [PATCH 3/5] iommu/vt-d: Consolidate device tlb invalidation
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
  2019-11-22  3:04 ` [PATCH 1/5] iommu/vt-d: Extend iommu_flush for scalable mode Lu Baolu
  2019-11-22  3:04 ` [PATCH 2/5] iommu/vt-d: Consolidate pasid cache invalidation Lu Baolu
@ 2019-11-22  3:04 ` Lu Baolu
  2019-11-22  3:04 ` [PATCH 4/5] iommu/vt-d: Consolidate pasid-based " Lu Baolu
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-11-22  3:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Merge device tlb invalidation into iommu->flush.dev_tlb_inv.
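
While moving it, note how qi_flush_dev_iotlb() encodes the invalidation
size (a hedged illustration with made-up numbers): with 4KiB pages
(VTD_PAGE_SHIFT == 12) and mask == 2, i.e. four pages, a page-aligned
addr of 0x10000 becomes

	addr |= (1ULL << (VTD_PAGE_SHIFT + 2 - 1)) - 1;	/* 0x11fff */

The lowest zero bit of 0x11fff above the page offset is bit 13, which
tells the device to invalidate the aligned 2^14 == 16KiB range
0x10000-0x13fff.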

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/dmar.c        | 23 -----------------------
 drivers/iommu/intel-iommu.c | 31 +++++++++++++++++++++++++++++--
 drivers/iommu/intel-pasid.c |  3 ++-
 include/linux/intel-iommu.h |  2 --
 4 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 4b6090493f6d..8e26a36369ec 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1307,29 +1307,6 @@ void qi_global_iec(struct intel_iommu *iommu)
 	qi_submit_sync(&desc, iommu);
 }
 
-void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
-			u16 qdep, u64 addr, unsigned mask)
-{
-	struct qi_desc desc;
-
-	if (mask) {
-		WARN_ON_ONCE(addr & ((1ULL << (VTD_PAGE_SHIFT + mask)) - 1));
-		addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1;
-		desc.qw1 = QI_DEV_IOTLB_ADDR(addr) | QI_DEV_IOTLB_SIZE;
-	} else
-		desc.qw1 = QI_DEV_IOTLB_ADDR(addr);
-
-	if (qdep >= QI_DEV_IOTLB_MAX_INVS)
-		qdep = 0;
-
-	desc.qw0 = QI_DEV_IOTLB_SID(sid) | QI_DEV_IOTLB_QDEP(qdep) |
-		   QI_DIOTLB_TYPE | QI_DEV_IOTLB_PFSID(pfsid);
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-
-	qi_submit_sync(&desc, iommu);
-}
-
 /*
  * Disable Queued Invalidation interface.
  */
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 283382584453..4eeb18942d3c 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1465,6 +1465,7 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
 {
 	u16 sid, qdep;
 	unsigned long flags;
+	struct intel_iommu *iommu;
 	struct device_domain_info *info;
 
 	if (!domain->has_iotlb_device)
@@ -1477,8 +1478,9 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
 
 		sid = info->bus << 8 | info->devfn;
 		qdep = info->ats_qdep;
-		qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
-				qdep, addr, mask);
+		iommu = info->iommu;
+		iommu->flush.dev_tlb_inv(iommu, sid, info->pfsid,
+					 qdep, addr, mask);
 	}
 	spin_unlock_irqrestore(&device_domain_lock, flags);
 }
@@ -3006,6 +3008,30 @@ qi_flush_pasid(struct intel_iommu *iommu, u16 did, u32 pasid, u64 granu)
 	qi_submit_sync(&desc, iommu);
 }
 
+/* Device TLB invalidation */
+static void
+qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+		   u16 qdep, u64 addr, unsigned int mask)
+{
+	struct qi_desc desc = {.qw2 = 0, .qw3 = 0};
+
+	if (mask) {
+		WARN_ON_ONCE(addr & ((1ULL << (VTD_PAGE_SHIFT + mask)) - 1));
+		addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1;
+		desc.qw1 = QI_DEV_IOTLB_ADDR(addr) | QI_DEV_IOTLB_SIZE;
+	} else {
+		desc.qw1 = QI_DEV_IOTLB_ADDR(addr);
+	}
+
+	if (qdep >= QI_DEV_IOTLB_MAX_INVS)
+		qdep = 0;
+
+	desc.qw0 = QI_DEV_IOTLB_SID(sid) | QI_DEV_IOTLB_QDEP(qdep) |
+		   QI_DIOTLB_TYPE | QI_DEV_IOTLB_PFSID(pfsid);
+
+	qi_submit_sync(&desc, iommu);
+}
+
 static void intel_iommu_init_qi(struct intel_iommu *iommu)
 {
 	/*
@@ -3038,6 +3064,7 @@ static void intel_iommu_init_qi(struct intel_iommu *iommu)
 		iommu->flush.cc_inv = qi_flush_context;
 		iommu->flush.iotlb_inv = qi_flush_iotlb;
 		iommu->flush.pc_inv = qi_flush_pasid;
+		iommu->flush.dev_tlb_inv = qi_flush_dev_iotlb;
 		pr_info("%s: Using Queued invalidation\n", iommu->name);
 	}
 }
diff --git a/drivers/iommu/intel-pasid.c b/drivers/iommu/intel-pasid.c
index dd736f673603..01dd9c86178b 100644
--- a/drivers/iommu/intel-pasid.c
+++ b/drivers/iommu/intel-pasid.c
@@ -388,7 +388,8 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
 	qdep = info->ats_qdep;
 	pfsid = info->pfsid;
 
-	qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
+	iommu->flush.dev_tlb_inv(iommu, sid, pfsid, qdep,
+				 0, 64 - VTD_PAGE_SHIFT);
 }
 
 void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index c32ff2a7d958..326146a36dbf 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -665,8 +665,6 @@ extern void dmar_disable_qi(struct intel_iommu *iommu);
 extern int dmar_reenable_qi(struct intel_iommu *iommu);
 extern void qi_global_iec(struct intel_iommu *iommu);
 
-extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
-			u16 qdep, u64 addr, unsigned mask);
 extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
 
 extern int dmar_ir_support(void);
-- 
2.17.1


* [PATCH 4/5] iommu/vt-d: Consolidate pasid-based tlb invalidation
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
                   ` (2 preceding siblings ...)
  2019-11-22  3:04 ` [PATCH 3/5] iommu/vt-d: Consolidate device tlb invalidation Lu Baolu
@ 2019-11-22  3:04 ` Lu Baolu
  2019-12-03 17:43   ` Jacob Pan
  2019-11-22  3:04 ` [PATCH 5/5] iommu/vt-d: Consolidate pasid-based device " Lu Baolu
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Lu Baolu @ 2019-11-22  3:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Merge pasid-based tlb invalidation into iommu->flush.p_iotlb_inv.
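
The npages convention of the new op: -1 means PASID-selective, a
positive count means page-selective within the PASID. Hedged call
examples (did, pasid and addr are placeholders):

	/* drop everything cached for this PASID */
	iommu->flush.p_iotlb_inv(iommu, did, pasid, 0, -1, false);

	/* drop one page, with the invalidation hint (ih) set */
	iommu->flush.p_iotlb_inv(iommu, did, pasid, addr, 1, true);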

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 43 +++++++++++++++++++++++++++++++++++++
 drivers/iommu/intel-pasid.c | 18 ++--------------
 drivers/iommu/intel-svm.c   | 23 +++-----------------
 3 files changed, 48 insertions(+), 36 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 4eeb18942d3c..fec78cc877c1 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3032,6 +3032,48 @@ qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 	qi_submit_sync(&desc, iommu);
 }
 
+/* PASID-based IOTLB invalidation */
+static void
+qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
+		unsigned long npages, bool ih)
+{
+	struct qi_desc desc = {.qw2 = 0, .qw3 = 0};
+
+	/*
+	 * npages == -1 means a PASID-selective invalidation, otherwise,
+	 * a positive value for Page-selective-within-PASID invalidation.
+	 * 0 is not a valid input.
+	 */
+	if (WARN_ON(!npages)) {
+		pr_err("Invalid input npages = %ld\n", npages);
+		return;
+	}
+
+	if (npages == -1) {
+		desc.qw0 = QI_EIOTLB_PASID(pasid) |
+				QI_EIOTLB_DID(did) |
+				QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
+				QI_EIOTLB_TYPE;
+		desc.qw1 = 0;
+	} else {
+		int mask = ilog2(__roundup_pow_of_two(npages));
+		unsigned long align = (1ULL << (VTD_PAGE_SHIFT + mask));
+
+		if (WARN_ON_ONCE(!IS_ALIGNED(addr, align)))
+			addr &= ~(align - 1);
+
+		desc.qw0 = QI_EIOTLB_PASID(pasid) |
+				QI_EIOTLB_DID(did) |
+				QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
+				QI_EIOTLB_TYPE;
+		desc.qw1 = QI_EIOTLB_ADDR(addr) |
+				QI_EIOTLB_IH(ih) |
+				QI_EIOTLB_AM(mask);
+	}
+
+	qi_submit_sync(&desc, iommu);
+}
+
 static void intel_iommu_init_qi(struct intel_iommu *iommu)
 {
 	/*
@@ -3065,6 +3107,7 @@ static void intel_iommu_init_qi(struct intel_iommu *iommu)
 		iommu->flush.iotlb_inv = qi_flush_iotlb;
 		iommu->flush.pc_inv = qi_flush_pasid;
 		iommu->flush.dev_tlb_inv = qi_flush_dev_iotlb;
+		iommu->flush.p_iotlb_inv = qi_flush_piotlb;
 		pr_info("%s: Using Queued invalidation\n", iommu->name);
 	}
 }
diff --git a/drivers/iommu/intel-pasid.c b/drivers/iommu/intel-pasid.c
index 01dd9c86178b..78ff4eee8595 100644
--- a/drivers/iommu/intel-pasid.c
+++ b/drivers/iommu/intel-pasid.c
@@ -359,20 +359,6 @@ pasid_set_flpm(struct pasid_entry *pe, u64 value)
 	pasid_set_bits(&pe->val[2], GENMASK_ULL(3, 2), value << 2);
 }
 
-static void
-iotlb_invalidation_with_pasid(struct intel_iommu *iommu, u16 did, u32 pasid)
-{
-	struct qi_desc desc;
-
-	desc.qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) |
-			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | QI_EIOTLB_TYPE;
-	desc.qw1 = 0;
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-
-	qi_submit_sync(&desc, iommu);
-}
-
 static void
 devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
 			       struct device *dev, int pasid)
@@ -409,7 +395,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
 		clflush_cache_range(pte, sizeof(*pte));
 
 	iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);
-	iotlb_invalidation_with_pasid(iommu, did, pasid);
+	iommu->flush.p_iotlb_inv(iommu, did, pasid, 0, -1, 0);
 
 	/* Device IOTLB doesn't need to be flushed in caching mode. */
 	if (!cap_caching_mode(iommu->cap))
@@ -425,7 +411,7 @@ static void pasid_flush_caches(struct intel_iommu *iommu,
 
 	if (cap_caching_mode(iommu->cap)) {
 		iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);
-		iotlb_invalidation_with_pasid(iommu, did, pasid);
+		iommu->flush.p_iotlb_inv(iommu, did, pasid, 0, -1, 0);
 	} else {
 		iommu_flush_write_buffer(iommu);
 	}
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index f5594b9981a5..02c6b14f0568 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -118,27 +118,10 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
 				unsigned long address, unsigned long pages, int ih)
 {
 	struct qi_desc desc;
+	struct intel_iommu *iommu = svm->iommu;
 
-	if (pages == -1) {
-		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-			QI_EIOTLB_DID(sdev->did) |
-			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
-			QI_EIOTLB_TYPE;
-		desc.qw1 = 0;
-	} else {
-		int mask = ilog2(__roundup_pow_of_two(pages));
-
-		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-				QI_EIOTLB_DID(sdev->did) |
-				QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
-				QI_EIOTLB_TYPE;
-		desc.qw1 = QI_EIOTLB_ADDR(address) |
-				QI_EIOTLB_IH(ih) |
-				QI_EIOTLB_AM(mask);
-	}
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-	qi_submit_sync(&desc, svm->iommu);
+	iommu->flush.p_iotlb_inv(iommu, sdev->did,
+				 svm->pasid, address, pages, ih);
 
 	if (sdev->dev_iotlb) {
 		desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) |
-- 
2.17.1


* [PATCH 5/5] iommu/vt-d: Consolidate pasid-based device tlb invalidation
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
                   ` (3 preceding siblings ...)
  2019-11-22  3:04 ` [PATCH 4/5] iommu/vt-d: Consolidate pasid-based " Lu Baolu
@ 2019-11-22  3:04 ` Lu Baolu
  2019-12-02 20:02 ` [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Jacob Pan
  2019-12-03  8:49 ` David Woodhouse
  6 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-11-22  3:04 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Merge pasid-based device tlb invalidation into iommu->flush.p_dev_tlb_inv.
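
The size encoding in the helper deserves a worked example (hedged,
made-up values): flushing npages == 3 at address == 0xa000 gives

	last = 0xa000 + 2 * 0x1000;			/* 0xc000 */
	mask = __rounddown_pow_of_two(0xa000 ^ 0xc000);	/* 0x4000 */
	addr = (0xa000 & ~0x4000) | (0x4000 - 1);	/* 0xbfff */

The lowest zero bit of 0xbfff above the page offset is bit 14, so the
device invalidates the aligned 2^15 == 32KiB window 0x8000-0xffff,
which covers (and over-covers) the requested three pages.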

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 41 +++++++++++++++++++++++++++++++++++++
 drivers/iommu/intel-svm.c   | 33 ++++++-----------------------
 2 files changed, 47 insertions(+), 27 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index fec78cc877c1..dd16d466320f 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3074,6 +3074,46 @@ qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
 	qi_submit_sync(&desc, iommu);
 }
 
+/* PASID-based device TLB invalidation */
+static void
+qi_flush_dev_piotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
+		    u32 pasid, u16 qdep, u64 address, unsigned long npages)
+{
+	struct qi_desc desc = {.qw2 = 0, .qw3 = 0};
+
+	desc.qw0 = QI_DEV_EIOTLB_PASID(pasid) | QI_DEV_EIOTLB_SID(sid) |
+			QI_DEV_EIOTLB_QDEP(qdep) | QI_DEIOTLB_TYPE |
+			QI_DEV_IOTLB_PFSID(pfsid);
+
+	/*
+	 * npages == -1 means a PASID-selective invalidation, otherwise,
+	 * a positive value for Page-selective-within-PASID invalidation.
+	 * 0 is not a valid input.
+	 */
+	if (WARN_ON(!npages)) {
+		pr_err("Invalid input npages = %ld\n", npages);
+		return;
+	}
+
+	if (npages == -1) {
+		desc.qw1 = QI_DEV_EIOTLB_ADDR(((u64)-1) >> 1) |
+				QI_DEV_EIOTLB_SIZE;
+	} else if (npages > 1) {
+		/* The least significant zero bit indicates the size. So,
+		 * for example, an "address" value of 0x12345f000 will
+		 * flush from 0x123440000 to 0x12347ffff (256KiB). */
+		unsigned long last = address + ((unsigned long)(npages - 1) << VTD_PAGE_SHIFT);
+		unsigned long mask = __rounddown_pow_of_two(address ^ last);
+
+		desc.qw1 = QI_DEV_EIOTLB_ADDR((address & ~mask) |
+				(mask - 1)) | QI_DEV_EIOTLB_SIZE;
+	} else {
+		desc.qw1 = QI_DEV_EIOTLB_ADDR(address);
+	}
+
+	qi_submit_sync(&desc, iommu);
+}
+
 static void intel_iommu_init_qi(struct intel_iommu *iommu)
 {
 	/*
@@ -3108,6 +3148,7 @@ static void intel_iommu_init_qi(struct intel_iommu *iommu)
 		iommu->flush.pc_inv = qi_flush_pasid;
 		iommu->flush.dev_tlb_inv = qi_flush_dev_iotlb;
 		iommu->flush.p_iotlb_inv = qi_flush_piotlb;
+		iommu->flush.p_dev_tlb_inv = qi_flush_dev_piotlb;
 		pr_info("%s: Using Queued invalidation\n", iommu->name);
 	}
 }
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index 02c6b14f0568..b6b22989eb46 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -114,39 +114,18 @@ void intel_svm_check(struct intel_iommu *iommu)
 	iommu->flags |= VTD_FLAG_SVM_CAPABLE;
 }
 
-static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev,
-				unsigned long address, unsigned long pages, int ih)
+static void
+intel_flush_svm_range_dev(struct intel_svm *svm, struct intel_svm_dev *sdev,
+			  unsigned long address, unsigned long pages, int ih)
 {
-	struct qi_desc desc;
 	struct intel_iommu *iommu = svm->iommu;
 
 	iommu->flush.p_iotlb_inv(iommu, sdev->did,
 				 svm->pasid, address, pages, ih);
 
-	if (sdev->dev_iotlb) {
-		desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) |
-				QI_DEV_EIOTLB_SID(sdev->sid) |
-				QI_DEV_EIOTLB_QDEP(sdev->qdep) |
-				QI_DEIOTLB_TYPE;
-		if (pages == -1) {
-			desc.qw1 = QI_DEV_EIOTLB_ADDR(-1ULL >> 1) |
-					QI_DEV_EIOTLB_SIZE;
-		} else if (pages > 1) {
-			/* The least significant zero bit indicates the size. So,
-			 * for example, an "address" value of 0x12345f000 will
-			 * flush from 0x123440000 to 0x12347ffff (256KiB). */
-			unsigned long last = address + ((unsigned long)(pages - 1) << VTD_PAGE_SHIFT);
-			unsigned long mask = __rounddown_pow_of_two(address ^ last);
-
-			desc.qw1 = QI_DEV_EIOTLB_ADDR((address & ~mask) |
-					(mask - 1)) | QI_DEV_EIOTLB_SIZE;
-		} else {
-			desc.qw1 = QI_DEV_EIOTLB_ADDR(address);
-		}
-		desc.qw2 = 0;
-		desc.qw3 = 0;
-		qi_submit_sync(&desc, svm->iommu);
-	}
+	if (sdev->dev_iotlb)
+		iommu->flush.p_dev_tlb_inv(iommu, sdev->sid, 0, svm->pasid,
+					   sdev->qdep, address, pages);
 }
 
 static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
-- 
2.17.1


* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
                   ` (4 preceding siblings ...)
  2019-11-22  3:04 ` [PATCH 5/5] iommu/vt-d: Consolidate pasid-based device " Lu Baolu
@ 2019-12-02 20:02 ` Jacob Pan
  2019-12-03  2:44   ` Lu Baolu
  2019-12-03  8:49 ` David Woodhouse
  6 siblings, 1 reply; 14+ messages in thread
From: Jacob Pan @ 2019-12-02 20:02 UTC (permalink / raw)
  To: Lu Baolu; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

On Fri, 22 Nov 2019 11:04:44 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:

> Intel VT-d 3.0 introduces more caches and interfaces for software to
> flush when it runs in the scalable mode. Currently various cache flush
> helpers are scattered around. This consolidates them by putting them
> in the existing iommu_flush structure.
> 
> /* struct iommu_flush - Intel IOMMU cache invalidation ops
>  *
>  * @cc_inv: invalidate context cache
>  * @iotlb_inv: Invalidate IOTLB and paging structure caches when
> software
>  *             has changed second-level tables.
>  * @p_iotlb_inv: Invalidate IOTLB and paging structure caches when
> software
>  *               has changed first-level tables.
>  * @pc_inv: invalidate pasid cache
>  * @dev_tlb_inv: invalidate cached mappings used by
> requests-without-PASID
>  *               from the Device-TLB on a endpoint device.
>  * @p_dev_tlb_inv: invalidate cached mappings used by
> requests-with-PASID
>  *                 from the Device-TLB on an endpoint device
>  */
> struct iommu_flush {
>         void (*cc_inv)(struct intel_iommu *iommu, u16 did,
>                        u16 sid, u8 fm, u64 type);
>         void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64
> addr, unsigned int size_order, u64 type);
>         void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32
> pasid, u64 addr, unsigned long npages, bool ih);
>         void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
>                        u64 granu);
>         void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16
> pfsid, u16 qdep, u64 addr, unsigned int mask);
>         void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16
> pfsid, u32 pasid, u16 qdep, u64 addr,
>                               unsigned long npages);
> };
> 
> The name of each cache flush ops is defined according to the spec
> section 6.5 so that people are easy to look up them in the spec.
> 
Nice consolidation. For nested SVM, I also introduced cache flush
helpers as needed.
https://lkml.org/lkml/2019/10/24/857

Should I wait for yours to be merged, or do you want to extend this
consolidation after the SVA/SVM cache flush work? I expect to send my v8
shortly.

> Best regards,
> Lu Baolu
> 
> Lu Baolu (5):
>   iommu/vt-d: Extend iommu_flush for scalable mode
>   iommu/vt-d: Consolidate pasid cache invalidation
>   iommu/vt-d: Consolidate device tlb invalidation
>   iommu/vt-d: Consolidate pasid-based tlb invalidation
>   iommu/vt-d: Consolidate pasid-based device tlb invalidation
> 
>  drivers/iommu/dmar.c        |  61 ---------
>  drivers/iommu/intel-iommu.c | 246
> +++++++++++++++++++++++++++++------- drivers/iommu/intel-pasid.c |
> 39 +----- drivers/iommu/intel-svm.c   |  60 ++-------
>  include/linux/intel-iommu.h |  39 ++++--
>  5 files changed, 244 insertions(+), 201 deletions(-)
> 

[Jacob Pan]

* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-12-02 20:02 ` [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Jacob Pan
@ 2019-12-03  2:44   ` Lu Baolu
  2019-12-03 16:50     ` Jacob Pan
  0 siblings, 1 reply; 14+ messages in thread
From: Lu Baolu @ 2019-12-03  2:44 UTC (permalink / raw)
  To: Jacob Pan; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Hi Jacob,

On 12/3/19 4:02 AM, Jacob Pan wrote:
> On Fri, 22 Nov 2019 11:04:44 +0800
> Lu Baolu<baolu.lu@linux.intel.com>  wrote:
> 
>> Intel VT-d 3.0 introduces more caches and interfaces for software to
>> flush when it runs in the scalable mode. Currently various cache flush
>> helpers are scattered around. This consolidates them by putting them
>> in the existing iommu_flush structure.
>>
>> /* struct iommu_flush - Intel IOMMU cache invalidation ops
>>   *
>>   * @cc_inv: invalidate context cache
>>   * @iotlb_inv: Invalidate IOTLB and paging structure caches when
>> software
>>   *             has changed second-level tables.
>>   * @p_iotlb_inv: Invalidate IOTLB and paging structure caches when
>> software
>>   *               has changed first-level tables.
>>   * @pc_inv: invalidate pasid cache
>>   * @dev_tlb_inv: invalidate cached mappings used by
>> requests-without-PASID
>>   *               from the Device-TLB on a endpoint device.
>>   * @p_dev_tlb_inv: invalidate cached mappings used by
>> requests-with-PASID
>>   *                 from the Device-TLB on an endpoint device
>>   */
>> struct iommu_flush {
>>          void (*cc_inv)(struct intel_iommu *iommu, u16 did,
>>                         u16 sid, u8 fm, u64 type);
>>          void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64
>> addr, unsigned int size_order, u64 type);
>>          void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32
>> pasid, u64 addr, unsigned long npages, bool ih);
>>          void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
>>                         u64 granu);
>>          void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16
>> pfsid, u16 qdep, u64 addr, unsigned int mask);
>>          void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16
>> pfsid, u32 pasid, u16 qdep, u64 addr,
>>                                unsigned long npages);
>> };
>>
>> The name of each cache flush ops is defined according to the spec
>> section 6.5 so that people are easy to look up them in the spec.
>>
> Nice consolidation. For nested SVM, I also introduced cache flushed
> helpers as needed.
> https://lkml.org/lkml/2019/10/24/857
> 
> Should I wait for yours to be merged or you want to extend the this
> consolidation after SVA/SVM cache flush? I expect to send my v8 shortly.
> 

Please base your v8 patches on this series, so that they get more
chances for testing.

I will queue this patch series for internal testing after 5.5-rc1 and,
if everything goes well, forward it to Joerg around rc4 for linux-next.

Best regards,
baolu

* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
                   ` (5 preceding siblings ...)
  2019-12-02 20:02 ` [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Jacob Pan
@ 2019-12-03  8:49 ` David Woodhouse
  2019-12-04  0:27   ` Lu Baolu
  6 siblings, 1 reply; 14+ messages in thread
From: David Woodhouse @ 2019-12-03  8:49 UTC (permalink / raw)
  To: Lu Baolu, Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu



On Fri, 2019-11-22 at 11:04 +0800, Lu Baolu wrote:
> Intel VT-d 3.0 introduces more caches and interfaces for software to
> flush when it runs in the scalable mode. Currently various cache flush
> helpers are scattered around. This consolidates them by putting them in
> the existing iommu_flush structure.
> 
> /* struct iommu_flush - Intel IOMMU cache invalidation ops
>  *
>  * @cc_inv: invalidate context cache
>  * @iotlb_inv: Invalidate IOTLB and paging structure caches when software
>  *             has changed second-level tables.
>  * @p_iotlb_inv: Invalidate IOTLB and paging structure caches when software
>  *               has changed first-level tables.
>  * @pc_inv: invalidate pasid cache
>  * @dev_tlb_inv: invalidate cached mappings used by requests-without-PASID
>  *               from the Device-TLB on a endpoint device.
>  * @p_dev_tlb_inv: invalidate cached mappings used by requests-with-PASID
>  *                 from the Device-TLB on an endpoint device
>  */
> struct iommu_flush {
>         void (*cc_inv)(struct intel_iommu *iommu, u16 did,
>                        u16 sid, u8 fm, u64 type);
>         void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64 addr,
>                           unsigned int size_order, u64 type);
>         void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
>                             u64 addr, unsigned long npages, bool ih);
>         void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
>                        u64 granu);
>         void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
>                             u16 qdep, u64 addr, unsigned int mask);
>         void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
>                               u32 pasid, u16 qdep, u64 addr,
>                               unsigned long npages);
> };
> 
> The name of each cache flush ops is defined according to the spec section 6.5
> so that people are easy to look up them in the spec.

Hm, indirect function calls are quite expensive these days.

I would have preferred to go in the opposite direction, since surely we
aren't going to have *many* of these implementations. Currently there's
only one for register-based and one for queued invalidation, right?
Even if VT-d 3.0 throws an extra version in, I think I'd prefer to take
out the indirection completely and have an if/then helper.

Would love to see a microbenchmark of unmap operations before and after
this patch series with retpoline enabled, to see the effect.
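
Something like this (an untested sketch; the helper name is invented,
and it assumes iommu->qi being non-NULL is the "queued invalidation
enabled" test, as elsewhere in the driver):

	static inline void intel_cc_inv(struct intel_iommu *iommu, u16 did,
					u16 sid, u8 fm, u64 type)
	{
		if (iommu->qi)
			qi_flush_context(iommu, did, sid, fm, type);
		else
			__iommu_flush_context(iommu, did, sid, fm, type);
	}

No indirect branch, so no retpoline thunk on the hot path.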





* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-12-03  2:44   ` Lu Baolu
@ 2019-12-03 16:50     ` Jacob Pan
  2019-12-04  0:32       ` Lu Baolu
  0 siblings, 1 reply; 14+ messages in thread
From: Jacob Pan @ 2019-12-03 16:50 UTC (permalink / raw)
  To: Lu Baolu; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

On Tue, 3 Dec 2019 10:44:45 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:

> Hi Jacob,
> 
> On 12/3/19 4:02 AM, Jacob Pan wrote:
> > On Fri, 22 Nov 2019 11:04:44 +0800
> > Lu Baolu<baolu.lu@linux.intel.com>  wrote:
> >   
> >> Intel VT-d 3.0 introduces more caches and interfaces for software
> >> to flush when it runs in the scalable mode. Currently various
> >> cache flush helpers are scattered around. This consolidates them
> >> by putting them in the existing iommu_flush structure.
> >>
> >> /* struct iommu_flush - Intel IOMMU cache invalidation ops
> >>   *
> >>   * @cc_inv: invalidate context cache
> >>   * @iotlb_inv: Invalidate IOTLB and paging structure caches when
> >> software
> >>   *             has changed second-level tables.
> >>   * @p_iotlb_inv: Invalidate IOTLB and paging structure caches when
> >> software
> >>   *               has changed first-level tables.
> >>   * @pc_inv: invalidate pasid cache
> >>   * @dev_tlb_inv: invalidate cached mappings used by
> >> requests-without-PASID
> >>   *               from the Device-TLB on a endpoint device.
> >>   * @p_dev_tlb_inv: invalidate cached mappings used by
> >> requests-with-PASID
> >>   *                 from the Device-TLB on an endpoint device
> >>   */
> >> struct iommu_flush {
> >>          void (*cc_inv)(struct intel_iommu *iommu, u16 did,
> >>                         u16 sid, u8 fm, u64 type);
> >>          void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64
> >> addr, unsigned int size_order, u64 type);
> >>          void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did,
> >> u32 pasid, u64 addr, unsigned long npages, bool ih);
> >>          void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32
> >> pasid, u64 granu);
> >>          void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid,
> >> u16 pfsid, u16 qdep, u64 addr, unsigned int mask);
> >>          void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid,
> >> u16 pfsid, u32 pasid, u16 qdep, u64 addr,
> >>                                unsigned long npages);
> >> };
> >>
> >> The name of each cache flush ops is defined according to the spec
> >> section 6.5 so that people are easy to look up them in the spec.
> >>  
> > Nice consolidation. For nested SVM, I also introduced cache flushed
> > helpers as needed.
> > https://lkml.org/lkml/2019/10/24/857
> > 
> > Should I wait for yours to be merged or you want to extend the this
> > consolidation after SVA/SVM cache flush? I expect to send my v8
> > shortly. 
> 
> Please base your v8 patch on this series. So it could get more chances
> for test.
> 
Sounds good.

> I will queue this patch series for internal test after 5.5-rc1 and if
> everything goes well, I will forward it to Joerg around rc4 for linux-
> next.
> 
> Best regards,
> baolu

[Jacob Pan]

* Re: [PATCH 4/5] iommu/vt-d: Consolidate pasid-based tlb invalidation
  2019-11-22  3:04 ` [PATCH 4/5] iommu/vt-d: Consolidate pasid-based " Lu Baolu
@ 2019-12-03 17:43   ` Jacob Pan
  0 siblings, 0 replies; 14+ messages in thread
From: Jacob Pan @ 2019-12-03 17:43 UTC (permalink / raw)
  To: Lu Baolu; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

On Fri, 22 Nov 2019 11:04:48 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:

> Merge pasid-based tlb invalidation into iommu->flush.p_iotlb_inv.
> 
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
> ---
>  drivers/iommu/intel-iommu.c | 43
> +++++++++++++++++++++++++++++++++++++ drivers/iommu/intel-pasid.c |
> 18 ++-------------- drivers/iommu/intel-svm.c   | 23
> +++----------------- 3 files changed, 48 insertions(+), 36
> deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 4eeb18942d3c..fec78cc877c1 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3032,6 +3032,48 @@ qi_flush_dev_iotlb(struct intel_iommu *iommu,
> u16 sid, u16 pfsid, qi_submit_sync(&desc, iommu);
>  }
>  
> +/* PASID-based IOTLB invalidation */
> +static void
> +qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64
> addr,
> +		unsigned long npages, bool ih)
> +{
> +	struct qi_desc desc = {.qw2 = 0, .qw3 = 0};
> +
> +	/*
> +	 * npages == -1 means a PASID-selective invalidation,
> otherwise,
> +	 * a positive value for Page-selective-within-PASID
> invalidation.
> +	 * 0 is not a valid input.
> +	 */
> +	if (WARN_ON(!npages)) {
> +		pr_err("Invalid input npages = %ld\n", npages);
> +		return;
> +	}
> +
> +	if (npages == -1) {
> +		desc.qw0 = QI_EIOTLB_PASID(pasid) |
> +				QI_EIOTLB_DID(did) |
> +				QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
> +				QI_EIOTLB_TYPE;
> +		desc.qw1 = 0;
Is this based on the latest kernel? It seems to be missing the recent
change that checks the page selective cap, so I ran into a conflict.

+       /*
+        * Do PASID granu IOTLB invalidation if page selective capability is
+        * not available.
+        */
+       if (pages == -1 || !cap_pgsel_inv(svm->iommu->cap)) {
+               desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
It seems this one is missing from your base?

Refs: v5.3-rc6-2-g8744daf4b069                              
Author:     Jacob Pan <jacob.jun.pan@linux.intel.com>       
AuthorDate: Mon Aug 26 08:53:29 2019 -0700                  
Commit:     Joerg Roedel <jroedel@suse.de>                  
CommitDate: Tue Sep 3 15:01:27 2019 +0200                   
                                                            
    iommu/vt-d: Remove global page flush support            
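
A hedged sketch of the hunk rebased on that commit (untested; it just
moves the same check into the new helper):

	if (npages == -1 || !cap_pgsel_inv(iommu->cap)) {
		/* PASID-selective invalidation */
		desc.qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) |
				QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
				QI_EIOTLB_TYPE;
		desc.qw1 = 0;
	} else {
		/* page-selective-within-PASID path as in the patch */
	}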

> +	} else {
> +		int mask = ilog2(__roundup_pow_of_two(npages));
> +		unsigned long align = (1ULL << (VTD_PAGE_SHIFT +
> mask)); +
> +		if (WARN_ON_ONCE(!ALIGN(addr, align)))
> +			addr &= ~(align - 1);
> +
> +		desc.qw0 = QI_EIOTLB_PASID(pasid) |
> +				QI_EIOTLB_DID(did) |
> +				QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
> +				QI_EIOTLB_TYPE;
> +		desc.qw1 = QI_EIOTLB_ADDR(addr) |
> +				QI_EIOTLB_IH(ih) |
> +				QI_EIOTLB_AM(mask);
> +	}
> +
> +	qi_submit_sync(&desc, iommu);
> +}
> +
>  static void intel_iommu_init_qi(struct intel_iommu *iommu)
>  {
>  	/*
> @@ -3065,6 +3107,7 @@ static void intel_iommu_init_qi(struct
> intel_iommu *iommu) iommu->flush.iotlb_inv = qi_flush_iotlb;
>  		iommu->flush.pc_inv = qi_flush_pasid;
>  		iommu->flush.dev_tlb_inv = qi_flush_dev_iotlb;
> +		iommu->flush.p_iotlb_inv = qi_flush_piotlb;
>  		pr_info("%s: Using Queued invalidation\n",
> iommu->name); }
>  }
> diff --git a/drivers/iommu/intel-pasid.c b/drivers/iommu/intel-pasid.c
> index 01dd9c86178b..78ff4eee8595 100644
> --- a/drivers/iommu/intel-pasid.c
> +++ b/drivers/iommu/intel-pasid.c
> @@ -359,20 +359,6 @@ pasid_set_flpm(struct pasid_entry *pe, u64 value)
>  	pasid_set_bits(&pe->val[2], GENMASK_ULL(3, 2), value << 2);
>  }
>  
> -static void
> -iotlb_invalidation_with_pasid(struct intel_iommu *iommu, u16 did,
> u32 pasid) -{
> -	struct qi_desc desc;
> -
> -	desc.qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) |
> -			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
> QI_EIOTLB_TYPE;
> -	desc.qw1 = 0;
> -	desc.qw2 = 0;
> -	desc.qw3 = 0;
> -
> -	qi_submit_sync(&desc, iommu);
> -}
> -
>  static void
>  devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
>  			       struct device *dev, int pasid)
> @@ -409,7 +395,7 @@ void intel_pasid_tear_down_entry(struct
> intel_iommu *iommu, clflush_cache_range(pte, sizeof(*pte));
>  
>  	iommu->flush.pc_inv(iommu, did, pasid, QI_PC_GRAN_PSWD);
> -	iotlb_invalidation_with_pasid(iommu, did, pasid);
> +	iommu->flush.p_iotlb_inv(iommu, did, pasid, 0, -1, 0);
>  
>  	/* Device IOTLB doesn't need to be flushed in caching mode.
> */ if (!cap_caching_mode(iommu->cap))
> @@ -425,7 +411,7 @@ static void pasid_flush_caches(struct intel_iommu
> *iommu, 
>  	if (cap_caching_mode(iommu->cap)) {
>  		iommu->flush.pc_inv(iommu, did, pasid,
> QI_PC_GRAN_PSWD);
> -		iotlb_invalidation_with_pasid(iommu, did, pasid);
> +		iommu->flush.p_iotlb_inv(iommu, did, pasid, 0, -1,
> 0); } else {
>  		iommu_flush_write_buffer(iommu);
>  	}
> diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
> index f5594b9981a5..02c6b14f0568 100644
> --- a/drivers/iommu/intel-svm.c
> +++ b/drivers/iommu/intel-svm.c
> @@ -118,27 +118,10 @@ static void intel_flush_svm_range_dev (struct
> intel_svm *svm, struct intel_svm_d unsigned long address, unsigned
> long pages, int ih) {
>  	struct qi_desc desc;
> +	struct intel_iommu *iommu = svm->iommu;
>  
> -	if (pages == -1) {
> -		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
> -			QI_EIOTLB_DID(sdev->did) |
> -			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
> -			QI_EIOTLB_TYPE;
> -		desc.qw1 = 0;
> -	} else {
> -		int mask = ilog2(__roundup_pow_of_two(pages));
> -
> -		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
> -				QI_EIOTLB_DID(sdev->did) |
> -				QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
> -				QI_EIOTLB_TYPE;
> -		desc.qw1 = QI_EIOTLB_ADDR(address) |
> -				QI_EIOTLB_IH(ih) |
> -				QI_EIOTLB_AM(mask);
> -	}
> -	desc.qw2 = 0;
> -	desc.qw3 = 0;
> -	qi_submit_sync(&desc, svm->iommu);
> +	iommu->flush.p_iotlb_inv(iommu, sdev->did,
> +				 svm->pasid, address, pages, ih);
>  
>  	if (sdev->dev_iotlb) {
>  		desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) |

[Jacob Pan]

* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-12-03  8:49 ` David Woodhouse
@ 2019-12-04  0:27   ` Lu Baolu
  0 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-12-04  0:27 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu

Hi David,

On 12/3/19 4:49 PM, David Woodhouse wrote:
> On Fri, 2019-11-22 at 11:04 +0800, Lu Baolu wrote:
>> Intel VT-d 3.0 introduces more caches and interfaces for software to
>> flush when it runs in the scalable mode. Currently various cache flush
>> helpers are scattered around. This consolidates them by putting them in
>> the existing iommu_flush structure.
>>
>> /* struct iommu_flush - Intel IOMMU cache invalidation ops
>>   *
>>   * @cc_inv: invalidate context cache
>>   * @iotlb_inv: Invalidate IOTLB and paging structure caches when software
>>   *             has changed second-level tables.
>>   * @p_iotlb_inv: Invalidate IOTLB and paging structure caches when software
>>   *               has changed first-level tables.
>>   * @pc_inv: invalidate pasid cache
>>   * @dev_tlb_inv: invalidate cached mappings used by requests-without-PASID
>>   *               from the Device-TLB on an endpoint device.
>>   * @p_dev_tlb_inv: invalidate cached mappings used by requests-with-PASID
>>   *                 from the Device-TLB on an endpoint device
>>   */
>> struct iommu_flush {
>>          void (*cc_inv)(struct intel_iommu *iommu, u16 did,
>>                         u16 sid, u8 fm, u64 type);
>>          void (*iotlb_inv)(struct intel_iommu *iommu, u16 did, u64 addr,
>>                            unsigned int size_order, u64 type);
>>          void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
>>                              u64 addr, unsigned long npages, bool ih);
>>          void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32 pasid,
>>                         u64 granu);
>>          void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
>>                              u16 qdep, u64 addr, unsigned int mask);
>>          void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16 sid, u16 pfsid,
>>                                u32 pasid, u16 qdep, u64 addr,
>>                                unsigned long npages);
>> };
>>
>> The name of each cache flush op follows spec section 6.5 so that
>> people can easily look them up in the spec.
> 
> Hm, indirect function calls are quite expensive these days.

That's a good point. Thanks!

> 
> I would have preferred to go in the opposite direction, since surely we
> aren't going to have *many* of these implementations. Currently there's
> only one for register-based and one for queued invalidation, right?
> Even if VT-d 3.0 throws an extra version in, I think I'd prefer to take
> out the indirection completely and have an if/then helper.
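
For illustration, an if/then helper along those lines might look like
this (a minimal sketch; intel_flush_iotlb is an illustrative name,
while qi_flush_iotlb and __iommu_flush_iotlb are the existing queued
and register-based paths in the driver):

static inline void
intel_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
		  unsigned int size_order, u64 type)
{
	if (iommu->qi)		/* queued invalidation is available */
		qi_flush_iotlb(iommu, did, addr, size_order, type);
	else			/* fall back to the register-based flush */
		__iommu_flush_iotlb(iommu, did, addr, size_order, type);
}
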
> 
> Would love to see a microbenchmark of unmap operations before and after
> this patch series with retpoline enabled, to see the effect.

Yes. We need some micro-benchmark tests to address this concern.
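
Something like the following should do for a first pass - a minimal
sketch assuming a test module with a device already attached to an
IOMMU domain (bench_map_unmap, the fixed IOVA and the iteration count
are all arbitrary choices, not part of this series):

#include <linux/iommu.h>
#include <linux/ktime.h>
#include <linux/printk.h>
#include <linux/sizes.h>

static void bench_map_unmap(struct device *dev, phys_addr_t paddr)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	const unsigned long iova = SZ_1G;	/* arbitrary unused IOVA */
	ktime_t t0, t1;
	int i;

	t0 = ktime_get();
	for (i = 0; i < 10000; i++) {
		iommu_map(domain, iova, paddr, SZ_4K,
			  IOMMU_READ | IOMMU_WRITE);
		/* the unmap path is what exercises the flush ops */
		iommu_unmap(domain, iova, SZ_4K);
	}
	t1 = ktime_get();

	pr_info("10000 map/unmap cycles took %lld ns\n",
		ktime_to_ns(ktime_sub(t1, t0)));
}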

Best regards,
baolu

* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-12-03 16:50     ` Jacob Pan
@ 2019-12-04  0:32       ` Lu Baolu
  2019-12-04 17:41         ` Jacob Pan
  0 siblings, 1 reply; 14+ messages in thread
From: Lu Baolu @ 2019-12-04  0:32 UTC (permalink / raw)
  To: Jacob Pan; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

Hi Jacob,

On 12/4/19 12:50 AM, Jacob Pan wrote:
> On Tue, 3 Dec 2019 10:44:45 +0800
> Lu Baolu <baolu.lu@linux.intel.com> wrote:
> 
>> Hi Jacob,
>>
>> On 12/3/19 4:02 AM, Jacob Pan wrote:
>>> On Fri, 22 Nov 2019 11:04:44 +0800
>>> Lu Baolu<baolu.lu@linux.intel.com>  wrote:
>>>    
>>>> Intel VT-d 3.0 introduces more caches and interfaces for software
>>>> to flush when it runs in the scalable mode. Currently various
>>>> cache flush helpers are scattered around. This consolidates them
>>>> by putting them in the existing iommu_flush structure.
>>>> [...]
>>> Nice consolidation. For nested SVM, I also introduced cache flush
>>> helpers as needed.
>>> https://lkml.org/lkml/2019/10/24/857
>>>
>>> Should I wait for yours to be merged, or do you want to extend this
>>> consolidation after the SVA/SVM cache flush work? I expect to send my
>>> v8 shortly.
>>
>> Please base your v8 patch on this series so that it gets more
>> exposure to testing.
>>
> Sounds good.

I am sorry I need to spend more time on this patch series. Please go
ahead without it.

Best regards,
baolu

> 
>> I will queue this patch series for internal testing after 5.5-rc1
>> and, if everything goes well, forward it to Joerg around rc4 for
>> linux-next.
>>
>> Best regards,
>> baolu
> 
> [Jacob Pan]
> 

* Re: [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops
  2019-12-04  0:32       ` Lu Baolu
@ 2019-12-04 17:41         ` Jacob Pan
  0 siblings, 0 replies; 14+ messages in thread
From: Jacob Pan @ 2019-12-04 17:41 UTC (permalink / raw)
  To: Lu Baolu; +Cc: kevin.tian, ashok.raj, linux-kernel, iommu, David Woodhouse

On Wed, 4 Dec 2019 08:32:17 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:

> Hi Jacob,
> 
> On 12/4/19 12:50 AM, Jacob Pan wrote:
> > On Tue, 3 Dec 2019 10:44:45 +0800
> > Lu Baolu <baolu.lu@linux.intel.com> wrote:
> >   
> >> Hi Jacob,
> >>
> >> On 12/3/19 4:02 AM, Jacob Pan wrote:  
> >>> On Fri, 22 Nov 2019 11:04:44 +0800
> >>> Lu Baolu<baolu.lu@linux.intel.com>  wrote:
> >>>      
> >>>> Intel VT-d 3.0 introduces more caches and interfaces for software
> >>>> to flush when it runs in the scalable mode. Currently various
> >>>> cache flush helpers are scattered around. This consolidates them
> >>>> by putting them in the existing iommu_flush structure.
> >>>>
> >>>> /* struct iommu_flush - Intel IOMMU cache invalidation ops
> >>>>    *
> >>>>    * @cc_inv: invalidate context cache
> >>>>    * @iotlb_inv: Invalidate IOTLB and paging structure caches
> >>>> when software
> >>>>    *             has changed second-level tables.
> >>>>    * @p_iotlb_inv: Invalidate IOTLB and paging structure caches
> >>>> when software
> >>>>    *               has changed first-level tables.
> >>>>    * @pc_inv: invalidate pasid cache
> >>>>    * @dev_tlb_inv: invalidate cached mappings used by
> >>>> requests-without-PASID
> >>>>    *               from the Device-TLB on a endpoint device.
> >>>>    * @p_dev_tlb_inv: invalidate cached mappings used by
> >>>> requests-with-PASID
> >>>>    *                 from the Device-TLB on an endpoint device
> >>>>    */
> >>>> struct iommu_flush {
> >>>>           void (*cc_inv)(struct intel_iommu *iommu, u16 did,
> >>>>                          u16 sid, u8 fm, u64 type);
> >>>>           void (*iotlb_inv)(struct intel_iommu *iommu, u16 did,
> >>>> u64 addr, unsigned int size_order, u64 type);
> >>>>           void (*p_iotlb_inv)(struct intel_iommu *iommu, u16 did,
> >>>> u32 pasid, u64 addr, unsigned long npages, bool ih);
> >>>>           void (*pc_inv)(struct intel_iommu *iommu, u16 did, u32
> >>>> pasid, u64 granu);
> >>>>           void (*dev_tlb_inv)(struct intel_iommu *iommu, u16 sid,
> >>>> u16 pfsid, u16 qdep, u64 addr, unsigned int mask);
> >>>>           void (*p_dev_tlb_inv)(struct intel_iommu *iommu, u16
> >>>> sid, u16 pfsid, u32 pasid, u16 qdep, u64 addr,
> >>>>                                 unsigned long npages);
> >>>> };
> >>>>
> >>>> The name of each cache flush ops is defined according to the spec
> >>>> section 6.5 so that people are easy to look up them in the spec.
> >>>>     
> >>> Nice consolidation. For nested SVM, I also introduced cache
> >>> flushed helpers as needed.
> >>> https://lkml.org/lkml/2019/10/24/857
> >>>
> >>> Should I wait for yours to be merged or you want to extend the
> >>> this consolidation after SVA/SVM cache flush? I expect to send my
> >>> v8 shortly.  
> >>
> >> Please base your v8 patch on this series. So it could get more
> >> chances for test.
> >>  
> > Sounds good.  
> 
> I am sorry I need to spend more time on this patch series. Please go
> ahead without it.
> 
NP, let me know when you need testing.

Thread overview: 14+ messages
2019-11-22  3:04 [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Lu Baolu
2019-11-22  3:04 ` [PATCH 1/5] iommu/vt-d: Extend iommu_flush for scalable mode Lu Baolu
2019-11-22  3:04 ` [PATCH 2/5] iommu/vt-d: Consolidate pasid cache invalidation Lu Baolu
2019-11-22  3:04 ` [PATCH 3/5] iommu/vt-d: Consolidate device tlb invalidation Lu Baolu
2019-11-22  3:04 ` [PATCH 4/5] iommu/vt-d: Consolidate pasid-based " Lu Baolu
2019-12-03 17:43   ` Jacob Pan
2019-11-22  3:04 ` [PATCH 5/5] iommu/vt-d: Consolidate pasid-based device " Lu Baolu
2019-12-02 20:02 ` [PATCH 0/5] iommu/vt-d: Consolidate various cache flush ops Jacob Pan
2019-12-03  2:44   ` Lu Baolu
2019-12-03 16:50     ` Jacob Pan
2019-12-04  0:32       ` Lu Baolu
2019-12-04 17:41         ` Jacob Pan
2019-12-03  8:49 ` David Woodhouse
2019-12-04  0:27   ` Lu Baolu
