* [PATCH 1/7] iommu/vt-d: Enforce PASID devTLB field mask
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-25 7:14 ` Lu Baolu
2020-06-23 15:43 ` [PATCH 2/7] iommu/vt-d: Remove global page support in devTLB flush Jacob Pan
` (5 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
From: Liu Yi L <yi.l.liu@intel.com>
Set proper masks to avoid invalid input spilling over into reserved bits.
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
include/linux/intel-iommu.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 4100bd224f5c..729386ca8122 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -380,8 +380,8 @@ enum {
#define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK)
#define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
-#define QI_DEV_EIOTLB_GLOB(g) ((u64)g)
-#define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32)
+#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1)
+#define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32)
#define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
#define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
#define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
--
2.7.4
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
* Re: [PATCH 1/7] iommu/vt-d: Enforce PASID devTLB field mask
2020-06-23 15:43 ` [PATCH 1/7] iommu/vt-d: Enforce PASID devTLB field mask Jacob Pan
@ 2020-06-25 7:14 ` Lu Baolu
0 siblings, 0 replies; 23+ messages in thread
From: Lu Baolu @ 2020-06-25 7:14 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
On 2020/6/23 23:43, Jacob Pan wrote:
> From: Liu Yi L <yi.l.liu@intel.com>
>
> Set proper masks to avoid invalid input spillover to reserved bits.
>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Best regards,
baolu
> Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> include/linux/intel-iommu.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
> index 4100bd224f5c..729386ca8122 100644
> --- a/include/linux/intel-iommu.h
> +++ b/include/linux/intel-iommu.h
> @@ -380,8 +380,8 @@ enum {
>
> #define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK)
> #define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
> -#define QI_DEV_EIOTLB_GLOB(g) ((u64)g)
> -#define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32)
> +#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1)
> +#define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32)
> #define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
> #define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
> #define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \
>
* [PATCH 2/7] iommu/vt-d: Remove global page support in devTLB flush
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
2020-06-23 15:43 ` [PATCH 1/7] iommu/vt-d: Enforce PASID devTLB field mask Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-25 7:17 ` Lu Baolu
2020-06-23 15:43 ` [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation Jacob Pan
` (4 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
Global page support was removed from the VT-d spec 3.0 for device TLB
invalidation. This patch removes the bits for vSVA. A similar change
was already made for native SVA; see the link below.
Link: https://lkml.org/lkml/2019/8/26/651
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/intel/dmar.c | 4 +---
drivers/iommu/intel/iommu.c | 4 ++--
include/linux/intel-iommu.h | 3 +--
3 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index cc46dff98fa0..d9f973fa1190 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1437,8 +1437,7 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
/* PASID-based device IOTLB Invalidate */
void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
- u32 pasid, u16 qdep, u64 addr,
- unsigned int size_order, u64 granu)
+ u32 pasid, u16 qdep, u64 addr, unsigned int size_order)
{
unsigned long mask = 1UL << (VTD_PAGE_SHIFT + size_order - 1);
struct qi_desc desc = {.qw1 = 0, .qw2 = 0, .qw3 = 0};
@@ -1446,7 +1445,6 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
desc.qw0 = QI_DEV_EIOTLB_PASID(pasid) | QI_DEV_EIOTLB_SID(sid) |
QI_DEV_EIOTLB_QDEP(qdep) | QI_DEIOTLB_TYPE |
QI_DEV_IOTLB_PFSID(pfsid);
- desc.qw1 = QI_DEV_EIOTLB_GLOB(granu);
/*
* If S bit is 0, we only flush a single page. If S bit is set,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 9129663a7406..96340da57075 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5466,7 +5466,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
info->pfsid, pasid,
info->ats_qdep,
inv_info->addr_info.addr,
- size, granu);
+ size);
break;
case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
if (info->ats_enabled)
@@ -5474,7 +5474,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
info->pfsid, pasid,
info->ats_qdep,
inv_info->addr_info.addr,
- size, granu);
+ size);
else
pr_warn_ratelimited("Passdown device IOTLB flush w/o ATS!\n");
break;
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 729386ca8122..9a6614880773 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -380,7 +380,6 @@ enum {
#define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK)
#define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
-#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1)
#define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32)
#define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
#define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
@@ -704,7 +703,7 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
u32 pasid, u16 qdep, u64 addr,
- unsigned int size_order, u64 granu);
+ unsigned int size_order);
void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did, u64 granu,
int pasid);
--
2.7.4
* Re: [PATCH 2/7] iommu/vt-d: Remove global page support in devTLB flush
2020-06-23 15:43 ` [PATCH 2/7] iommu/vt-d: Remove global page support in devTLB flush Jacob Pan
@ 2020-06-25 7:17 ` Lu Baolu
0 siblings, 0 replies; 23+ messages in thread
From: Lu Baolu @ 2020-06-25 7:17 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
On 2020/6/23 23:43, Jacob Pan wrote:
> Global pages support is removed from VT-d spec 3.0 for dev TLB
> invalidation. This patch is to remove the bits for vSVA. Similar change
> already made for the native SVA. See the link below.
>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Best regards,
baolu
> Link: https://lkml.org/lkml/2019/8/26/651
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/intel/dmar.c | 4 +---
> drivers/iommu/intel/iommu.c | 4 ++--
> include/linux/intel-iommu.h | 3 +--
> 3 files changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> index cc46dff98fa0..d9f973fa1190 100644
> --- a/drivers/iommu/intel/dmar.c
> +++ b/drivers/iommu/intel/dmar.c
> @@ -1437,8 +1437,7 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
>
> /* PASID-based device IOTLB Invalidate */
> void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> - u32 pasid, u16 qdep, u64 addr,
> - unsigned int size_order, u64 granu)
> + u32 pasid, u16 qdep, u64 addr, unsigned int size_order)
> {
> unsigned long mask = 1UL << (VTD_PAGE_SHIFT + size_order - 1);
> struct qi_desc desc = {.qw1 = 0, .qw2 = 0, .qw3 = 0};
> @@ -1446,7 +1445,6 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> desc.qw0 = QI_DEV_EIOTLB_PASID(pasid) | QI_DEV_EIOTLB_SID(sid) |
> QI_DEV_EIOTLB_QDEP(qdep) | QI_DEIOTLB_TYPE |
> QI_DEV_IOTLB_PFSID(pfsid);
> - desc.qw1 = QI_DEV_EIOTLB_GLOB(granu);
>
> /*
> * If S bit is 0, we only flush a single page. If S bit is set,
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 9129663a7406..96340da57075 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5466,7 +5466,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
> info->pfsid, pasid,
> info->ats_qdep,
> inv_info->addr_info.addr,
> - size, granu);
> + size);
> break;
> case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
> if (info->ats_enabled)
> @@ -5474,7 +5474,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
> info->pfsid, pasid,
> info->ats_qdep,
> inv_info->addr_info.addr,
> - size, granu);
> + size);
> else
> pr_warn_ratelimited("Passdown device IOTLB flush w/o ATS!\n");
> break;
> diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
> index 729386ca8122..9a6614880773 100644
> --- a/include/linux/intel-iommu.h
> +++ b/include/linux/intel-iommu.h
> @@ -380,7 +380,6 @@ enum {
>
> #define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK)
> #define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
> -#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1)
> #define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32)
> #define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
> #define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
> @@ -704,7 +703,7 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
>
> void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> u32 pasid, u16 qdep, u64 addr,
> - unsigned int size_order, u64 granu);
> + unsigned int size_order);
> void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did, u64 granu,
> int pasid);
>
>
* [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
2020-06-23 15:43 ` [PATCH 1/7] iommu/vt-d: Enforce PASID devTLB field mask Jacob Pan
2020-06-23 15:43 ` [PATCH 2/7] iommu/vt-d: Remove global page support in devTLB flush Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-25 7:25 ` Lu Baolu
2020-06-23 15:43 ` [PATCH 4/7] iommu/vt-d: Handle non-page aligned address Jacob Pan
` (3 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
DevTLB flush can be used for DMA requests both with and without PASIDs.
Requests without a PASID use PASID#0 (RID2PASID); SVA usage requires a
non-zero PASID.
This patch adds a check of the PASID value so that a devTLB flush with
PASID is used for the SVA case. This is more efficient because multiple
PASIDs can be used by a single device: when tearing down a PASID entry
we shall flush only the devTLB entries specific to that PASID.
Fixes: 6f7db75e1c46 ("iommu/vt-d: Add second level page table")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/intel/pasid.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index c81f0f17c6ba..3991a24539a1 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -486,7 +486,16 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
qdep = info->ats_qdep;
pfsid = info->pfsid;
- qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
+ /*
+ * When PASID 0 is used, it indicates RID2PASID(DMA request w/o PASID),
+ * devTLB flush w/o PASID should be used. For non-zero PASID under
+ * SVA usage, device could do DMA with multiple PASIDs. It is more
+ * efficient to flush devTLB specific to the PASID.
+ */
+ if (pasid)
+ qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0, 64 - VTD_PAGE_SHIFT);
+ else
+ qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
}
void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
--
2.7.4
* Re: [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation
2020-06-23 15:43 ` [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation Jacob Pan
@ 2020-06-25 7:25 ` Lu Baolu
2020-06-30 3:01 ` Tian, Kevin
2020-06-30 4:57 ` Jacob Pan
0 siblings, 2 replies; 23+ messages in thread
From: Lu Baolu @ 2020-06-25 7:25 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
On 2020/6/23 23:43, Jacob Pan wrote:
> DevTLB flush can be used for both DMA request with and without PASIDs.
> The former uses PASID#0 (RID2PASID), latter uses non-zero PASID for SVA
> usage.
>
> This patch adds a check for PASID value such that devTLB flush with
> PASID is used for SVA case. This is more efficient in that multiple
> PASIDs can be used by a single device, when tearing down a PASID entry
> we shall flush only the devTLB specific to a PASID.
>
> Fixes: 6f7db75e1c46 ("iommu/vt-d: Add second level page table")
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/intel/pasid.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
> index c81f0f17c6ba..3991a24539a1 100644
> --- a/drivers/iommu/intel/pasid.c
> +++ b/drivers/iommu/intel/pasid.c
> @@ -486,7 +486,16 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
> qdep = info->ats_qdep;
> pfsid = info->pfsid;
>
> - qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
> + /*
> + * When PASID 0 is used, it indicates RID2PASID(DMA request w/o PASID),
> + * devTLB flush w/o PASID should be used. For non-zero PASID under
> + * SVA usage, device could do DMA with multiple PASIDs. It is more
> + * efficient to flush devTLB specific to the PASID.
> + */
> + if (pasid)
How about
if (pasid == PASID_RID2PASID)
qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
else
qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0, 64 -
VTD_PAGE_SHIFT);
?
It makes the code more readable and still works even if we reassign
another pasid for RID2PASID.
Best regards,
baolu
> + qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0, 64 - VTD_PAGE_SHIFT);
> + else
> + qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
> }
>
> void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
>
* RE: [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation
2020-06-25 7:25 ` Lu Baolu
@ 2020-06-30 3:01 ` Tian, Kevin
2020-06-30 4:58 ` Jacob Pan
2020-06-30 4:57 ` Jacob Pan
1 sibling, 1 reply; 23+ messages in thread
From: Tian, Kevin @ 2020-06-30 3:01 UTC (permalink / raw)
To: Lu Baolu, Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Raj, Ashok
> From: Lu Baolu <baolu.lu@linux.intel.com>
> Sent: Thursday, June 25, 2020 3:26 PM
>
> On 2020/6/23 23:43, Jacob Pan wrote:
> > DevTLB flush can be used for both DMA request with and without PASIDs.
> > The former uses PASID#0 (RID2PASID), latter uses non-zero PASID for SVA
> > usage.
> >
> > This patch adds a check for PASID value such that devTLB flush with
> > PASID is used for SVA case. This is more efficient in that multiple
> > PASIDs can be used by a single device, when tearing down a PASID entry
> > we shall flush only the devTLB specific to a PASID.
> >
> > Fixes: 6f7db75e1c46 ("iommu/vt-d: Add second level page table")
btw is it really a fix? From the description it's more like an optimization...
> > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > ---
> > drivers/iommu/intel/pasid.c | 11 ++++++++++-
> > 1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
> > index c81f0f17c6ba..3991a24539a1 100644
> > --- a/drivers/iommu/intel/pasid.c
> > +++ b/drivers/iommu/intel/pasid.c
> > @@ -486,7 +486,16 @@ devtlb_invalidation_with_pasid(struct
> intel_iommu *iommu,
> > qdep = info->ats_qdep;
> > pfsid = info->pfsid;
> >
> > - qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 - VTD_PAGE_SHIFT);
> > + /*
> > + * When PASID 0 is used, it indicates RID2PASID(DMA request w/o
> PASID),
> > + * devTLB flush w/o PASID should be used. For non-zero PASID under
> > + * SVA usage, device could do DMA with multiple PASIDs. It is more
> > + * efficient to flush devTLB specific to the PASID.
> > + */
> > + if (pasid)
>
> How about
>
> if (pasid == PASID_RID2PASID)
> qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 -
> VTD_PAGE_SHIFT);
> else
> qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0,
> 64 -
> VTD_PAGE_SHIFT);
>
> ?
>
> It makes the code more readable and still works even we reassign another
> pasid for RID2PASID.
>
> Best regards,
> baolu
>
> > + qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid, qdep, 0,
> 64 - VTD_PAGE_SHIFT);
> > + else
> > + qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 -
> VTD_PAGE_SHIFT);
> > }
> >
> > void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct
> device *dev,
> >
* Re: [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation
2020-06-30 3:01 ` Tian, Kevin
@ 2020-06-30 4:58 ` Jacob Pan
0 siblings, 0 replies; 23+ messages in thread
From: Jacob Pan @ 2020-06-30 4:58 UTC (permalink / raw)
To: Tian, Kevin; +Cc: Raj, Ashok, LKML, iommu, David Woodhouse
On Tue, 30 Jun 2020 03:01:29 +0000
"Tian, Kevin" <kevin.tian@intel.com> wrote:
> > From: Lu Baolu <baolu.lu@linux.intel.com>
> > Sent: Thursday, June 25, 2020 3:26 PM
> >
> > On 2020/6/23 23:43, Jacob Pan wrote:
> > > DevTLB flush can be used for both DMA request with and without
> > > PASIDs. The former uses PASID#0 (RID2PASID), latter uses non-zero
> > > PASID for SVA usage.
> > >
> > > This patch adds a check for PASID value such that devTLB flush
> > > with PASID is used for SVA case. This is more efficient in that
> > > multiple PASIDs can be used by a single device, when tearing down
> > > a PASID entry we shall flush only the devTLB specific to a PASID.
> > >
> > > Fixes: 6f7db75e1c46 ("iommu/vt-d: Add second level page table")
>
> btw is it really a fix? From the description it's more like an
> optimization...
>
I guess it depends on how the issue is perceived. There is no
functional problem but the flush is too coarse w/o this patch.
> > > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > > ---
> > > drivers/iommu/intel/pasid.c | 11 ++++++++++-
> > > 1 file changed, 10 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/iommu/intel/pasid.c
> > > b/drivers/iommu/intel/pasid.c index c81f0f17c6ba..3991a24539a1
> > > 100644 --- a/drivers/iommu/intel/pasid.c
> > > +++ b/drivers/iommu/intel/pasid.c
> > > @@ -486,7 +486,16 @@ devtlb_invalidation_with_pasid(struct
> > intel_iommu *iommu,
> > > qdep = info->ats_qdep;
> > > pfsid = info->pfsid;
> > >
> > > - qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 -
> > > VTD_PAGE_SHIFT);
> > > + /*
> > > + * When PASID 0 is used, it indicates RID2PASID(DMA
> > > request w/o
> > PASID),
> > > + * devTLB flush w/o PASID should be used. For non-zero
> > > PASID under
> > > + * SVA usage, device could do DMA with multiple PASIDs.
> > > It is more
> > > + * efficient to flush devTLB specific to the PASID.
> > > + */
> > > + if (pasid)
> >
> > How about
> >
> > if (pasid == PASID_RID2PASID)
> > qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 -
> > VTD_PAGE_SHIFT);
> > else
> > qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid,
> > qdep, 0, 64 -
> > VTD_PAGE_SHIFT);
> >
> > ?
> >
> > It makes the code more readable and still works even we reassign
> > another pasid for RID2PASID.
> >
> > Best regards,
> > baolu
> >
> > > + qi_flush_dev_iotlb_pasid(iommu, sid, pfsid,
> > > pasid, qdep, 0,
> > 64 - VTD_PAGE_SHIFT);
> > > + else
> > > + qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0,
> > > 64 -
> > VTD_PAGE_SHIFT);
> > > }
> > >
> > > void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
> > > struct
> > device *dev,
> > >
[Jacob Pan]
* Re: [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation
2020-06-25 7:25 ` Lu Baolu
2020-06-30 3:01 ` Tian, Kevin
@ 2020-06-30 4:57 ` Jacob Pan
1 sibling, 0 replies; 23+ messages in thread
From: Jacob Pan @ 2020-06-30 4:57 UTC (permalink / raw)
To: Lu Baolu; +Cc: Tian, Kevin, Raj Ashok, LKML, iommu, David Woodhouse
On Thu, 25 Jun 2020 15:25:57 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:
> On 2020/6/23 23:43, Jacob Pan wrote:
> > DevTLB flush can be used for both DMA request with and without
> > PASIDs. The former uses PASID#0 (RID2PASID), latter uses non-zero
> > PASID for SVA usage.
> >
> > This patch adds a check for PASID value such that devTLB flush with
> > PASID is used for SVA case. This is more efficient in that multiple
> > PASIDs can be used by a single device, when tearing down a PASID
> > entry we shall flush only the devTLB specific to a PASID.
> >
> > Fixes: 6f7db75e1c46 ("iommu/vt-d: Add second level page table")
> > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > ---
> > drivers/iommu/intel/pasid.c | 11 ++++++++++-
> > 1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/iommu/intel/pasid.c
> > b/drivers/iommu/intel/pasid.c index c81f0f17c6ba..3991a24539a1
> > 100644 --- a/drivers/iommu/intel/pasid.c
> > +++ b/drivers/iommu/intel/pasid.c
> > @@ -486,7 +486,16 @@ devtlb_invalidation_with_pasid(struct
> > intel_iommu *iommu, qdep = info->ats_qdep;
> > pfsid = info->pfsid;
> >
> > - qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 -
> > VTD_PAGE_SHIFT);
> > + /*
> > + * When PASID 0 is used, it indicates RID2PASID(DMA
> > request w/o PASID),
> > + * devTLB flush w/o PASID should be used. For non-zero
> > PASID under
> > + * SVA usage, device could do DMA with multiple PASIDs. It
> > is more
> > + * efficient to flush devTLB specific to the PASID.
> > + */
> > + if (pasid)
>
> How about
>
> if (pasid == PASID_RID2PASID)
> qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64 -
> VTD_PAGE_SHIFT); else
> qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid,
> qdep, 0, 64 - VTD_PAGE_SHIFT);
>
> ?
>
> It makes the code more readable and still works even we reassign
> another pasid for RID2PASID.
>
agreed, thanks.
> Best regards,
> baolu
>
> > + qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid,
> > qdep, 0, 64 - VTD_PAGE_SHIFT);
> > + else
> > + qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, 0, 64
> > - VTD_PAGE_SHIFT); }
> >
> > void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
> > struct device *dev,
[Jacob Pan]
* [PATCH 4/7] iommu/vt-d: Handle non-page aligned address
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
` (2 preceding siblings ...)
2020-06-23 15:43 ` [PATCH 3/7] iommu/vt-d: Fix PASID devTLB invalidation Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-25 10:05 ` Lu Baolu
2020-06-23 15:43 ` [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA Jacob Pan
` (2 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
From: Liu Yi L <yi.l.liu@intel.com>
Address information for device TLB invalidation comes from userspace
when a device is directly assigned to a guest with vIOMMU support.
VT-d requires a page-aligned address. This patch checks and enforces
that the address is page aligned; otherwise reserved bits can be set in
the invalidation descriptor, and an unrecoverable fault will be
reported due to the non-zero value in the reserved bits.
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/intel/dmar.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index d9f973fa1190..53f4e5003620 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1455,9 +1455,24 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
* Max Invs Pending (MIP) is set to 0 for now until we have DIT in
* ECAP.
*/
- desc.qw1 |= addr & ~mask;
- if (size_order)
+ if (addr & ~VTD_PAGE_MASK)
+ pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
+
+ if (size_order) {
+ /* Take page address */
+ desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
+ /*
+ * Existing 0s in address below size_order may be the least
+ * significant bit, we must set them to 1s to avoid having
+ * smaller size than desired.
+ */
+ desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
+ VTD_PAGE_SHIFT);
+ /* Clear size_order bit to indicate size */
+ desc.qw1 &= ~mask;
+ /* Set the S bit to indicate flushing more than 1 page */
desc.qw1 |= QI_DEV_EIOTLB_SIZE;
+ }
qi_submit_sync(iommu, &desc, 1, 0);
}
--
2.7.4
* Re: [PATCH 4/7] iommu/vt-d: Handle non-page aligned address
2020-06-23 15:43 ` [PATCH 4/7] iommu/vt-d: Handle non-page aligned address Jacob Pan
@ 2020-06-25 10:05 ` Lu Baolu
2020-06-30 17:19 ` Jacob Pan
0 siblings, 1 reply; 23+ messages in thread
From: Lu Baolu @ 2020-06-25 10:05 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
Hi,
On 2020/6/23 23:43, Jacob Pan wrote:
> From: Liu Yi L <yi.l.liu@intel.com>
>
> Address information for device TLB invalidation comes from userspace
> when device is directly assigned to a guest with vIOMMU support.
> VT-d requires page aligned address. This patch checks and enforce
> address to be page aligned, otherwise reserved bits can be set in the
> invalidation descriptor. Unrecoverable fault will be reported due to
> non-zero value in the reserved bits.
>
> Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/intel/dmar.c | 19 +++++++++++++++++--
> 1 file changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> index d9f973fa1190..53f4e5003620 100644
> --- a/drivers/iommu/intel/dmar.c
> +++ b/drivers/iommu/intel/dmar.c
> @@ -1455,9 +1455,24 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
> * ECAP.
> */
> - desc.qw1 |= addr & ~mask;
> - if (size_order)
> + if (addr & ~VTD_PAGE_MASK)
> + pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> +
> + if (size_order) {
> + /* Take page address */
> + desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
If size_order == 0 (that means only a single page is about to be
invalidated), do you still need to set ADDR field of the descriptor?
Best regards,
baolu
> + /*
> + * Existing 0s in address below size_order may be the least
> + * significant bit, we must set them to 1s to avoid having
> + * smaller size than desired.
> + */
> + desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
> + VTD_PAGE_SHIFT);
> + /* Clear size_order bit to indicate size */
> + desc.qw1 &= ~mask;
> + /* Set the S bit to indicate flushing more than 1 page */
> desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> + }
>
> qi_submit_sync(iommu, &desc, 1, 0);
> }
>
* Re: [PATCH 4/7] iommu/vt-d: Handle non-page aligned address
2020-06-25 10:05 ` Lu Baolu
@ 2020-06-30 17:19 ` Jacob Pan
0 siblings, 0 replies; 23+ messages in thread
From: Jacob Pan @ 2020-06-30 17:19 UTC (permalink / raw)
To: Lu Baolu; +Cc: Tian, Kevin, Raj Ashok, LKML, iommu, David Woodhouse
On Thu, 25 Jun 2020 18:05:52 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:
> Hi,
>
> On 2020/6/23 23:43, Jacob Pan wrote:
> > From: Liu Yi L <yi.l.liu@intel.com>
> >
> > Address information for device TLB invalidation comes from userspace
> > when device is directly assigned to a guest with vIOMMU support.
> > VT-d requires page aligned address. This patch checks and enforce
> > address to be page aligned, otherwise reserved bits can be set in
> > the invalidation descriptor. Unrecoverable fault will be reported
> > due to non-zero value in the reserved bits.
> >
> > Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > ---
> > drivers/iommu/intel/dmar.c | 19 +++++++++++++++++--
> > 1 file changed, 17 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> > index d9f973fa1190..53f4e5003620 100644
> > --- a/drivers/iommu/intel/dmar.c
> > +++ b/drivers/iommu/intel/dmar.c
> > @@ -1455,9 +1455,24 @@ void qi_flush_dev_iotlb_pasid(struct
> > intel_iommu *iommu, u16 sid, u16 pfsid,
> > * Max Invs Pending (MIP) is set to 0 for now until we
> > have DIT in
> > * ECAP.
> > */
> > - desc.qw1 |= addr & ~mask;
> > - if (size_order)
> > + if (addr & ~VTD_PAGE_MASK)
> > + pr_warn_ratelimited("Invalidate non-page aligned
> > address %llx\n", addr); +
> > + if (size_order) {
> > + /* Take page address */
> > + desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
>
> If size_order == 0 (that means only a single page is about to be
> invalidated), do you still need to set ADDR field of the descriptor?
>
Good catch! We should always set addr. I will move the addr assignment
out of the if condition.
.
> Best regards,
> baolu
>
> > + /*
> > + * Existing 0s in address below size_order may be
> > the least
> > + * significant bit, we must set them to 1s to
> > avoid having
> > + * smaller size than desired.
> > + */
> > + desc.qw1 |= GENMASK_ULL(size_order +
> > VTD_PAGE_SHIFT,
> > + VTD_PAGE_SHIFT);
> > + /* Clear size_order bit to indicate size */
> > + desc.qw1 &= ~mask;
> > + /* Set the S bit to indicate flushing more than 1
> > page */ desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> > + }
> >
> > qi_submit_sync(iommu, &desc, 1, 0);
> > }
> >
[Jacob Pan]
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
` (3 preceding siblings ...)
2020-06-23 15:43 ` [PATCH 4/7] iommu/vt-d: Handle non-page aligned address Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-23 20:12 ` kernel test robot
2020-06-24 0:38 ` Jacob Pan
2020-06-23 15:43 ` [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address Jacob Pan
2020-06-23 15:43 ` [PATCH 7/7] iommu/vt-d: Disable multiple GPASID-dev bind Jacob Pan
6 siblings, 2 replies; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
From: Liu Yi L <yi.l.liu@intel.com>
For guest SVA usage, in order to optimize for less VMEXIT, guest request
of IOTLB flush also includes device TLB.
On the host side, IOMMU driver performs IOTLB and implicit devTLB
invalidation. When PASID-selective granularity is requested by the guest
we need to derive the equivalent address range for devTLB instead of
using the address information in the UAPI data. The reason for that is, unlike
IOTLB flush, devTLB flush does not support PASID-selective granularity.
This is to say, we need to set the following in the PASID based devTLB
invalidation descriptor:
- entire 64 bit range in address ~(0x1 << 63)
- S bit = 1 (VT-d CH 6.5.2.6).
Without this fix, device TLB flush range is not set properly for PASID
selective granularity. This patch also merged devTLB flush code for both
implicit and explicit cases.
Fixes: 6ee1b77ba3ac ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/intel/iommu.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 96340da57075..5ea5732d5ec4 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5408,7 +5408,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
sid = PCI_DEVID(bus, devfn);
/* Size is only valid in address selective invalidation */
- if (inv_info->granularity != IOMMU_INV_GRANU_PASID)
+ if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
size = to_vtd_size(inv_info->addr_info.granule_size,
inv_info->addr_info.nb_granules);
@@ -5417,6 +5417,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
IOMMU_CACHE_INV_TYPE_NR) {
int granu = 0;
u64 pasid = 0;
+ u64 addr = 0;
granu = to_vtd_granularity(cache_type, inv_info->granularity);
if (granu == -EINVAL) {
@@ -5456,19 +5457,27 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
(granu == QI_GRAN_NONG_PASID) ? -1 : 1 << size,
inv_info->addr_info.flags & IOMMU_INV_ADDR_FLAGS_LEAF);
+ if (!info->ats_enabled)
+ break;
/*
* Always flush device IOTLB if ATS is enabled. vIOMMU
* in the guest may assume IOTLB flush is inclusive,
* which is more efficient.
*/
- if (info->ats_enabled)
- qi_flush_dev_iotlb_pasid(iommu, sid,
- info->pfsid, pasid,
- info->ats_qdep,
- inv_info->addr_info.addr,
- size);
- break;
+ fallthrough;
case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
+ /*
+ * There is no PASID selective flush for device TLB, so
+ * the equivalent of that is we set the size to be the
+ * entire range of 64 bit. User only provides PASID info
+ * without address info. So we set addr to 0.
+ */
+ if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
+ size = 64 - VTD_PAGE_SHIFT;
+ addr = 0;
+ } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
+ addr = inv_info->addr_info.addr;
+
if (info->ats_enabled)
qi_flush_dev_iotlb_pasid(iommu, sid,
info->pfsid, pasid,
--
2.7.4
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA
2020-06-23 15:43 ` [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA Jacob Pan
@ 2020-06-23 20:12 ` kernel test robot
2020-06-24 0:38 ` Jacob Pan
1 sibling, 0 replies; 23+ messages in thread
From: kernel test robot @ 2020-06-23 20:12 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, kbuild-all, Raj Ashok
Hi Jacob,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on iommu/next]
[also build test WARNING on linux/master linus/master v5.8-rc2 next-20200623]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Jacob-Pan/iommu-vt-d-Misc-tweaks-and-fixes-for-vSVA/20200623-233905
base: https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: i386-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-13) 9.3.0
reproduce (this is a W=1 build):
# save the attached .config to linux build tree
make W=1 ARCH=i386
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
drivers/iommu/intel/iommu.c: In function 'intel_iommu_sva_invalidate':
>> drivers/iommu/intel/iommu.c:5420:7: warning: variable 'addr' set but not used [-Wunused-but-set-variable]
5420 | u64 addr = 0;
| ^~~~
vim +/addr +5420 drivers/iommu/intel/iommu.c
5370
5371 #ifdef CONFIG_INTEL_IOMMU_SVM
5372 static int
5373 intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
5374 struct iommu_cache_invalidate_info *inv_info)
5375 {
5376 struct dmar_domain *dmar_domain = to_dmar_domain(domain);
5377 struct device_domain_info *info;
5378 struct intel_iommu *iommu;
5379 unsigned long flags;
5380 int cache_type;
5381 u8 bus, devfn;
5382 u16 did, sid;
5383 int ret = 0;
5384 u64 size = 0;
5385
5386 if (!inv_info || !dmar_domain ||
5387 inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
5388 return -EINVAL;
5389
5390 if (!dev || !dev_is_pci(dev))
5391 return -ENODEV;
5392
5393 iommu = device_to_iommu(dev, &bus, &devfn);
5394 if (!iommu)
5395 return -ENODEV;
5396
5397 if (!(dmar_domain->flags & DOMAIN_FLAG_NESTING_MODE))
5398 return -EINVAL;
5399
5400 spin_lock_irqsave(&device_domain_lock, flags);
5401 spin_lock(&iommu->lock);
5402 info = get_domain_info(dev);
5403 if (!info) {
5404 ret = -EINVAL;
5405 goto out_unlock;
5406 }
5407 did = dmar_domain->iommu_did[iommu->seq_id];
5408 sid = PCI_DEVID(bus, devfn);
5409
5410 /* Size is only valid in address selective invalidation */
5411 if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
5412 size = to_vtd_size(inv_info->addr_info.granule_size,
5413 inv_info->addr_info.nb_granules);
5414
5415 for_each_set_bit(cache_type,
5416 (unsigned long *)&inv_info->cache,
5417 IOMMU_CACHE_INV_TYPE_NR) {
5418 int granu = 0;
5419 u64 pasid = 0;
> 5420 u64 addr = 0;
5421
5422 granu = to_vtd_granularity(cache_type, inv_info->granularity);
5423 if (granu == -EINVAL) {
5424 pr_err_ratelimited("Invalid cache type and granu combination %d/%d\n",
5425 cache_type, inv_info->granularity);
5426 break;
5427 }
5428
5429 /*
5430 * PASID is stored in different locations based on the
5431 * granularity.
5432 */
5433 if (inv_info->granularity == IOMMU_INV_GRANU_PASID &&
5434 (inv_info->pasid_info.flags & IOMMU_INV_PASID_FLAGS_PASID))
5435 pasid = inv_info->pasid_info.pasid;
5436 else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
5437 (inv_info->addr_info.flags & IOMMU_INV_ADDR_FLAGS_PASID))
5438 pasid = inv_info->addr_info.pasid;
5439
5440 switch (BIT(cache_type)) {
5441 case IOMMU_CACHE_INV_TYPE_IOTLB:
5442 if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
5443 size &&
5444 (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
5445 pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
5446 inv_info->addr_info.addr, size);
5447 ret = -ERANGE;
5448 goto out_unlock;
5449 }
5450
5451 /*
5452 * If granu is PASID-selective, address is ignored.
5453 * We use npages = -1 to indicate that.
5454 */
5455 qi_flush_piotlb(iommu, did, pasid,
5456 mm_to_dma_pfn(inv_info->addr_info.addr),
5457 (granu == QI_GRAN_NONG_PASID) ? -1 : 1 << size,
5458 inv_info->addr_info.flags & IOMMU_INV_ADDR_FLAGS_LEAF);
5459
5460 if (!info->ats_enabled)
5461 break;
5462 /*
5463 * Always flush device IOTLB if ATS is enabled. vIOMMU
5464 * in the guest may assume IOTLB flush is inclusive,
5465 * which is more efficient.
5466 */
5467 fallthrough;
5468 case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
5469 /*
5470 * There is no PASID selective flush for device TLB, so
5471 * the equivalent of that is we set the size to be the
5472 * entire range of 64 bit. User only provides PASID info
5473 * without address info. So we set addr to 0.
5474 */
5475 if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
5476 size = 64 - VTD_PAGE_SHIFT;
5477 addr = 0;
5478 } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
5479 addr = inv_info->addr_info.addr;
5480
5481 if (info->ats_enabled)
5482 qi_flush_dev_iotlb_pasid(iommu, sid,
5483 info->pfsid, pasid,
5484 info->ats_qdep,
5485 inv_info->addr_info.addr,
5486 size);
5487 else
5488 pr_warn_ratelimited("Passdown device IOTLB flush w/o ATS!\n");
5489 break;
5490 default:
5491 dev_err_ratelimited(dev, "Unsupported IOMMU invalidation type %d\n",
5492 cache_type);
5493 ret = -EINVAL;
5494 }
5495 }
5496 out_unlock:
5497 spin_unlock(&iommu->lock);
5498 spin_unlock_irqrestore(&device_domain_lock, flags);
5499
5500 return ret;
5501 }
5502 #endif
5503
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA
2020-06-23 15:43 ` [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA Jacob Pan
2020-06-23 20:12 ` kernel test robot
@ 2020-06-24 0:38 ` Jacob Pan
1 sibling, 0 replies; 23+ messages in thread
From: Jacob Pan @ 2020-06-24 0:38 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
On Tue, 23 Jun 2020 08:43:14 -0700
Jacob Pan <jacob.jun.pan@linux.intel.com> wrote:
> From: Liu Yi L <yi.l.liu@intel.com>
>
> For guest SVA usage, in order to optimize for less VMEXIT, guest
> request of IOTLB flush also includes device TLB.
>
> On the host side, IOMMU driver performs IOTLB and implicit devTLB
> invalidation. When PASID-selective granularity is requested by the
> guest we need to derive the equivalent address range for devTLB
> instead of using the address information in the UAPI data. The reason
> for that is, unlike IOTLB flush, devTLB flush does not support
> PASID-selective granularity. This is to say, we need to set the
> following in the PASID based devTLB invalidation descriptor:
> - entire 64 bit range in address ~(0x1 << 63)
> - S bit = 1 (VT-d CH 6.5.2.6).
>
> Without this fix, device TLB flush range is not set properly for PASID
> selective granularity. This patch also merged devTLB flush code for
> both implicit and explicit cases.
>
> Fixes: 6ee1b77ba3ac ("iommu/vt-d: Add svm/sva invalidate function")
> Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/intel/iommu.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 96340da57075..5ea5732d5ec4 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5408,7 +5408,7 @@ intel_iommu_sva_invalidate(struct iommu_domain
> *domain, struct device *dev, sid = PCI_DEVID(bus, devfn);
>
> /* Size is only valid in address selective invalidation */
> - if (inv_info->granularity != IOMMU_INV_GRANU_PASID)
> + if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
> size = to_vtd_size(inv_info->addr_info.granule_size,
> inv_info->addr_info.nb_granules);
>
> @@ -5417,6 +5417,7 @@ intel_iommu_sva_invalidate(struct iommu_domain
> *domain, struct device *dev, IOMMU_CACHE_INV_TYPE_NR) {
> int granu = 0;
> u64 pasid = 0;
> + u64 addr = 0;
>
> granu = to_vtd_granularity(cache_type,
> inv_info->granularity); if (granu == -EINVAL) {
> @@ -5456,19 +5457,27 @@ intel_iommu_sva_invalidate(struct
> iommu_domain *domain, struct device *dev, (granu ==
> QI_GRAN_NONG_PASID) ? -1 : 1 << size, inv_info->addr_info.flags &
> IOMMU_INV_ADDR_FLAGS_LEAF);
> + if (!info->ats_enabled)
> + break;
> /*
> * Always flush device IOTLB if ATS is
> enabled. vIOMMU
> * in the guest may assume IOTLB flush is
> inclusive,
> * which is more efficient.
> */
> - if (info->ats_enabled)
> - qi_flush_dev_iotlb_pasid(iommu, sid,
> - info->pfsid, pasid,
> - info->ats_qdep,
> -
> inv_info->addr_info.addr,
> - size);
> - break;
> + fallthrough;
> case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
> + /*
> + * There is no PASID selective flush for
> device TLB, so
> + * the equivalent of that is we set the size
> to be the
> + * entire range of 64 bit. User only
> provides PASID info
> + * without address info. So we set addr to 0.
> + */
> + if (inv_info->granularity ==
> IOMMU_INV_GRANU_PASID) {
> + size = 64 - VTD_PAGE_SHIFT;
> + addr = 0;
> + } else if (inv_info->granularity ==
> IOMMU_INV_GRANU_ADDR)
> + addr = inv_info->addr_info.addr;
> +
> if (info->ats_enabled)
> qi_flush_dev_iotlb_pasid(iommu, sid,
> info->pfsid, pasid,
addr should be used here; will fix in the next version. Baolu pointed
this out before, but I missed it here.
Jacob
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
` (4 preceding siblings ...)
2020-06-23 15:43 ` [PATCH 5/7] iommu/vt-d: Fix devTLB flush for vSVA Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-25 10:10 ` Lu Baolu
2020-06-23 15:43 ` [PATCH 7/7] iommu/vt-d: Disable multiple GPASID-dev bind Jacob Pan
6 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
For guest requested IOTLB invalidation, the address and mask are provided
as part of the invalidation data. VT-d HW silently ignores any address
bits below the mask. SW shall also allow such a case but warn if the
address does not align with the mask. This patch relaxes the fault
handling from error to warning and proceeds with the invalidation request
using the given mask.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/intel/iommu.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 5ea5732d5ec4..50fc62413a35 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5439,13 +5439,12 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
switch (BIT(cache_type)) {
case IOMMU_CACHE_INV_TYPE_IOTLB:
+ /* HW will ignore LSB bits based on address mask */
if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
size &&
(inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
- pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
- inv_info->addr_info.addr, size);
- ret = -ERANGE;
- goto out_unlock;
+ WARN_ONCE(1, "Address out of range, 0x%llx, size order %llu\n",
+ inv_info->addr_info.addr, size);
}
/*
--
2.7.4
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address
2020-06-23 15:43 ` [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address Jacob Pan
@ 2020-06-25 10:10 ` Lu Baolu
2020-06-30 17:34 ` Jacob Pan
0 siblings, 1 reply; 23+ messages in thread
From: Lu Baolu @ 2020-06-25 10:10 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
Hi,
On 2020/6/23 23:43, Jacob Pan wrote:
> For guest requested IOTLB invalidation, address and mask are provided as
> part of the invalidation data. VT-d HW silently ignores any address bits
> below the mask. SW shall also allow such case but give warning if
> address does not align with the mask. This patch relax the fault
> handling from error to warning and proceed with invalidation request
> with the given mask.
>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
> drivers/iommu/intel/iommu.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 5ea5732d5ec4..50fc62413a35 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5439,13 +5439,12 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
>
> switch (BIT(cache_type)) {
> case IOMMU_CACHE_INV_TYPE_IOTLB:
> + /* HW will ignore LSB bits based on address mask */
> if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
> size &&
> (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
> - pr_err_ratelimited("Address out of range, 0x%llx, size order %llu\n",
> - inv_info->addr_info.addr, size);
> - ret = -ERANGE;
> - goto out_unlock;
> + WARN_ONCE(1, "Address out of range, 0x%llx, size order %llu\n",
> + inv_info->addr_info.addr, size);
I don't think WARN_ONCE() is suitable here. It makes users think it's a
kernel bug. How about pr_warn_ratelimited()?
Best regards,
baolu
> }
>
> /*
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address
2020-06-25 10:10 ` Lu Baolu
@ 2020-06-30 17:34 ` Jacob Pan
2020-07-01 1:45 ` Lu Baolu
0 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-30 17:34 UTC (permalink / raw)
To: Lu Baolu; +Cc: Tian, Kevin, Raj Ashok, LKML, iommu, David Woodhouse
On Thu, 25 Jun 2020 18:10:43 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:
> Hi,
>
> On 2020/6/23 23:43, Jacob Pan wrote:
> > For guest requested IOTLB invalidation, address and mask are
> > provided as part of the invalidation data. VT-d HW silently ignores
> > any address bits below the mask. SW shall also allow such case but
> > give warning if address does not align with the mask. This patch
> > relax the fault handling from error to warning and proceed with
> > invalidation request with the given mask.
> >
> > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > ---
> > drivers/iommu/intel/iommu.c | 7 +++----
> > 1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/iommu/intel/iommu.c
> > b/drivers/iommu/intel/iommu.c index 5ea5732d5ec4..50fc62413a35
> > 100644 --- a/drivers/iommu/intel/iommu.c
> > +++ b/drivers/iommu/intel/iommu.c
> > @@ -5439,13 +5439,12 @@ intel_iommu_sva_invalidate(struct
> > iommu_domain *domain, struct device *dev,
> > switch (BIT(cache_type)) {
> > case IOMMU_CACHE_INV_TYPE_IOTLB:
> > + /* HW will ignore LSB bits based on
> > address mask */ if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
> > size &&
> > (inv_info->addr_info.addr &
> > ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
> > - pr_err_ratelimited("Address out of
> > range, 0x%llx, size order %llu\n",
> > -
> > inv_info->addr_info.addr, size);
> > - ret = -ERANGE;
> > - goto out_unlock;
> > + WARN_ONCE(1, "Address out of
> > range, 0x%llx, size order %llu\n",
> > +
> > inv_info->addr_info.addr, size);
>
> I don't think WARN_ONCE() is suitable here. It makes users think it's
> a kernel bug. How about pr_warn_ratelimited()?
>
I think pr_warn_ratelimited() might still be too chatty. There are no
functional issues; we just don't want to silently ignore it. Perhaps just
say:
WARN_ONCE(1, "User provided address not page aligned, alignment forced")
?
> Best regards,
> baolu
>
> > }
> >
> > /*
> >
[Jacob Pan]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address
2020-06-30 17:34 ` Jacob Pan
@ 2020-07-01 1:45 ` Lu Baolu
2020-07-01 14:19 ` Jacob Pan
0 siblings, 1 reply; 23+ messages in thread
From: Lu Baolu @ 2020-07-01 1:45 UTC (permalink / raw)
To: Jacob Pan; +Cc: Tian, Kevin, Raj Ashok, LKML, iommu, David Woodhouse
Hi Jacob,
On 7/1/20 1:34 AM, Jacob Pan wrote:
> On Thu, 25 Jun 2020 18:10:43 +0800
> Lu Baolu<baolu.lu@linux.intel.com> wrote:
>
>> Hi,
>>
>> On 2020/6/23 23:43, Jacob Pan wrote:
>>> For guest requested IOTLB invalidation, address and mask are
>>> provided as part of the invalidation data. VT-d HW silently ignores
>>> any address bits below the mask. SW shall also allow such case but
>>> give warning if address does not align with the mask. This patch
>>> relax the fault handling from error to warning and proceed with
>>> invalidation request with the given mask.
>>>
>>> Signed-off-by: Jacob Pan<jacob.jun.pan@linux.intel.com>
>>> ---
>>> drivers/iommu/intel/iommu.c | 7 +++----
>>> 1 file changed, 3 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/iommu/intel/iommu.c
>>> b/drivers/iommu/intel/iommu.c index 5ea5732d5ec4..50fc62413a35
>>> 100644 --- a/drivers/iommu/intel/iommu.c
>>> +++ b/drivers/iommu/intel/iommu.c
>>> @@ -5439,13 +5439,12 @@ intel_iommu_sva_invalidate(struct
>>> iommu_domain *domain, struct device *dev,
>>> switch (BIT(cache_type)) {
>>> case IOMMU_CACHE_INV_TYPE_IOTLB:
>>> + /* HW will ignore LSB bits based on
>>> address mask */ if (inv_info->granularity == IOMMU_INV_GRANU_ADDR &&
>>> size &&
>>> (inv_info->addr_info.addr &
>>> ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
>>> - pr_err_ratelimited("Address out of
>>> range, 0x%llx, size order %llu\n",
>>> -
>>> inv_info->addr_info.addr, size);
>>> - ret = -ERANGE;
>>> - goto out_unlock;
>>> + WARN_ONCE(1, "Address out of
>>> range, 0x%llx, size order %llu\n",
>>> +
>>> inv_info->addr_info.addr, size);
>> I don't think WARN_ONCE() is suitable here. It makes users think it's
>> a kernel bug. How about pr_warn_ratelimited()?
>>
> I think pr_warn_ratelimited might still be too chatty. There is no
> functional issues, we just don't to silently ignore it. Perhaps just
> say:
> WARN_ONCE(1, "User provided address not page aligned, alignment forced")
> ?
>
WARN() is normally used for reporting a kernel bug. It dumps kernel
trace. And the users will report bug through bugzilla.kernel.org.
In this case, it's actually unexpected user input; we shouldn't treat it
as a kernel bug, and pr_err_ratelimited() is enough?
Best regards,
baolu
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address
2020-07-01 1:45 ` Lu Baolu
@ 2020-07-01 14:19 ` Jacob Pan
0 siblings, 0 replies; 23+ messages in thread
From: Jacob Pan @ 2020-07-01 14:19 UTC (permalink / raw)
To: Lu Baolu; +Cc: Tian, Kevin, Raj Ashok, LKML, iommu, David Woodhouse
On Wed, 1 Jul 2020 09:45:40 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:
> Hi Jacob,
>
> On 7/1/20 1:34 AM, Jacob Pan wrote:
> > On Thu, 25 Jun 2020 18:10:43 +0800
> > Lu Baolu<baolu.lu@linux.intel.com> wrote:
> >
> >> Hi,
> >>
> >> On 2020/6/23 23:43, Jacob Pan wrote:
> >>> For guest requested IOTLB invalidation, address and mask are
> >>> provided as part of the invalidation data. VT-d HW silently
> >>> ignores any address bits below the mask. SW shall also allow such
> >>> case but give warning if address does not align with the mask.
> >>> This patch relax the fault handling from error to warning and
> >>> proceed with invalidation request with the given mask.
> >>>
> >>> Signed-off-by: Jacob Pan<jacob.jun.pan@linux.intel.com>
> >>> ---
> >>> drivers/iommu/intel/iommu.c | 7 +++----
> >>> 1 file changed, 3 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/drivers/iommu/intel/iommu.c
> >>> b/drivers/iommu/intel/iommu.c index 5ea5732d5ec4..50fc62413a35
> >>> 100644 --- a/drivers/iommu/intel/iommu.c
> >>> +++ b/drivers/iommu/intel/iommu.c
> >>> @@ -5439,13 +5439,12 @@ intel_iommu_sva_invalidate(struct
> >>> iommu_domain *domain, struct device *dev,
> >>> switch (BIT(cache_type)) {
> >>> case IOMMU_CACHE_INV_TYPE_IOTLB:
> >>> + /* HW will ignore LSB bits based on
> >>> address mask */ if (inv_info->granularity == IOMMU_INV_GRANU_ADDR
> >>> && size &&
> >>> (inv_info->addr_info.addr &
> >>> ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
> >>> - pr_err_ratelimited("Address out
> >>> of range, 0x%llx, size order %llu\n",
> >>> -
> >>> inv_info->addr_info.addr, size);
> >>> - ret = -ERANGE;
> >>> - goto out_unlock;
> >>> + WARN_ONCE(1, "Address out of
> >>> range, 0x%llx, size order %llu\n",
> >>> +
> >>> inv_info->addr_info.addr, size);
> >> I don't think WARN_ONCE() is suitable here. It makes users think
> >> it's a kernel bug. How about pr_warn_ratelimited()?
> >>
> > I think pr_warn_ratelimited might still be too chatty. There is no
> > functional issues, we just don't to silently ignore it. Perhaps just
> > say:
> > WARN_ONCE(1, "User provided address not page aligned, alignment
> > forced") ?
> >
>
> WARN() is normally used for reporting a kernel bug. It dumps kernel
> trace. And the users will report bug through bugzilla.kernel.org.
>
> In this case, it's actually an unexpected user input, we shouldn't
> treat it as a kernel bug and pr_err_ratelimited() is enough?
>
Sounds good. I will leave it.
Thanks,
Jacob
> Best regards,
> baolu
[Jacob Pan]
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 7/7] iommu/vt-d: Disable multiple GPASID-dev bind
2020-06-23 15:43 [PATCH 0/7] iommu/vt-d: Misc tweaks and fixes for vSVA Jacob Pan
` (5 preceding siblings ...)
2020-06-23 15:43 ` [PATCH 6/7] iommu/vt-d: Warn on out-of-range invalidation address Jacob Pan
@ 2020-06-23 15:43 ` Jacob Pan
2020-06-25 12:54 ` Lu Baolu
6 siblings, 1 reply; 23+ messages in thread
From: Jacob Pan @ 2020-06-23 15:43 UTC (permalink / raw)
To: iommu, LKML, Lu Baolu, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
For the unlikely use case where multiple aux domains from the same pdev
are attached to a single guest and then bound to a single process
(thus the same PASID) within that guest, we cannot easily support this
case by refcounting the number of users, since there is only one SL page
table per PASID while we have multiple aux domains and thus multiple SL
page tables for the same PASID.
Extra unbinding of a guest PASID can happen due to races between the
normal and exception cases. Termination of one aux domain may affect the
others unless we actively track and switch aux domains to ensure the
validity of the SL page tables and TLB states in the shared PASID entry.
Support for sharing second level PGDs across domains could reduce the
complexity, but this is not available due to the limitations of the VFIO
container architecture. We can revisit this decision once PGD sharing
becomes available.
Overall, the complexity and potential glitches do not warrant this
unlikely use case, so it is removed by this patch.
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/intel/svm.c | 22 +++++++++-------------
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 6c87c807a0ab..d386853121a2 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -277,20 +277,16 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
goto out;
}
+ /*
+ * Do not allow multiple bindings of the same device-PASID since
+ * there is only one SL page tables per PASID. We may revisit
+ * once sharing PGD across domains are supported.
+ */
for_each_svm_dev(sdev, svm, dev) {
- /*
- * For devices with aux domains, we should allow
- * multiple bind calls with the same PASID and pdev.
- */
- if (iommu_dev_feature_enabled(dev,
- IOMMU_DEV_FEAT_AUX)) {
- sdev->users++;
- } else {
- dev_warn_ratelimited(dev,
- "Already bound with PASID %u\n",
- svm->pasid);
- ret = -EBUSY;
- }
+ dev_warn_ratelimited(dev,
+ "Already bound with PASID %u\n",
+ svm->pasid);
+ ret = -EBUSY;
goto out;
}
} else {
--
2.7.4
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH 7/7] iommu/vt-d: Disable multiple GPASID-dev bind
2020-06-23 15:43 ` [PATCH 7/7] iommu/vt-d: Disable multiple GPASID-dev bind Jacob Pan
@ 2020-06-25 12:54 ` Lu Baolu
0 siblings, 0 replies; 23+ messages in thread
From: Lu Baolu @ 2020-06-25 12:54 UTC (permalink / raw)
To: Jacob Pan, iommu, LKML, Joerg Roedel, David Woodhouse
Cc: Tian, Kevin, Raj Ashok
On 2020/6/23 23:43, Jacob Pan wrote:
> For the unlikely use case where multiple aux domains from the same pdev
> are attached to a single guest and then bound to a single process
> (thus same PASID) within that guest, we cannot easily support this case
> by refcounting the number of users. As there is only one SL page table
> per PASID while we have multiple aux domains thus multiple SL page tables
> for the same PASID.
>
> Extra unbinding guest PASID can happen due to race between normal and
> exception cases. Termination of one aux domain may affect others unless
> we actively track and switch aux domains to ensure the validity of SL
> page tables and TLB states in the shared PASID entry.
>
> Support for sharing second level PGDs across domains can reduce the
> complexity but this is not available due to the limitations on VFIO
> container architecture. We can revisit this decision once sharing PGDs
> are available.
>
> Overall, the complexity and potential glitch do not warrant this unlikely
> use case thereby removed by this patch.
>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>
> Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Fixes: 56722a4398a30 ("iommu/vt-d: Add bind guest PASID support")
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Best regards,
baolu
> ---
> drivers/iommu/intel/svm.c | 22 +++++++++-------------
> 1 file changed, 9 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
> index 6c87c807a0ab..d386853121a2 100644
> --- a/drivers/iommu/intel/svm.c
> +++ b/drivers/iommu/intel/svm.c
> @@ -277,20 +277,16 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
> goto out;
> }
>
> + /*
> + * Do not allow multiple bindings of the same device-PASID since
> + * there is only one SL page tables per PASID. We may revisit
> + * once sharing PGD across domains are supported.
> + */
> for_each_svm_dev(sdev, svm, dev) {
> - /*
> - * For devices with aux domains, we should allow
> - * multiple bind calls with the same PASID and pdev.
> - */
> - if (iommu_dev_feature_enabled(dev,
> - IOMMU_DEV_FEAT_AUX)) {
> - sdev->users++;
> - } else {
> - dev_warn_ratelimited(dev,
> - "Already bound with PASID %u\n",
> - svm->pasid);
> - ret = -EBUSY;
> - }
> + dev_warn_ratelimited(dev,
> + "Already bound with PASID %u\n",
> + svm->pasid);
> + ret = -EBUSY;
> goto out;
> }
> } else {
>
^ permalink raw reply [flat|nested] 23+ messages in thread