* [PATCH 0/5] io-pgtable fixes + ARM short-descriptor format
From: Robin Murphy @ 2015-12-04 17:52 UTC
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

This has been tested on an MMU-500 Fast Model, with the ARM SMMU driver
hacked up to force short-descriptor at stage 1; doing it properly needs
more work to decouple the context format from the kernel bitness. Since
I didn't want to delay getting this lot posted, that can come later.

Thanks,
Robin.

Robin Murphy (5):
  iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs
  iommu/io-pgtable: Indicate granule for TLB maintenance
  iommu/arm-smmu: Invalidate TLBs properly
  iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common
  iommu/io-pgtable: Add ARMv7 short descriptor support

 drivers/iommu/Kconfig              |  19 +
 drivers/iommu/Makefile             |   1 +
 drivers/iommu/arm-smmu-v3.c        |   2 +-
 drivers/iommu/arm-smmu.c           |  21 +-
 drivers/iommu/io-pgtable-arm-v7s.c | 836 +++++++++++++++++++++++++++++++++++++
 drivers/iommu/io-pgtable-arm.c     |  38 +-
 drivers/iommu/io-pgtable.c         |   3 +
 drivers/iommu/io-pgtable.h         |  20 +-
 drivers/iommu/ipmmu-vmsa.c         |   4 +-
 9 files changed, 916 insertions(+), 28 deletions(-)
 create mode 100644 drivers/iommu/io-pgtable-arm-v7s.c

-- 
1.9.1

* [PATCH 1/5] iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs
From: Robin Murphy @ 2015-12-04 17:52 UTC
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

In the case of corrupted page tables, or when an invalid size is given,
__arm_lpae_unmap() may recurse beyond the maximum number of levels.
Unfortunately, the detection of this error condition only happens *after*
calculating a nonsense offset from something which might not be a valid
table pointer, and dereferencing that to see whether it is a valid PTE.

Make things a little more robust by checking that the level is valid
before doing anything which depends on it being so.
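
The change is really just an ordering one; to spell it out (this simply
restates the hunk below):

        /* before: compute the offset and dereference, then validate lvl */
        ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
        pte = *ptep;
        if (WARN_ON(!pte || (lvl == ARM_LPAE_MAX_LEVELS)))
                return 0;

        /* after: reject an out-of-range lvl before using it at all */
        if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
                return 0;
        ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
        pte = *ptep;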

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---
 drivers/iommu/io-pgtable-arm.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 7df9777..366a354 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -486,11 +486,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	void *cookie = data->iop.cookie;
 	size_t blk_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
 
+	/* Something went horribly wrong and we ran out of page table */
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return 0;
+
 	ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
 	pte = *ptep;
-
-	/* Something went horribly wrong and we ran out of page table */
-	if (WARN_ON(!pte || (lvl == ARM_LPAE_MAX_LEVELS)))
+	if (WARN_ON(!pte))
 		return 0;
 
 	/* If the size matches this level, we're in the right place */
-- 
1.9.1

* [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
From: Robin Murphy @ 2015-12-04 17:52 UTC
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

IOMMU hardware with range-based TLB maintenance commands can work
happily with the iova and size arguments passed via the tlb_add_flush
callback, but for IOMMUs which require separate commands per entry in
the range, it is not straightforward to infer the necessary granularity
when it comes to issuing the actual commands.

Add an additional argument indicating the granularity for the benefit
of drivers needing to know, and update the ARM LPAE code appropriately
(for non-leaf invalidations we currently just assume the worst-case
page granularity rather than walking the table to check).
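
As a rough sketch of the consumer side (hypothetical callback, not part
of this series; the foo_* names are placeholders, and patch 3 adds the
real equivalent for arm-smmu), a driver needing one command per entry
can now walk the range in granule-sized steps:

        static void foo_tlb_add_flush(unsigned long iova, size_t size,
                                      size_t granule, bool leaf, void *cookie)
        {
                struct foo_domain *dom = cookie;

                /* One invalidation command per TLB entry covered by the range */
                while (size) {
                        foo_issue_tlbi_va(dom, iova, leaf);
                        iova += granule;
                        size -= granule;
                }
        }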

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---
 drivers/iommu/arm-smmu-v3.c    |  2 +-
 drivers/iommu/arm-smmu.c       |  2 +-
 drivers/iommu/io-pgtable-arm.c | 27 +++++++++++++++------------
 drivers/iommu/io-pgtable.h     |  4 ++--
 drivers/iommu/ipmmu-vmsa.c     |  4 ++--
 5 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4e5118a..c302b65 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1335,7 +1335,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  bool leaf, void *cookie)
+					  size_t granule, bool leaf, void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 47dc7a7..601e3dd 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -582,7 +582,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  bool leaf, void *cookie)
+					  size_t granule, bool leaf, void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 366a354..9088d27 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -58,8 +58,10 @@
 	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
 	  * (d)->bits_per_level) + (d)->pg_shift)
 
+#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
+
 #define ARM_LPAE_PAGES_PER_PGD(d)					\
-	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
+	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
 
 /*
  * Calculate the index at level l used to map virtual address a using the
@@ -169,7 +171,7 @@
 /* IOPTE accessors */
 #define iopte_deref(pte,d)					\
 	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
-	& ~((1ULL << (d)->pg_shift) - 1)))
+	& ~(ARM_LPAE_GRANULE(d) - 1)))
 
 #define iopte_type(pte,l)					\
 	(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
@@ -326,7 +328,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 	/* Grab a pointer to the next level */
 	pte = *ptep;
 	if (!pte) {
-		cptep = __arm_lpae_alloc_pages(1UL << data->pg_shift,
+		cptep = __arm_lpae_alloc_pages(ARM_LPAE_GRANULE(data),
 					       GFP_ATOMIC, cfg);
 		if (!cptep)
 			return -ENOMEM;
@@ -412,7 +414,7 @@ static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
 	if (lvl == ARM_LPAE_START_LVL(data))
 		table_size = data->pgd_size;
 	else
-		table_size = 1UL << data->pg_shift;
+		table_size = ARM_LPAE_GRANULE(data);
 
 	start = ptep;
 	end = (void *)ptep + table_size;
@@ -473,7 +475,7 @@ static int arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 
 	__arm_lpae_set_pte(ptep, table, cfg);
 	iova &= ~(blk_size - 1);
-	cfg->tlb->tlb_add_flush(iova, blk_size, true, data->iop.cookie);
+	cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie);
 	return size;
 }
 
@@ -501,12 +503,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 
 		if (!iopte_leaf(pte, lvl)) {
 			/* Also flush any partial walks */
-			tlb->tlb_add_flush(iova, size, false, cookie);
+			tlb->tlb_add_flush(iova, size, ARM_LPAE_GRANULE(data),
+					   false, cookie);
 			tlb->tlb_sync(cookie);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
-			tlb->tlb_add_flush(iova, size, true, cookie);
+			tlb->tlb_add_flush(iova, size, size, true, cookie);
 		}
 
 		return size;
@@ -572,7 +575,7 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 	return 0;
 
 found_translation:
-	iova &= ((1 << data->pg_shift) - 1);
+	iova &= (ARM_LPAE_GRANULE(data) - 1);
 	return ((phys_addr_t)iopte_to_pfn(pte,data) << data->pg_shift) | iova;
 }
 
@@ -670,7 +673,7 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	      (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_IRGN0_SHIFT) |
 	      (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_ORGN0_SHIFT);
 
-	switch (1 << data->pg_shift) {
+	switch (ARM_LPAE_GRANULE(data)) {
 	case SZ_4K:
 		reg |= ARM_LPAE_TCR_TG0_4K;
 		break;
@@ -771,7 +774,7 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 
 	sl = ARM_LPAE_START_LVL(data);
 
-	switch (1 << data->pg_shift) {
+	switch (ARM_LPAE_GRANULE(data)) {
 	case SZ_4K:
 		reg |= ARM_LPAE_TCR_TG0_4K;
 		sl++; /* SL0 format is different for 4K granule size */
@@ -891,8 +894,8 @@ static void dummy_tlb_flush_all(void *cookie)
 	WARN_ON(cookie != cfg_cookie);
 }
 
-static void dummy_tlb_add_flush(unsigned long iova, size_t size, bool leaf,
-				void *cookie)
+static void dummy_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
 {
 	WARN_ON(cookie != cfg_cookie);
 	WARN_ON(!(size & cfg_cookie->pgsize_bitmap));
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index ac9e234..2e18469 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -26,8 +26,8 @@ enum io_pgtable_fmt {
  */
 struct iommu_gather_ops {
 	void (*tlb_flush_all)(void *cookie);
-	void (*tlb_add_flush)(unsigned long iova, size_t size, bool leaf,
-			      void *cookie);
+	void (*tlb_add_flush)(unsigned long iova, size_t size, size_t granule,
+			      bool leaf, void *cookie);
 	void (*tlb_sync)(void *cookie);
 };
 
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 8cf605f..5b1166d 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -277,8 +277,8 @@ static void ipmmu_tlb_flush_all(void *cookie)
 	ipmmu_tlb_invalidate(domain);
 }
 
-static void ipmmu_tlb_add_flush(unsigned long iova, size_t size, bool leaf,
-				void *cookie)
+static void ipmmu_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
 {
 	/* The hardware doesn't support selective TLB flush. */
 }
-- 
1.9.1

* [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
From: Robin Murphy @ 2015-12-04 17:53 UTC
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

When invalidating an IOVA range potentially spanning multiple pages,
such as when removing an entire intermediate-level table, we currently
only issue an invalidation for the first IOVA of that range. Since the
architecture specifies that address-based TLB maintenance operations
target a single entry, an SMMU could feasibly retain live entries for
subsequent pages within that unmapped range, which is not good.

Make sure we hit every possible entry by iterating over the whole range
at the granularity provided by the pagetable implementation.
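
For a sense of scale (worked example, numbers not from the patch itself):
unmapping a 2MB region that was mapped with 4KB pages can leave up to
2MB / 4KB = 512 live TLB entries, so with this change the invalidation
loop issues 512 per-page invalidations for that range instead of the
single one we issue today.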

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---
 drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 601e3dd..f1d6fa7 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -597,12 +597,20 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		if (!IS_ENABLED(CONFIG_64BIT) || smmu->version == ARM_SMMU_V1) {
 			iova &= ~12UL;
 			iova |= ARM_SMMU_CB_ASID(cfg);
-			writel_relaxed(iova, reg);
+			while (size) {
+				writel_relaxed(iova, reg);
+				size -= granule;
+				iova += granule;
+			}
 #ifdef CONFIG_64BIT
 		} else {
 			iova >>= 12;
 			iova |= (u64)ARM_SMMU_CB_ASID(cfg) << 48;
-			writeq_relaxed(iova, reg);
+			while (size) {
+				writeq_relaxed(iova, reg);
+				size -= granule;
+				iova += granule >> 12;
+			}
 #endif
 		}
 #ifdef CONFIG_64BIT
@@ -610,7 +618,12 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
 		reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
 			      ARM_SMMU_CB_S2_TLBIIPAS2;
-		writeq_relaxed(iova >> 12, reg);
+		iova >>= 12;
+		while (size) {
+			writeq_relaxed(iova, reg);
+			size -= granule;
+			iova += granule >> 12;
+		}
 #endif
 	} else {
 		reg = ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_TLBIVMID;
-- 
1.9.1

* [PATCH 4/5] iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common
From: Robin Murphy @ 2015-12-04 17:53 UTC
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

There is no need to keep a useful accessor for a public structure hidden
away in a private implementation. Move it out alongside the structure
definition so that other implementations may reuse it.
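
As an example of the intended reuse (this is exactly what the arm-v7s
code in the next patch does), a format implementation can now layer its
own private accessors on top of the shared macro:

        #define io_pgtable_to_data(x)                                   \
                container_of((x), struct arm_v7s_io_pgtable, iop)

        #define io_pgtable_ops_to_data(x)                               \
                io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))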

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---
 drivers/iommu/io-pgtable-arm.c | 3 ---
 drivers/iommu/io-pgtable.h     | 2 ++
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 9088d27..1619681 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -38,9 +38,6 @@
 #define io_pgtable_to_data(x)						\
 	container_of((x), struct arm_lpae_io_pgtable, iop)
 
-#define io_pgtable_ops_to_pgtable(x)					\
-	container_of((x), struct io_pgtable, ops)
-
 #define io_pgtable_ops_to_data(x)					\
 	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
 
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 2e18469..36673c8 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -131,6 +131,8 @@ struct io_pgtable {
 	struct io_pgtable_ops	ops;
 };
 
+#define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable, ops)
+
 /**
  * struct io_pgtable_init_fns - Alloc/free a set of page tables for a
  *                              particular format.
-- 
1.9.1

* [PATCH 5/5] iommu/io-pgtable: Add ARMv7 short descriptor support
From: Robin Murphy @ 2015-12-04 17:53 UTC
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

Add a nearly-complete ARMv7 short descriptor implementation, omitting
only a few legacy and CPU-centric aspects which shouldn't be necessary
for IOMMU API use anyway.
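
As a rough sketch of driver-side usage (illustrative only; the SMMU
changes to select this format properly are not part of this series, and
my_gather_ops/write_context_regs below are placeholder names):

        struct io_pgtable_cfg cfg = {
                .quirks         = IO_PGTABLE_QUIRK_ARM_NS,
                .pgsize_bitmap  = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
                .ias            = 32,
                .oas            = 32,
                .tlb            = &my_gather_ops,
                .iommu_dev      = dev,
        };
        struct io_pgtable_ops *ops;

        ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, cookie);
        if (!ops)
                return -ENOMEM;

        /* Program the context registers from what the allocator chose */
        write_context_regs(cfg.arm_v7s_cfg.ttbr[0], cfg.arm_v7s_cfg.tcr,
                           cfg.arm_v7s_cfg.prrr, cfg.arm_v7s_cfg.nmrr);

        /* ...then map/unmap as usual */
        ops->map(ops, iova, paddr, SZ_4K, IOMMU_READ | IOMMU_WRITE);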

Signed-off-by: Yong Wu <yong.wu-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org>
Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---
 drivers/iommu/Kconfig              |  19 +
 drivers/iommu/Makefile             |   1 +
 drivers/iommu/io-pgtable-arm-v7s.c | 836 +++++++++++++++++++++++++++++++++++++
 drivers/iommu/io-pgtable.c         |   3 +
 drivers/iommu/io-pgtable.h         |  14 +-
 5 files changed, 872 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/io-pgtable-arm-v7s.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index b9094e9..b591022 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -39,6 +39,25 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST
 
 	  If unsure, say N here.
 
+config IOMMU_IO_PGTABLE_ARMV7S
+	bool "ARMv7/v8 Short Descriptor Format"
+	select IOMMU_IO_PGTABLE
+	depends on HAS_DMA && (ARM || ARM64 || COMPILE_TEST)
+	help
+	  Enable support for the ARM Short-descriptor pagetable format.
+	  This supports 32-bit virtual and physical addresses mapped using
+	  2-level tables with 4KB pages/1MB sections, and contiguous entries
+	  for 64KB pages/16MB supersections if indicated by the IOMMU driver.
+
+config IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
+	bool "ARMv7s selftests"
+	depends on IOMMU_IO_PGTABLE_ARMV7S
+	help
+	  Enable self-tests for ARMv7s page table allocator. This performs
+	  a series of page-table consistency checks during boot.
+
+	  If unsure, say N here.
+
 endmenu
 
 config IOMMU_IOVA
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 68faca02..47f11d9 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
+obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
 obj-$(CONFIG_OF_IOMMU)	+= of_iommu.o
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
new file mode 100644
index 0000000..52e85a0
--- /dev/null
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -0,0 +1,836 @@
+/*
+ * CPU-agnostic ARM page table allocator.
+ *
+ * ARMv7 Short-descriptor format, supporting
+ * - Basic memory attributes
+ * - Simplified access permissions (AP[2:1] model)
+ * - Backwards-compatible TEX remap
+ * - Large pages/supersections (if indicated by the caller)
+ *
+ * Not supporting:
+ * - Legacy access permissions (AP[2:0] model)
+ *
+ * Almost certainly never supporting:
+ * - PXN
+ * - Domains
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (C) 2014-2015 ARM Limited
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ */
+
+#define pr_fmt(fmt)	"arm-v7s io-pgtable: " fmt
+
+#include <linux/iommu.h>
+#include <linux/kernel.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include <asm/barrier.h>
+
+#include "io-pgtable.h"
+
+/* Struct accessors */
+#define io_pgtable_to_data(x)						\
+	container_of((x), struct arm_v7s_io_pgtable, iop)
+
+#define io_pgtable_ops_to_data(x)					\
+	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+
+/*
+ * We have 32 bits total; 12 bits resolved at level 1, 8 bits at level 2,
+ * and 12 bits in a page. With some carefully-chosen coefficients we can
+ * hide the ugly inconsistencies behind these macros and at least let the
+ * rest of the code pretend to be somewhat sane.
+ */
+#define ARM_V7S_ADDR_BITS		32
+#define _ARM_V7S_LVL_BITS(lvl)		(16 - (lvl) * 4)
+#define ARM_V7S_LVL_SHIFT(lvl)		(ARM_V7S_ADDR_BITS - (4 + 8 * (lvl)))
+#define ARM_V7S_TABLE_SHIFT		10
+
+#define ARM_V7S_PTES_PER_LVL(lvl)	(1 << _ARM_V7S_LVL_BITS(lvl))
+#define ARM_V7S_TABLE_SIZE(lvl)						\
+	(ARM_V7S_PTES_PER_LVL(lvl) * sizeof(arm_v7s_iopte))
+
+#define ARM_V7S_BLOCK_SIZE(lvl)		(1UL << ARM_V7S_LVL_SHIFT(lvl))
+#define ARM_V7S_LVL_MASK(lvl)		((u32)(~0U << ARM_V7S_LVL_SHIFT(lvl)))
+#define ARM_V7S_TABLE_MASK		((u32)(~0U << ARM_V7S_TABLE_SHIFT))
+#define _ARM_V7S_IDX_MASK(lvl)		(ARM_V7S_PTES_PER_LVL(lvl) - 1)
+#define ARM_V7S_LVL_IDX(addr, lvl)	({				\
+	int _l = lvl;							\
+	((u32)(addr) >> ARM_V7S_LVL_SHIFT(_l)) & _ARM_V7S_IDX_MASK(_l); \
+})
+
+#define ARM_V7S_CONT_PAGES		16
+
+/* PTE type bits: these are all mixed up with XN/PXN bits in most cases */
+#define ARM_V7S_PTE_TYPE_TABLE		0x1
+#define ARM_V7S_PTE_TYPE_PAGE		0x2
+#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1
+
+#define ARM_V7S_PTE_IS_VALID(pte)	(((pte) & 0x3) != 0)
+#define ARM_V7S_PTE_IS_TABLE(pte, lvl)	(lvl < 2 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))
+
+/* Page table bits */
+#define ARM_V7S_ATTR_XN(lvl)		BIT(4 * (2 - (lvl)))
+#define ARM_V7S_ATTR_B			BIT(2)
+#define ARM_V7S_ATTR_C			BIT(3)
+#define ARM_V7S_ATTR_NS_TABLE		BIT(3)
+#define ARM_V7S_ATTR_NS_SECTION		BIT(19)
+
+#define ARM_V7S_CONT_SECTION		BIT(18)
+#define ARM_V7S_CONT_PAGE_XN_SHIFT	15
+
+/*
+ * The attribute bits are consistently ordered*, but occupy bits [17:10] of
+ * a level 1 PTE vs. bits [11:4] at level 2. Thus we define the individual
+ * fields relative to that 8-bit block, plus a total shift relative to the PTE.
+ */
+#define ARM_V7S_ATTR_SHIFT(lvl)		(16 - (lvl) * 6)
+
+#define ARM_V7S_ATTR_MASK		0xff
+#define ARM_V7S_ATTR_AP0		BIT(0)
+#define ARM_V7S_ATTR_AP1		BIT(1)
+#define ARM_V7S_ATTR_AP2		BIT(5)
+#define ARM_V7S_ATTR_S			BIT(6)
+#define ARM_V7S_ATTR_NG			BIT(7)
+#define ARM_V7S_TEX_SHIFT		2
+#define ARM_V7S_TEX_MASK		0x7
+#define ARM_V7S_ATTR_TEX(val)		(((val) & ARM_V7S_TEX_MASK) << ARM_V7S_TEX_SHIFT)
+
+/* *well, except for TEX on level 2 large pages, of course :( */
+#define ARM_V7S_CONT_PAGE_TEX_SHIFT	6
+#define ARM_V7S_CONT_PAGE_TEX_MASK	(ARM_V7S_TEX_MASK << ARM_V7S_CONT_PAGE_TEX_SHIFT)
+
+/* Simplified access permissions */
+#define ARM_V7S_PTE_AF			ARM_V7S_ATTR_AP0
+#define ARM_V7S_PTE_AP_UNPRIV		ARM_V7S_ATTR_AP1
+#define ARM_V7S_PTE_AP_RDONLY		ARM_V7S_ATTR_AP2
+
+/* Register bits */
+#define ARM_V7S_RGN_NC			0
+#define ARM_V7S_RGN_WBWA		1
+#define ARM_V7S_RGN_WT			2
+#define ARM_V7S_RGN_WB			3
+
+#define ARM_V7S_PRRR_TYPE_DEVICE	1
+#define ARM_V7S_PRRR_TYPE_NORMAL	2
+#define ARM_V7S_PRRR_TR(n, type)	(((type) & 0x3) << ((n) * 2))
+#define ARM_V7S_PRRR_DS0		BIT(16)
+#define ARM_V7S_PRRR_DS1		BIT(17)
+#define ARM_V7S_PRRR_NS0		BIT(18)
+#define ARM_V7S_PRRR_NS1		BIT(19)
+#define ARM_V7S_PRRR_NOS(n)		BIT((n) + 24)
+
+#define ARM_V7S_NMRR_IR(n, attr)	(((attr) & 0x3) << ((n) * 2))
+#define ARM_V7S_NMRR_OR(n, attr)	(((attr) & 0x3) << ((n) * 2 + 16))
+
+#define ARM_V7S_TTBR_S			BIT(1)
+#define ARM_V7S_TTBR_NOS		BIT(5)
+#define ARM_V7S_TTBR_ORGN_ATTR(attr)	(((attr) & 0x3) << 3)
+#define ARM_V7S_TTBR_IRGN_ATTR(attr)					\
+	((((attr) & 0x1) << 6) | (((attr) & 0x2) >> 1))
+
+#define ARM_V7S_TCR_PD1			BIT(5)
+
+typedef u32 arm_v7s_iopte;
+
+static bool selftest_running;
+
+struct arm_v7s_io_pgtable {
+	struct io_pgtable	iop;
+
+	arm_v7s_iopte		*pgd;
+	struct kmem_cache	*l2_tables;
+};
+
+static dma_addr_t __arm_v7s_dma_addr(void *pages)
+{
+	return (dma_addr_t)virt_to_phys(pages);
+}
+
+static arm_v7s_iopte *iopte_deref(arm_v7s_iopte pte, int lvl)
+{
+	if (ARM_V7S_PTE_IS_TABLE(pte, lvl))
+		pte &= ARM_V7S_TABLE_MASK;
+	else
+		pte &= ARM_V7S_LVL_MASK(lvl);
+	return phys_to_virt(pte);
+}
+
+static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+				   struct arm_v7s_io_pgtable *data)
+{
+	struct device *dev = data->iop.cfg.iommu_dev;
+	dma_addr_t dma;
+	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+	void *table = NULL;
+
+	if (lvl == 1)
+		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+	else if (lvl == 2)
+		table = kmem_cache_zalloc(data->l2_tables, gfp);
+	if (table && !selftest_running) {
+		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, dma))
+			goto out_free;
+		/*
+		 * We depend on the IOMMU being able to work with any physical
+		 * address directly, so if the DMA layer suggests otherwise by
+		 * translating or truncating them, that bodes very badly...
+		 */
+		if (dma != virt_to_phys(table))
+			goto out_unmap;
+	}
+	return table;
+
+out_unmap:
+	dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
+	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
+out_free:
+	if (lvl == 1)
+		free_pages((unsigned long)table, get_order(size));
+	else
+		kmem_cache_free(data->l2_tables, table);
+	return NULL;
+}
+
+static void __arm_v7s_free_table(void *table, int lvl,
+				 struct arm_v7s_io_pgtable *data)
+{
+	struct device *dev = data->iop.cfg.iommu_dev;
+	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+
+	if (!selftest_running)
+		dma_unmap_single(dev, __arm_v7s_dma_addr(table), size,
+				 DMA_TO_DEVICE);
+	if (lvl == 1)
+		free_pages((unsigned long)table, get_order(size));
+	else
+		kmem_cache_free(data->l2_tables, table);
+}
+
+static void __arm_v7s_pte_sync(arm_v7s_iopte *ptep, int num_entries,
+			       struct io_pgtable_cfg *cfg)
+{
+	if (selftest_running)
+		return;
+
+	dma_sync_single_for_device(cfg->iommu_dev, __arm_v7s_dma_addr(ptep),
+				   num_entries * sizeof(*ptep), DMA_TO_DEVICE);
+}
+static void __arm_v7s_set_pte(arm_v7s_iopte *ptep, arm_v7s_iopte pte,
+			      int num_entries, struct io_pgtable_cfg *cfg)
+{
+	int i;
+
+	for (i = 0; i < num_entries; i++)
+		ptep[i] = pte;
+
+	__arm_v7s_pte_sync(ptep, num_entries, cfg);
+}
+
+static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl,
+					 struct io_pgtable_cfg *cfg)
+{
+	bool ap = !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS);
+	arm_v7s_iopte pte = ARM_V7S_ATTR_NG | ARM_V7S_ATTR_S |
+			    ARM_V7S_ATTR_TEX(1);
+
+	if (ap) {
+		pte |= ARM_V7S_PTE_AF | ARM_V7S_PTE_AP_UNPRIV;
+		if (!(prot & IOMMU_WRITE))
+			pte |= ARM_V7S_PTE_AP_RDONLY;
+	}
+	pte <<= ARM_V7S_ATTR_SHIFT(lvl);
+
+	if ((prot & IOMMU_NOEXEC) && ap)
+		pte |= ARM_V7S_ATTR_XN(lvl);
+	if (prot & IOMMU_CACHE)
+		pte |= ARM_V7S_ATTR_B | ARM_V7S_ATTR_C;
+
+	return pte;
+}
+
+static int arm_v7s_pte_to_prot(arm_v7s_iopte pte, int lvl)
+{
+	int prot = IOMMU_READ;
+
+	if (pte & (ARM_V7S_PTE_AP_RDONLY << ARM_V7S_ATTR_SHIFT(lvl)))
+		prot |= IOMMU_WRITE;
+	if (pte & ARM_V7S_ATTR_C)
+		prot |= IOMMU_CACHE;
+
+	return prot;
+}
+
+static arm_v7s_iopte arm_v7s_pte_to_cont(arm_v7s_iopte pte, int lvl)
+{
+	if (lvl == 1) {
+		pte |= ARM_V7S_CONT_SECTION;
+	} else if (lvl == 2) {
+		arm_v7s_iopte xn = pte & ARM_V7S_ATTR_XN(lvl);
+		arm_v7s_iopte tex = pte & ARM_V7S_CONT_PAGE_TEX_MASK;
+
+		pte ^= xn | tex;
+		pte |= (xn << ARM_V7S_CONT_PAGE_XN_SHIFT) |
+		       (tex << ARM_V7S_CONT_PAGE_TEX_SHIFT);
+		pte ^= ARM_V7S_PTE_TYPE_PAGE | ARM_V7S_PTE_TYPE_CONT_PAGE;
+	}
+	return pte;
+}
+
+static arm_v7s_iopte arm_v7s_cont_to_pte(arm_v7s_iopte pte, int lvl)
+{
+	if (lvl == 1) {
+		pte &= ~ARM_V7S_CONT_SECTION;
+	} else if (lvl == 2) {
+		arm_v7s_iopte xn = pte & BIT(ARM_V7S_CONT_PAGE_XN_SHIFT);
+		arm_v7s_iopte tex = pte & (ARM_V7S_CONT_PAGE_TEX_MASK <<
+					   ARM_V7S_CONT_PAGE_TEX_SHIFT);
+
+		pte ^= xn | tex;
+		pte |= (xn >> ARM_V7S_CONT_PAGE_XN_SHIFT) |
+		       (tex >> ARM_V7S_CONT_PAGE_TEX_SHIFT);
+		pte ^= ARM_V7S_PTE_TYPE_PAGE | ARM_V7S_PTE_TYPE_CONT_PAGE;
+	}
+	return pte;
+}
+
+static bool arm_v7s_pte_is_cont(arm_v7s_iopte pte, int lvl)
+{
+	if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte, lvl))
+		return pte & ARM_V7S_CONT_SECTION;
+	else if (lvl == 2)
+		return !(pte & ARM_V7S_PTE_TYPE_PAGE);
+	return false;
+}
+
+static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *, unsigned long,
+			   size_t, int, arm_v7s_iopte *);
+
+static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
+			    unsigned long iova, phys_addr_t paddr, int prot,
+			    int lvl, int num_entries, arm_v7s_iopte *ptep)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg);
+	int i;
+
+	for (i = 0; i < num_entries; i++)
+		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
+			/*
+			 * We need to unmap and free the old table before
+			 * overwriting it with a block entry.
+			 */
+			arm_v7s_iopte *tblp;
+			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
+
+			tblp = ptep + i - ARM_V7S_LVL_IDX(iova, lvl);
+			if (WARN_ON(__arm_v7s_unmap(data, iova, sz, lvl, tblp)
+					!= sz))
+				return -EINVAL;
+		} else if (ptep[i]) {
+			/* We require an unmap first */
+			WARN_ON(!selftest_running);
+			return -EEXIST;
+		}
+
+	pte |= ARM_V7S_PTE_TYPE_PAGE;
+	if (lvl == 1 && (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS))
+		pte |= ARM_V7S_ATTR_NS_SECTION;
+
+	if (num_entries > 1)
+		pte = arm_v7s_pte_to_cont(pte, lvl);
+
+	pte |= paddr & ARM_V7S_LVL_MASK(lvl);
+
+	__arm_v7s_set_pte(ptep, pte, num_entries, cfg);
+	return 0;
+}
+
+static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
+			 phys_addr_t paddr, size_t size, int prot,
+			 int lvl, arm_v7s_iopte *ptep)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_v7s_iopte pte, *cptep;
+	int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
+
+	/* Find our entry at the current level */
+	ptep += ARM_V7S_LVL_IDX(iova, lvl);
+
+	/* If we can install a leaf entry at this level, then do so */
+	if (num_entries)
+		return arm_v7s_init_pte(data, iova, paddr, prot,
+					lvl, num_entries, ptep);
+
+	/* We can't allocate tables at the final level */
+	if (WARN_ON(lvl == 2))
+		return -EINVAL;
+
+	/* Grab a pointer to the next level */
+	pte = *ptep;
+	if (!pte) {
+		cptep = __arm_v7s_alloc_table(lvl + 1, GFP_ATOMIC, data);
+		if (!cptep)
+			return -ENOMEM;
+
+		pte = virt_to_phys(cptep) | ARM_V7S_PTE_TYPE_TABLE;
+		if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+			pte |= ARM_V7S_ATTR_NS_TABLE;
+
+		__arm_v7s_set_pte(ptep, pte, 1, cfg);
+	} else {
+		cptep = iopte_deref(pte, lvl);
+	}
+
+	/* Rinse, repeat */
+	return __arm_v7s_map(data, iova, paddr, size, prot, lvl + 1, cptep);
+}
+
+static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
+			phys_addr_t paddr, size_t size, int prot)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	const struct iommu_gather_ops *tlb = cfg->tlb;
+	void *cookie = data->iop.cookie;
+	int ret;
+
+	/* If no access, then nothing to do */
+	if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
+		return 0;
+
+	ret = __arm_v7s_map(data, iova, paddr, size, prot, 1, data->pgd);
+	/*
+	 * Synchronise all PTE updates for the new mapping before there's
+	 * a chance for anything to kick off a table walk for the new iova.
+	 */
+	if (cfg->quirks & IO_PGTABLE_QUIRK_TLBI_ON_MAP) {
+		tlb->tlb_add_flush(iova, size, ARM_V7S_BLOCK_SIZE(2), false,
+				   cookie);
+		tlb->tlb_sync(cookie);
+	} else {
+		wmb();
+	}
+
+	return ret;
+}
+
+static void arm_v7s_free_pgtable(struct io_pgtable *iop)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop);
+	int i;
+
+	for (i = 0; i < ARM_V7S_PTES_PER_LVL(1); i++) {
+		arm_v7s_iopte pte = data->pgd[i];
+
+		if (ARM_V7S_PTE_IS_TABLE(pte, 1))
+			__arm_v7s_free_table(iopte_deref(pte, 1), 2, data);
+	}
+	__arm_v7s_free_table(data->pgd, 1, data);
+	kmem_cache_destroy(data->l2_tables);
+	kfree(data);
+}
+
+static void arm_v7s_split_cont(struct arm_v7s_io_pgtable *data,
+			       unsigned long iova, int idx, int lvl,
+			       arm_v7s_iopte *ptep)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	void *cookie = data->iop.cookie;
+	arm_v7s_iopte pte;
+	size_t size = ARM_V7S_BLOCK_SIZE(lvl);
+	int i;
+
+	ptep -= idx & (ARM_V7S_CONT_PAGES - 1);
+	pte = arm_v7s_cont_to_pte(*ptep, lvl);
+	for (i = 0; i < ARM_V7S_CONT_PAGES; i++) {
+		ptep[i] = pte;
+		pte += size;
+	}
+
+	__arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, cfg);
+
+	size *= ARM_V7S_CONT_PAGES;
+	cfg->tlb->tlb_add_flush(iova, size, size, true, cookie);
+	cfg->tlb->tlb_sync(cookie);
+}
+
+static int arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data,
+				   unsigned long iova, size_t size,
+				   arm_v7s_iopte *ptep)
+{
+	unsigned long blk_start, blk_end, blk_size;
+	phys_addr_t blk_paddr;
+	arm_v7s_iopte table = 0;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	int prot = arm_v7s_pte_to_prot(*ptep, 1);
+
+	blk_size = ARM_V7S_BLOCK_SIZE(1);
+	blk_start = iova & ARM_V7S_LVL_MASK(1);
+	blk_end = blk_start + ARM_V7S_BLOCK_SIZE(1);
+	blk_paddr = *ptep & ARM_V7S_LVL_MASK(1);
+
+	for (; blk_start < blk_end; blk_start += size, blk_paddr += size) {
+		arm_v7s_iopte *tablep;
+
+		/* Unmap! */
+		if (blk_start == iova)
+			continue;
+
+		/* __arm_v7s_map expects a pointer to the start of the table */
+		tablep = &table - ARM_V7S_LVL_IDX(blk_start, 1);
+		if (__arm_v7s_map(data, blk_start, blk_paddr, size, prot, 1,
+				  tablep) < 0) {
+			if (table) {
+				/* Free the table we allocated */
+				tablep = iopte_deref(table, 1);
+				__arm_v7s_free_table(tablep, 2, data);
+			}
+			return 0; /* Bytes unmapped */
+		}
+	}
+
+	__arm_v7s_set_pte(ptep, table, 1, cfg);
+	iova &= ~(blk_size - 1);
+	cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie);
+	return size;
+}
+
+static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
+			    unsigned long iova, size_t size, int lvl,
+			    arm_v7s_iopte *ptep)
+{
+	arm_v7s_iopte pte[ARM_V7S_CONT_PAGES];
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	const struct iommu_gather_ops *tlb = cfg->tlb;
+	void *cookie = data->iop.cookie;
+	int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
+
+	/* Something went horribly wrong and we ran out of page table */
+	if (WARN_ON(lvl > 2))
+		return 0;
+
+	idx = ARM_V7S_LVL_IDX(iova, lvl);
+	ptep += idx;
+	do {
+		if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i])))
+			return 0;
+		pte[i] = ptep[i];
+	} while (++i < num_entries);
+
+	/*
+	 * If we've hit a contiguous 'large page' entry at this level, it
+	 * needs splitting first, unless we're unmapping the whole lot.
+	 */
+	if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl))
+		arm_v7s_split_cont(data, iova, idx, lvl, ptep);
+
+	/* If the size matches this level, we're in the right place */
+	if (num_entries) {
+		size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl);
+
+		__arm_v7s_set_pte(ptep, 0, num_entries, cfg);
+
+		for (i = 0; i < num_entries; i++) {
+			if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) {
+				/* Also flush any partial walks */
+				tlb->tlb_add_flush(iova, blk_size,
+						   ARM_V7S_BLOCK_SIZE(2),
+						   false, cookie);
+				tlb->tlb_sync(cookie);
+				ptep = iopte_deref(pte[i], lvl);
+				__arm_v7s_free_table(ptep, lvl + 1, data);
+			} else {
+				tlb->tlb_add_flush(iova, blk_size, blk_size,
+						   true, cookie);
+			}
+			iova += blk_size;
+		}
+		return size;
+	} else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) {
+		/*
+		 * Insert a table at the next level to map the old region,
+		 * minus the part we want to unmap
+		 */
+		return arm_v7s_split_blk_unmap(data, iova, size, ptep);
+	}
+
+	/* Keep on walkin' */
+	ptep = iopte_deref(pte[0], lvl);
+	return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep);
+}
+
+static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
+			 size_t size)
+{
+	size_t unmapped;
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable *iop = &data->iop;
+
+	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
+	if (unmapped)
+		iop->cfg.tlb->tlb_sync(iop->cookie);
+
+	return unmapped;
+}
+
+static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
+					unsigned long iova)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	arm_v7s_iopte *ptep = data->pgd, pte = ARM_V7S_PTE_TYPE_TABLE;
+	int lvl = 0;
+	u32 mask;
+
+	while (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
+		pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
+		ptep = iopte_deref(pte, lvl);
+	}
+	if (!ARM_V7S_PTE_IS_VALID(pte))
+		return 0;
+
+	mask = ARM_V7S_LVL_MASK(lvl);
+	if (arm_v7s_pte_is_cont(pte, lvl))
+		mask *= ARM_V7S_CONT_PAGES;
+	return (pte & mask) | (iova & ~mask);
+}
+
+static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+						void *cookie)
+{
+	struct arm_v7s_io_pgtable *data;
+
+	if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
+		return NULL;
+
+	data = kmalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return NULL;
+
+	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+					    ARM_V7S_TABLE_SIZE(2),
+					    ARM_V7S_TABLE_SIZE(2),
+					    SLAB_CACHE_DMA, NULL);
+	if (!data->l2_tables)
+		goto out_free_data;
+
+	data->iop.ops = (struct io_pgtable_ops) {
+		.map		= arm_v7s_map,
+		.unmap		= arm_v7s_unmap,
+		.iova_to_phys	= arm_v7s_iova_to_phys,
+	};
+
+	/* We have to do this early for __arm_v7s_alloc_table to work... */
+	data->iop.cfg = *cfg;
+
+	/*
+	 * Unless the IOMMU driver indicates supersection support by
+	 * having SZ_16M set in the initial bitmap, they won't be used.
+	 */
+	cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
+
+	/* TCR: T0SZ=0, disable TTBR1 */
+	cfg->arm_v7s_cfg.tcr = ARM_V7S_TCR_PD1;
+
+	/*
+	 * TEX remap: the indices used map to the closest equivalent types
+	 * under the non-TEX-remap interpretation of those attribute bits,
+	 * excepting various implementation-defined aspects of shareability.
+	 */
+	cfg->arm_v7s_cfg.prrr = ARM_V7S_PRRR_TR(1, ARM_V7S_PRRR_TYPE_DEVICE) |
+				ARM_V7S_PRRR_TR(4, ARM_V7S_PRRR_TYPE_NORMAL) |
+				ARM_V7S_PRRR_TR(7, ARM_V7S_PRRR_TYPE_NORMAL) |
+				ARM_V7S_PRRR_DS0 | ARM_V7S_PRRR_DS1 |
+				ARM_V7S_PRRR_NS1 | ARM_V7S_PRRR_NOS(7);
+	cfg->arm_v7s_cfg.nmrr = ARM_V7S_NMRR_IR(7, ARM_V7S_RGN_WBWA) |
+				ARM_V7S_NMRR_OR(7, ARM_V7S_RGN_WBWA);
+
+	/* Looking good; allocate a pgd */
+	data->pgd = __arm_v7s_alloc_table(1, GFP_KERNEL, data);
+	if (!data->pgd)
+		goto out_free_data;
+
+	/* Ensure the empty pgd is visible before any actual TTBR write */
+	wmb();
+
+	/* TTBRs */
+	cfg->arm_v7s_cfg.ttbr[0] = virt_to_phys(data->pgd) |
+				   ARM_V7S_TTBR_S | ARM_V7S_TTBR_NOS |
+				   ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
+				   ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA);
+	cfg->arm_v7s_cfg.ttbr[1] = 0;
+	return &data->iop;
+
+out_free_data:
+	kmem_cache_destroy(data->l2_tables);
+	kfree(data);
+	return NULL;
+}
+
+struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns = {
+	.alloc	= arm_v7s_alloc_pgtable,
+	.free	= arm_v7s_free_pgtable,
+};
+
+#ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
+
+static struct io_pgtable_cfg *cfg_cookie;
+
+static void dummy_tlb_flush_all(void *cookie)
+{
+	WARN_ON(cookie != cfg_cookie);
+}
+
+static void dummy_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
+{
+	WARN_ON(cookie != cfg_cookie);
+	WARN_ON(!(size & cfg_cookie->pgsize_bitmap));
+}
+
+static void dummy_tlb_sync(void *cookie)
+{
+	WARN_ON(cookie != cfg_cookie);
+}
+
+static struct iommu_gather_ops dummy_tlb_ops = {
+	.tlb_flush_all	= dummy_tlb_flush_all,
+	.tlb_add_flush	= dummy_tlb_add_flush,
+	.tlb_sync	= dummy_tlb_sync,
+};
+
+#define __FAIL(ops)	({				\
+		WARN(1, "selftest: test failed\n");	\
+		selftest_running = false;		\
+		-EFAULT;				\
+})
+
+static int __init arm_v7s_do_selftests(void)
+{
+	struct io_pgtable_ops *ops;
+	struct io_pgtable_cfg cfg = {
+		.tlb = &dummy_tlb_ops,
+		.oas = 32,
+		.ias = 32,
+		.quirks = IO_PGTABLE_QUIRK_ARM_NS,
+		.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
+	};
+	unsigned int iova, size, iova_start;
+	unsigned int i, loopnr = 0;
+
+	selftest_running = true;
+
+	cfg_cookie = &cfg;
+
+	ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, &cfg);
+	if (!ops) {
+		pr_err("selftest: failed to allocate io pgtable ops\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Initial sanity checks.
+	 * Empty page tables shouldn't provide any translations.
+	 */
+	if (ops->iova_to_phys(ops, 42))
+		return __FAIL(ops);
+
+	if (ops->iova_to_phys(ops, SZ_1G + 42))
+		return __FAIL(ops);
+
+	if (ops->iova_to_phys(ops, SZ_2G + 42))
+		return __FAIL(ops);
+
+	/*
+	 * Distinct mappings of different granule sizes.
+	 */
+	iova = 0;
+	i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG);
+	while (i != BITS_PER_LONG) {
+		size = 1UL << i;
+		if (ops->map(ops, iova, iova, size, IOMMU_READ |
+						    IOMMU_WRITE |
+						    IOMMU_NOEXEC |
+						    IOMMU_CACHE))
+			return __FAIL(ops);
+
+		/* Overlapping mappings */
+		if (!ops->map(ops, iova, iova + size, size,
+			      IOMMU_READ | IOMMU_NOEXEC))
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
+			return __FAIL(ops);
+
+		iova += SZ_16M;
+		i++;
+		i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i);
+		loopnr++;
+	}
+
+	/* Partial unmap */
+	i = 1;
+	size = 1UL << __ffs(cfg.pgsize_bitmap);
+	while (i < loopnr) {
+		iova_start = i * SZ_16M;
+		if (ops->unmap(ops, iova_start + size, size) != size)
+			return __FAIL(ops);
+
+		/* Remap of partial unmap */
+		if (ops->map(ops, iova_start + size, size, size, IOMMU_READ))
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova_start + size + 42)
+		    != (size + 42))
+			return __FAIL(ops);
+		i++;
+	}
+
+	/* Full unmap */
+	iova = 0;
+	i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG);
+	while (i != BITS_PER_LONG) {
+		size = 1UL << i;
+
+		if (ops->unmap(ops, iova, size) != size)
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova + 42))
+			return __FAIL(ops);
+
+		/* Remap full block */
+		if (ops->map(ops, iova, iova, size, IOMMU_WRITE))
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
+			return __FAIL(ops);
+
+		iova += SZ_16M;
+		i++;
+		i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i);
+	}
+
+	free_io_pgtable_ops(ops);
+
+	selftest_running = false;
+
+	pr_info("self test ok\n");
+	return 0;
+}
+subsys_initcall(arm_v7s_do_selftests);
+#endif
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 6f2e319..8c615b7 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -33,6 +33,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
 	[ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
 	[ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
 #endif
+#ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S
+	[ARM_V7S] = &io_pgtable_arm_v7s_init_fns,
+#endif
 };
 
 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 36673c8..aa57073 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -1,5 +1,6 @@
 #ifndef __IO_PGTABLE_H
 #define __IO_PGTABLE_H
+#include <linux/bitops.h>
 
 /*
  * Public API for use by IOMMU drivers
@@ -9,6 +10,7 @@ enum io_pgtable_fmt {
 	ARM_32_LPAE_S2,
 	ARM_64_LPAE_S1,
 	ARM_64_LPAE_S2,
+	ARM_V7S,
 	IO_PGTABLE_NUM_FMTS,
 };
 
@@ -45,7 +47,9 @@ struct iommu_gather_ops {
  *                 page table walker.
  */
 struct io_pgtable_cfg {
-	#define IO_PGTABLE_QUIRK_ARM_NS	(1 << 0)	/* Set NS bit in PTEs */
+	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0) /* Set NS bit in PTEs */
+	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1) /* No AP/XN bits */
+	#define IO_PGTABLE_QUIRK_TLBI_ON_MAP	BIT(2) /* TLB Inv. on map */
 	int				quirks;
 	unsigned long			pgsize_bitmap;
 	unsigned int			ias;
@@ -65,6 +69,13 @@ struct io_pgtable_cfg {
 			u64	vttbr;
 			u64	vtcr;
 		} arm_lpae_s2_cfg;
+
+		struct {
+			u32	ttbr[2];
+			u32	tcr;
+			u32	nmrr;
+			u32	prrr;
+		} arm_v7s_cfg;
 	};
 };
 
@@ -149,5 +160,6 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns;
 
 #endif /* __IO_PGTABLE_H */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 5/5] iommu/io-pgtable: Add ARMv7 short descriptor support
@ 2015-12-04 17:53     ` Robin Murphy
  0 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-04 17:53 UTC (permalink / raw)
  To: linux-arm-kernel

Add a nearly-complete ARMv7 short descriptor implementation, omitting
only a few legacy and CPU-centric aspects which shouldn't be necessary
for IOMMU API use anyway.
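
As a rough usage sketch (not part of this patch, just following the same
io_pgtable API the selftest below exercises), an IOMMU driver would pick
up the new format along these lines - the gather ops, cookie, device and
addresses here stand in for whatever the driver already has:

	struct io_pgtable_cfg cfg = {
		.quirks		= IO_PGTABLE_QUIRK_ARM_NS,
		.pgsize_bitmap	= SZ_4K | SZ_64K | SZ_1M | SZ_16M,
		.ias		= 32,
		.oas		= 32,
		.tlb		= &my_gather_ops,	/* driver's iommu_gather_ops */
		.iommu_dev	= dev,			/* device performing the table walks */
	};
	struct io_pgtable_ops *ops;

	ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, my_cookie);
	if (!ops)
		return -ENOMEM;

	/* Map/unmap/lookup through the returned ops as with the LPAE formats */
	ops->map(ops, iova, paddr, SZ_4K, IOMMU_READ | IOMMU_WRITE);
	ops->unmap(ops, iova, SZ_4K);
	free_io_pgtable_ops(ops);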

Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/Kconfig              |  19 +
 drivers/iommu/Makefile             |   1 +
 drivers/iommu/io-pgtable-arm-v7s.c | 836 +++++++++++++++++++++++++++++++++++++
 drivers/iommu/io-pgtable.c         |   3 +
 drivers/iommu/io-pgtable.h         |  14 +-
 5 files changed, 872 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/io-pgtable-arm-v7s.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index b9094e9..b591022 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -39,6 +39,25 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST
 
 	  If unsure, say N here.
 
+config IOMMU_IO_PGTABLE_ARMV7S
+	bool "ARMv7/v8 Short Descriptor Format"
+	select IOMMU_IO_PGTABLE
+	depends on HAS_DMA && (ARM || ARM64 || COMPILE_TEST)
+	help
+	  Enable support for the ARM Short-descriptor pagetable format.
+	  This supports 32-bit virtual and physical addresses mapped using
+	  2-level tables with 4KB pages/1MB sections, and contiguous entries
+	  for 64KB pages/16MB supersections if indicated by the IOMMU driver.
+
+config IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
+	bool "ARMv7s selftests"
+	depends on IOMMU_IO_PGTABLE_ARMV7S
+	help
+	  Enable self-tests for ARMv7s page table allocator. This performs
+	  a series of page-table consistency checks during boot.
+
+	  If unsure, say N here.
+
 endmenu
 
 config IOMMU_IOVA
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 68faca02..47f11d9 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
+obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
 obj-$(CONFIG_OF_IOMMU)	+= of_iommu.o
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
new file mode 100644
index 0000000..52e85a0
--- /dev/null
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -0,0 +1,836 @@
+/*
+ * CPU-agnostic ARM page table allocator.
+ *
+ * ARMv7 Short-descriptor format, supporting
+ * - Basic memory attributes
+ * - Simplified access permissions (AP[2:1] model)
+ * - Backwards-compatible TEX remap
+ * - Large pages/supersections (if indicated by the caller)
+ *
+ * Not supporting:
+ * - Legacy access permissions (AP[2:0] model)
+ *
+ * Almost certainly never supporting:
+ * - PXN
+ * - Domains
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (C) 2014-2015 ARM Limited
+ * Copyright (c) 2014-2015 MediaTek Inc.
+ */
+
+#define pr_fmt(fmt)	"arm-v7s io-pgtable: " fmt
+
+#include <linux/iommu.h>
+#include <linux/kernel.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include <asm/barrier.h>
+
+#include "io-pgtable.h"
+
+/* Struct accessors */
+#define io_pgtable_to_data(x)						\
+	container_of((x), struct arm_v7s_io_pgtable, iop)
+
+#define io_pgtable_ops_to_data(x)					\
+	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+
+/*
+ * We have 32 bits total; 12 bits resolved at level 1, 8 bits at level 2,
+ * and 12 bits in a page. With some carefully-chosen coefficients we can
+ * hide the ugly inconsistencies behind these macros and at least let the
+ * rest of the code pretend to be somewhat sane.
+ */
+#define ARM_V7S_ADDR_BITS		32
+#define _ARM_V7S_LVL_BITS(lvl)		(16 - (lvl) * 4)
+#define ARM_V7S_LVL_SHIFT(lvl)		(ARM_V7S_ADDR_BITS - (4 + 8 * (lvl)))
+#define ARM_V7S_TABLE_SHIFT		10
+
+#define ARM_V7S_PTES_PER_LVL(lvl)	(1 << _ARM_V7S_LVL_BITS(lvl))
+#define ARM_V7S_TABLE_SIZE(lvl)						\
+	(ARM_V7S_PTES_PER_LVL(lvl) * sizeof(arm_v7s_iopte))
+
+#define ARM_V7S_BLOCK_SIZE(lvl)		(1UL << ARM_V7S_LVL_SHIFT(lvl))
+#define ARM_V7S_LVL_MASK(lvl)		((u32)(~0U << ARM_V7S_LVL_SHIFT(lvl)))
+#define ARM_V7S_TABLE_MASK		((u32)(~0U << ARM_V7S_TABLE_SHIFT))
+#define _ARM_V7S_IDX_MASK(lvl)		(ARM_V7S_PTES_PER_LVL(lvl) - 1)
+#define ARM_V7S_LVL_IDX(addr, lvl)	({				\
+	int _l = lvl;							\
+	((u32)(addr) >> ARM_V7S_LVL_SHIFT(_l)) & _ARM_V7S_IDX_MASK(_l); \
+})
+
+#define ARM_V7S_CONT_PAGES		16
+
+/* PTE type bits: these are all mixed up with XN/PXN bits in most cases */
+#define ARM_V7S_PTE_TYPE_TABLE		0x1
+#define ARM_V7S_PTE_TYPE_PAGE		0x2
+#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1
+
+#define ARM_V7S_PTE_IS_VALID(pte)	(((pte) & 0x3) != 0)
+#define ARM_V7S_PTE_IS_TABLE(pte, lvl)	(lvl < 2 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))
+
+/* Page table bits */
+#define ARM_V7S_ATTR_XN(lvl)		BIT(4 * (2 - (lvl)))
+#define ARM_V7S_ATTR_B			BIT(2)
+#define ARM_V7S_ATTR_C			BIT(3)
+#define ARM_V7S_ATTR_NS_TABLE		BIT(3)
+#define ARM_V7S_ATTR_NS_SECTION		BIT(19)
+
+#define ARM_V7S_CONT_SECTION		BIT(18)
+#define ARM_V7S_CONT_PAGE_XN_SHIFT	15
+
+/*
+ * The attribute bits are consistently ordered*, but occupy bits [17:10] of
+ * a level 1 PTE vs. bits [11:4] at level 2. Thus we define the individual
+ * fields relative to that 8-bit block, plus a total shift relative to the PTE.
+ */
+#define ARM_V7S_ATTR_SHIFT(lvl)		(16 - (lvl) * 6)
+
+#define ARM_V7S_ATTR_MASK		0xff
+#define ARM_V7S_ATTR_AP0		BIT(0)
+#define ARM_V7S_ATTR_AP1		BIT(1)
+#define ARM_V7S_ATTR_AP2		BIT(5)
+#define ARM_V7S_ATTR_S			BIT(6)
+#define ARM_V7S_ATTR_NG			BIT(7)
+#define ARM_V7S_TEX_SHIFT		2
+#define ARM_V7S_TEX_MASK		0x7
+#define ARM_V7S_ATTR_TEX(val)		(((val) & ARM_V7S_TEX_MASK) << ARM_V7S_TEX_SHIFT)
+
+/* *well, except for TEX on level 2 large pages, of course :( */
+#define ARM_V7S_CONT_PAGE_TEX_SHIFT	6
+#define ARM_V7S_CONT_PAGE_TEX_MASK	(ARM_V7S_TEX_MASK << ARM_V7S_CONT_PAGE_TEX_SHIFT)
+
+/* Simplified access permissions */
+#define ARM_V7S_PTE_AF			ARM_V7S_ATTR_AP0
+#define ARM_V7S_PTE_AP_UNPRIV		ARM_V7S_ATTR_AP1
+#define ARM_V7S_PTE_AP_RDONLY		ARM_V7S_ATTR_AP2
+
+/* Register bits */
+#define ARM_V7S_RGN_NC			0
+#define ARM_V7S_RGN_WBWA		1
+#define ARM_V7S_RGN_WT			2
+#define ARM_V7S_RGN_WB			3
+
+#define ARM_V7S_PRRR_TYPE_DEVICE	1
+#define ARM_V7S_PRRR_TYPE_NORMAL	2
+#define ARM_V7S_PRRR_TR(n, type)	(((type) & 0x3) << ((n) * 2))
+#define ARM_V7S_PRRR_DS0		BIT(16)
+#define ARM_V7S_PRRR_DS1		BIT(17)
+#define ARM_V7S_PRRR_NS0		BIT(18)
+#define ARM_V7S_PRRR_NS1		BIT(19)
+#define ARM_V7S_PRRR_NOS(n)		BIT((n) + 24)
+
+#define ARM_V7S_NMRR_IR(n, attr)	(((attr) & 0x3) << ((n) * 2))
+#define ARM_V7S_NMRR_OR(n, attr)	(((attr) & 0x3) << ((n) * 2 + 16))
+
+#define ARM_V7S_TTBR_S			BIT(1)
+#define ARM_V7S_TTBR_NOS		BIT(5)
+#define ARM_V7S_TTBR_ORGN_ATTR(attr)	(((attr) & 0x3) << 3)
+#define ARM_V7S_TTBR_IRGN_ATTR(attr)					\
+	((((attr) & 0x1) << 6) | (((attr) & 0x2) >> 1))
+
+#define ARM_V7S_TCR_PD1			BIT(5)
+
+typedef u32 arm_v7s_iopte;
+
+static bool selftest_running;
+
+struct arm_v7s_io_pgtable {
+	struct io_pgtable	iop;
+
+	arm_v7s_iopte		*pgd;
+	struct kmem_cache	*l2_tables;
+};
+
+static dma_addr_t __arm_v7s_dma_addr(void *pages)
+{
+	return (dma_addr_t)virt_to_phys(pages);
+}
+
+static arm_v7s_iopte *iopte_deref(arm_v7s_iopte pte, int lvl)
+{
+	if (ARM_V7S_PTE_IS_TABLE(pte, lvl))
+		pte &= ARM_V7S_TABLE_MASK;
+	else
+		pte &= ARM_V7S_LVL_MASK(lvl);
+	return phys_to_virt(pte);
+}
+
+static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+				   struct arm_v7s_io_pgtable *data)
+{
+	struct device *dev = data->iop.cfg.iommu_dev;
+	dma_addr_t dma;
+	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+	void *table = NULL;
+
+	if (lvl == 1)
+		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+	else if (lvl == 2)
+		table = kmem_cache_zalloc(data->l2_tables, gfp);
+	if (table && !selftest_running) {
+		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, dma))
+			goto out_free;
+		/*
+		 * We depend on the IOMMU being able to work with any physical
+		 * address directly, so if the DMA layer suggests otherwise by
+		 * translating or truncating them, that bodes very badly...
+		 */
+		if (dma != virt_to_phys(table))
+			goto out_unmap;
+	}
+	return table;
+
+out_unmap:
+	dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
+	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
+out_free:
+	if (lvl == 1)
+		free_pages((unsigned long)table, get_order(size));
+	else
+		kmem_cache_free(data->l2_tables, table);
+	return NULL;
+}
+
+static void __arm_v7s_free_table(void *table, int lvl,
+				 struct arm_v7s_io_pgtable *data)
+{
+	struct device *dev = data->iop.cfg.iommu_dev;
+	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+
+	if (!selftest_running)
+		dma_unmap_single(dev, __arm_v7s_dma_addr(table), size,
+				 DMA_TO_DEVICE);
+	if (lvl == 1)
+		free_pages((unsigned long)table, get_order(size));
+	else
+		kmem_cache_free(data->l2_tables, table);
+}
+
+static void __arm_v7s_pte_sync(arm_v7s_iopte *ptep, int num_entries,
+			       struct io_pgtable_cfg *cfg)
+{
+	if (selftest_running)
+		return;
+
+	dma_sync_single_for_device(cfg->iommu_dev, __arm_v7s_dma_addr(ptep),
+				   num_entries * sizeof(*ptep), DMA_TO_DEVICE);
+}
+static void __arm_v7s_set_pte(arm_v7s_iopte *ptep, arm_v7s_iopte pte,
+			      int num_entries, struct io_pgtable_cfg *cfg)
+{
+	int i;
+
+	for (i = 0; i < num_entries; i++)
+		ptep[i] = pte;
+
+	__arm_v7s_pte_sync(ptep, num_entries, cfg);
+}
+
+static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl,
+					 struct io_pgtable_cfg *cfg)
+{
+	bool ap = !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS);
+	arm_v7s_iopte pte = ARM_V7S_ATTR_NG | ARM_V7S_ATTR_S |
+			    ARM_V7S_ATTR_TEX(1);
+
+	if (ap) {
+		pte |= ARM_V7S_PTE_AF | ARM_V7S_PTE_AP_UNPRIV;
+		if (!(prot & IOMMU_WRITE))
+			pte |= ARM_V7S_PTE_AP_RDONLY;
+	}
+	pte <<= ARM_V7S_ATTR_SHIFT(lvl);
+
+	if ((prot & IOMMU_NOEXEC) && ap)
+		pte |= ARM_V7S_ATTR_XN(lvl);
+	if (prot & IOMMU_CACHE)
+		pte |= ARM_V7S_ATTR_B | ARM_V7S_ATTR_C;
+
+	return pte;
+}
+
+static int arm_v7s_pte_to_prot(arm_v7s_iopte pte, int lvl)
+{
+	int prot = IOMMU_READ;
+
+	if (!(pte & (ARM_V7S_PTE_AP_RDONLY << ARM_V7S_ATTR_SHIFT(lvl))))
+		prot |= IOMMU_WRITE;
+	if (pte & ARM_V7S_ATTR_C)
+		prot |= IOMMU_CACHE;
+
+	return prot;
+}
+
+static arm_v7s_iopte arm_v7s_pte_to_cont(arm_v7s_iopte pte, int lvl)
+{
+	if (lvl == 1) {
+		pte |= ARM_V7S_CONT_SECTION;
+	} else if (lvl == 2) {
+		arm_v7s_iopte xn = pte & ARM_V7S_ATTR_XN(lvl);
+		arm_v7s_iopte tex = pte & ARM_V7S_CONT_PAGE_TEX_MASK;
+
+		pte ^= xn | tex;
+		pte |= (xn << ARM_V7S_CONT_PAGE_XN_SHIFT) |
+		       (tex << ARM_V7S_CONT_PAGE_TEX_SHIFT);
+		pte ^= ARM_V7S_PTE_TYPE_PAGE | ARM_V7S_PTE_TYPE_CONT_PAGE;
+	}
+	return pte;
+}
+
+static arm_v7s_iopte arm_v7s_cont_to_pte(arm_v7s_iopte pte, int lvl)
+{
+	if (lvl == 1) {
+		pte &= ~ARM_V7S_CONT_SECTION;
+	} else if (lvl == 2) {
+		arm_v7s_iopte xn = pte & BIT(ARM_V7S_CONT_PAGE_XN_SHIFT);
+		arm_v7s_iopte tex = pte & (ARM_V7S_CONT_PAGE_TEX_MASK <<
+					   ARM_V7S_CONT_PAGE_TEX_SHIFT);
+
+		pte ^= xn | tex;
+		pte |= (xn >> ARM_V7S_CONT_PAGE_XN_SHIFT) |
+		       (tex >> ARM_V7S_CONT_PAGE_TEX_SHIFT);
+		pte ^= ARM_V7S_PTE_TYPE_PAGE | ARM_V7S_PTE_TYPE_CONT_PAGE;
+	}
+	return pte;
+}
+
+static bool arm_v7s_pte_is_cont(arm_v7s_iopte pte, int lvl)
+{
+	if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte, lvl))
+		return pte & ARM_V7S_CONT_SECTION;
+	else if (lvl == 2)
+		return !(pte & ARM_V7S_PTE_TYPE_PAGE);
+	return false;
+}
+
+static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *, unsigned long,
+			   size_t, int, arm_v7s_iopte *);
+
+static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
+			    unsigned long iova, phys_addr_t paddr, int prot,
+			    int lvl, int num_entries, arm_v7s_iopte *ptep)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg);
+	int i;
+
+	for (i = 0; i < num_entries; i++)
+		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
+			/*
+			 * We need to unmap and free the old table before
+			 * overwriting it with a block entry.
+			 */
+			arm_v7s_iopte *tblp;
+			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
+
+			tblp = ptep + i - ARM_V7S_LVL_IDX(iova, lvl);
+			if (WARN_ON(__arm_v7s_unmap(data, iova, sz, lvl, tblp)
+					!= sz))
+				return -EINVAL;
+		} else if (ptep[i]) {
+			/* We require an unmap first */
+			WARN_ON(!selftest_running);
+			return -EEXIST;
+		}
+
+	pte |= ARM_V7S_PTE_TYPE_PAGE;
+	if (lvl == 1 && (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS))
+		pte |= ARM_V7S_ATTR_NS_SECTION;
+
+	if (num_entries > 1)
+		pte = arm_v7s_pte_to_cont(pte, lvl);
+
+	pte |= paddr & ARM_V7S_LVL_MASK(lvl);
+
+	__arm_v7s_set_pte(ptep, pte, num_entries, cfg);
+	return 0;
+}
+
+static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
+			 phys_addr_t paddr, size_t size, int prot,
+			 int lvl, arm_v7s_iopte *ptep)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_v7s_iopte pte, *cptep;
+	int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
+
+	/* Find our entry at the current level */
+	ptep += ARM_V7S_LVL_IDX(iova, lvl);
+
+	/* If we can install a leaf entry at this level, then do so */
+	if (num_entries)
+		return arm_v7s_init_pte(data, iova, paddr, prot,
+					lvl, num_entries, ptep);
+
+	/* We can't allocate tables at the final level */
+	if (WARN_ON(lvl == 2))
+		return -EINVAL;
+
+	/* Grab a pointer to the next level */
+	pte = *ptep;
+	if (!pte) {
+		cptep = __arm_v7s_alloc_table(lvl + 1, GFP_ATOMIC, data);
+		if (!cptep)
+			return -ENOMEM;
+
+		pte = virt_to_phys(cptep) | ARM_V7S_PTE_TYPE_TABLE;
+		if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+			pte |= ARM_V7S_ATTR_NS_TABLE;
+
+		__arm_v7s_set_pte(ptep, pte, 1, cfg);
+	} else {
+		cptep = iopte_deref(pte, lvl);
+	}
+
+	/* Rinse, repeat */
+	return __arm_v7s_map(data, iova, paddr, size, prot, lvl + 1, cptep);
+}
+
+static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
+			phys_addr_t paddr, size_t size, int prot)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	const struct iommu_gather_ops *tlb = cfg->tlb;
+	void *cookie = data->iop.cookie;
+	int ret;
+
+	/* If no access, then nothing to do */
+	if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
+		return 0;
+
+	ret = __arm_v7s_map(data, iova, paddr, size, prot, 1, data->pgd);
+	/*
+	 * Synchronise all PTE updates for the new mapping before there's
+	 * a chance for anything to kick off a table walk for the new iova.
+	 */
+	if (cfg->quirks & IO_PGTABLE_QUIRK_TLBI_ON_MAP) {
+		tlb->tlb_add_flush(iova, size, ARM_V7S_BLOCK_SIZE(2), false,
+				   cookie);
+		tlb->tlb_sync(cookie);
+	} else {
+		wmb();
+	}
+
+	return ret;
+}
+
+static void arm_v7s_free_pgtable(struct io_pgtable *iop)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop);
+	int i;
+
+	for (i = 0; i < ARM_V7S_PTES_PER_LVL(1); i++) {
+		arm_v7s_iopte pte = data->pgd[i];
+
+		if (ARM_V7S_PTE_IS_TABLE(pte, 1))
+			__arm_v7s_free_table(iopte_deref(pte, 1), 2, data);
+	}
+	__arm_v7s_free_table(data->pgd, 1, data);
+	kmem_cache_destroy(data->l2_tables);
+	kfree(data);
+}
+
+static void arm_v7s_split_cont(struct arm_v7s_io_pgtable *data,
+			       unsigned long iova, int idx, int lvl,
+			       arm_v7s_iopte *ptep)
+{
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	void *cookie = data->iop.cookie;
+	arm_v7s_iopte pte;
+	size_t size = ARM_V7S_BLOCK_SIZE(lvl);
+	int i;
+
+	ptep -= idx & (ARM_V7S_CONT_PAGES - 1);
+	pte = arm_v7s_cont_to_pte(*ptep, lvl);
+	for (i = 0; i < ARM_V7S_CONT_PAGES; i++) {
+		ptep[i] = pte;
+		pte += size;
+	}
+
+	__arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, cfg);
+
+	size *= ARM_V7S_CONT_PAGES;
+	cfg->tlb->tlb_add_flush(iova, size, size, true, cookie);
+	cfg->tlb->tlb_sync(cookie);
+}
+
+static int arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data,
+				   unsigned long iova, size_t size,
+				   arm_v7s_iopte *ptep)
+{
+	unsigned long blk_start, blk_end, blk_size;
+	phys_addr_t blk_paddr;
+	arm_v7s_iopte table = 0;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	int prot = arm_v7s_pte_to_prot(*ptep, 1);
+
+	blk_size = ARM_V7S_BLOCK_SIZE(1);
+	blk_start = iova & ARM_V7S_LVL_MASK(1);
+	blk_end = blk_start + ARM_V7S_BLOCK_SIZE(1);
+	blk_paddr = *ptep & ARM_V7S_LVL_MASK(1);
+
+	for (; blk_start < blk_end; blk_start += size, blk_paddr += size) {
+		arm_v7s_iopte *tablep;
+
+		/* Unmap! */
+		if (blk_start == iova)
+			continue;
+
+		/* __arm_v7s_map expects a pointer to the start of the table */
+		tablep = &table - ARM_V7S_LVL_IDX(blk_start, 1);
+		if (__arm_v7s_map(data, blk_start, blk_paddr, size, prot, 1,
+				  tablep) < 0) {
+			if (table) {
+				/* Free the table we allocated */
+				tablep = iopte_deref(table, 1);
+				__arm_v7s_free_table(tablep, 2, data);
+			}
+			return 0; /* Bytes unmapped */
+		}
+	}
+
+	__arm_v7s_set_pte(ptep, table, 1, cfg);
+	iova &= ~(blk_size - 1);
+	cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie);
+	return size;
+}
+
+static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
+			    unsigned long iova, size_t size, int lvl,
+			    arm_v7s_iopte *ptep)
+{
+	arm_v7s_iopte pte[ARM_V7S_CONT_PAGES];
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	const struct iommu_gather_ops *tlb = cfg->tlb;
+	void *cookie = data->iop.cookie;
+	int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
+
+	/* Something went horribly wrong and we ran out of page table */
+	if (WARN_ON(lvl > 2))
+		return 0;
+
+	idx = ARM_V7S_LVL_IDX(iova, lvl);
+	ptep += idx;
+	do {
+		if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i])))
+			return 0;
+		pte[i] = ptep[i];
+	} while (++i < num_entries);
+
+	/*
+	 * If we've hit a contiguous 'large page' entry at this level, it
+	 * needs splitting first, unless we're unmapping the whole lot.
+	 */
+	if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl))
+		arm_v7s_split_cont(data, iova, idx, lvl, ptep);
+
+	/* If the size matches this level, we're in the right place */
+	if (num_entries) {
+		size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl);
+
+		__arm_v7s_set_pte(ptep, 0, num_entries, cfg);
+
+		for (i = 0; i < num_entries; i++) {
+			if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) {
+				/* Also flush any partial walks */
+				tlb->tlb_add_flush(iova, blk_size,
+						   ARM_V7S_BLOCK_SIZE(2),
+						   false, cookie);
+				tlb->tlb_sync(cookie);
+				ptep = iopte_deref(pte[i], lvl);
+				__arm_v7s_free_table(ptep, lvl + 1, data);
+			} else {
+				tlb->tlb_add_flush(iova, blk_size, blk_size,
+						   true, cookie);
+			}
+			iova += blk_size;
+		}
+		return size;
+	} else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) {
+		/*
+		 * Insert a table at the next level to map the old region,
+		 * minus the part we want to unmap
+		 */
+		return arm_v7s_split_blk_unmap(data, iova, size, ptep);
+	}
+
+	/* Keep on walkin' */
+	ptep = iopte_deref(pte[0], lvl);
+	return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep);
+}
+
+static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
+			 size_t size)
+{
+	size_t unmapped;
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable *iop = &data->iop;
+
+	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
+	if (unmapped)
+		iop->cfg.tlb->tlb_sync(iop->cookie);
+
+	return unmapped;
+}
+
+static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
+					unsigned long iova)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	arm_v7s_iopte *ptep = data->pgd, pte = ARM_V7S_PTE_TYPE_TABLE;
+	int lvl = 0;
+	u32 mask;
+
+	while (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
+		pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
+		ptep = iopte_deref(pte, lvl);
+	}
+	if (!ARM_V7S_PTE_IS_VALID(pte))
+		return 0;
+
+	mask = ARM_V7S_LVL_MASK(lvl);
+	if (arm_v7s_pte_is_cont(pte, lvl))
+		mask *= ARM_V7S_CONT_PAGES;
+	return (pte & mask) | (iova & ~mask);
+}
+
+static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+						void *cookie)
+{
+	struct arm_v7s_io_pgtable *data;
+
+	if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
+		return NULL;
+
+	data = kmalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return NULL;
+
+	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+					    ARM_V7S_TABLE_SIZE(2),
+					    ARM_V7S_TABLE_SIZE(2),
+					    SLAB_CACHE_DMA, NULL);
+	if (!data->l2_tables)
+		goto out_free_data;
+
+	data->iop.ops = (struct io_pgtable_ops) {
+		.map		= arm_v7s_map,
+		.unmap		= arm_v7s_unmap,
+		.iova_to_phys	= arm_v7s_iova_to_phys,
+	};
+
+	/* We have to do this early for __arm_v7s_alloc_table to work... */
+	data->iop.cfg = *cfg;
+
+	/*
+	 * Unless the IOMMU driver indicates supersection support by
+	 * having SZ_16M set in the initial bitmap, they won't be used.
+	 */
+	cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
+
+	/* TCR: T0SZ=0, disable TTBR1 */
+	cfg->arm_v7s_cfg.tcr = ARM_V7S_TCR_PD1;
+
+	/*
+	 * TEX remap: the indices used map to the closest equivalent types
+	 * under the non-TEX-remap interpretation of those attribute bits,
+	 * excepting various implementation-defined aspects of shareability.
+	 */
+	cfg->arm_v7s_cfg.prrr = ARM_V7S_PRRR_TR(1, ARM_V7S_PRRR_TYPE_DEVICE) |
+				ARM_V7S_PRRR_TR(4, ARM_V7S_PRRR_TYPE_NORMAL) |
+				ARM_V7S_PRRR_TR(7, ARM_V7S_PRRR_TYPE_NORMAL) |
+				ARM_V7S_PRRR_DS0 | ARM_V7S_PRRR_DS1 |
+				ARM_V7S_PRRR_NS1 | ARM_V7S_PRRR_NOS(7);
+	cfg->arm_v7s_cfg.nmrr = ARM_V7S_NMRR_IR(7, ARM_V7S_RGN_WBWA) |
+				ARM_V7S_NMRR_OR(7, ARM_V7S_RGN_WBWA);
+
+	/* Looking good; allocate a pgd */
+	data->pgd = __arm_v7s_alloc_table(1, GFP_KERNEL, data);
+	if (!data->pgd)
+		goto out_free_data;
+
+	/* Ensure the empty pgd is visible before any actual TTBR write */
+	wmb();
+
+	/* TTBRs */
+	cfg->arm_v7s_cfg.ttbr[0] = virt_to_phys(data->pgd) |
+				   ARM_V7S_TTBR_S | ARM_V7S_TTBR_NOS |
+				   ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
+				   ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA);
+	cfg->arm_v7s_cfg.ttbr[1] = 0;
+	return &data->iop;
+
+out_free_data:
+	kmem_cache_destroy(data->l2_tables);
+	kfree(data);
+	return NULL;
+}
+
+struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns = {
+	.alloc	= arm_v7s_alloc_pgtable,
+	.free	= arm_v7s_free_pgtable,
+};
+
+#ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
+
+static struct io_pgtable_cfg *cfg_cookie;
+
+static void dummy_tlb_flush_all(void *cookie)
+{
+	WARN_ON(cookie != cfg_cookie);
+}
+
+static void dummy_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
+{
+	WARN_ON(cookie != cfg_cookie);
+	WARN_ON(!(size & cfg_cookie->pgsize_bitmap));
+}
+
+static void dummy_tlb_sync(void *cookie)
+{
+	WARN_ON(cookie != cfg_cookie);
+}
+
+static struct iommu_gather_ops dummy_tlb_ops = {
+	.tlb_flush_all	= dummy_tlb_flush_all,
+	.tlb_add_flush	= dummy_tlb_add_flush,
+	.tlb_sync	= dummy_tlb_sync,
+};
+
+#define __FAIL(ops)	({				\
+		WARN(1, "selftest: test failed\n");	\
+		selftest_running = false;		\
+		-EFAULT;				\
+})
+
+static int __init arm_v7s_do_selftests(void)
+{
+	struct io_pgtable_ops *ops;
+	struct io_pgtable_cfg cfg = {
+		.tlb = &dummy_tlb_ops,
+		.oas = 32,
+		.ias = 32,
+		.quirks = IO_PGTABLE_QUIRK_ARM_NS,
+		.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
+	};
+	unsigned int iova, size, iova_start;
+	unsigned int i, loopnr = 0;
+
+	selftest_running = true;
+
+	cfg_cookie = &cfg;
+
+	ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, &cfg);
+	if (!ops) {
+		pr_err("selftest: failed to allocate io pgtable ops\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Initial sanity checks.
+	 * Empty page tables shouldn't provide any translations.
+	 */
+	if (ops->iova_to_phys(ops, 42))
+		return __FAIL(ops);
+
+	if (ops->iova_to_phys(ops, SZ_1G + 42))
+		return __FAIL(ops);
+
+	if (ops->iova_to_phys(ops, SZ_2G + 42))
+		return __FAIL(ops);
+
+	/*
+	 * Distinct mappings of different granule sizes.
+	 */
+	iova = 0;
+	i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG);
+	while (i != BITS_PER_LONG) {
+		size = 1UL << i;
+		if (ops->map(ops, iova, iova, size, IOMMU_READ |
+						    IOMMU_WRITE |
+						    IOMMU_NOEXEC |
+						    IOMMU_CACHE))
+			return __FAIL(ops);
+
+		/* Overlapping mappings */
+		if (!ops->map(ops, iova, iova + size, size,
+			      IOMMU_READ | IOMMU_NOEXEC))
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
+			return __FAIL(ops);
+
+		iova += SZ_16M;
+		i++;
+		i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i);
+		loopnr++;
+	}
+
+	/* Partial unmap */
+	i = 1;
+	size = 1UL << __ffs(cfg.pgsize_bitmap);
+	while (i < loopnr) {
+		iova_start = i * SZ_16M;
+		if (ops->unmap(ops, iova_start + size, size) != size)
+			return __FAIL(ops);
+
+		/* Remap of partial unmap */
+		if (ops->map(ops, iova_start + size, size, size, IOMMU_READ))
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova_start + size + 42)
+		    != (size + 42))
+			return __FAIL(ops);
+		i++;
+	}
+
+	/* Full unmap */
+	iova = 0;
+	i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG);
+	while (i != BITS_PER_LONG) {
+		size = 1UL << i;
+
+		if (ops->unmap(ops, iova, size) != size)
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova + 42))
+			return __FAIL(ops);
+
+		/* Remap full block */
+		if (ops->map(ops, iova, iova, size, IOMMU_WRITE))
+			return __FAIL(ops);
+
+		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
+			return __FAIL(ops);
+
+		iova += SZ_16M;
+		i++;
+		i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i);
+	}
+
+	free_io_pgtable_ops(ops);
+
+	selftest_running = false;
+
+	pr_info("self test ok\n");
+	return 0;
+}
+subsys_initcall(arm_v7s_do_selftests);
+#endif
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 6f2e319..8c615b7 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -33,6 +33,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
 	[ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
 	[ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
 #endif
+#ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S
+	[ARM_V7S] = &io_pgtable_arm_v7s_init_fns,
+#endif
 };
 
 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 36673c8..aa57073 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -1,5 +1,6 @@
 #ifndef __IO_PGTABLE_H
 #define __IO_PGTABLE_H
+#include <linux/bitops.h>
 
 /*
  * Public API for use by IOMMU drivers
@@ -9,6 +10,7 @@ enum io_pgtable_fmt {
 	ARM_32_LPAE_S2,
 	ARM_64_LPAE_S1,
 	ARM_64_LPAE_S2,
+	ARM_V7S,
 	IO_PGTABLE_NUM_FMTS,
 };
 
@@ -45,7 +47,9 @@ struct iommu_gather_ops {
  *                 page table walker.
  */
 struct io_pgtable_cfg {
-	#define IO_PGTABLE_QUIRK_ARM_NS	(1 << 0)	/* Set NS bit in PTEs */
+	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0) /* Set NS bit in PTEs */
+	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1) /* No AP/XN bits */
+	#define IO_PGTABLE_QUIRK_TLBI_ON_MAP	BIT(2) /* TLB Inv. on map */
 	int				quirks;
 	unsigned long			pgsize_bitmap;
 	unsigned int			ias;
@@ -65,6 +69,13 @@ struct io_pgtable_cfg {
 			u64	vttbr;
 			u64	vtcr;
 		} arm_lpae_s2_cfg;
+
+		struct {
+			u32	ttbr[2];
+			u32	tcr;
+			u32	nmrr;
+			u32	prrr;
+		} arm_v7s_cfg;
 	};
 };
 
@@ -149,5 +160,6 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns;
 
 #endif /* __IO_PGTABLE_H */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
  2015-12-04 17:52     ` Robin Murphy
@ 2015-12-07 11:08         ` Will Deacon
  -1 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 11:08 UTC (permalink / raw)
  To: Robin Murphy
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

On Fri, Dec 04, 2015 at 05:52:59PM +0000, Robin Murphy wrote:
> IOMMU hardware with range-based TLB maintenance commands can work
> happily with the iova and size arguments passed via the tlb_add_flush
> callback, but for IOMMUs which require separate commands per entry in
> the range, it is not straightforward to infer the necessary granularity
> when it comes to issuing the actual commands.
> 
> Add an additional argument indicating the granularity for the benefit
> of drivers needing to know, and update the ARM LPAE code appropriately
> (for non-leaf invalidations we currently just assume the worst-case
> page granularity rather than walking the table to check).
> 
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> ---
>  drivers/iommu/arm-smmu-v3.c    |  2 +-
>  drivers/iommu/arm-smmu.c       |  2 +-
>  drivers/iommu/io-pgtable-arm.c | 27 +++++++++++++++------------
>  drivers/iommu/io-pgtable.h     |  4 ++--
>  drivers/iommu/ipmmu-vmsa.c     |  4 ++--
>  5 files changed, 21 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 4e5118a..c302b65 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1335,7 +1335,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  }
>  
>  static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  bool leaf, void *cookie)
> +					  size_t granule, bool leaf, void *cookie)
>  {
>  	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index 47dc7a7..601e3dd 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -582,7 +582,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  }
>  
>  static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  bool leaf, void *cookie)
> +					  size_t granule, bool leaf, void *cookie)
>  {
>  	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 366a354..9088d27 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -58,8 +58,10 @@
>  	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
>  	  * (d)->bits_per_level) + (d)->pg_shift)
>  
> +#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
> +
>  #define ARM_LPAE_PAGES_PER_PGD(d)					\
> -	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
> +	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
>  
>  /*
>   * Calculate the index at level l used to map virtual address a using the
> @@ -169,7 +171,7 @@
>  /* IOPTE accessors */
>  #define iopte_deref(pte,d)					\
>  	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
> -	& ~((1ULL << (d)->pg_shift) - 1)))
> +	& ~(ARM_LPAE_GRANULE(d) - 1)))

Do we run the risk of truncating the VA on 32-bit ARM here?

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
@ 2015-12-07 11:08         ` Will Deacon
  0 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 11:08 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Dec 04, 2015 at 05:52:59PM +0000, Robin Murphy wrote:
> IOMMU hardware with range-based TLB maintenance commands can work
> happily with the iova and size arguments passed via the tlb_add_flush
> callback, but for IOMMUs which require separate commands per entry in
> the range, it is not straightforward to infer the necessary granularity
> when it comes to issuing the actual commands.
> 
> Add an additional argument indicating the granularity for the benefit
> of drivers needing to know, and update the ARM LPAE code appropriately
> (for non-leaf invalidations we currently just assume the worst-case
> page granularity rather than walking the table to check).
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  drivers/iommu/arm-smmu-v3.c    |  2 +-
>  drivers/iommu/arm-smmu.c       |  2 +-
>  drivers/iommu/io-pgtable-arm.c | 27 +++++++++++++++------------
>  drivers/iommu/io-pgtable.h     |  4 ++--
>  drivers/iommu/ipmmu-vmsa.c     |  4 ++--
>  5 files changed, 21 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 4e5118a..c302b65 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1335,7 +1335,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  }
>  
>  static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  bool leaf, void *cookie)
> +					  size_t granule, bool leaf, void *cookie)
>  {
>  	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index 47dc7a7..601e3dd 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -582,7 +582,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>  }
>  
>  static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
> -					  bool leaf, void *cookie)
> +					  size_t granule, bool leaf, void *cookie)
>  {
>  	struct arm_smmu_domain *smmu_domain = cookie;
>  	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 366a354..9088d27 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -58,8 +58,10 @@
>  	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
>  	  * (d)->bits_per_level) + (d)->pg_shift)
>  
> +#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
> +
>  #define ARM_LPAE_PAGES_PER_PGD(d)					\
> -	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
> +	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
>  
>  /*
>   * Calculate the index at level l used to map virtual address a using the
> @@ -169,7 +171,7 @@
>  /* IOPTE accessors */
>  #define iopte_deref(pte,d)					\
>  	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
> -	& ~((1ULL << (d)->pg_shift) - 1)))
> +	& ~(ARM_LPAE_GRANULE(d) - 1)))

Do we run the risk of truncating the VA on 32-bit ARM here?

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
  2015-12-04 17:53     ` Robin Murphy
@ 2015-12-07 11:09         ` Will Deacon
  -1 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 11:09 UTC (permalink / raw)
  To: Robin Murphy
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

On Fri, Dec 04, 2015 at 05:53:00PM +0000, Robin Murphy wrote:
> When invalidating an IOVA range potentially spanning multiple pages,
> such as when removing an entire intermediate-level table, we currently
> only issue an invalidation for the first IOVA of that range. Since the
> architecture specifies that address-based TLB maintenance operations
> target a single entry, an SMMU could feasibly retain live entries for
> subsequent pages within that unmapped range, which is not good.
> 
> Make sure we hit every possible entry by iterating over the whole range
> at the granularity provided by the pagetable implementation.
> 
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> ---
>  drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)

Can you do something similar for arm-smmu-v3.c as well, please?

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
@ 2015-12-07 11:09         ` Will Deacon
  0 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 11:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Dec 04, 2015 at 05:53:00PM +0000, Robin Murphy wrote:
> When invalidating an IOVA range potentially spanning multiple pages,
> such as when removing an entire intermediate-level table, we currently
> only issue an invalidation for the first IOVA of that range. Since the
> architecture specifies that address-based TLB maintenance operations
> target a single entry, an SMMU could feasibly retain live entries for
> subsequent pages within that unmapped range, which is not good.
> 
> Make sure we hit every possible entry by iterating over the whole range
> at the granularity provided by the pagetable implementation.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)

Can you do something similar for arm-smmu-v3.c as well, please?

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
  2015-12-07 11:08         ` Will Deacon
@ 2015-12-07 12:09             ` Robin Murphy
  -1 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 12:09 UTC (permalink / raw)
  To: Will Deacon
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

On 07/12/15 11:08, Will Deacon wrote:
[...]
>> +#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
>> +
>>   #define ARM_LPAE_PAGES_PER_PGD(d)					\
>> -	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
>> +	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
>>
>>   /*
>>    * Calculate the index at level l used to map virtual address a using the
>> @@ -169,7 +171,7 @@
>>   /* IOPTE accessors */
>>   #define iopte_deref(pte,d)					\
>>   	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
>> -	& ~((1ULL << (d)->pg_shift) - 1)))
>> +	& ~(ARM_LPAE_GRANULE(d) - 1)))
>
> Do we run the risk of truncating the VA on 32-bit ARM here?

Indeed, in all honesty I'd missed that, but since __va is going to 
truncate it to a 32-bit void * anyway, doing it earlier in the 
expression actually seems to result in better code - with iopte_deref 
wrapped like so to make it easier to pick out:

arm_lpae_iopte *__iopte_deref(arm_lpae_iopte pte, struct arm_lpae_io_pgtable *d)
{
	return iopte_deref(pte, d);
}

the result goes from this:

1568:	e92d4070 	push	{r4, r5, r6, lr}
156c:	e3a0e001 	mov	lr, #1
1570:	e592c060 	ldr	ip, [r2, #96]	; 0x60
1574:	e3e04000 	mvn	r4, #0
1578:	e30f5fff 	movw	r5, #65535	; 0xffff
157c:	e26c6020 	rsb	r6, ip, #32
1580:	e1a02c1e 	lsl	r2, lr, ip
1584:	e2722000 	rsbs	r2, r2, #0
1588:	e0000002 	and	r0, r0, r2
158c:	e0000004 	and	r0, r0, r4
1590:	e2400481 	sub	r0, r0, #-2130706432	; 0x81000000
1594:	e8bd8070 	pop	{r4, r5, r6, pc}

to this:

151c:	e5922060	ldr	r2, [r2, #96] ; 0x60
1520:	e3e03000	mvn	r3, #0
1524:	e1a03213	lsl	r3, r3, r2
1528:	e0000003	and	r0, r0, r3
152c:	e2400481	sub	r0, r0, #-2130706432 ; 0x81000000
1530:	e12fff1e	bx	lr

which, given how mind-bogglingly silly some of the former code looks, 
seems like an inadvertent win.

Robin.

>
> Will
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
@ 2015-12-07 12:09             ` Robin Murphy
  0 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 12:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/12/15 11:08, Will Deacon wrote:
[...]
>> +#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
>> +
>>   #define ARM_LPAE_PAGES_PER_PGD(d)					\
>> -	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
>> +	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
>>
>>   /*
>>    * Calculate the index at level l used to map virtual address a using the
>> @@ -169,7 +171,7 @@
>>   /* IOPTE accessors */
>>   #define iopte_deref(pte,d)					\
>>   	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
>> -	& ~((1ULL << (d)->pg_shift) - 1)))
>> +	& ~(ARM_LPAE_GRANULE(d) - 1)))
>
> Do we run the risk of truncating the VA on 32-bit ARM here?

Indeed, in all honesty I'd missed that, but since __va is going to 
truncate it to a 32-bit void * anyway, doing it earlier in the 
expression actually seems to result in better code - with iopte_deref 
wrapped like so to make it easier to pick out:

arm_lpae_iopte *__iopte_deref(arm_lpae_iopte pte, struct arm_lpae_io_pgtable *d)
{
	return iopte_deref(pte, d);
}

the result goes from this:

1568:	e92d4070 	push	{r4, r5, r6, lr}
156c:	e3a0e001 	mov	lr, #1
1570:	e592c060 	ldr	ip, [r2, #96]	; 0x60
1574:	e3e04000 	mvn	r4, #0
1578:	e30f5fff 	movw	r5, #65535	; 0xffff
157c:	e26c6020 	rsb	r6, ip, #32
1580:	e1a02c1e 	lsl	r2, lr, ip
1584:	e2722000 	rsbs	r2, r2, #0
1588:	e0000002 	and	r0, r0, r2
158c:	e0000004 	and	r0, r0, r4
1590:	e2400481 	sub	r0, r0, #-2130706432	; 0x81000000
1594:	e8bd8070 	pop	{r4, r5, r6, pc}

to this:

151c:	e5922060	ldr	r2, [r2, #96] ; 0x60
1520:	e3e03000	mvn	r3, #0
1524:	e1a03213	lsl	r3, r3, r2
1528:	e0000003	and	r0, r0, r3
152c:	e2400481	sub	r0, r0, #-2130706432 ; 0x81000000
1530:	e12fff1e	bx	lr

which, given how mind-bogglingly silly some of the former code looks, 
seems like an inadvertent win.

Robin.

>
> Will
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
  2015-12-07 11:09         ` Will Deacon
@ 2015-12-07 13:09             ` Robin Murphy
  -1 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 13:09 UTC (permalink / raw)
  To: Will Deacon
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

On 07/12/15 11:09, Will Deacon wrote:
> On Fri, Dec 04, 2015 at 05:53:00PM +0000, Robin Murphy wrote:
>> When invalidating an IOVA range potentially spanning multiple pages,
>> such as when removing an entire intermediate-level table, we currently
>> only issue an invalidation for the first IOVA of that range. Since the
>> architecture specifies that address-based TLB maintenance operations
>> target a single entry, an SMMU could feasibly retain live entries for
>> subsequent pages within that unmapped range, which is not good.
>>
>> Make sure we hit every possible entry by iterating over the whole range
>> at the granularity provided by the pagetable implementation.
>>
>> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
>> ---
>>   drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
>>   1 file changed, 16 insertions(+), 3 deletions(-)
>
> Can you do something similar for arm-smmu-v3.c as well, please?

Something like this? (untested as I don't have a v3 model set up):

------>8------
From: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
Date: Mon, 7 Dec 2015 12:52:56 +0000
Subject: [PATCH] iommu/arm-smmu: Fix TLB invalidation

SMMUv3 operates under the same rules as SMMUv2 and the CPU
architectures, so when invalidating an IOVA range we have to hit
every address for which a TLB entry might exist.

To fix this, issue commands for the whole range rather than just the
initial address; as a minor optimisation, try to avoid flooding the
queue by falling back to 'invalidate all' if the range is large.

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---
  drivers/iommu/arm-smmu-v3.c | 9 ++++++++-
  1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c302b65..afa0b41 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1346,6 +1346,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
  		},
  	};

+	/* If we'd fill the whole queue or more, don't even bother... */
+	if (granule << smmu->cmdq.q.max_n_shift >= size / (CMDQ_ENT_DWORDS << 3))
+		return arm_smmu_tlb_inv_context(cookie);
+
  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
  		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
  		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
@@ -1354,7 +1358,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
  	}

-	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	do {
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.tlbi.addr += granule;
+	} while (size -= granule);
  }

  static struct iommu_gather_ops arm_smmu_gather_ops = {

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
@ 2015-12-07 13:09             ` Robin Murphy
  0 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 13:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/12/15 11:09, Will Deacon wrote:
> On Fri, Dec 04, 2015 at 05:53:00PM +0000, Robin Murphy wrote:
>> When invalidating an IOVA range potentially spanning multiple pages,
>> such as when removing an entire intermediate-level table, we currently
>> only issue an invalidation for the first IOVA of that range. Since the
>> architecture specifies that address-based TLB maintenance operations
>> target a single entry, an SMMU could feasibly retain live entries for
>> subsequent pages within that unmapped range, which is not good.
>>
>> Make sure we hit every possible entry by iterating over the whole range
>> at the granularity provided by the pagetable implementation.
>>
>> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>> ---
>>   drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
>>   1 file changed, 16 insertions(+), 3 deletions(-)
>
> Can you do something similar for arm-smmu-v3.c as well, please?

Something like this? (untested as I don't have a v3 model set up):

------>8------
From: Robin Murphy <robin.murphy@arm.com>
Date: Mon, 7 Dec 2015 12:52:56 +0000
Subject: [PATCH] iommu/arm-smmu: Fix TLB invalidation

SMMUv3 operates under the same rules as SMMUv2 and the CPU
architectures, so when invalidating an IOVA range we have to hit
every address for which a TLB entry might exist.

To fix this, issue commands for the whole range rather than just the
initial address; as a minor optimisation, try to avoid flooding the
queue by falling back to 'invalidate all' if the range is large.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
  drivers/iommu/arm-smmu-v3.c | 9 ++++++++-
  1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c302b65..afa0b41 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1346,6 +1346,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
  		},
  	};

+	/* If we'd fill the whole queue or more, don't even bother... */
+	if (granule << smmu->cmdq.q.max_n_shift >= size / (CMDQ_ENT_DWORDS << 3))
+		return arm_smmu_tlb_inv_context(cookie);
+
  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
  		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
  		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
@@ -1354,7 +1358,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
  	}

-	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	do {
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.tlbi.addr += granule;
+	} while (size -= granule);
  }

  static struct iommu_gather_ops arm_smmu_gather_ops = {

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
  2015-12-07 13:09             ` Robin Murphy
@ 2015-12-07 13:34                 ` Will Deacon
  -1 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 13:34 UTC (permalink / raw)
  To: Robin Murphy
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

On Mon, Dec 07, 2015 at 01:09:35PM +0000, Robin Murphy wrote:
> On 07/12/15 11:09, Will Deacon wrote:
> >On Fri, Dec 04, 2015 at 05:53:00PM +0000, Robin Murphy wrote:
> >>When invalidating an IOVA range potentially spanning multiple pages,
> >>such as when removing an entire intermediate-level table, we currently
> >>only issue an invalidation for the first IOVA of that range. Since the
> >>architecture specifies that address-based TLB maintenance operations
> >>target a single entry, an SMMU could feasibly retain live entries for
> >>subsequent pages within that unmapped range, which is not good.
> >>
> >>Make sure we hit every possible entry by iterating over the whole range
> >>at the granularity provided by the pagetable implementation.
> >>
> >>Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> >>---
> >>  drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
> >>  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> >Can you do something similar for arm-smmu-v3.c as well, please?
> 
> Something like this? (untested as I don't have a v3 model set up):
> 
> ------>8------
> From: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> Date: Mon, 7 Dec 2015 12:52:56 +0000
> Subject: [PATCH] iommu/arm-smmu: Fix TLB invalidation
> 
> SMMUv3 operates under the same rules as SMMUv2 and the CPU
> architectures, so when invalidating an IOVA range we have to hit
> every address for which a TLB entry might exist.
> 
> To fix this, issue commands for the whole range rather than just the
> initial address; as a minor optimisation, try to avoid flooding the
> queue by falling back to 'invalidate all' if the range is large.
> 
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> ---
>  drivers/iommu/arm-smmu-v3.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index c302b65..afa0b41 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1346,6 +1346,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		},
>  	};
> 
> +	/* If we'd fill the whole queue or more, don't even bother... */
> +	if (granule << smmu->cmdq.q.max_n_shift >= size / (CMDQ_ENT_DWORDS << 3))
> +		return arm_smmu_tlb_inv_context(cookie);

Let's not bother with this heuristic for now. It's not at all clear where
the trade-off is between CPU time and I/O latency, and this check doesn't
take into account the current state of the command queue and/or how quickly
it drains anyway.

>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>  		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
>  		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> @@ -1354,7 +1358,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>  	}
> 
> -	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	do {
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +		cmd.tlbi.addr += granule;
> +	} while (size -= granule);

This bit looks fine to me, thanks.

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly
@ 2015-12-07 13:34                 ` Will Deacon
  0 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 13:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Dec 07, 2015 at 01:09:35PM +0000, Robin Murphy wrote:
> On 07/12/15 11:09, Will Deacon wrote:
> >On Fri, Dec 04, 2015 at 05:53:00PM +0000, Robin Murphy wrote:
> >>When invalidating an IOVA range potentially spanning multiple pages,
> >>such as when removing an entire intermediate-level table, we currently
> >>only issue an invalidation for the first IOVA of that range. Since the
> >>architecture specifies that address-based TLB maintenance operations
> >>target a single entry, an SMMU could feasibly retain live entries for
> >>subsequent pages within that unmapped range, which is not good.
> >>
> >>Make sure we hit every possible entry by iterating over the whole range
> >>at the granularity provided by the pagetable implementation.
> >>
> >>Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> >>---
> >>  drivers/iommu/arm-smmu.c | 19 ++++++++++++++++---
> >>  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> >Can you do something similar for arm-smmu-v3.c as well, please?
> 
> Something like this? (untested as I don't have a v3 model set up):
> 
> ------>8------
> From: Robin Murphy <robin.murphy@arm.com>
> Date: Mon, 7 Dec 2015 12:52:56 +0000
> Subject: [PATCH] iommu/arm-smmu: Fix TLB invalidation
> 
> SMMUv3 operates under the same rules as SMMUv2 and the CPU
> architectures, so when invalidating an IOVA range we have to hit
> every address for which a TLB entry might exist.
> 
> To fix this, issue commands for the whole range rather than just the
> initial address; as a minor optimisation, try to avoid flooding the
> queue by falling back to 'invalidate all' if the range is large.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  drivers/iommu/arm-smmu-v3.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index c302b65..afa0b41 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1346,6 +1346,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		},
>  	};
> 
> +	/* If we'd fill the whole queue or more, don't even bother... */
> +	if (granule << smmu->cmdq.q.max_n_shift >= size / (CMDQ_ENT_DWORDS << 3))
> +		return arm_smmu_tlb_inv_context(cookie);

Let's not bother with this heuristic for now. It's not at all clear where
the trade-off is between CPU time and I/O latency, and this check doesn't
take into account the current state of the command queue and/or how quickly
it drains anyway.

>  	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>  		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
>  		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
> @@ -1354,7 +1358,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>  	}
> 
> -	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	do {
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +		cmd.tlbi.addr += granule;
> +	} while (size -= granule);

This bit looks fine to me, thanks.

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
  2015-12-07 12:09             ` Robin Murphy
@ 2015-12-07 13:48                 ` Will Deacon
  -1 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 13:48 UTC (permalink / raw)
  To: Robin Murphy
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

On Mon, Dec 07, 2015 at 12:09:56PM +0000, Robin Murphy wrote:
> On 07/12/15 11:08, Will Deacon wrote:
> [...]
> >>+#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
> >>+
> >>  #define ARM_LPAE_PAGES_PER_PGD(d)					\
> >>-	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
> >>+	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
> >>
> >>  /*
> >>   * Calculate the index at level l used to map virtual address a using the
> >>@@ -169,7 +171,7 @@
> >>  /* IOPTE accessors */
> >>  #define iopte_deref(pte,d)					\
> >>  	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
> >>-	& ~((1ULL << (d)->pg_shift) - 1)))
> >>+	& ~(ARM_LPAE_GRANULE(d) - 1)))
> >
> >Do we run the risk of truncating the VA on 32-bit ARM here?
> 
> Indeed, in all honesty I'd missed that, but since __va is going to truncate
> it to a 32-bit void * anyway, doing it earlier in the expression actually
> seems to result in better code - with iopte_deref wrapped like so to make it
> easier to pick out:

Gah, I meant PA not VA, sorry. The PA is 40-bit, but with ARM_LPAE_GRANULE
using unsigned long, we drop the top 8 bits of the output address field
in the pte.
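
As a standalone illustration (not kernel code, and assuming a 32-bit
unsigned long as on ARM), the difference between building the mask from
1UL and forcing 64-bit promotion looks like this:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t pte = 0xff12345000ULL;	/* hypothetical pte holding a 40-bit OA */
		unsigned int pg_shift = 12;

		uint32_t mask32 = ~((1UL << pg_shift) - 1);	/* 32-bit mask: loses bits 32-39 */
		uint64_t mask64 = ~((1ULL << pg_shift) - 1);	/* 64-bit mask: keeps them */

		printf("%llx\n", (unsigned long long)(pte & mask32));	/* 12345000 */
		printf("%llx\n", (unsigned long long)(pte & mask64));	/* ff12345000 */
		return 0;
	}

which is why the v2 posted later in this thread forces the promotion via
the 1ULL subtraction instead.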

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance
@ 2015-12-07 13:48                 ` Will Deacon
  0 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 13:48 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Dec 07, 2015 at 12:09:56PM +0000, Robin Murphy wrote:
> On 07/12/15 11:08, Will Deacon wrote:
> [...]
> >>+#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
> >>+
> >>  #define ARM_LPAE_PAGES_PER_PGD(d)					\
> >>-	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
> >>+	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
> >>
> >>  /*
> >>   * Calculate the index at level l used to map virtual address a using the
> >>@@ -169,7 +171,7 @@
> >>  /* IOPTE accessors */
> >>  #define iopte_deref(pte,d)					\
> >>  	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
> >>-	& ~((1ULL << (d)->pg_shift) - 1)))
> >>+	& ~(ARM_LPAE_GRANULE(d) - 1)))
> >
> >Do we run the risk of truncating the VA on 32-bit ARM here?
> 
> Indeed, in all honesty I'd missed that, but since __va is going to truncate
> it to a 32-bit void * anyway, doing it earlier in the expression actually
> seems to result in better code - with iopte_deref wrapped like so to make it
> easier to pick out:

Gah, I meant PA not VA, sorry. The PA is 40-bit, but with ARM_LPAE_GRANULE
using unsigned long, we drop the top 8 bits of the output address field
in the pte.

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2] iommu/arm-smmu: Invalidate TLBs properly
  2015-12-04 17:53     ` Robin Murphy
@ 2015-12-07 18:18         ` Robin Murphy
  -1 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 18:18 UTC (permalink / raw)
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

When invalidating an IOVA range potentially spanning multiple pages,
such as when removing an entire intermediate-level table, we currently
only issue an invalidation for the first IOVA of that range. Since the
architecture specifies that address-based TLB maintenance operations
target a single entry, an SMMU could feasibly retain live entries for
subsequent pages within that unmapped range, which is not good.

Make sure we hit every possible entry by iterating over the whole range
at the granularity provided by the pagetable implementation.

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---

v2: include SMMUv3 fix, use the same shorter loop construct everywhere.

 drivers/iommu/arm-smmu-v3.c |  5 ++++-
 drivers/iommu/arm-smmu.c    | 16 +++++++++++++---
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c302b65..8bb5abf 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1354,7 +1354,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 	}
 
-	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	do {
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.tlbi.addr += granule;
+	} while (size -= granule);
 }
 
 static struct iommu_gather_ops arm_smmu_gather_ops = {
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 601e3dd..eb28c3e 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -597,12 +597,18 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		if (!IS_ENABLED(CONFIG_64BIT) || smmu->version == ARM_SMMU_V1) {
 			iova &= ~12UL;
 			iova |= ARM_SMMU_CB_ASID(cfg);
-			writel_relaxed(iova, reg);
+			do {
+				writel_relaxed(iova, reg);
+				iova += granule;
+			} while (size -= granule);
 #ifdef CONFIG_64BIT
 		} else {
 			iova >>= 12;
 			iova |= (u64)ARM_SMMU_CB_ASID(cfg) << 48;
-			writeq_relaxed(iova, reg);
+			do {
+				writeq_relaxed(iova, reg);
+				iova += granule >> 12;
+			} while (size -= granule)
 #endif
 		}
 #ifdef CONFIG_64BIT
@@ -610,7 +616,11 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
 		reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
 			      ARM_SMMU_CB_S2_TLBIIPAS2;
-		writeq_relaxed(iova >> 12, reg);
+		iova >>= 12;
+		do {
+			writeq_relaxed(iova, reg);
+			iova += granule >> 12;
+		} while (size -= granule)
 #endif
 	} else {
 		reg = ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_TLBIVMID;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v2] iommu/arm-smmu: Invalidate TLBs properly
@ 2015-12-07 18:18         ` Robin Murphy
  0 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

When invalidating an IOVA range potentially spanning multiple pages,
such as when removing an entire intermediate-level table, we currently
only issue an invalidation for the first IOVA of that range. Since the
architecture specifies that address-based TLB maintenance operations
target a single entry, an SMMU could feasibly retain live entries for
subsequent pages within that unmapped range, which is not good.

Make sure we hit every possible entry by iterating over the whole range
at the granularity provided by the pagetable implementation.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---

v2: include SMMUv3 fix, use the same shorter loop construct everywhere.

 drivers/iommu/arm-smmu-v3.c |  5 ++++-
 drivers/iommu/arm-smmu.c    | 16 +++++++++++++---
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c302b65..8bb5abf 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1354,7 +1354,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 	}
 
-	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	do {
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.tlbi.addr += granule;
+	} while (size -= granule);
 }
 
 static struct iommu_gather_ops arm_smmu_gather_ops = {
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 601e3dd..eb28c3e 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -597,12 +597,18 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		if (!IS_ENABLED(CONFIG_64BIT) || smmu->version == ARM_SMMU_V1) {
 			iova &= ~12UL;
 			iova |= ARM_SMMU_CB_ASID(cfg);
-			writel_relaxed(iova, reg);
+			do {
+				writel_relaxed(iova, reg);
+				iova += granule;
+			} while (size -= granule);
 #ifdef CONFIG_64BIT
 		} else {
 			iova >>= 12;
 			iova |= (u64)ARM_SMMU_CB_ASID(cfg) << 48;
-			writeq_relaxed(iova, reg);
+			do {
+				writeq_relaxed(iova, reg);
+				iova += granule >> 12;
+			} while (size -= granule)
 #endif
 		}
 #ifdef CONFIG_64BIT
@@ -610,7 +616,11 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
 		reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
 			      ARM_SMMU_CB_S2_TLBIIPAS2;
-		writeq_relaxed(iova >> 12, reg);
+		iova >>= 12;
+		do {
+			writeq_relaxed(iova, reg);
+			iova += granule >> 12;
+		} while (size -= granule)
 #endif
 	} else {
 		reg = ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_TLBIVMID;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v2] iommu/io-pgtable: Indicate granule for TLB maintenance
  2015-12-04 17:52     ` Robin Murphy
@ 2015-12-07 18:18         ` Robin Murphy
  -1 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 18:18 UTC (permalink / raw)
  To: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

IOMMU hardware with range-based TLB maintenance commands can work
happily with the iova and size arguments passed via the tlb_add_flush
callback, but for IOMMUs which require separate commands per entry in
the range, it is not straightforward to infer the necessary granularity
when it comes to issuing the actual commands.

Add an additional argument indicating the granularity for the benefit
of drivers needing to know, and update the ARM LPAE code appropriately
(for non-leaf invalidations we currently just assume the worst-case
page granularity rather than walking the table to check).

Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
---

v2: avoid truncation in iopte_deref - forcing the type promotion in
    the subtraction rather than the shift still somewhat improves the
    resulting 32-bit assembly code, too.

 drivers/iommu/arm-smmu-v3.c    |  2 +-
 drivers/iommu/arm-smmu.c       |  2 +-
 drivers/iommu/io-pgtable-arm.c | 27 +++++++++++++++------------
 drivers/iommu/io-pgtable.h     |  4 ++--
 drivers/iommu/ipmmu-vmsa.c     |  4 ++--
 5 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4e5118a..c302b65 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1335,7 +1335,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  bool leaf, void *cookie)
+					  size_t granule, bool leaf, void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 47dc7a7..601e3dd 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -582,7 +582,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  bool leaf, void *cookie)
+					  size_t granule, bool leaf, void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 366a354..7a5c772 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -58,8 +58,10 @@
 	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
 	  * (d)->bits_per_level) + (d)->pg_shift)
 
+#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
+
 #define ARM_LPAE_PAGES_PER_PGD(d)					\
-	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
+	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
 
 /*
  * Calculate the index at level l used to map virtual address a using the
@@ -169,7 +171,7 @@
 /* IOPTE accessors */
 #define iopte_deref(pte,d)					\
 	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
-	& ~((1ULL << (d)->pg_shift) - 1)))
+	& ~(ARM_LPAE_GRANULE(d) - 1ULL)))
 
 #define iopte_type(pte,l)					\
 	(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
@@ -326,7 +328,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 	/* Grab a pointer to the next level */
 	pte = *ptep;
 	if (!pte) {
-		cptep = __arm_lpae_alloc_pages(1UL << data->pg_shift,
+		cptep = __arm_lpae_alloc_pages(ARM_LPAE_GRANULE(data),
 					       GFP_ATOMIC, cfg);
 		if (!cptep)
 			return -ENOMEM;
@@ -412,7 +414,7 @@ static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
 	if (lvl == ARM_LPAE_START_LVL(data))
 		table_size = data->pgd_size;
 	else
-		table_size = 1UL << data->pg_shift;
+		table_size = ARM_LPAE_GRANULE(data);
 
 	start = ptep;
 	end = (void *)ptep + table_size;
@@ -473,7 +475,7 @@ static int arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 
 	__arm_lpae_set_pte(ptep, table, cfg);
 	iova &= ~(blk_size - 1);
-	cfg->tlb->tlb_add_flush(iova, blk_size, true, data->iop.cookie);
+	cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie);
 	return size;
 }
 
@@ -501,12 +503,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 
 		if (!iopte_leaf(pte, lvl)) {
 			/* Also flush any partial walks */
-			tlb->tlb_add_flush(iova, size, false, cookie);
+			tlb->tlb_add_flush(iova, size, ARM_LPAE_GRANULE(data),
+					   false, cookie);
 			tlb->tlb_sync(cookie);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
-			tlb->tlb_add_flush(iova, size, true, cookie);
+			tlb->tlb_add_flush(iova, size, size, true, cookie);
 		}
 
 		return size;
@@ -572,7 +575,7 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 	return 0;
 
 found_translation:
-	iova &= ((1 << data->pg_shift) - 1);
+	iova &= (ARM_LPAE_GRANULE(data) - 1);
 	return ((phys_addr_t)iopte_to_pfn(pte,data) << data->pg_shift) | iova;
 }
 
@@ -670,7 +673,7 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	      (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_IRGN0_SHIFT) |
 	      (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_ORGN0_SHIFT);
 
-	switch (1 << data->pg_shift) {
+	switch (ARM_LPAE_GRANULE(data)) {
 	case SZ_4K:
 		reg |= ARM_LPAE_TCR_TG0_4K;
 		break;
@@ -771,7 +774,7 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 
 	sl = ARM_LPAE_START_LVL(data);
 
-	switch (1 << data->pg_shift) {
+	switch (ARM_LPAE_GRANULE(data)) {
 	case SZ_4K:
 		reg |= ARM_LPAE_TCR_TG0_4K;
 		sl++; /* SL0 format is different for 4K granule size */
@@ -891,8 +894,8 @@ static void dummy_tlb_flush_all(void *cookie)
 	WARN_ON(cookie != cfg_cookie);
 }
 
-static void dummy_tlb_add_flush(unsigned long iova, size_t size, bool leaf,
-				void *cookie)
+static void dummy_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
 {
 	WARN_ON(cookie != cfg_cookie);
 	WARN_ON(!(size & cfg_cookie->pgsize_bitmap));
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index ac9e234..2e18469 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -26,8 +26,8 @@ enum io_pgtable_fmt {
  */
 struct iommu_gather_ops {
 	void (*tlb_flush_all)(void *cookie);
-	void (*tlb_add_flush)(unsigned long iova, size_t size, bool leaf,
-			      void *cookie);
+	void (*tlb_add_flush)(unsigned long iova, size_t size, size_t granule,
+			      bool leaf, void *cookie);
 	void (*tlb_sync)(void *cookie);
 };
 
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 8cf605f..5b1166d 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -277,8 +277,8 @@ static void ipmmu_tlb_flush_all(void *cookie)
 	ipmmu_tlb_invalidate(domain);
 }
 
-static void ipmmu_tlb_add_flush(unsigned long iova, size_t size, bool leaf,
-				void *cookie)
+static void ipmmu_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
 {
 	/* The hardware doesn't support selective TLB flush. */
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v2] iommu/io-pgtable: Indicate granule for TLB maintenance
@ 2015-12-07 18:18         ` Robin Murphy
  0 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-07 18:18 UTC (permalink / raw)
  To: linux-arm-kernel

IOMMU hardware with range-based TLB maintenance commands can work
happily with the iova and size arguments passed via the tlb_add_flush
callback, but for IOMMUs which require separate commands per entry in
the range, it is not straightforward to infer the necessary granularity
when it comes to issuing the actual commands.

Add an additional argument indicating the granularity for the benefit
of drivers needing to know, and update the ARM LPAE code appropriately
(for non-leaf invalidations we currently just assume the worst-case
page granularity rather than walking the table to check).

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---

v2: avoid truncation in iopte_deref - forcing the type promotion in
    the subtraction rather than the shift still somewhat improves the
    resulting 32-bit assembly code, too.

 drivers/iommu/arm-smmu-v3.c    |  2 +-
 drivers/iommu/arm-smmu.c       |  2 +-
 drivers/iommu/io-pgtable-arm.c | 27 +++++++++++++++------------
 drivers/iommu/io-pgtable.h     |  4 ++--
 drivers/iommu/ipmmu-vmsa.c     |  4 ++--
 5 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4e5118a..c302b65 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1335,7 +1335,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  bool leaf, void *cookie)
+					  size_t granule, bool leaf, void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 47dc7a7..601e3dd 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -582,7 +582,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  bool leaf, void *cookie)
+					  size_t granule, bool leaf, void *cookie)
 {
 	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 366a354..7a5c772 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -58,8 +58,10 @@
 	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
 	  * (d)->bits_per_level) + (d)->pg_shift)
 
+#define ARM_LPAE_GRANULE(d)		(1UL << (d)->pg_shift)
+
 #define ARM_LPAE_PAGES_PER_PGD(d)					\
-	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
+	DIV_ROUND_UP((d)->pgd_size, ARM_LPAE_GRANULE(d))
 
 /*
  * Calculate the index at level l used to map virtual address a using the
@@ -169,7 +171,7 @@
 /* IOPTE accessors */
 #define iopte_deref(pte,d)					\
 	(__va((pte) & ((1ULL << ARM_LPAE_MAX_ADDR_BITS) - 1)	\
-	& ~((1ULL << (d)->pg_shift) - 1)))
+	& ~(ARM_LPAE_GRANULE(d) - 1ULL)))
 
 #define iopte_type(pte,l)					\
 	(((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
@@ -326,7 +328,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 	/* Grab a pointer to the next level */
 	pte = *ptep;
 	if (!pte) {
-		cptep = __arm_lpae_alloc_pages(1UL << data->pg_shift,
+		cptep = __arm_lpae_alloc_pages(ARM_LPAE_GRANULE(data),
 					       GFP_ATOMIC, cfg);
 		if (!cptep)
 			return -ENOMEM;
@@ -412,7 +414,7 @@ static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
 	if (lvl == ARM_LPAE_START_LVL(data))
 		table_size = data->pgd_size;
 	else
-		table_size = 1UL << data->pg_shift;
+		table_size = ARM_LPAE_GRANULE(data);
 
 	start = ptep;
 	end = (void *)ptep + table_size;
@@ -473,7 +475,7 @@ static int arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 
 	__arm_lpae_set_pte(ptep, table, cfg);
 	iova &= ~(blk_size - 1);
-	cfg->tlb->tlb_add_flush(iova, blk_size, true, data->iop.cookie);
+	cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie);
 	return size;
 }
 
@@ -501,12 +503,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 
 		if (!iopte_leaf(pte, lvl)) {
 			/* Also flush any partial walks */
-			tlb->tlb_add_flush(iova, size, false, cookie);
+			tlb->tlb_add_flush(iova, size, ARM_LPAE_GRANULE(data),
+					   false, cookie);
 			tlb->tlb_sync(cookie);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
-			tlb->tlb_add_flush(iova, size, true, cookie);
+			tlb->tlb_add_flush(iova, size, size, true, cookie);
 		}
 
 		return size;
@@ -572,7 +575,7 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 	return 0;
 
 found_translation:
-	iova &= ((1 << data->pg_shift) - 1);
+	iova &= (ARM_LPAE_GRANULE(data) - 1);
 	return ((phys_addr_t)iopte_to_pfn(pte,data) << data->pg_shift) | iova;
 }
 
@@ -670,7 +673,7 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	      (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_IRGN0_SHIFT) |
 	      (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_ORGN0_SHIFT);
 
-	switch (1 << data->pg_shift) {
+	switch (ARM_LPAE_GRANULE(data)) {
 	case SZ_4K:
 		reg |= ARM_LPAE_TCR_TG0_4K;
 		break;
@@ -771,7 +774,7 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 
 	sl = ARM_LPAE_START_LVL(data);
 
-	switch (1 << data->pg_shift) {
+	switch (ARM_LPAE_GRANULE(data)) {
 	case SZ_4K:
 		reg |= ARM_LPAE_TCR_TG0_4K;
 		sl++; /* SL0 format is different for 4K granule size */
@@ -891,8 +894,8 @@ static void dummy_tlb_flush_all(void *cookie)
 	WARN_ON(cookie != cfg_cookie);
 }
 
-static void dummy_tlb_add_flush(unsigned long iova, size_t size, bool leaf,
-				void *cookie)
+static void dummy_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
 {
 	WARN_ON(cookie != cfg_cookie);
 	WARN_ON(!(size & cfg_cookie->pgsize_bitmap));
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index ac9e234..2e18469 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -26,8 +26,8 @@ enum io_pgtable_fmt {
  */
 struct iommu_gather_ops {
 	void (*tlb_flush_all)(void *cookie);
-	void (*tlb_add_flush)(unsigned long iova, size_t size, bool leaf,
-			      void *cookie);
+	void (*tlb_add_flush)(unsigned long iova, size_t size, size_t granule,
+			      bool leaf, void *cookie);
 	void (*tlb_sync)(void *cookie);
 };
 
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 8cf605f..5b1166d 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -277,8 +277,8 @@ static void ipmmu_tlb_flush_all(void *cookie)
 	ipmmu_tlb_invalidate(domain);
 }
 
-static void ipmmu_tlb_add_flush(unsigned long iova, size_t size, bool leaf,
-				void *cookie)
+static void ipmmu_tlb_add_flush(unsigned long iova, size_t size,
+				size_t granule, bool leaf, void *cookie)
 {
 	/* The hardware doesn't support selective TLB flush. */
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v2] iommu/arm-smmu: Invalidate TLBs properly
  2015-12-07 18:18         ` Robin Murphy
@ 2015-12-07 18:28             ` Will Deacon
  -1 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 18:28 UTC (permalink / raw)
  To: Robin Murphy
  Cc: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

On Mon, Dec 07, 2015 at 06:18:52PM +0000, Robin Murphy wrote:
> When invalidating an IOVA range potentially spanning multiple pages,
> such as when removing an entire intermediate-level table, we currently
> only issue an invalidation for the first IOVA of that range. Since the
> architecture specifies that address-based TLB maintenance operations
> target a single entry, an SMMU could feasibly retain live entries for
> subsequent pages within that unmapped range, which is not good.
> 
> Make sure we hit every possible entry by iterating over the whole range
> at the granularity provided by the pagetable implementation.
> 
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> ---
> 
> v2: include SMMUv3 fix, use the same shorter loop construct everywhere.
> 
>  drivers/iommu/arm-smmu-v3.c |  5 ++++-
>  drivers/iommu/arm-smmu.c    | 16 +++++++++++++---
>  2 files changed, 17 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index c302b65..8bb5abf 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1354,7 +1354,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>  	}
>  
> -	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	do {
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +		cmd.tlbi.addr += granule;
> +	} while (size -= granule);
>  }
>  
>  static struct iommu_gather_ops arm_smmu_gather_ops = {
> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index 601e3dd..eb28c3e 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -597,12 +597,18 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		if (!IS_ENABLED(CONFIG_64BIT) || smmu->version == ARM_SMMU_V1) {
>  			iova &= ~12UL;
>  			iova |= ARM_SMMU_CB_ASID(cfg);
> -			writel_relaxed(iova, reg);
> +			do {
> +				writel_relaxed(iova, reg);
> +				iova += granule;
> +			} while (size -= granule);
>  #ifdef CONFIG_64BIT
>  		} else {
>  			iova >>= 12;
>  			iova |= (u64)ARM_SMMU_CB_ASID(cfg) << 48;
> -			writeq_relaxed(iova, reg);
> +			do {
> +				writeq_relaxed(iova, reg);
> +				iova += granule >> 12;
> +			} while (size -= granule)

This doesn't compile.

>  #endif
>  		}
>  #ifdef CONFIG_64BIT
> @@ -610,7 +616,11 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
>  		reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
>  			      ARM_SMMU_CB_S2_TLBIIPAS2;
> -		writeq_relaxed(iova >> 12, reg);
> +		iova >>= 12;
> +		do {
> +			writeq_relaxed(iova, reg);
> +			iova += granule >> 12;
> +		} while (size -= granule)

Same here.

Please at least build your patches, and preferably test them too.
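
For reference, the build failure is presumably the missing trailing
semicolons on the two new do/while loops quoted above; the corrected
form would be

	do {
		writeq_relaxed(iova, reg);
		iova += granule >> 12;
	} while (size -= granule);	/* note the terminating ';' */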

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2] iommu/arm-smmu: Invalidate TLBs properly
@ 2015-12-07 18:28             ` Will Deacon
  0 siblings, 0 replies; 40+ messages in thread
From: Will Deacon @ 2015-12-07 18:28 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Dec 07, 2015 at 06:18:52PM +0000, Robin Murphy wrote:
> When invalidating an IOVA range potentially spanning multiple pages,
> such as when removing an entire intermediate-level table, we currently
> only issue an invalidation for the first IOVA of that range. Since the
> architecture specifies that address-based TLB maintenance operations
> target a single entry, an SMMU could feasibly retain live entries for
> subsequent pages within that unmapped range, which is not good.
> 
> Make sure we hit every possible entry by iterating over the whole range
> at the granularity provided by the pagetable implementation.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
> 
> v2: include SMMUv3 fix, use the same shorter loop construct everywhere.
> 
>  drivers/iommu/arm-smmu-v3.c |  5 ++++-
>  drivers/iommu/arm-smmu.c    | 16 +++++++++++++---
>  2 files changed, 17 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index c302b65..8bb5abf 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1354,7 +1354,10 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>  	}
>  
> -	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +	do {
> +		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +		cmd.tlbi.addr += granule;
> +	} while (size -= granule);
>  }
>  
>  static struct iommu_gather_ops arm_smmu_gather_ops = {
> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index 601e3dd..eb28c3e 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -597,12 +597,18 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		if (!IS_ENABLED(CONFIG_64BIT) || smmu->version == ARM_SMMU_V1) {
>  			iova &= ~12UL;
>  			iova |= ARM_SMMU_CB_ASID(cfg);
> -			writel_relaxed(iova, reg);
> +			do {
> +				writel_relaxed(iova, reg);
> +				iova += granule;
> +			} while (size -= granule);
>  #ifdef CONFIG_64BIT
>  		} else {
>  			iova >>= 12;
>  			iova |= (u64)ARM_SMMU_CB_ASID(cfg) << 48;
> -			writeq_relaxed(iova, reg);
> +			do {
> +				writeq_relaxed(iova, reg);
> +				iova += granule >> 12;
> +			} while (size -= granule)

This doesn't compile.

>  #endif
>  		}
>  #ifdef CONFIG_64BIT
> @@ -610,7 +616,11 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
>  		reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
>  		reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
>  			      ARM_SMMU_CB_S2_TLBIIPAS2;
> -		writeq_relaxed(iova >> 12, reg);
> +		iova >>= 12;
> +		do {
> +			writeq_relaxed(iova, reg);
> +			iova += granule >> 12;
> +		} while (size -= granule)

Same here.

Please at least build your patches, and preferably test them too.

Will

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 5/5] iommu/io-pgtable: Add ARMv7 short descriptor support
  2015-12-04 17:53     ` Robin Murphy
@ 2015-12-08  8:58         ` Yong Wu
  -1 siblings, 0 replies; 40+ messages in thread
From: Yong Wu @ 2015-12-08  8:58 UTC (permalink / raw)
  To: Robin Murphy
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	yingjoe.chen-NuS5LvNUpcJWk0Htik3J/w,
	eddie.huang-NuS5LvNUpcJWk0Htik3J/w, brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

Hi Robin,

   Thanks very much for the rewrite - it looks much nicer.

   This works well in my selftests, and I will push it to our Chrome
branch and ask our video/display people to test it further.

   Only a few small comments below.

On Fri, 2015-12-04 at 17:53 +0000, Robin Murphy wrote:
> Add a nearly-complete ARMv7 short descriptor implementation, omitting
> only a few legacy and CPU-centric aspects which shouldn't be necessary
> for IOMMU API use anyway.
> 
> Signed-off-by: Yong Wu <yong.wu-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org>
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> ---
[...]
> +/* PTE type bits: these are all mixed up with XN/PXN bits in most cases */
> +#define ARM_V7S_PTE_TYPE_TABLE		0x1
> +#define ARM_V7S_PTE_TYPE_PAGE		0x2
> +#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1

From the spec, this is the Large page type. Should we add a comment for
readability, e.g.:

/* Large page */

and add /* Supersection */ for CONT_SECTION too?
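
Just to illustrate (naming taken from the short-descriptor spec, not an
actual patch):

#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1	/* Large page (level 2) */
...
#define ARM_V7S_CONT_SECTION		BIT(18)	/* Supersection (level 1) */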

> +
> +#define ARM_V7S_PTE_IS_VALID(pte)	(((pte) & 0x3) != 0)
> +#define ARM_V7S_PTE_IS_TABLE(pte, lvl)	(lvl < 2 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))
> +
> +/* Page table bits */
> +#define ARM_V7S_ATTR_XN(lvl)		BIT(4 * (2 - (lvl)))
> +#define ARM_V7S_ATTR_B			BIT(2)
> +#define ARM_V7S_ATTR_C			BIT(3)
> +#define ARM_V7S_ATTR_NS_TABLE		BIT(3)
> +#define ARM_V7S_ATTR_NS_SECTION		BIT(19)
> +
> +#define ARM_V7S_CONT_SECTION		BIT(18)
> +#define ARM_V7S_CONT_PAGE_XN_SHIFT	15
> +
[...]
> +static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
> +				   struct arm_v7s_io_pgtable *data)
> +{
> +	struct device *dev = data->iop.cfg.iommu_dev;
> +	dma_addr_t dma;
> +	size_t size = ARM_V7S_TABLE_SIZE(lvl);
> +	void *table = NULL;
> +
> +	if (lvl == 1)
> +		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
> +	else if (lvl == 2)
> +		table = kmem_cache_zalloc(data->l2_tables, gfp);
> +	if (table && !selftest_running) {
> +		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
> +		if (dma_mapping_error(dev, dma))
> +			goto out_free;
> +		/*
> +		 * We depend on the IOMMU being able to work with any physical
> +		 * address directly, so if the DMA layer suggests otherwise by
> +		 * translating or truncating them, that bodes very badly...
> +		 */
> +		if (dma != virt_to_phys(table))
> +			goto out_unmap;
> +	}

There is something special when we use kmem_cache here: we save the
physical address into the pagetable, then later get the virtual address
back via phys_to_virt and free that.
That isn't the same as the normal case of saving the VA and freeing the VA.
Do we need to add kmemleak_ignore here?
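
If it is needed, a minimal sketch (hypothetical, not part of this patch)
would be to mark the level-2 tables at the allocation site:

	/* would need #include <linux/kmemleak.h> */
	else if (lvl == 2) {
		table = kmem_cache_zalloc(data->l2_tables, gfp);
		if (table)
			kmemleak_ignore(table);
	}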

> +	return table;
> +
> +out_unmap:
> +	dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
> +	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
> +out_free:
> +	if (lvl == 1)
> +		free_pages((unsigned long)table, get_order(size));
> +	else
> +		kmem_cache_free(data->l2_tables, table);
> +	return NULL;
> +}
> +
[...]
> +static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
> +			    unsigned long iova, phys_addr_t paddr, int prot,
> +			    int lvl, int num_entries, arm_v7s_iopte *ptep)
> +{
> +	struct io_pgtable_cfg *cfg = &data->iop.cfg;
> +	arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg);
> +	int i;
> +
> +	for (i = 0; i < num_entries; i++)
> +		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
> +			/*
> +			 * We need to unmap and free the old table before
> +			 * overwriting it with a block entry.
> +			 */
> +			arm_v7s_iopte *tblp;
> +			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
> +
> +			tblp = ptep + i - ARM_V7S_LVL_IDX(iova, lvl);
> +			if (WARN_ON(__arm_v7s_unmap(data, iova, sz, lvl, tblp)
> +					!= sz))
> +				return -EINVAL;

This looks like it comes from Will's "iommu/io-pgtable-arm: Unmap and free
table when overwriting with block".

But if we have IO_PGTABLE_QUIRK_TLBI_ON_MAP, condition (1) from that
commit message doesn't apply, so we don't need to care whether the
existing entry is a pagetable or not - could we always return -EEXIST here?

> +		} else if (ptep[i]) {
> +			/* We require an unmap first */
> +			WARN_ON(!selftest_running);
> +			return -EEXIST;
> +		}
> +

[...]

> +static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
> +			    unsigned long iova, size_t size, int lvl,
> +			    arm_v7s_iopte *ptep)
> +{
> +	arm_v7s_iopte pte[ARM_V7S_CONT_PAGES];
> +	struct io_pgtable_cfg *cfg = &data->iop.cfg;
> +	const struct iommu_gather_ops *tlb = cfg->tlb;
> +	void *cookie = data->iop.cookie;
> +	int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
> +
> +	/* Something went horribly wrong and we ran out of page table */
> +	if (WARN_ON(lvl > 2))
> +		return 0;
> +
> +	idx = ARM_V7S_LVL_IDX(iova, lvl);
> +	ptep += idx;
> +	do {
> +		if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i])))
> +			return 0;
> +		pte[i] = ptep[i];
> +	} while (++i < num_entries);
> +
> +	/*
> +	 * If we've hit a contiguous 'large page' entry at this level, it
> +	 * needs splitting first, unless we're unmapping the whole lot.
> +	 */
> +	if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl))
> +		arm_v7s_split_cont(data, iova, idx, lvl, ptep);
> +
> +	/* If the size matches this level, we're in the right place */
> +	if (num_entries) {
> +		size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl);
> +
> +		__arm_v7s_set_pte(ptep, 0, num_entries, cfg);
> +
> +		for (i = 0; i < num_entries; i++) {
> +			if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) {
> +				/* Also flush any partial walks */
> +				tlb->tlb_add_flush(iova, blk_size,
> +						   ARM_V7S_BLOCK_SIZE(2),
> +						   false, cookie);
> +				tlb->tlb_sync(cookie);

The MTK IOMMU driver hits a warning here in my test.

There is a tlb_sync here, and another one in arm_v7s_unmap, so the flow
becomes:

 tlb->tlb_add_flush(xxx)
 tlb->tlb_sync()
 tlb->tlb_sync()


In the MTK tlb_sync, the code is:

static void mtk_iommu_tlb_sync(void *cookie)
{
	struct mtk_iommu_data *data = cookie;
	int ret;
	u32 tmp;

	ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE, tmp,
					tmp != 0, 10, 100000);
	if (ret) {
		dev_warn(data->dev,
			 "Partial TLB flush timed out, falling back to full flush\n");
		mtk_iommu_tlb_flush_all(cookie);
	}
	/* Clear the CPE status */
	writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
}

At the end of this function, we have to write 0 to clear the CPE status
so that the HW can report the status correctly next time.

So if we call tlb_sync twice in a row, the second call times out and we
see this log:

Partial TLB flush timed out, falling back to full flush

I don't know whether this is special behaviour of our HW - is this case
OK in arm-smmu.c?
Is there any suggestion about this?
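
One possible way to tolerate this on the driver side, if that turns out
to be the right place to handle it, would be to make the sync a no-op
when no flush has been queued since the last one - a rough sketch
(hypothetical and untested, with the state shown as a plain flag rather
than in mtk_iommu_data):

	static bool tlb_flush_pending;

	static void mtk_iommu_tlb_add_flush_nosync(unsigned long iova, size_t size,
						   size_t granule, bool leaf,
						   void *cookie)
	{
		/* ... issue the range invalidation as today ... */
		tlb_flush_pending = true;
	}

	static void mtk_iommu_tlb_sync(void *cookie)
	{
		if (!tlb_flush_pending)
			return;	/* back-to-back sync: nothing to wait for */

		/* ... poll and clear REG_MMU_CPE_DONE as above ... */
		tlb_flush_pending = false;
	}

But I'm not sure whether that is the right approach either.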


> +				ptep = iopte_deref(pte[i], lvl);
> +				__arm_v7s_free_table(ptep, lvl + 1, data);
> +			} else {
> +				tlb->tlb_add_flush(iova, blk_size, blk_size,
> +						   true, cookie);
> +			}
> +			iova += blk_size;
> +		}
> +		return size;
> +	} else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) {
> +		/*
> +		 * Insert a table at the next level to map the old region,
> +		 * minus the part we want to unmap
> +		 */
> +		return arm_v7s_split_blk_unmap(data, iova, size, ptep);
> +	}
> +
> +	/* Keep on walkin' */
> +	ptep = iopte_deref(pte[0], lvl);
> +	return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep);
> +}
> +
> +static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
> +			 size_t size)
> +{
> +	size_t unmapped;
> +	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
> +	struct io_pgtable *iop = &data->iop;
> +
> +	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
> +	if (unmapped)
> +		iop->cfg.tlb->tlb_sync(iop->cookie);
> +
> +	return unmapped;
> +}
> +
> +static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
> +					unsigned long iova)
> +{
> +	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
> +	arm_v7s_iopte *ptep = data->pgd, pte = ARM_V7S_PTE_TYPE_TABLE;
> +	int lvl = 0;
> +	u32 mask;
> +
> +	while (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
> +		pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
> +		ptep = iopte_deref(pte, lvl);
> +	}

If we expect to always enter this while loop, could we use do {} while
instead? Then we wouldn't need to initialise pte.

And in this file the valid lvl values are 1 and 2, but here "lvl" is
initialised to 0. Should we add an enum for the levels to make this
safer and more readable?
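
Just to illustrate the do {} while suggestion, an untested sketch:

	static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
						unsigned long iova)
	{
		struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
		arm_v7s_iopte *ptep = data->pgd, pte;
		int lvl = 0;
		u32 mask;

		do {
			pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
			ptep = iopte_deref(pte, lvl);
		} while (ARM_V7S_PTE_IS_TABLE(pte, lvl));

		if (!ARM_V7S_PTE_IS_VALID(pte))
			return 0;

		mask = ARM_V7S_LVL_MASK(lvl);
		if (arm_v7s_pte_is_cont(pte, lvl))
			mask *= ARM_V7S_CONT_PAGES;
		return (pte & mask) | (iova & ~mask);
	}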

> +	if (!ARM_V7S_PTE_IS_VALID(pte))
> +		return 0;
> +
> +	mask = ARM_V7S_LVL_MASK(lvl);
> +	if (arm_v7s_pte_is_cont(pte, lvl))
> +		mask *= ARM_V7S_CONT_PAGES;
> +	return (pte & mask) | (iova & ~mask);
> +}
> +
[...]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 5/5] iommu/io-pgtable: Add ARMv7 short descriptor support
@ 2015-12-08  8:58         ` Yong Wu
  0 siblings, 0 replies; 40+ messages in thread
From: Yong Wu @ 2015-12-08  8:58 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Robin,

   Thanks very much for the rewrite - it looks much nicer.

   This works well in my selftests, and I will push it to our Chrome
branch and ask our video/display people to test it further.

   Only a few small comments below.

On Fri, 2015-12-04 at 17:53 +0000, Robin Murphy wrote:
> Add a nearly-complete ARMv7 short descriptor implementation, omitting
> only a few legacy and CPU-centric aspects which shouldn't be necessary
> for IOMMU API use anyway.
> 
> Signed-off-by: Yong Wu <yong.wu@mediatek.com>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
[...]
> +/* PTE type bits: these are all mixed up with XN/PXN bits in most cases */
> +#define ARM_V7S_PTE_TYPE_TABLE		0x1
> +#define ARM_V7S_PTE_TYPE_PAGE		0x2
> +#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1

From the spec, this is the Large page type. Should we add a comment for
readability, e.g.:

/* Large page */

and add /* Supersection */ for CONT_SECTION too?

> +
> +#define ARM_V7S_PTE_IS_VALID(pte)	(((pte) & 0x3) != 0)
> +#define ARM_V7S_PTE_IS_TABLE(pte, lvl)	(lvl < 2 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))
> +
> +/* Page table bits */
> +#define ARM_V7S_ATTR_XN(lvl)		BIT(4 * (2 - (lvl)))
> +#define ARM_V7S_ATTR_B			BIT(2)
> +#define ARM_V7S_ATTR_C			BIT(3)
> +#define ARM_V7S_ATTR_NS_TABLE		BIT(3)
> +#define ARM_V7S_ATTR_NS_SECTION		BIT(19)
> +
> +#define ARM_V7S_CONT_SECTION		BIT(18)
> +#define ARM_V7S_CONT_PAGE_XN_SHIFT	15
> +
[...]
> +static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
> +				   struct arm_v7s_io_pgtable *data)
> +{
> +	struct device *dev = data->iop.cfg.iommu_dev;
> +	dma_addr_t dma;
> +	size_t size = ARM_V7S_TABLE_SIZE(lvl);
> +	void *table = NULL;
> +
> +	if (lvl == 1)
> +		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
> +	else if (lvl == 2)
> +		table = kmem_cache_zalloc(data->l2_tables, gfp);
> +	if (table && !selftest_running) {
> +		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
> +		if (dma_mapping_error(dev, dma))
> +			goto out_free;
> +		/*
> +		 * We depend on the IOMMU being able to work with any physical
> +		 * address directly, so if the DMA layer suggests otherwise by
> +		 * translating or truncating them, that bodes very badly...
> +		 */
> +		if (dma != virt_to_phys(table))
> +			goto out_unmap;
> +	}

There is something special when we use kmem_cache here: we save the
physical address into the pagetable, then later get the virtual address
back via phys_to_virt and free that.
That isn't the same as the normal case of saving the VA and freeing the VA.
Do we need to add kmemleak_ignore here?

> +	return table;
> +
> +out_unmap:
> +	dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
> +	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
> +out_free:
> +	if (lvl == 1)
> +		free_pages((unsigned long)table, get_order(size));
> +	else
> +		kmem_cache_free(data->l2_tables, table);
> +	return NULL;
> +}
> +
[...]
> +static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
> +			    unsigned long iova, phys_addr_t paddr, int prot,
> +			    int lvl, int num_entries, arm_v7s_iopte *ptep)
> +{
> +	struct io_pgtable_cfg *cfg = &data->iop.cfg;
> +	arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg);
> +	int i;
> +
> +	for (i = 0; i < num_entries; i++)
> +		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
> +			/*
> +			 * We need to unmap and free the old table before
> +			 * overwriting it with a block entry.
> +			 */
> +			arm_v7s_iopte *tblp;
> +			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
> +
> +			tblp = ptep + i - ARM_V7S_LVL_IDX(iova, lvl);
> +			if (WARN_ON(__arm_v7s_unmap(data, iova, sz, lvl, tblp)
> +					!= sz))
> +				return -EINVAL;

This looks like it comes from Will's "iommu/io-pgtable-arm: Unmap and free
table when overwriting with block".

But if we have IO_PGTABLE_QUIRK_TLBI_ON_MAP, condition (1) from that
commit message doesn't apply, so we don't need to care whether the
existing entry is a pagetable or not - could we always return -EEXIST here?

> +		} else if (ptep[i]) {
> +			/* We require an unmap first */
> +			WARN_ON(!selftest_running);
> +			return -EEXIST;
> +		}
> +

[...]

> +static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
> +			    unsigned long iova, size_t size, int lvl,
> +			    arm_v7s_iopte *ptep)
> +{
> +	arm_v7s_iopte pte[ARM_V7S_CONT_PAGES];
> +	struct io_pgtable_cfg *cfg = &data->iop.cfg;
> +	const struct iommu_gather_ops *tlb = cfg->tlb;
> +	void *cookie = data->iop.cookie;
> +	int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
> +
> +	/* Something went horribly wrong and we ran out of page table */
> +	if (WARN_ON(lvl > 2))
> +		return 0;
> +
> +	idx = ARM_V7S_LVL_IDX(iova, lvl);
> +	ptep += idx;
> +	do {
> +		if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i])))
> +			return 0;
> +		pte[i] = ptep[i];
> +	} while (++i < num_entries);
> +
> +	/*
> +	 * If we've hit a contiguous 'large page' entry at this level, it
> +	 * needs splitting first, unless we're unmapping the whole lot.
> +	 */
> +	if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl))
> +		arm_v7s_split_cont(data, iova, idx, lvl, ptep);
> +
> +	/* If the size matches this level, we're in the right place */
> +	if (num_entries) {
> +		size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl);
> +
> +		__arm_v7s_set_pte(ptep, 0, num_entries, cfg);
> +
> +		for (i = 0; i < num_entries; i++) {
> +			if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) {
> +				/* Also flush any partial walks */
> +				tlb->tlb_add_flush(iova, blk_size,
> +						   ARM_V7S_BLOCK_SIZE(2),
> +						   false, cookie);
> +				tlb->tlb_sync(cookie);

The MTK IOMMU driver hits a warning here in my test.

There is a tlb_sync here, and another one in arm_v7s_unmap, so the flow
becomes:

 tlb->tlb_add_flush(xxx)
 tlb->tlb_sync()
 tlb->tlb_sync()


In the MTK tlb_sync, the code is:

static void mtk_iommu_tlb_sync(void *cookie)
{
	struct mtk_iommu_data *data = cookie;
	int ret;
	u32 tmp;

	ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE, tmp,
					tmp != 0, 10, 100000);
	if (ret) {
		dev_warn(data->dev,
			 "Partial TLB flush timed out, falling back to full flush\n");
		mtk_iommu_tlb_flush_all(cookie);
	}
	/* Clear the CPE status */
	writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
}

At the end of this function, we have to write 0 to clear the CPE status
so that the HW can report the status correctly next time.

So if we call tlb_sync twice in a row, the second call times out and we
see this log:

Partial TLB flush timed out, falling back to full flush

I don't know whether this is special behaviour of our HW - is this case
OK in arm-smmu.c?
Is there any suggestion about this?


> +				ptep = iopte_deref(pte[i], lvl);
> +				__arm_v7s_free_table(ptep, lvl + 1, data);
> +			} else {
> +				tlb->tlb_add_flush(iova, blk_size, blk_size,
> +						   true, cookie);
> +			}
> +			iova += blk_size;
> +		}
> +		return size;
> +	} else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) {
> +		/*
> +		 * Insert a table at the next level to map the old region,
> +		 * minus the part we want to unmap
> +		 */
> +		return arm_v7s_split_blk_unmap(data, iova, size, ptep);
> +	}
> +
> +	/* Keep on walkin' */
> +	ptep = iopte_deref(pte[0], lvl);
> +	return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep);
> +}
> +
> +static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
> +			 size_t size)
> +{
> +	size_t unmapped;
> +	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
> +	struct io_pgtable *iop = &data->iop;
> +
> +	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
> +	if (unmapped)
> +		iop->cfg.tlb->tlb_sync(iop->cookie);
> +
> +	return unmapped;
> +}
> +
> +static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
> +					unsigned long iova)
> +{
> +	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
> +	arm_v7s_iopte *ptep = data->pgd, pte = ARM_V7S_PTE_TYPE_TABLE;
> +	int lvl = 0;
> +	u32 mask;
> +
> +	while (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
> +		pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
> +		ptep = iopte_deref(pte, lvl);
> +	}

If we expect to always enter this while loop, could we use do {} while
instead? Then we wouldn't need to initialise pte.

And in this file the valid lvl values are 1 and 2, but here "lvl" is
initialised to 0. Should we add an enum for the levels to make this
safer and more readable?

> +	if (!ARM_V7S_PTE_IS_VALID(pte))
> +		return 0;
> +
> +	mask = ARM_V7S_LVL_MASK(lvl);
> +	if (arm_v7s_pte_is_cont(pte, lvl))
> +		mask *= ARM_V7S_CONT_PAGES;
> +	return (pte & mask) | (iova & ~mask);
> +}
> +
[...]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 1/5] iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs
  2015-12-04 17:52     ` Robin Murphy
@ 2015-12-13 21:41         ` Laurent Pinchart
  -1 siblings, 0 replies; 40+ messages in thread
From: Laurent Pinchart @ 2015-12-13 21:41 UTC (permalink / raw)
  To: Robin Murphy
  Cc: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

Hi Robin,

Thank you for the patch.

On Friday 04 December 2015 17:52:58 Robin Murphy wrote:
> In the case of corrupted page tables, or when an invalid size is given,
> __arm_lpae_unmap() may recurse beyond the maximum number of levels.
> Unfortunately the detection of this error condition only happens *after*
> calculating a nonsense offset from something which might not be a valid
> table pointer and dereferencing that to see if it is a valid PTE.
> 
> Make things a little more robust by checking the level is valid before
> doing anything which depends on it being so.
> 
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>

This looks good to me.

Reviewed-by: Laurent Pinchart <laurent.pinchart-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org>

I'm curious though, have you seen this error in practice ?

> ---
>  drivers/iommu/io-pgtable-arm.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 7df9777..366a354 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -486,11 +486,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable
> *data, void *cookie = data->iop.cookie;
>  	size_t blk_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
> 
> +	/* Something went horribly wrong and we ran out of page table */
> +	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
> +		return 0;
> +
>  	ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
>  	pte = *ptep;
> -
> -	/* Something went horribly wrong and we ran out of page table */
> -	if (WARN_ON(!pte || (lvl == ARM_LPAE_MAX_LEVELS)))
> +	if (WARN_ON(!pte))
>  		return 0;
> 
>  	/* If the size matches this level, we're in the right place */

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 1/5] iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs
@ 2015-12-13 21:41         ` Laurent Pinchart
  0 siblings, 0 replies; 40+ messages in thread
From: Laurent Pinchart @ 2015-12-13 21:41 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Robin,

Thank you for the patch.

On Friday 04 December 2015 17:52:58 Robin Murphy wrote:
> In the case of corrupted page tables, or when an invalid size is given,
> __arm_lpae_unmap() may recurse beyond the maximum number of levels.
> Unfortunately the detection of this error condition only happens *after*
> calculating a nonsense offset from something which might not be a valid
> table pointer and dereferencing that to see if it is a valid PTE.
> 
> Make things a little more robust by checking the level is valid before
> doing anything which depends on it being so.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

This looks good to me.

Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

I'm curious though, have you seen this error in practice ?

> ---
>  drivers/iommu/io-pgtable-arm.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 7df9777..366a354 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -486,11 +486,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable
> *data, void *cookie = data->iop.cookie;
>  	size_t blk_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
> 
> +	/* Something went horribly wrong and we ran out of page table */
> +	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
> +		return 0;
> +
>  	ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
>  	pte = *ptep;
> -
> -	/* Something went horribly wrong and we ran out of page table */
> -	if (WARN_ON(!pte || (lvl == ARM_LPAE_MAX_LEVELS)))
> +	if (WARN_ON(!pte))
>  		return 0;
> 
>  	/* If the size matches this level, we're in the right place */

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 4/5] iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common
  2015-12-04 17:53     ` Robin Murphy
@ 2015-12-13 21:52         ` Laurent Pinchart
  -1 siblings, 0 replies; 40+ messages in thread
From: Laurent Pinchart @ 2015-12-13 21:52 UTC (permalink / raw)
  To: Robin Murphy
  Cc: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

Hi Robin,

Thank you for the patch.

On Friday 04 December 2015 17:53:01 Robin Murphy wrote:
> There is no need to keep a useful accessor for a public structure hidden
> away in a private implementation. Move it out alongside the structure
> definition so that other implementations may reuse it.
> 
> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>

Acked-by: Laurent Pinchart <laurent.pinchart-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org>

> ---
>  drivers/iommu/io-pgtable-arm.c | 3 ---
>  drivers/iommu/io-pgtable.h     | 2 ++
>  2 files changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 9088d27..1619681 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -38,9 +38,6 @@
>  #define io_pgtable_to_data(x)						\
>  	container_of((x), struct arm_lpae_io_pgtable, iop)
> 
> -#define io_pgtable_ops_to_pgtable(x)					\
> -	container_of((x), struct io_pgtable, ops)
> -
>  #define io_pgtable_ops_to_data(x)					\
>  	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
> 
> diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
> index 2e18469..36673c8 100644
> --- a/drivers/iommu/io-pgtable.h
> +++ b/drivers/iommu/io-pgtable.h
> @@ -131,6 +131,8 @@ struct io_pgtable {
>  	struct io_pgtable_ops	ops;
>  };
> 
> +#define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable,
> ops) +
>  /**
>   * struct io_pgtable_init_fns - Alloc/free a set of page tables for a
>   *                              particular format.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 4/5] iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common
@ 2015-12-13 21:52         ` Laurent Pinchart
  0 siblings, 0 replies; 40+ messages in thread
From: Laurent Pinchart @ 2015-12-13 21:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Robin,

Thank you for the patch.

On Friday 04 December 2015 17:53:01 Robin Murphy wrote:
> There is no need to keep a useful accessor for a public structure hidden
> away in a private implementation. Move it out alongside the structure
> definition so that other implementations may reuse it.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> ---
>  drivers/iommu/io-pgtable-arm.c | 3 ---
>  drivers/iommu/io-pgtable.h     | 2 ++
>  2 files changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 9088d27..1619681 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -38,9 +38,6 @@
>  #define io_pgtable_to_data(x)						\
>  	container_of((x), struct arm_lpae_io_pgtable, iop)
> 
> -#define io_pgtable_ops_to_pgtable(x)					\
> -	container_of((x), struct io_pgtable, ops)
> -
>  #define io_pgtable_ops_to_data(x)					\
>  	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
> 
> diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
> index 2e18469..36673c8 100644
> --- a/drivers/iommu/io-pgtable.h
> +++ b/drivers/iommu/io-pgtable.h
> @@ -131,6 +131,8 @@ struct io_pgtable {
>  	struct io_pgtable_ops	ops;
>  };
> 
> +#define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable,
> ops) +
>  /**
>   * struct io_pgtable_init_fns - Alloc/free a set of page tables for a
>   *                              particular format.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 1/5] iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs
  2015-12-13 21:41         ` Laurent Pinchart
@ 2015-12-14 15:33           ` Robin Murphy
  -1 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-14 15:33 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	brian.starkey-5wv7dgnIgG8

Hi Laurent,

On 13/12/15 21:41, Laurent Pinchart wrote:
> Hi Robin,
>
> Thank you for the patch.
>
> On Friday 04 December 2015 17:52:58 Robin Murphy wrote:
>> In the case of corrupted page tables, or when an invalid size is given,
>> __arm_lpae_unmap() may recurse beyond the maximum number of levels.
>> Unfortunately the detection of this error condition only happens *after*
>> calculating a nonsense offset from something which might not be a valid
>> table pointer and dereferencing that to see if it is a valid PTE.
>>
>> Make things a little more robust by checking the level is valid before
>> doing anything which depends on it being so.
>>
>> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
>
> This looks good to me.
>
> Reviewed-by: Laurent Pinchart <laurent.pinchart-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org>

Thanks!

> I'm curious though, have you seen this error in practice ?

Yes and no - I've hit the "run out of levels" warning for various 
reasons of my own doing while hacking on things, but only in the DMA 
case where the leaf PTE is still pointing to a page of kernel memory, 
thus an errant *ptep access within that page is not as troublesome as if 
the address was an MMIO region and it resulted in a read of some random 
device register. It was only in reusing this code for short-descriptor 
and checking that my crazy "bits per level" formulae never actually got 
a bogus level that I realised we have a potential use-before-check here, 
and it might as well be fixed before I propagate it further.
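
For reference, the sort of "bits per level" formulae in question boil down
to something like this (a sketch of the short-descriptor geometry: level 1
has 12 index bits covering 1MB sections, level 2 has 8 index bits covering
4KB pages):

	#define ARM_V7S_ADDR_BITS	32
	/* 12 index bits at level 1, 8 at level 2 */
	#define _ARM_V7S_LVL_BITS(lvl)	(16 - (lvl) * 4)
	/* IOVA shift: 20 at level 1 (1MB), 12 at level 2 (4KB) */
	#define ARM_V7S_LVL_SHIFT(lvl)	(ARM_V7S_ADDR_BITS - (4 + 8 * (lvl)))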

Robin.

>> ---
>>   drivers/iommu/io-pgtable-arm.c | 8 +++++---
>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
>> index 7df9777..366a354 100644
>> --- a/drivers/iommu/io-pgtable-arm.c
>> +++ b/drivers/iommu/io-pgtable-arm.c
>> @@ -486,11 +486,13 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>>   	void *cookie = data->iop.cookie;
>>   	size_t blk_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
>>
>> +	/* Something went horribly wrong and we ran out of page table */
>> +	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
>> +		return 0;
>> +
>>   	ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
>>   	pte = *ptep;
>> -
>> -	/* Something went horribly wrong and we ran out of page table */
>> -	if (WARN_ON(!pte || (lvl == ARM_LPAE_MAX_LEVELS)))
>> +	if (WARN_ON(!pte))
>>   		return 0;
>>
>>   	/* If the size matches this level, we're in the right place */
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 5/5] iommu/io-pgtable: Add ARMv7 short descriptor support
  2015-12-08  8:58         ` Yong Wu
@ 2015-12-17 20:12           ` Robin Murphy
  -1 siblings, 0 replies; 40+ messages in thread
From: Robin Murphy @ 2015-12-17 20:12 UTC (permalink / raw)
  To: Yong Wu
  Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw,
	will.deacon-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	yingjoe.chen-NuS5LvNUpcJWk0Htik3J/w,
	eddie.huang-NuS5LvNUpcJWk0Htik3J/w, brian.starkey-5wv7dgnIgG8,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r

On 08/12/15 08:58, Yong Wu wrote:
> Hi Robin,
>
>     Thanks very much for the rewrite. It looks much cleaner.
>
>     This works well in my selftest, and I will push it to our Chrome branch
> and ask our video/display people to test it further.

Great, thanks.

>     Just a few small comments below.
>
> On Fri, 2015-12-04 at 17:53 +0000, Robin Murphy wrote:
>> Add a nearly-complete ARMv7 short descriptor implementation, omitting
>> only a few legacy and CPU-centric aspects which shouldn't be necessary
>> for IOMMU API use anyway.
>>
>> Signed-off-by: Yong Wu <yong.wu-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org>
>> Signed-off-by: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
>> ---
> [...]
>> +/* PTE type bits: these are all mixed up with XN/PXN bits in most cases */
>> +#define ARM_V7S_PTE_TYPE_TABLE		0x1
>> +#define ARM_V7S_PTE_TYPE_PAGE		0x2
>> +#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1
>
>  From the spec, this is the Large page type. Should we add a comment for
> readability, e.g.:
> /* Large page */
>
> and add /* Supersection */ for CONT_SECTION too?

Sure, I've added a comment that explains the use of "cont" for 
clarification.
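
Roughly along these lines (a sketch, not necessarily the final wording):

	/*
	 * 'Cont' covers the short-descriptor "contiguous" formats: Large
	 * pages (16 consecutive level-2 entries) and Supersections (16
	 * consecutive level-1 entries), whose PTEs must be replicated
	 * across all 16 slots.
	 */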

>> +
>> +#define ARM_V7S_PTE_IS_VALID(pte)	(((pte) & 0x3) != 0)
>> +#define ARM_V7S_PTE_IS_TABLE(pte, lvl)	(lvl < 2 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))
>> +
>> +/* Page table bits */
>> +#define ARM_V7S_ATTR_XN(lvl)		BIT(4 * (2 - (lvl)))
>> +#define ARM_V7S_ATTR_B			BIT(2)
>> +#define ARM_V7S_ATTR_C			BIT(3)
>> +#define ARM_V7S_ATTR_NS_TABLE		BIT(3)
>> +#define ARM_V7S_ATTR_NS_SECTION		BIT(19)
>> +
>> +#define ARM_V7S_CONT_SECTION		BIT(18)
>> +#define ARM_V7S_CONT_PAGE_XN_SHIFT	15
>> +
> [...]
>> +static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
>> +				   struct arm_v7s_io_pgtable *data)
>> +{
>> +	struct device *dev = data->iop.cfg.iommu_dev;
>> +	dma_addr_t dma;
>> +	size_t size = ARM_V7S_TABLE_SIZE(lvl);
>> +	void *table = NULL;
>> +
>> +	if (lvl == 1)
>> +		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
>> +	else if (lvl == 2)
>> +		table = kmem_cache_zalloc(data->l2_tables, gfp);
>> +	if (table && !selftest_running) {
>> +		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
>> +		if (dma_mapping_error(dev, dma))
>> +			goto out_free;
>> +		/*
>> +		 * We depend on the IOMMU being able to work with any physical
>> +		 * address directly, so if the DMA layer suggests otherwise by
>> +		 * translating or truncating them, that bodes very badly...
>> +		 */
>> +		if (dma != virt_to_phys(table))
>> +			goto out_unmap;
>> +	}
>
> There is something special when we use kmem_cache here: we save the
> physical address into the pagetable, later recover the virtual address
> via phys_to_virt, and then free that.
> It isn't the same as the normal case of saving the va and freeing the va.
> Do we need to add kmemleak_ignore here?

Indeed you're right; fixed. With only 2-level tables it might be 
feasible to keep track of all the VAs as well, and break the strict 
dma==phys dependency, but since things work OK as-is I'll put that idea 
aside for a slow day...
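
Concretely, the change amounts to roughly the following in
__arm_v7s_alloc_table() (a sketch rather than the exact v2 patch; it needs
<linux/kmemleak.h>):

	else if (lvl == 2) {
		table = kmem_cache_zalloc(data->l2_tables, gfp);
		/*
		 * Level-2 tables are referenced only by the physical
		 * address stored in the level-1 PTE, so stop kmemleak
		 * reporting them as unreferenced allocations.
		 */
		if (table)
			kmemleak_ignore(table);
	}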

>> +	return table;
>> +
>> +out_unmap:
>> +	dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
>> +	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
>> +out_free:
>> +	if (lvl == 1)
>> +		free_pages((unsigned long)table, get_order(size));
>> +	else
>> +		kmem_cache_free(data->l2_tables, table);
>> +	return NULL;
>> +}
>> +
> [...]
>> +static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
>> +			    unsigned long iova, phys_addr_t paddr, int prot,
>> +			    int lvl, int num_entries, arm_v7s_iopte *ptep)
>> +{
>> +	struct io_pgtable_cfg *cfg = &data->iop.cfg;
>> +	arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg);
>> +	int i;
>> +
>> +	for (i = 0; i < num_entries; i++)
>> +		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
>> +			/*
>> +			 * We need to unmap and free the old table before
>> +			 * overwriting it with a block entry.
>> +			 */
>> +			arm_v7s_iopte *tblp;
>> +			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
>> +
>> +			tblp = ptep + i - ARM_V7S_LVL_IDX(iova, lvl);
>> +			if (WARN_ON(__arm_v7s_unmap(data, iova, sz, lvl, tblp)
>> +					!= sz))
>> +				return -EINVAL;
>
> I think this comes from Will's "iommu/io-pgtable-arm: Unmap and free table
> when overwriting with block".
>
> But if we have IO_PGTABLE_QUIRK_TLBI_ON_MAP, condition (1) in that
> comment doesn't apply, so we don't need to care whether the existing
> entry is a pagetable or not - could we always return -EEXIST here?

So if you map 4K, unmap it, then try to map 1MB of your now-free IOVA 
space over a pointer to an empty level 2 table, that always 
unconditionally fails? I'm pretty sure you don't want that ;)

With TLBI_ON_MAP you might not have hit the TLB conflict which revealed 
the bug in the first place, but the rest of that patch still very much 
applies (i.e. not leaking the table).
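
To put that concretely, the sequence in question is roughly (hypothetical
IOVA and physical addresses, plain IOMMU API calls):

	/* dom and paddr assumed set up elsewhere; 0x100000 is 1MB-aligned */
	iommu_map(dom, 0x100000, paddr, SZ_4K, IOMMU_READ); /* allocates a level-2 table */
	iommu_unmap(dom, 0x100000, SZ_4K);                  /* leaves the now-empty table in place */
	iommu_map(dom, 0x100000, paddr, SZ_1M, IOMMU_READ); /* must replace the table, not return -EEXIST */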

>> +		} else if (ptep[i]) {
>> +			/* We require an unmap first */
>> +			WARN_ON(!selftest_running);
>> +			return -EEXIST;
>> +		}
>> +
>
> [...]
>
>> +static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
>> +			    unsigned long iova, size_t size, int lvl,
>> +			    arm_v7s_iopte *ptep)
>> +{
>> +	arm_v7s_iopte pte[ARM_V7S_CONT_PAGES];
>> +	struct io_pgtable_cfg *cfg = &data->iop.cfg;
>> +	const struct iommu_gather_ops *tlb = cfg->tlb;
>> +	void *cookie = data->iop.cookie;
>> +	int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
>> +
>> +	/* Something went horribly wrong and we ran out of page table */
>> +	if (WARN_ON(lvl > 2))
>> +		return 0;
>> +
>> +	idx = ARM_V7S_LVL_IDX(iova, lvl);
>> +	ptep += idx;
>> +	do {
>> +		if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i])))
>> +			return 0;
>> +		pte[i] = ptep[i];
>> +	} while (++i < num_entries);
>> +
>> +	/*
>> +	 * If we've hit a contiguous 'large page' entry at this level, it
>> +	 * needs splitting first, unless we're unmapping the whole lot.
>> +	 */
>> +	if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl))
>> +		arm_v7s_split_cont(data, iova, idx, lvl, ptep);
>> +
>> +	/* If the size matches this level, we're in the right place */
>> +	if (num_entries) {
>> +		size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl);
>> +
>> +		__arm_v7s_set_pte(ptep, 0, num_entries, cfg);
>> +
>> +		for (i = 0; i < num_entries; i++) {
>> +			if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) {
>> +				/* Also flush any partial walks */
>> +				tlb->tlb_add_flush(iova, blk_size,
>> +						   ARM_V7S_BLOCK_SIZE(2),
>> +						   false, cookie);
>> +				tlb->tlb_sync(cookie);
>
> The MTK iommu driver gets a warning here in my testing.
>
> There is a tlb_sync here, and another one in arm_v7s_unmap, so the flow
> is:
>
>   tlb->tlb_add_flush(xxx)
>   tlb->tlb_sync()
>   tlb->tlb_sync()
>
>
> In the MTK tlb_sync, the code is:
>
> static void mtk_iommu_tlb_sync(void *cookie)
> {
> 	struct mtk_iommu_data *data = cookie;
> 	int ret;
> 	u32 tmp;
>
> 	ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE, tmp,
> 					tmp != 0, 10, 100000);
> 	if (ret) {
> 		dev_warn(data->dev,
> 			 "Partial TLB flush timed out, falling back to full flush\n");
> 		mtk_iommu_tlb_flush_all(cookie);
> 	}
> 	/* Clear the CPE status */
> 	writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
> }
>
> At the end of this function, we have to write 0 to clear the CPE status
> so that the HW can report the status correctly next time.
>
> So if we call tlb_sync twice in a row, the second call times out and we
> see this log:
>
> Partial TLB flush timed out, falling back to full flush
>
> I don't know whether this is peculiar to our HW - is this case OK in
> arm-smmu.c?
> Do you have any suggestions?

In the case of the SMMU it's probably a needless performance hit, but it's
not going to cause any errors. Since redundant syncs are generally
undesirable, I've come up with an idea for optimising them out, which I'll
post; in the meantime, if back-to-back syncs are really a problem for your
hardware then it's probably worth making the driver specifically detect and
avoid that condition.
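
As a sketch of that last idea (the flag is illustrative and hypothetical,
not a field in the current mtk_iommu driver):

	static void mtk_iommu_tlb_sync(void *cookie)
	{
		struct mtk_iommu_data *data = cookie;

		/* Nothing queued since the last sync, so nothing to wait for */
		if (!data->tlb_flush_pending)	/* hypothetical flag set by tlb_add_flush() */
			return;
		data->tlb_flush_pending = false;

		/* existing REG_MMU_CPE_DONE poll-and-clear goes here */
	}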

>> +				ptep = iopte_deref(pte[i], lvl);
>> +				__arm_v7s_free_table(ptep, lvl + 1, data);
>> +			} else {
>> +				tlb->tlb_add_flush(iova, blk_size, blk_size,
>> +						   true, cookie);
>> +			}
>> +			iova += blk_size;
>> +		}
>> +		return size;
>> +	} else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) {
>> +		/*
>> +		 * Insert a table at the next level to map the old region,
>> +		 * minus the part we want to unmap
>> +		 */
>> +		return arm_v7s_split_blk_unmap(data, iova, size, ptep);
>> +	}
>> +
>> +	/* Keep on walkin' */
>> +	ptep = iopte_deref(pte[0], lvl);
>> +	return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep);
>> +}
>> +
>> +static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
>> +			 size_t size)
>> +{
>> +	size_t unmapped;
>> +	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
>> +	struct io_pgtable *iop = &data->iop;
>> +
>> +	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
>> +	if (unmapped)
>> +		iop->cfg.tlb->tlb_sync(iop->cookie);
>> +
>> +	return unmapped;
>> +}
>> +
>> +static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
>> +					unsigned long iova)
>> +{
>> +	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
>> +	arm_v7s_iopte *ptep = data->pgd, pte = ARM_V7S_PTE_TYPE_TABLE;
>> +	int lvl = 0;
>> +	u32 mask;
>> +
>> +	while (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
>> +		pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
>> +		ptep = iopte_deref(pte, lvl);
>> +	}
>
> If we expect to always enter this while loop, could we use do {} while
> instead? Then we wouldn't need to initialise pte.

OK, now I'm embarrassed for missing that, given the time I spent 
obsessively rewriting this bit about 20 times just because I was sure it 
was possible to do without any ifs, breaks, or repetitions :D
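
For reference, the do/while form would look roughly like this (sketch,
reusing the names from the quoted function):

	arm_v7s_iopte *ptep = data->pgd, pte;
	int lvl = 0;

	do {
		/* descend one level per iteration, starting from the level-1 table */
		pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
		ptep = iopte_deref(pte, lvl);
	} while (ARM_V7S_PTE_IS_TABLE(pte, lvl));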

> And in this file the valid lvl should be 1 or 2, but here "lvl" is
> initialized to 0. Should we add an enum for the levels, for safety and
> readability?

Since "1" and "2" are used to represent the values 1 and 2, plus we use 
the level directly in arithmetic all over the place, I don't think an 
enum is really semantically appropriate or necessary. The lvl variable 
is only initialised to 0 in this one case because of the aforementioned 
obsessively-clean loop requiring the increment at the start of each 
iteration. With your change to the loop I can put ARM_V7S_PTE_IS_TABLE 
back to being predicated on lvl == 1 specifically, and get rid of this 
last trace of the whole "pretend TTBRn is a 'level 0' entry" bad idea 
that I had at one point.
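
i.e. something like (sketch):

	#define ARM_V7S_PTE_IS_TABLE(pte, lvl) \
		((lvl) == 1 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))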

Robin.

>> +	if (!ARM_V7S_PTE_IS_VALID(pte))
>> +		return 0;
>> +
>> +	mask = ARM_V7S_LVL_MASK(lvl);
>> +	if (arm_v7s_pte_is_cont(pte, lvl))
>> +		mask *= ARM_V7S_CONT_PAGES;
>> +	return (pte & mask) | (iova & ~mask);
>> +}
>> +
> [...]
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2015-12-17 20:12 UTC | newest]

Thread overview: 40+ messages
2015-12-04 17:52 [PATCH 0/5] io-pgtable fixes + ARM short-descriptor format Robin Murphy
2015-12-04 17:52 ` Robin Murphy
     [not found] ` <cover.1449246988.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-04 17:52   ` [PATCH 1/5] iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs Robin Murphy
2015-12-04 17:52     ` Robin Murphy
     [not found]     ` <ad5898fd59575d0e2a8dccabafde71650f44e2a8.1449246988.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-13 21:41       ` Laurent Pinchart
2015-12-13 21:41         ` Laurent Pinchart
2015-12-14 15:33         ` Robin Murphy
2015-12-14 15:33           ` Robin Murphy
2015-12-04 17:52   ` [PATCH 2/5] iommu/io-pgtable: Indicate granule for TLB maintenance Robin Murphy
2015-12-04 17:52     ` Robin Murphy
     [not found]     ` <67223d4b1ff57f3f46e8c3102e663a063a50a7f7.1449246988.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-07 11:08       ` Will Deacon
2015-12-07 11:08         ` Will Deacon
     [not found]         ` <20151207110804.GA23430-5wv7dgnIgG8@public.gmane.org>
2015-12-07 12:09           ` Robin Murphy
2015-12-07 12:09             ` Robin Murphy
     [not found]             ` <56657714.50504-5wv7dgnIgG8@public.gmane.org>
2015-12-07 13:48               ` Will Deacon
2015-12-07 13:48                 ` Will Deacon
2015-12-07 18:18       ` [PATCH v2] " Robin Murphy
2015-12-07 18:18         ` Robin Murphy
2015-12-04 17:53   ` [PATCH 3/5] iommu/arm-smmu: Invalidate TLBs properly Robin Murphy
2015-12-04 17:53     ` Robin Murphy
     [not found]     ` <2acaea8656f14a4421d7d466dd242fe5a3d0f6f6.1449246988.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-07 11:09       ` Will Deacon
2015-12-07 11:09         ` Will Deacon
     [not found]         ` <20151207110939.GB23430-5wv7dgnIgG8@public.gmane.org>
2015-12-07 13:09           ` Robin Murphy
2015-12-07 13:09             ` Robin Murphy
     [not found]             ` <5665850F.1060406-5wv7dgnIgG8@public.gmane.org>
2015-12-07 13:34               ` Will Deacon
2015-12-07 13:34                 ` Will Deacon
2015-12-07 18:18       ` [PATCH v2] " Robin Murphy
2015-12-07 18:18         ` Robin Murphy
     [not found]         ` <ac2d6aedf473cf01eb1df48a1f81614f0f74b0b1.1449501523.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-07 18:28           ` Will Deacon
2015-12-07 18:28             ` Will Deacon
2015-12-04 17:53   ` [PATCH 4/5] iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common Robin Murphy
2015-12-04 17:53     ` Robin Murphy
     [not found]     ` <ef5954ba727840a020b62b0135a1ce9f4a10fb2c.1449246988.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-13 21:52       ` Laurent Pinchart
2015-12-13 21:52         ` Laurent Pinchart
2015-12-04 17:53   ` [PATCH 5/5] iommu/io-pgtable: Add ARMv7 short descriptor support Robin Murphy
2015-12-04 17:53     ` Robin Murphy
     [not found]     ` <3c72de1e8caa28cbfd423de41c6cba812db4e7db.1449246988.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>
2015-12-08  8:58       ` Yong Wu
2015-12-08  8:58         ` Yong Wu
2015-12-17 20:12         ` Robin Murphy
2015-12-17 20:12           ` Robin Murphy
