* [PATCH 0/9] Renesas ipmmu-vmsa: Miscellaneous cleanups and fixes
@ 2014-04-21 14:13 ` Laurent Pinchart
  0 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

This patch set cleans up and fixes small issues in the ipmmu-vmsa driver. The
patches are based on top of "[PATCH v3] iommu: Add driver for Renesas
VMSA-compatible IPMMU" that adds the ipmmu-vmsa driver.

The most interesting part of this series is the rewrite of the page table
management code. The IOMMU core guarantees that the map and unmap operations
will always be called only with page sizes advertised by the driver. We can
use that assumption to remove loops over PGD and PMD entries, simplifying the
code.
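
To illustrate the guarantee mentioned above, here is a rough model, in C, of
how the core splits a map request into driver-advertised page sizes; this is
an illustration under simplified assumptions, not the actual IOMMU core
implementation. The driver callback then only ever sees a single advertised
size per call:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel's SZ_* constants. */
#define SZ_4K	0x1000UL
#define SZ_64K	0x10000UL
#define SZ_2M	0x200000UL

static const unsigned long pgsize_bitmap = SZ_2M | SZ_64K | SZ_4K;

/* Pick the largest advertised page size that fits the current address
 * alignment and the remaining mapping length. */
static unsigned long pick_pgsize(unsigned long iova, size_t size)
{
	unsigned long pgsize;

	for (pgsize = SZ_2M; pgsize >= SZ_4K; pgsize >>= 1) {
		if (!(pgsize & pgsize_bitmap))
			continue;	/* size not advertised by the driver */
		if ((iova & (pgsize - 1)) == 0 && size >= pgsize)
			return pgsize;
	}
	return 0;	/* callers guarantee at least 4kB alignment */
}
```

Because every call into the driver covers exactly one such size, the map and
unmap paths never need to iterate over multiple PGD or PMD entries.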

Will, would it make sense to perform the same cleanup for the arm-smmu driver,
or is there a reason to keep loops over PGD and PMD entries? Removing them
makes the implementation of 64kB and 2MB pages easier.

Laurent Pinchart (9):
  iommu/ipmmu-vmsa: Cleanup failures of ARM mapping creation or
    attachment
  iommu/ipmmu-vmsa: Fix the supported page sizes
  iommu/ipmmu-vmsa: Define driver-specific page directory sizes
  iommu/ipmmu-vmsa: Set the PTE contiguous hint bit when possible
  iommu/ipmmu-vmsa: PMD is never folded, PUD always is
  iommu/ipmmu-vmsa: Rewrite page table management
  iommu/ipmmu-vmsa: Support 2MB mappings
  iommu/ipmmu-vmsa: Remove stage 2 PTE bits definitions
  iommu/ipmmu-vmsa: Support clearing mappings

 drivers/iommu/ipmmu-vmsa.c | 443 +++++++++++++++++++++++++++++++--------------
 1 file changed, 309 insertions(+), 134 deletions(-)

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 42+ messages in thread


* [PATCH 1/9] iommu/ipmmu-vmsa: Cleanup failures of ARM mapping creation or attachment
@ 2014-04-21 14:13   ` Laurent Pinchart
  0 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index d277bde..1e0a757 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -935,7 +935,8 @@ static int ipmmu_add_device(struct device *dev)
 							SZ_1G, SZ_2G);
 		if (IS_ERR(mapping)) {
 			dev_err(mmu->dev, "failed to create ARM IOMMU mapping\n");
-			return PTR_ERR(mapping);
+			ret = PTR_ERR(mapping);
+			goto error;
 		}
 
 		mmu->mapping = mapping;
@@ -951,6 +952,7 @@ static int ipmmu_add_device(struct device *dev)
 	return 0;
 
 error:
+	arm_iommu_release_mapping(mmu->mapping);
 	kfree(dev->archdata.iommu);
 	dev->archdata.iommu = NULL;
 	iommu_group_remove_device(dev);
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread


* [PATCH 2/9] iommu/ipmmu-vmsa: Fix the supported page sizes
  2014-04-21 14:13 ` Laurent Pinchart
@ 2014-04-21 14:13   ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

The hardware supports 2MB page sizes, not 1MB.
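
In the 3-level long-descriptor format the driver uses, a PMD block entry
covers a full 512-entry PTE table of 4kB pages, i.e. 2MB; the 1MB section
size belongs to the ARM short-descriptor format and matches no table
boundary here. A quick sanity check, using hypothetical stand-ins for the
kernel's SZ_* constants:

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's SZ_* constants. */
#define SZ_4K	0x1000UL
#define SZ_64K	0x10000UL
#define SZ_1M	0x100000UL
#define SZ_2M	0x200000UL

static void check_pgsizes(void)
{
	/* 64kB = a contiguous-hint run of 16 PTEs (see patch 4/9). */
	assert(SZ_64K == 16 * SZ_4K);
	/* 2MB = one PMD block entry spanning a 512-entry PTE table. */
	assert(SZ_2M == 512 * SZ_4K);
	/* 1MB is not a long-descriptor block size. */
	assert(SZ_1M != 512 * SZ_4K);
}
```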

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 1e0a757..1ae97d8 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -977,7 +977,7 @@ static struct iommu_ops ipmmu_ops = {
 	.iova_to_phys = ipmmu_iova_to_phys,
 	.add_device = ipmmu_add_device,
 	.remove_device = ipmmu_remove_device,
-	.pgsize_bitmap = SZ_1M | SZ_64K | SZ_4K,
+	.pgsize_bitmap = SZ_2M | SZ_64K | SZ_4K,
 };
 
 /* -----------------------------------------------------------------------------
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread


* [PATCH 3/9] iommu/ipmmu-vmsa: Define driver-specific page directory sizes
  2014-04-21 14:13 ` Laurent Pinchart
@ 2014-04-21 14:13   ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

The PTRS_PER_(PGD|PMD|PTE) macros evaluate to different values depending
on whether LPAE is enabled. The IPMMU driver uses a long descriptor
format regardless of LPAE, making those macros mismatch the IPMMU
configuration on non-LPAE systems.

Replace the macros with driver-specific versions that always evaluate to
the right value.
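
The fixed geometry behind these constants can be sketched as follows; this
is an illustrative check of the 3-level long-descriptor layout the driver
always uses, independent of whether the CPU side is built with LPAE:

```c
#include <assert.h>

/* The driver's fixed 3-level long-descriptor geometry: a 4-entry PGD,
 * 512-entry PMD and PTE tables, and 4kB pages. */
#define IPMMU_PTRS_PER_PTE	512ULL
#define IPMMU_PTRS_PER_PMD	512ULL
#define IPMMU_PTRS_PER_PGD	4ULL
#define IPMMU_PAGE_SIZE		4096ULL

static void check_geometry(void)
{
	/* 4 PGDs x 512 PMDs x 512 PTEs x 4kB pages = a full 32-bit IOVA
	 * space, which is why the values cannot follow the CPU's
	 * CONFIG_ARM_LPAE-dependent PTRS_PER_* macros. */
	assert(IPMMU_PTRS_PER_PGD * IPMMU_PTRS_PER_PMD *
	       IPMMU_PTRS_PER_PTE * IPMMU_PAGE_SIZE == 1ULL << 32);
}
```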

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 1ae97d8..7c8c21e 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -210,6 +210,10 @@ static LIST_HEAD(ipmmu_devices);
 #define ARM_VMSA_PTE_MEMATTR_NC		(((pteval_t)0x5) << 2)
 #define ARM_VMSA_PTE_MEMATTR_DEV	(((pteval_t)0x1) << 2)
 
+#define IPMMU_PTRS_PER_PTE		512
+#define IPMMU_PTRS_PER_PMD		512
+#define IPMMU_PTRS_PER_PGD		4
+
 /* -----------------------------------------------------------------------------
  * Read/Write Access
  */
@@ -328,7 +332,7 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
 
 	/* TTBR0 */
 	ipmmu_flush_pgtable(domain->mmu, domain->pgd,
-			    PTRS_PER_PGD * sizeof(*domain->pgd));
+			    IPMMU_PTRS_PER_PGD * sizeof(*domain->pgd));
 	ttbr = __pa(domain->pgd);
 	ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
 	ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32);
@@ -470,7 +474,7 @@ static void ipmmu_free_pmds(pud_t *pud)
 	unsigned int i;
 
 	pmd = pmd_base;
-	for (i = 0; i < PTRS_PER_PMD; ++i) {
+	for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) {
 		if (pmd_none(*pmd))
 			continue;
 
@@ -487,7 +491,7 @@ static void ipmmu_free_puds(pgd_t *pgd)
 	unsigned int i;
 
 	pud = pud_base;
-	for (i = 0; i < PTRS_PER_PUD; ++i) {
+	for (i = 0; i < IPMMU_PTRS_PER_PUD; ++i) {
 		if (pud_none(*pud))
 			continue;
 
@@ -510,7 +514,7 @@ static void ipmmu_free_pgtables(struct ipmmu_vmsa_domain *domain)
 	 * tables.
 	 */
 	pgd = pgd_base;
-	for (i = 0; i < PTRS_PER_PGD; ++i) {
+	for (i = 0; i < IPMMU_PTRS_PER_PGD; ++i) {
 		if (pgd_none(*pgd))
 			continue;
 		ipmmu_free_puds(pgd);
@@ -695,7 +699,7 @@ static int ipmmu_domain_init(struct iommu_domain *io_domain)
 
 	spin_lock_init(&domain->lock);
 
-	domain->pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
+	domain->pgd = kzalloc(IPMMU_PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
 	if (!domain->pgd) {
 		kfree(domain);
 		return -ENOMEM;
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread


* [PATCH 4/9] iommu/ipmmu-vmsa: Set the PTE contiguous hint bit when possible
@ 2014-04-21 14:13   ` Laurent Pinchart
  0 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

The contiguous hint bit signals to the IOMMU that a range of 16 PTEs
refers to physically contiguous memory. It improves performance by
dividing the number of TLB lookups by 16, effectively implementing 64kB
page sizes.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 43 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 7c8c21e..1ce2115 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -210,6 +210,9 @@ static LIST_HEAD(ipmmu_devices);
 #define ARM_VMSA_PTE_MEMATTR_NC		(((pteval_t)0x5) << 2)
 #define ARM_VMSA_PTE_MEMATTR_DEV	(((pteval_t)0x1) << 2)
 
+#define ARM_VMSA_PTE_CONT_ENTRIES	16
+#define ARM_VMSA_PTE_CONT_SIZE		(PAGE_SIZE * ARM_VMSA_PTE_CONT_ENTRIES)
+
 #define IPMMU_PTRS_PER_PTE		512
 #define IPMMU_PTRS_PER_PMD		512
 #define IPMMU_PTRS_PER_PGD		4
@@ -569,10 +572,44 @@ static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
 	pteval |= ARM_VMSA_PTE_SH_IS;
 	start = pte;
 
-	/* Install the page table entries. */
+	/*
+	 * Install the page table entries.
+	 *
+	 * Set the contiguous hint in the PTEs where possible. The hint
+	 * indicates a series of ARM_VMSA_PTE_CONT_ENTRIES PTEs mapping a
+	 * physically contiguous region with the following constraints:
+	 *
+	 * - The region start is aligned to ARM_VMSA_PTE_CONT_SIZE
+	 * - Each PTE in the region has the contiguous hint bit set
+	 *
+	 * We don't support partial unmapping so there's no need to care about
+	 * clearing the contiguous hint from neighbour PTEs.
+	 */
 	do {
-		*pte++ = pfn_pte(pfn++, __pgprot(pteval));
-		addr += PAGE_SIZE;
+		unsigned long chunk_end;
+
+		/*
+		 * If the address is aligned to a contiguous region size and the
+		 * mapping size is large enough, process the largest possible
+		 * number of PTEs multiple of ARM_VMSA_PTE_CONT_ENTRIES.
+		 * Otherwise process the smallest number of PTEs to align the
+		 * address to a contiguous region size or to complete the
+		 * mapping.
+		 */
+		if (IS_ALIGNED(addr, ARM_VMSA_PTE_CONT_SIZE) &&
+		    end - addr >= ARM_VMSA_PTE_CONT_SIZE) {
+			chunk_end = round_down(end, ARM_VMSA_PTE_CONT_SIZE);
+			pteval |= ARM_VMSA_PTE_CONT;
+		} else {
+			chunk_end = min(ALIGN(addr, ARM_VMSA_PTE_CONT_SIZE),
+					end);
+			pteval &= ~ARM_VMSA_PTE_CONT;
+		}
+
+		do {
+			*pte++ = pfn_pte(pfn++, __pgprot(pteval));
+			addr += PAGE_SIZE;
+		} while (addr != chunk_end);
 	} while (addr != end);
 
 	ipmmu_flush_pgtable(mmu, start, sizeof(*pte) * (pte - start));
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread


* [PATCH 5/9] iommu/ipmmu-vmsa: PMD is never folded, PUD always is
  2014-04-21 14:13 ` Laurent Pinchart
@ 2014-04-21 14:13   ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

The driver only supports the 3-level long descriptor format that has no
PUD and always has a PMD.
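
With the PUD folded into the PGD, the 3-level walk decomposes a 32-bit IOVA
directly into PGD, PMD and PTE indices, which is what lets the patch cast
pgd_t pointers to pud_t and drop the PUD allocation path. A sketch with
illustrative helper names (not functions in the driver):

```c
#include <assert.h>

/* IOVA bit split for the 3-level long-descriptor format with 4kB pages:
 * PGD index in bits [31:30], PMD in [29:21], PTE in [20:12]. */
static unsigned int ipmmu_pgd_index(unsigned long iova)
{
	return (iova >> 30) & 0x3;	/* 4-entry PGD */
}

static unsigned int ipmmu_pmd_index(unsigned long iova)
{
	return (iova >> 21) & 0x1ff;	/* 512-entry PMD table */
}

static unsigned int ipmmu_pte_index(unsigned long iova)
{
	return (iova >> 12) & 0x1ff;	/* 512-entry PTE table */
}
```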

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 65 +++++++---------------------------------------
 1 file changed, 9 insertions(+), 56 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 1ce2115..183d82d 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -465,6 +465,8 @@ static irqreturn_t ipmmu_irq(int irq, void *dev)
  * Page Table Management
  */
 
+#define pud_pgtable(pud) pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))
+
 static void ipmmu_free_ptes(pmd_t *pmd)
 {
 	pgtable_t table = pmd_pgtable(*pmd);
@@ -473,10 +475,10 @@ static void ipmmu_free_ptes(pmd_t *pmd)
 
 static void ipmmu_free_pmds(pud_t *pud)
 {
-	pmd_t *pmd, *pmd_base = pmd_offset(pud, 0);
+	pmd_t *pmd = pmd_offset(pud, 0);
+	pgtable_t table;
 	unsigned int i;
 
-	pmd = pmd_base;
 	for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) {
 		if (pmd_none(*pmd))
 			continue;
@@ -485,24 +487,8 @@ static void ipmmu_free_pmds(pud_t *pud)
 		pmd++;
 	}
 
-	pmd_free(NULL, pmd_base);
-}
-
-static void ipmmu_free_puds(pgd_t *pgd)
-{
-	pud_t *pud, *pud_base = pud_offset(pgd, 0);
-	unsigned int i;
-
-	pud = pud_base;
-	for (i = 0; i < IPMMU_PTRS_PER_PUD; ++i) {
-		if (pud_none(*pud))
-			continue;
-
-		ipmmu_free_pmds(pud);
-		pud++;
-	}
-
-	pud_free(NULL, pud_base);
+	table = pud_pgtable(*pud);
+	__free_page(table);
 }
 
 static void ipmmu_free_pgtables(struct ipmmu_vmsa_domain *domain)
@@ -520,7 +506,7 @@ static void ipmmu_free_pgtables(struct ipmmu_vmsa_domain *domain)
 	for (i = 0; i < IPMMU_PTRS_PER_PGD; ++i) {
 		if (pgd_none(*pgd))
 			continue;
-		ipmmu_free_puds(pgd);
+		ipmmu_free_pmds((pud_t *)pgd);
 		pgd++;
 	}
 
@@ -624,7 +610,6 @@ static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
 	pmd_t *pmd;
 	int ret;
 
-#ifndef __PAGETABLE_PMD_FOLDED
 	if (pud_none(*pud)) {
 		pmd = (pmd_t *)get_zeroed_page(GFP_ATOMIC);
 		if (!pmd)
@@ -636,7 +621,6 @@ static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
 
 		pmd += pmd_index(addr);
 	} else
-#endif
 		pmd = pmd_offset(pud, addr);
 
 	do {
@@ -648,38 +632,6 @@ static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
 	return ret;
 }
 
-static int ipmmu_alloc_init_pud(struct ipmmu_vmsa_device *mmu, pgd_t *pgd,
-				unsigned long addr, unsigned long end,
-				phys_addr_t phys, int prot)
-{
-	unsigned long next;
-	pud_t *pud;
-	int ret;
-
-#ifndef __PAGETABLE_PUD_FOLDED
-	if (pgd_none(*pgd)) {
-		pud = (pud_t *)get_zeroed_page(GFP_ATOMIC);
-		if (!pud)
-			return -ENOMEM;
-
-		ipmmu_flush_pgtable(mmu, pud, PAGE_SIZE);
-		*pgd = __pgd(__pa(pud) | PMD_NSTABLE | PMD_TYPE_TABLE);
-		ipmmu_flush_pgtable(mmu, pgd, sizeof(*pgd));
-
-		pud += pud_index(addr);
-	} else
-#endif
-		pud = pud_offset(pgd, addr);
-
-	do {
-		next = pud_addr_end(addr, end);
-		ret = ipmmu_alloc_init_pmd(mmu, pud, addr, next, phys, prot);
-		phys += next - addr;
-	} while (pud++, addr = next, addr < end);
-
-	return ret;
-}
-
 static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 				unsigned long iova, phys_addr_t paddr,
 				size_t size, int prot)
@@ -707,7 +659,8 @@ static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 	do {
 		unsigned long next = pgd_addr_end(iova, end);
 
-		ret = ipmmu_alloc_init_pud(mmu, pgd, iova, next, paddr, prot);
+		ret = ipmmu_alloc_init_pmd(mmu, (pud_t *)pgd, iova, next, paddr,
+					   prot);
 		if (ret)
 			break;
 
-- 
1.8.3.2



* [PATCH 6/9] iommu/ipmmu-vmsa: Rewrite page table management
@ 2014-04-21 14:13   ` Laurent Pinchart
  0 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

The IOMMU core will only call us with page sizes advertised as supported
by the driver. We can thus simplify the code by removing loops over PGD
and PMD entries.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 193 ++++++++++++++++++++-------------------------
 1 file changed, 86 insertions(+), 107 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 183d82d..de1a440 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -518,118 +518,97 @@ static void ipmmu_free_pgtables(struct ipmmu_vmsa_domain *domain)
  * functions as they would flush the CPU TLB.
  */
 
-static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
-				unsigned long addr, unsigned long end,
-				phys_addr_t phys, int prot)
+static pte_t *ipmmu_alloc_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
+			      unsigned long iova)
 {
-	unsigned long pfn = __phys_to_pfn(phys);
-	pteval_t pteval = ARM_VMSA_PTE_PAGE | ARM_VMSA_PTE_NS | ARM_VMSA_PTE_AF
-			| ARM_VMSA_PTE_XN;
-	pte_t *pte, *start;
+	pte_t *pte;
 
-	if (pmd_none(*pmd)) {
-		/* Allocate a new set of tables */
-		pte = (pte_t *)get_zeroed_page(GFP_ATOMIC);
-		if (!pte)
-			return -ENOMEM;
+	if (!pmd_none(*pmd))
+		return pte_offset_kernel(pmd, iova);
 
-		ipmmu_flush_pgtable(mmu, pte, PAGE_SIZE);
-		*pmd = __pmd(__pa(pte) | PMD_NSTABLE | PMD_TYPE_TABLE);
-		ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+	pte = (pte_t *)get_zeroed_page(GFP_ATOMIC);
+	if (!pte)
+		return NULL;
 
-		pte += pte_index(addr);
-	} else
-		pte = pte_offset_kernel(pmd, addr);
+	ipmmu_flush_pgtable(mmu, pte, PAGE_SIZE);
+	*pmd = __pmd(__pa(pte) | PMD_NSTABLE | PMD_TYPE_TABLE);
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
 
-	pteval |= ARM_VMSA_PTE_AP_UNPRIV | ARM_VMSA_PTE_nG;
-	if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
-		pteval |= ARM_VMSA_PTE_AP_RDONLY;
+	return pte + pte_index(iova);
+}
 
-	if (prot & IOMMU_CACHE)
-		pteval |= (IMMAIR_ATTR_IDX_WBRWA <<
-			   ARM_VMSA_PTE_ATTRINDX_SHIFT);
+static pmd_t *ipmmu_alloc_pmd(struct ipmmu_vmsa_device *mmu, pgd_t *pgd,
+			      unsigned long iova)
+{
+	pud_t *pud = (pud_t *)pgd;
+	pmd_t *pmd;
 
-	/* If no access, create a faulting entry to avoid TLB fills */
-	if (prot & IOMMU_EXEC)
-		pteval &= ~ARM_VMSA_PTE_XN;
-	else if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
-		pteval &= ~ARM_VMSA_PTE_PAGE;
+	if (!pud_none(*pud))
+		return pmd_offset(pud, iova);
 
-	pteval |= ARM_VMSA_PTE_SH_IS;
-	start = pte;
+	pmd = (pmd_t *)get_zeroed_page(GFP_ATOMIC);
+	if (!pmd)
+		return NULL;
 
-	/*
-	 * Install the page table entries.
-	 *
-	 * Set the contiguous hint in the PTEs where possible. The hint
-	 * indicates a series of ARM_VMSA_PTE_CONT_ENTRIES PTEs mapping a
-	 * physically contiguous region with the following constraints:
-	 *
-	 * - The region start is aligned to ARM_VMSA_PTE_CONT_SIZE
-	 * - Each PTE in the region has the contiguous hint bit set
-	 *
-	 * We don't support partial unmapping so there's no need to care about
-	 * clearing the contiguous hint from neighbour PTEs.
-	 */
-	do {
-		unsigned long chunk_end;
+	ipmmu_flush_pgtable(mmu, pmd, PAGE_SIZE);
+	*pud = __pud(__pa(pmd) | PMD_NSTABLE | PMD_TYPE_TABLE);
+	ipmmu_flush_pgtable(mmu, pud, sizeof(*pud));
 
-		/*
-		 * If the address is aligned to a contiguous region size and the
-		 * mapping size is large enough, process the largest possible
-		 * number of PTEs multiple of ARM_VMSA_PTE_CONT_ENTRIES.
-		 * Otherwise process the smallest number of PTEs to align the
-		 * address to a contiguous region size or to complete the
-		 * mapping.
-		 */
-		if (IS_ALIGNED(addr, ARM_VMSA_PTE_CONT_SIZE) &&
-		    end - addr >= ARM_VMSA_PTE_CONT_SIZE) {
-			chunk_end = round_down(end, ARM_VMSA_PTE_CONT_SIZE);
-			pteval |= ARM_VMSA_PTE_CONT;
-		} else {
-			chunk_end = min(ALIGN(addr, ARM_VMSA_PTE_CONT_SIZE),
-					end);
-			pteval &= ~ARM_VMSA_PTE_CONT;
-		}
+	return pmd + pmd_index(iova);
+}
 
-		do {
-			*pte++ = pfn_pte(pfn++, __pgprot(pteval));
-			addr += PAGE_SIZE;
-		} while (addr != chunk_end);
-	} while (addr != end);
+static u64 ipmmu_page_prot(unsigned int prot, u64 type)
+{
+	u64 pgprot = ARM_VMSA_PTE_XN | ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF
+		   | ARM_VMSA_PTE_SH_IS | ARM_VMSA_PTE_AP_UNPRIV
+		   | ARM_VMSA_PTE_NS | type;
 
-	ipmmu_flush_pgtable(mmu, start, sizeof(*pte) * (pte - start));
-	return 0;
+	if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
+		pgprot |= ARM_VMSA_PTE_AP_RDONLY;
+
+	if (prot & IOMMU_CACHE)
+		pgprot |= IMMAIR_ATTR_IDX_WBRWA << ARM_VMSA_PTE_ATTRINDX_SHIFT;
+
+	if (prot & IOMMU_EXEC)
+		pgprot &= ~ARM_VMSA_PTE_XN;
+	else if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
+		/* If no access create a faulting entry to avoid TLB fills. */
+		pgprot &= ~ARM_VMSA_PTE_PAGE;
+
+	return pgprot;
 }
 
-static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
-				unsigned long addr, unsigned long end,
-				phys_addr_t phys, int prot)
+static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
+				unsigned long iova, unsigned long pfn,
+				size_t size, int prot)
 {
-	unsigned long next;
-	pmd_t *pmd;
-	int ret;
+	pteval_t pteval = ipmmu_page_prot(prot, ARM_VMSA_PTE_PAGE);
+	unsigned int num_ptes = 1;
+	pte_t *pte, *start;
+	unsigned int i;
 
-	if (pud_none(*pud)) {
-		pmd = (pmd_t *)get_zeroed_page(GFP_ATOMIC);
-		if (!pmd)
-			return -ENOMEM;
+	pte = ipmmu_alloc_pte(mmu, pmd, iova);
+	if (!pte)
+		return -ENOMEM;
+
+	start = pte;
 
-		ipmmu_flush_pgtable(mmu, pmd, PAGE_SIZE);
-		*pud = __pud(__pa(pmd) | PMD_NSTABLE | PMD_TYPE_TABLE);
-		ipmmu_flush_pgtable(mmu, pud, sizeof(*pud));
+	/*
+	 * Install the page table entries. We can be called both for a single
+	 * page or for a block of 16 physically contiguous pages. In the latter
+	 * case set the PTE contiguous hint.
+	 */
+	if (size == SZ_64K) {
+		pteval |= ARM_VMSA_PTE_CONT;
+		num_ptes = ARM_VMSA_PTE_CONT_ENTRIES;
+	}
 
-		pmd += pmd_index(addr);
-	} else
-		pmd = pmd_offset(pud, addr);
+	for (i = num_ptes; i; --i)
+		*pte++ = pfn_pte(pfn++, __pgprot(pteval));
 
-	do {
-		next = pmd_addr_end(addr, end);
-		ret = ipmmu_alloc_init_pte(mmu, pmd, addr, end, phys, prot);
-		phys += next - addr;
-	} while (pmd++, addr = next, addr < end);
+	ipmmu_flush_pgtable(mmu, start, sizeof(*pte) * num_ptes);
 
-	return ret;
+	return 0;
 }
 
 static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
@@ -639,7 +618,8 @@ static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 	struct ipmmu_vmsa_device *mmu = domain->mmu;
 	pgd_t *pgd = domain->pgd;
 	unsigned long flags;
-	unsigned long end;
+	unsigned long pfn;
+	pmd_t *pmd;
 	int ret;
 
 	if (!pgd)
@@ -651,26 +631,25 @@ static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 	if (paddr & ~((1ULL << 40) - 1))
 		return -ERANGE;
 
-	spin_lock_irqsave(&domain->lock, flags);
-
+	pfn = __phys_to_pfn(paddr);
 	pgd += pgd_index(iova);
-	end = iova + size;
 
-	do {
-		unsigned long next = pgd_addr_end(iova, end);
+	/* Update the page tables. */
+	spin_lock_irqsave(&domain->lock, flags);
 
-		ret = ipmmu_alloc_init_pmd(mmu, (pud_t *)pgd, iova, next, paddr,
-					   prot);
-		if (ret)
-			break;
+	pmd = ipmmu_alloc_pmd(mmu, pgd, iova);
+	if (!pmd) {
+		ret = -ENOMEM;
+		goto done;
+	}
 
-		paddr += next - iova;
-		iova = next;
-	} while (pgd++, iova != end);
+	ret = ipmmu_alloc_init_pte(mmu, pmd, iova, pfn, size, prot);
 
+done:
 	spin_unlock_irqrestore(&domain->lock, flags);
 
-	ipmmu_tlb_invalidate(domain);
+	if (!ret)
+		ipmmu_tlb_invalidate(domain);
 
 	return ret;
 }
@@ -971,7 +950,7 @@ static struct iommu_ops ipmmu_ops = {
 	.iova_to_phys = ipmmu_iova_to_phys,
 	.add_device = ipmmu_add_device,
 	.remove_device = ipmmu_remove_device,
-	.pgsize_bitmap = SZ_2M | SZ_64K | SZ_4K,
+	.pgsize_bitmap = SZ_64K | SZ_4K,
 };
 
 /* -----------------------------------------------------------------------------
-- 
1.8.3.2


* [PATCH 6/9] iommu/ipmmu-vmsa: Rewrite page table management
@ 2014-04-21 14:13   ` Laurent Pinchart
  0 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

The IOMMU core will only call us with page sizes advertised as supported
by the driver. We can thus simplify the code by removing loops over PGD
and PMD entries.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 193 ++++++++++++++++++++-------------------------
 1 file changed, 86 insertions(+), 107 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 183d82d..de1a440 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -518,118 +518,97 @@ static void ipmmu_free_pgtables(struct ipmmu_vmsa_domain *domain)
  * functions as they would flush the CPU TLB.
  */
 
-static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
-				unsigned long addr, unsigned long end,
-				phys_addr_t phys, int prot)
+static pte_t *ipmmu_alloc_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
+			      unsigned long iova)
 {
-	unsigned long pfn = __phys_to_pfn(phys);
-	pteval_t pteval = ARM_VMSA_PTE_PAGE | ARM_VMSA_PTE_NS | ARM_VMSA_PTE_AF
-			| ARM_VMSA_PTE_XN;
-	pte_t *pte, *start;
+	pte_t *pte;
 
-	if (pmd_none(*pmd)) {
-		/* Allocate a new set of tables */
-		pte = (pte_t *)get_zeroed_page(GFP_ATOMIC);
-		if (!pte)
-			return -ENOMEM;
+	if (!pmd_none(*pmd))
+		return pte_offset_kernel(pmd, iova);
 
-		ipmmu_flush_pgtable(mmu, pte, PAGE_SIZE);
-		*pmd = __pmd(__pa(pte) | PMD_NSTABLE | PMD_TYPE_TABLE);
-		ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+	pte = (pte_t *)get_zeroed_page(GFP_ATOMIC);
+	if (!pte)
+		return NULL;
 
-		pte += pte_index(addr);
-	} else
-		pte = pte_offset_kernel(pmd, addr);
+	ipmmu_flush_pgtable(mmu, pte, PAGE_SIZE);
+	*pmd = __pmd(__pa(pte) | PMD_NSTABLE | PMD_TYPE_TABLE);
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
 
-	pteval |= ARM_VMSA_PTE_AP_UNPRIV | ARM_VMSA_PTE_nG;
-	if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
-		pteval |= ARM_VMSA_PTE_AP_RDONLY;
+	return pte + pte_index(iova);
+}
 
-	if (prot & IOMMU_CACHE)
-		pteval |= (IMMAIR_ATTR_IDX_WBRWA <<
-			   ARM_VMSA_PTE_ATTRINDX_SHIFT);
+static pmd_t *ipmmu_alloc_pmd(struct ipmmu_vmsa_device *mmu, pgd_t *pgd,
+			      unsigned long iova)
+{
+	pud_t *pud = (pud_t *)pgd;
+	pmd_t *pmd;
 
-	/* If no access, create a faulting entry to avoid TLB fills */
-	if (prot & IOMMU_EXEC)
-		pteval &= ~ARM_VMSA_PTE_XN;
-	else if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
-		pteval &= ~ARM_VMSA_PTE_PAGE;
+	if (!pud_none(*pud))
+		return pmd_offset(pud, iova);
 
-	pteval |= ARM_VMSA_PTE_SH_IS;
-	start = pte;
+	pmd = (pmd_t *)get_zeroed_page(GFP_ATOMIC);
+	if (!pmd)
+		return NULL;
 
-	/*
-	 * Install the page table entries.
-	 *
-	 * Set the contiguous hint in the PTEs where possible. The hint
-	 * indicates a series of ARM_VMSA_PTE_CONT_ENTRIES PTEs mapping a
-	 * physically contiguous region with the following constraints:
-	 *
-	 * - The region start is aligned to ARM_VMSA_PTE_CONT_SIZE
-	 * - Each PTE in the region has the contiguous hint bit set
-	 *
-	 * We don't support partial unmapping so there's no need to care about
-	 * clearing the contiguous hint from neighbour PTEs.
-	 */
-	do {
-		unsigned long chunk_end;
+	ipmmu_flush_pgtable(mmu, pmd, PAGE_SIZE);
+	*pud = __pud(__pa(pmd) | PMD_NSTABLE | PMD_TYPE_TABLE);
+	ipmmu_flush_pgtable(mmu, pud, sizeof(*pud));
 
-		/*
-		 * If the address is aligned to a contiguous region size and the
-		 * mapping size is large enough, process the largest possible
-		 * number of PTEs multiple of ARM_VMSA_PTE_CONT_ENTRIES.
-		 * Otherwise process the smallest number of PTEs to align the
-		 * address to a contiguous region size or to complete the
-		 * mapping.
-		 */
-		if (IS_ALIGNED(addr, ARM_VMSA_PTE_CONT_SIZE) &&
-		    end - addr >= ARM_VMSA_PTE_CONT_SIZE) {
-			chunk_end = round_down(end, ARM_VMSA_PTE_CONT_SIZE);
-			pteval |= ARM_VMSA_PTE_CONT;
-		} else {
-			chunk_end = min(ALIGN(addr, ARM_VMSA_PTE_CONT_SIZE),
-					end);
-			pteval &= ~ARM_VMSA_PTE_CONT;
-		}
+	return pmd + pmd_index(iova);
+}
 
-		do {
-			*pte++ = pfn_pte(pfn++, __pgprot(pteval));
-			addr += PAGE_SIZE;
-		} while (addr != chunk_end);
-	} while (addr != end);
+static u64 ipmmu_page_prot(unsigned int prot, u64 type)
+{
+	u64 pgprot = ARM_VMSA_PTE_XN | ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF
+		   | ARM_VMSA_PTE_SH_IS | ARM_VMSA_PTE_AP_UNPRIV
+		   | ARM_VMSA_PTE_NS | type;
 
-	ipmmu_flush_pgtable(mmu, start, sizeof(*pte) * (pte - start));
-	return 0;
+	if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
+		pgprot |= ARM_VMSA_PTE_AP_RDONLY;
+
+	if (prot & IOMMU_CACHE)
+		pgprot |= IMMAIR_ATTR_IDX_WBRWA << ARM_VMSA_PTE_ATTRINDX_SHIFT;
+
+	if (prot & IOMMU_EXEC)
+		pgprot &= ~ARM_VMSA_PTE_XN;
+	else if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
+		/* If no access create a faulting entry to avoid TLB fills. */
+		pgprot &= ~ARM_VMSA_PTE_PAGE;
+
+	return pgprot;
 }
 
-static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
-				unsigned long addr, unsigned long end,
-				phys_addr_t phys, int prot)
+static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
+				unsigned long iova, unsigned long pfn,
+				size_t size, int prot)
 {
-	unsigned long next;
-	pmd_t *pmd;
-	int ret;
+	pteval_t pteval = ipmmu_page_prot(prot, ARM_VMSA_PTE_PAGE);
+	unsigned int num_ptes = 1;
+	pte_t *pte, *start;
+	unsigned int i;
 
-	if (pud_none(*pud)) {
-		pmd = (pmd_t *)get_zeroed_page(GFP_ATOMIC);
-		if (!pmd)
-			return -ENOMEM;
+	pte = ipmmu_alloc_pte(mmu, pmd, iova);
+	if (!pte)
+		return -ENOMEM;
+
+	start = pte;
 
-		ipmmu_flush_pgtable(mmu, pmd, PAGE_SIZE);
-		*pud = __pud(__pa(pmd) | PMD_NSTABLE | PMD_TYPE_TABLE);
-		ipmmu_flush_pgtable(mmu, pud, sizeof(*pud));
+	/*
+	 * Install the page table entries. We can be called both for a single
+	 * page or for a block of 16 physically contiguous pages. In the latter
+	 * case set the PTE contiguous hint.
+	 */
+	if (size == SZ_64K) {
+		pteval |= ARM_VMSA_PTE_CONT;
+		num_ptes = ARM_VMSA_PTE_CONT_ENTRIES;
+	}
 
-		pmd += pmd_index(addr);
-	} else
-		pmd = pmd_offset(pud, addr);
+	for (i = num_ptes; i; --i)
+		*pte++ = pfn_pte(pfn++, __pgprot(pteval));
 
-	do {
-		next = pmd_addr_end(addr, end);
-		ret = ipmmu_alloc_init_pte(mmu, pmd, addr, end, phys, prot);
-		phys += next - addr;
-	} while (pmd++, addr = next, addr < end);
+	ipmmu_flush_pgtable(mmu, start, sizeof(*pte) * num_ptes);
 
-	return ret;
+	return 0;
 }
 
 static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
@@ -639,7 +618,8 @@ static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 	struct ipmmu_vmsa_device *mmu = domain->mmu;
 	pgd_t *pgd = domain->pgd;
 	unsigned long flags;
-	unsigned long end;
+	unsigned long pfn;
+	pmd_t *pmd;
 	int ret;
 
 	if (!pgd)
@@ -651,26 +631,25 @@ static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 	if (paddr & ~((1ULL << 40) - 1))
 		return -ERANGE;
 
-	spin_lock_irqsave(&domain->lock, flags);
-
+	pfn = __phys_to_pfn(paddr);
 	pgd += pgd_index(iova);
-	end = iova + size;
 
-	do {
-		unsigned long next = pgd_addr_end(iova, end);
+	/* Update the page tables. */
+	spin_lock_irqsave(&domain->lock, flags);
 
-		ret = ipmmu_alloc_init_pmd(mmu, (pud_t *)pgd, iova, next, paddr,
-					   prot);
-		if (ret)
-			break;
+	pmd = ipmmu_alloc_pmd(mmu, pgd, iova);
+	if (!pmd) {
+		ret = -ENOMEM;
+		goto done;
+	}
 
-		paddr += next - iova;
-		iova = next;
-	} while (pgd++, iova != end);
+	ret = ipmmu_alloc_init_pte(mmu, pmd, iova, pfn, size, prot);
 
+done:
 	spin_unlock_irqrestore(&domain->lock, flags);
 
-	ipmmu_tlb_invalidate(domain);
+	if (!ret)
+		ipmmu_tlb_invalidate(domain);
 
 	return ret;
 }
@@ -971,7 +950,7 @@ static struct iommu_ops ipmmu_ops = {
 	.iova_to_phys = ipmmu_iova_to_phys,
 	.add_device = ipmmu_add_device,
 	.remove_device = ipmmu_remove_device,
-	.pgsize_bitmap = SZ_2M | SZ_64K | SZ_4K,
+	.pgsize_bitmap = SZ_64K | SZ_4K,
 };
 
 /* -----------------------------------------------------------------------------
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 7/9] iommu/ipmmu-vmsa: Support 2MB mappings
  2014-04-21 14:13 ` Laurent Pinchart
  (?)
@ 2014-04-21 14:13   ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

Add support for 2MB block mappings at the PMD level.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index de1a440..0e4d145 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -480,7 +480,7 @@ static void ipmmu_free_pmds(pud_t *pud)
 	unsigned int i;
 
 	for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) {
-		if (pmd_none(*pmd))
+		if (!pmd_table(*pmd))
 			continue;
 
 		ipmmu_free_ptes(pmd);
@@ -611,6 +611,18 @@ static int ipmmu_alloc_init_pte(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
 	return 0;
 }
 
+static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
+				unsigned long iova, unsigned long pfn,
+				int prot)
+{
+	pmdval_t pmdval = ipmmu_page_prot(prot, PMD_TYPE_SECT);
+
+	*pmd = pfn_pmd(pfn, __pgprot(pmdval));
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+
+	return 0;
+}
+
 static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 				unsigned long iova, phys_addr_t paddr,
 				size_t size, int prot)
@@ -643,7 +655,18 @@ static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
 		goto done;
 	}
 
-	ret = ipmmu_alloc_init_pte(mmu, pmd, iova, pfn, size, prot);
+	switch (size) {
+	case SZ_2M:
+		ret = ipmmu_alloc_init_pmd(mmu, pmd, iova, pfn, prot);
+		break;
+	case SZ_64K:
+	case SZ_4K:
+		ret = ipmmu_alloc_init_pte(mmu, pmd, iova, pfn, size, prot);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
 
 done:
 	spin_unlock_irqrestore(&domain->lock, flags);
@@ -793,6 +816,9 @@ static phys_addr_t ipmmu_iova_to_phys(struct iommu_domain *io_domain,
 	if (pmd_none(pmd))
 		return 0;
 
+	if (pmd_sect(pmd))
+		return __pfn_to_phys(pmd_pfn(pmd)) | (iova & ~PMD_MASK);
+
 	pte = *(pmd_page_vaddr(pmd) + pte_index(iova));
 	if (pte_none(pte))
 		return 0;
@@ -950,7 +976,7 @@ static struct iommu_ops ipmmu_ops = {
 	.iova_to_phys = ipmmu_iova_to_phys,
 	.add_device = ipmmu_add_device,
 	.remove_device = ipmmu_remove_device,
-	.pgsize_bitmap = SZ_64K | SZ_4K,
+	.pgsize_bitmap = SZ_2M | SZ_64K | SZ_4K,
 };
 
 /* -----------------------------------------------------------------------------
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 8/9] iommu/ipmmu-vmsa: Remove stage 2 PTE bits definitions
  2014-04-21 14:13 ` Laurent Pinchart
  (?)
@ 2014-04-21 14:13   ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

We don't support stage 2 translation yet.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 0e4d145..711e5c4 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -202,14 +202,6 @@ static LIST_HEAD(ipmmu_devices);
 #define ARM_VMSA_PTE_ATTRINDX_SHIFT	2
 #define ARM_VMSA_PTE_nG			(((pteval_t)1) << 11)
 
-/* Stage-2 PTE */
-#define ARM_VMSA_PTE_HAP_FAULT		(((pteval_t)0) << 6)
-#define ARM_VMSA_PTE_HAP_READ		(((pteval_t)1) << 6)
-#define ARM_VMSA_PTE_HAP_WRITE		(((pteval_t)2) << 6)
-#define ARM_VMSA_PTE_MEMATTR_OIWB	(((pteval_t)0xf) << 2)
-#define ARM_VMSA_PTE_MEMATTR_NC		(((pteval_t)0x5) << 2)
-#define ARM_VMSA_PTE_MEMATTR_DEV	(((pteval_t)0x1) << 2)
-
 #define ARM_VMSA_PTE_CONT_ENTRIES	16
 #define ARM_VMSA_PTE_CONT_SIZE		(PAGE_SIZE * ARM_VMSA_PTE_CONT_ENTRIES)
 
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 9/9] iommu/ipmmu-vmsa: Support clearing mappings
  2014-04-21 14:13 ` Laurent Pinchart
  (?)
@ 2014-04-21 14:13   ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 190 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 186 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 711e5c4..de6461d 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -193,14 +193,22 @@ static LIST_HEAD(ipmmu_devices);
 #define ARM_VMSA_PTE_SH_NS		(((pteval_t)0) << 8)
 #define ARM_VMSA_PTE_SH_OS		(((pteval_t)2) << 8)
 #define ARM_VMSA_PTE_SH_IS		(((pteval_t)3) << 8)
+#define ARM_VMSA_PTE_SH_MASK		(((pteval_t)3) << 8)
 #define ARM_VMSA_PTE_NS			(((pteval_t)1) << 5)
 #define ARM_VMSA_PTE_PAGE		(((pteval_t)3) << 0)
 
 /* Stage-1 PTE */
+#define ARM_VMSA_PTE_nG			(((pteval_t)1) << 11)
 #define ARM_VMSA_PTE_AP_UNPRIV		(((pteval_t)1) << 6)
 #define ARM_VMSA_PTE_AP_RDONLY		(((pteval_t)2) << 6)
+#define ARM_VMSA_PTE_AP_MASK		(((pteval_t)3) << 6)
+#define ARM_VMSA_PTE_ATTRINDX_MASK	(((pteval_t)3) << 2)
 #define ARM_VMSA_PTE_ATTRINDX_SHIFT	2
-#define ARM_VMSA_PTE_nG			(((pteval_t)1) << 11)
+
+#define ARM_VMSA_PTE_ATTRS_MASK \
+	(ARM_VMSA_PTE_XN | ARM_VMSA_PTE_CONT | ARM_VMSA_PTE_nG | \
+	 ARM_VMSA_PTE_AF | ARM_VMSA_PTE_SH_MASK | ARM_VMSA_PTE_AP_MASK | \
+	 ARM_VMSA_PTE_NS | ARM_VMSA_PTE_ATTRINDX_MASK)
 
 #define ARM_VMSA_PTE_CONT_ENTRIES	16
 #define ARM_VMSA_PTE_CONT_SIZE		(PAGE_SIZE * ARM_VMSA_PTE_CONT_ENTRIES)
@@ -615,7 +623,7 @@ static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
 	return 0;
 }
 
-static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
+static int ipmmu_create_mapping(struct ipmmu_vmsa_domain *domain,
 				unsigned long iova, phys_addr_t paddr,
 				size_t size, int prot)
 {
@@ -669,6 +677,180 @@ done:
 	return ret;
 }
 
+static void ipmmu_clear_pud(struct ipmmu_vmsa_device *mmu, pud_t *pud)
+{
+	/* Free the page table. */
+	pgtable_t table = pud_pgtable(*pud);
+	__free_page(table);
+
+	/* Clear the PUD. */
+	*pud = __pud(0);
+	ipmmu_flush_pgtable(mmu, pud, sizeof(*pud));
+}
+
+static void ipmmu_clear_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
+			    pmd_t *pmd)
+{
+	unsigned int i;
+
+	/* Free the page table. */
+	if (pmd_table(*pmd)) {
+		pgtable_t table = pmd_pgtable(*pmd);
+		__free_page(table);
+	}
+
+	/* Clear the PMD. */
+	*pmd = __pmd(0);
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+
+	/* Check whether the PUD is still needed. */
+	pmd = pmd_offset(pud, 0);
+	for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) {
+		if (!pmd_none(pmd[i]))
+			return;
+	}
+
+	/* Clear the parent PUD. */
+	ipmmu_clear_pud(mmu, pud);
+}
+
+static void ipmmu_clear_pte(struct ipmmu_vmsa_device *mmu, pud_t *pud,
+			    pmd_t *pmd, pte_t *pte, unsigned int num_ptes)
+{
+	unsigned int i;
+
+	/* Clear the PTE. */
+	for (i = num_ptes; i; --i)
+		pte[i-1] = __pte(0);
+
+	ipmmu_flush_pgtable(mmu, pte, sizeof(*pte) * num_ptes);
+
+	/* Check whether the PMD is still needed. */
+	pte = pte_offset_kernel(pmd, 0);
+	for (i = 0; i < IPMMU_PTRS_PER_PTE; ++i) {
+		if (!pte_none(pte[i]))
+			return;
+	}
+
+	/* Clear the parent PMD. */
+	ipmmu_clear_pmd(mmu, pud, pmd);
+}
+
+static int ipmmu_split_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd)
+{
+	pte_t *pte, *start;
+	pteval_t pteval;
+	unsigned long pfn;
+	unsigned int i;
+
+	pte = (pte_t *)get_zeroed_page(GFP_ATOMIC);
+	if (!pte)
+		return -ENOMEM;
+
+	/* Copy the PMD attributes. */
+	pteval = (pmd_val(*pmd) & ARM_VMSA_PTE_ATTRS_MASK)
+	       | ARM_VMSA_PTE_CONT | ARM_VMSA_PTE_PAGE;
+
+	pfn = pmd_pfn(*pmd);
+	start = pte;
+
+	for (i = IPMMU_PTRS_PER_PTE; i; --i)
+		*pte++ = pfn_pte(pfn++, __pgprot(pteval));
+
+	ipmmu_flush_pgtable(mmu, start, PAGE_SIZE);
+	*pmd = __pmd(__pa(start) | PMD_NSTABLE | PMD_TYPE_TABLE);
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+
+	return 0;
+}
+
+static void ipmmu_split_pte(struct ipmmu_vmsa_device *mmu, pte_t *pte)
+{
+	unsigned int i;
+
+	for (i = ARM_VMSA_PTE_CONT_ENTRIES; i; --i)
+		pte[i-1] = __pte(pte_val(*pte) & ~ARM_VMSA_PTE_CONT);
+
+	ipmmu_flush_pgtable(mmu, pte, sizeof(*pte) * ARM_VMSA_PTE_CONT_ENTRIES);
+}
+
+static int ipmmu_clear_mapping(struct ipmmu_vmsa_domain *domain,
+			       unsigned long iova, size_t size)
+{
+	struct ipmmu_vmsa_device *mmu = domain->mmu;
+	unsigned long flags;
+	pgd_t *pgd = domain->pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret = 0;
+
+	if (!pgd)
+		return -EINVAL;
+
+	if (size & ~PAGE_MASK)
+		return -EINVAL;
+
+	pgd += pgd_index(iova);
+	pud = (pud_t *)pgd;
+
+	spin_lock_irqsave(&domain->lock, flags);
+
+	/* If there's no PUD or PMD we're done. */
+	if (pud_none(*pud))
+		goto done;
+
+	pmd = pmd_offset(pud, iova);
+	if (pmd_none(*pmd))
+		goto done;
+
+	/*
+	 * When freeing a 2MB block just clear the PMD. In the unlikely case the
+	 * block is mapped as individual pages this will free the corresponding
+	 * PTE page table.
+	 */
+	if (size == SZ_2M) {
+		ipmmu_clear_pmd(mmu, pud, pmd);
+		goto done;
+	}
+
+	/*
+	 * If the PMD has been mapped as a section remap it as pages to allow
+	 * freeing individual pages.
+	 */
+	if (pmd_sect(*pmd))
+		ipmmu_split_pmd(mmu, pmd);
+
+	pte = pte_offset_kernel(pmd, iova);
+
+	/*
+	 * When freeing a 64kB block just clear the PTE entries. We don't have
+	 * to care about the contiguous hint of the surrounding entries.
+	 */
+	if (size == SZ_64K) {
+		ipmmu_clear_pte(mmu, pud, pmd, pte, ARM_VMSA_PTE_CONT_ENTRIES);
+		goto done;
+	}
+
+	/*
+	 * If the PTE has been mapped with the contiguous hint set remap it and
+	 * its surrounding PTEs to allow unmapping a single page.
+	 */
+	if (pte_val(*pte) & ARM_VMSA_PTE_CONT)
+		ipmmu_split_pte(mmu, pte);
+
+	/* Clear the PTE. */
+	ipmmu_clear_pte(mmu, pud, pmd, pte, 1);
+
+done:
+	spin_unlock_irqrestore(&domain->lock, flags);
+
+	if (ret)
+		ipmmu_tlb_invalidate(domain);
+
+	return 0;
+}
+
 /* -----------------------------------------------------------------------------
  * IOMMU Operations
  */
@@ -769,7 +951,7 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
 	if (!domain)
 		return -ENODEV;
 
-	return ipmmu_handle_mapping(domain, iova, paddr, size, prot);
+	return ipmmu_create_mapping(domain, iova, paddr, size, prot);
 }
 
 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
@@ -778,7 +960,7 @@ static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
 	struct ipmmu_vmsa_domain *domain = io_domain->priv;
 	int ret;
 
-	ret = ipmmu_handle_mapping(domain, iova, 0, size, 0);
+	ret = ipmmu_clear_mapping(domain, iova, size);
 	return ret ? 0 : size;
 }
 
-- 
1.8.3.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread

+
+done:
+	spin_unlock_irqrestore(&domain->lock, flags);
+
+	/* Entries may have been cleared on any path, always invalidate. */
+	ipmmu_tlb_invalidate(domain);
+
+	return ret;
+}
+
 /* -----------------------------------------------------------------------------
  * IOMMU Operations
  */
@@ -769,7 +951,7 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
 	if (!domain)
 		return -ENODEV;
 
-	return ipmmu_handle_mapping(domain, iova, paddr, size, prot);
+	return ipmmu_create_mapping(domain, iova, paddr, size, prot);
 }
 
 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
@@ -778,7 +960,7 @@ static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
 	struct ipmmu_vmsa_domain *domain = io_domain->priv;
 	int ret;
 
-	ret = ipmmu_handle_mapping(domain, iova, 0, size, 0);
+	ret = ipmmu_clear_mapping(domain, iova, size);
 	return ret ? 0 : size;
 }
 
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 3/9] iommu/ipmmu-vmsa: Define driver-specific page directory sizes
@ 2014-04-21 19:05     ` Sergei Shtylyov
  0 siblings, 0 replies; 42+ messages in thread
From: Sergei Shtylyov @ 2014-04-21 19:05 UTC (permalink / raw)
  To: linux-arm-kernel

Hello.

On 04/21/2014 06:13 PM, Laurent Pinchart wrote:

> The PTRS_PER_(PGD|PMD|PTE) macros evaluate to different values depending
> on whether LPAE is enabled. The IPMMU driver uses a long descriptor
> format regardless of LPAE, making those macros mismatch the IPMMU
> configuration on non-LPAE systems.

> Replace the macros by driver-specific versions that always evaluate to
> the right value.

> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> ---
>   drivers/iommu/ipmmu-vmsa.c | 14 +++++++++-----
>   1 file changed, 9 insertions(+), 5 deletions(-)

> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
> index 1ae97d8..7c8c21e 100644
> --- a/drivers/iommu/ipmmu-vmsa.c
> +++ b/drivers/iommu/ipmmu-vmsa.c
> @@ -210,6 +210,10 @@ static LIST_HEAD(ipmmu_devices);
>   #define ARM_VMSA_PTE_MEMATTR_NC		(((pteval_t)0x5) << 2)
>   #define ARM_VMSA_PTE_MEMATTR_DEV	(((pteval_t)0x1) << 2)
>
> +#define IPMMU_PTRS_PER_PTE		512
> +#define IPMMU_PTRS_PER_PMD		512
> +#define IPMMU_PTRS_PER_PGD		4
> +
[...]
> @@ -487,7 +491,7 @@ static void ipmmu_free_puds(pgd_t *pgd)
>   	unsigned int i;
>
>   	pud = pud_base;
> -	for (i = 0; i < PTRS_PER_PUD; ++i) {
> +	for (i = 0; i < IPMMU_PTRS_PER_PUD; ++i) {

    I don't see where you #define IPMMU_PTRS_PER_PUD...

[...]

WBR, Sergei


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 3/9] iommu/ipmmu-vmsa: Define driver-specific page directory sizes
  2014-04-21 19:05     ` Sergei Shtylyov
  (?)
@ 2014-04-21 20:52       ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-21 20:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Sergei,

Thank you for the review.

On Monday 21 April 2014 23:05:00 Sergei Shtylyov wrote:
> On 04/21/2014 06:13 PM, Laurent Pinchart wrote:
> > The PTRS_PER_(PGD|PMD|PTE) macros evaluate to different values depending
> > on whether LPAE is enabled. The IPMMU driver uses a long descriptor
> > format regardless of LPAE, making those macros mismatch the IPMMU
> > configuration on non-LPAE systems.
> > 
> > Replace the macros by driver-specific versions that always evaluate to
> > the right value.
> > 
> > Signed-off-by: Laurent Pinchart
> > <laurent.pinchart+renesas@ideasonboard.com>
> > ---
> > 
> >   drivers/iommu/ipmmu-vmsa.c | 14 +++++++++-----
> >   1 file changed, 9 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
> > index 1ae97d8..7c8c21e 100644
> > --- a/drivers/iommu/ipmmu-vmsa.c
> > +++ b/drivers/iommu/ipmmu-vmsa.c
> > @@ -210,6 +210,10 @@ static LIST_HEAD(ipmmu_devices);
> > 
> >   #define ARM_VMSA_PTE_MEMATTR_NC		(((pteval_t)0x5) << 2)
> >   #define ARM_VMSA_PTE_MEMATTR_DEV	(((pteval_t)0x1) << 2)
> > 
> > +#define IPMMU_PTRS_PER_PTE		512
> > +#define IPMMU_PTRS_PER_PMD		512
> > +#define IPMMU_PTRS_PER_PGD		4
> > +
> 
> [...]
> 
> > @@ -487,7 +491,7 @@ static void ipmmu_free_puds(pgd_t *pgd)
> > 
> >   	unsigned int i;
> >   	
> >   	pud = pud_base;
> > 
> > -	for (i = 0; i < PTRS_PER_PUD; ++i) {
> > +	for (i = 0; i < IPMMU_PTRS_PER_PUD; ++i) {
> 
> I don't see where you #define IPMMU_PTRS_PER_PUD...

Come on, it's easy. I don't define it anywhere :-)

The line is changed in a further patch, it seems I've forgotten to compile-
test all intermediate versions. Thank you for catching the problem, I'll fix 
that in v2 and will make sure to properly test all patches individually.

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 0/9] Renesas ipmmu-vmsa: Miscellaneous cleanups and fixes
  2014-04-21 14:13 ` Laurent Pinchart
  (?)
@ 2014-04-22 11:34   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2014-04-22 11:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Apr 21, 2014 at 03:13:00PM +0100, Laurent Pinchart wrote:
> Hello,

Hi Laurent,

> This patch set cleans up and fixes small issues in the ipmmu-vmsa driver. The
> patches are based on top of "[PATCH v3] iommu: Add driver for Renesas
> VMSA-compatible IPMMU" that adds the ipmmu-vmsa driver.
> 
> The most interesting part of this series is the rewrite of the page table
> management code. The IOMMU core guarantees that the map and unmap operations
> will always be called only with page sizes advertised by the driver. We can
> use that assumption to remove loops of PGD and PMD entries, simplifying the
> code.

Hmm, interesting. We still have to handle the case where a mapping created
with one page-size could be unmapped with another though (in particular,
unmapping part of the range).

> Will, would it make sense to perform the same cleanup for the arm-smmu driver,
> or is there a reason to keep loops over PGD and PMD entries ? Removing them
> makes the implementation of 64kB and 2MB pages easier.

Is this an assumption that's relied on by other IOMMU drivers? It
certainly makes mapping of large ranges less efficient than it could be, so
I'm more inclined to set all the bits > PAGE_SIZE in pgsize_bitmap if it's
only used to determine the granularity at which map/unmap are called (which
is unrelated to what the hardware can actually do).

Will

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 0/9] Renesas ipmmu-vmsa: Miscellaneous cleanups and fixes
  2014-04-22 11:34   ` Will Deacon
  (?)
@ 2014-04-22 11:44     ` Laurent Pinchart
  -1 siblings, 0 replies; 42+ messages in thread
From: Laurent Pinchart @ 2014-04-22 11:44 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Will,

On Tuesday 22 April 2014 12:34:23 Will Deacon wrote:
> On Mon, Apr 21, 2014 at 03:13:00PM +0100, Laurent Pinchart wrote:
> > Hello,
> 
> Hi Laurent,
> 
> > This patch set cleans up and fixes small issues in the ipmmu-vmsa driver.
> > The patches are based on top of "[PATCH v3] iommu: Add driver for Renesas
> > VMSA-compatible IPMMU" that adds the ipmmu-vmsa driver.
> > 
> > The most interesting part of this series is the rewrite of the page table
> > management code. The IOMMU core guarantees that the map and unmap
> > operations will always be called only with page sizes advertised by the
> > driver. We can use that assumption to remove loops of PGD and PMD
> > entries, simplifying the code.
> 
> Hmm, interesting. We still have to handle the case where a mapping created
> with one page-size could be unmapped with another though (in particular,
> unmapping part of the range).

Correct. I've implemented that in patch 9/9. Note that the patch also frees 
pages used for page directory entries when they're not needed anymore, instead 
of just marking them as invalid. That's something you probably should do in 
the arm-smmu driver as well.

> > Will, would it make sense to perform the same cleanup for the arm-smmu
> > driver, or is there a reason to keep loops over PGD and PMD entries ?
> > Removing them makes the implementation of 64kB and 2MB pages easier.
> 
> Is this an assumption that's relied on by other IOMMU drivers? It certainly
> makes mapping of large ranges less efficient than it could be, so I'm more
> inclined to set all the bits > PAGE_SIZE in pgsize_bitmap if it's only used
> to determine the granularity at which map/unmap are called (which is
> unrelated to what the hardware can actually do).

I haven't checked all the other IOMMU drivers, but at least the OMAP IOMMU 
driver relies on the same assumption. Splitting map/unmap operations in page 
size chunks inside the IOMMU core might indeed have a negative performance 
impact due to locking, but I'm not sure it would be noticeable.

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 42+ messages in thread
