* [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes
@ 2017-03-21 18:04 Punit Agrawal
  2017-03-21 18:04 ` [PATCH 1/7] arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present Punit Agrawal
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

[ Posting this series on behalf of Steve ]

It has become apparent that one has to take special care when
modifying attributes of memory mappings that employ the contiguous
bit.

Both the requirement and the architecturally correct
"Break-Before-Make" technique for updating contiguous entries are
described in: ARM DDI 0487A.k_iss10775, "Misprogramming of the
Contiguous bit", page D4-1762.

The huge pte accessors currently replace the attributes of contiguous
pte entries in place and can thus, on certain platforms, lead to TLB
conflict aborts or even to erroneous results being returned from TLB
lookups.

This patch series adds the required break-before-make logic to the
contiguous bit hugetlb accessors.

In order to simplify the break-before-make logic (and the patch
review), we first simplify the existing code.

During the course of this simplification, a bug was found whereby
other accessors were making spurious calls to huge_ptep_offset.
Also, a pre-emptive WARN_ON is added to set_huge_pte_at in case it is
unexpectedly supplied with swap/migration entries (from
MEMORY_FAILURE).

Steve Capper (7):
  arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present
  arm64: hugetlb: Cleanup setup_hugepagesz
  arm64: hugetlb: Refactor find_num_contig
  arm64: hugetlb: Introduce pte_pgprot helper
  arm64: hugetlb: Remove spurious calls to huge_ptep_offset
  arm64: hugetlb: Spring clean huge pte accessors
  arm64: hugetlb: Add break-before-make logic for contiguous entries

 arch/arm64/mm/hugetlbpage.c | 228 +++++++++++++++++++++++++-------------------
 1 file changed, 130 insertions(+), 98 deletions(-)

-- 
2.11.0

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/7] arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-21 18:04 ` [PATCH 2/7] arm64: hugetlb: Cleanup setup_hugepagesz Punit Agrawal
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

This patch adds a WARN_ON to set_huge_pte_at, as the accessor assumes
that the entries being written are all present. (There are separate
accessors to clear huge ptes.)

We will need to handle the !pte_present case when memory offlining
is used on hugetlb pages: swap and migration entries will be supplied
to set_huge_pte_at in that case.

Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index e25584d72396..a686f8705ef3 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -69,6 +69,12 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	unsigned long pfn;
 	pgprot_t hugeprot;
 
+	/*
+	 * Code needs to be expanded to handle huge swap and migration
+	 * entries. Needed for HUGETLB and MEMORY_FAILURE.
+	 */
+	WARN_ON(!pte_present(pte));
+
 	if (ncontig == 1) {
 		set_pte_at(mm, addr, ptep, pte);
 		return;
-- 
2.11.0


* [PATCH 2/7] arm64: hugetlb: Cleanup setup_hugepagesz
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
  2017-03-21 18:04 ` [PATCH 1/7] arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-22 14:01   ` Mark Rutland
  2017-03-21 18:04 ` [PATCH 3/7] arm64: hugetlb: Refactor find_num_contig Punit Agrawal
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

Replace a lot of if statements with switch and case labels to make it
much clearer which huge page sizes are supported.

Also, we prevent PUD_SIZE from being used on systems not running with
4KB PAGE_SIZE. Previously, if one supplied PUD_SIZE in those
circumstances, an unusable huge page size would be put into use.

Fixes: 084bd29810a5 ("ARM64: mm: HugeTLB support.")
Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index a686f8705ef3..aacbe18e6640 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -296,20 +296,20 @@ static __init int setup_hugepagesz(char *opt)
 {
 	unsigned long ps = memparse(opt, &opt);
 
-	if (ps == PMD_SIZE) {
-		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
-	} else if (ps == PUD_SIZE) {
-		hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
-	} else if (ps == (PAGE_SIZE * CONT_PTES)) {
-		hugetlb_add_hstate(CONT_PTE_SHIFT);
-	} else if (ps == (PMD_SIZE * CONT_PMDS)) {
-		hugetlb_add_hstate((PMD_SHIFT + CONT_PMD_SHIFT) - PAGE_SHIFT);
-	} else {
-		hugetlb_bad_size();
-		pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
-		return 0;
+	switch (ps) {
+#ifdef CONFIG_ARM64_4K_PAGES
+	case PUD_SIZE:
+#endif
+	case PMD_SIZE * CONT_PMDS:
+	case PMD_SIZE:
+	case PAGE_SIZE * CONT_PTES:
+		hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
+		return 1;
 	}
-	return 1;
+
+	hugetlb_bad_size();
+	pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
+	return 0;
 }
 __setup("hugepagesz=", setup_hugepagesz);
 
-- 
2.11.0


* [PATCH 3/7] arm64: hugetlb: Refactor find_num_contig
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
  2017-03-21 18:04 ` [PATCH 1/7] arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present Punit Agrawal
  2017-03-21 18:04 ` [PATCH 2/7] arm64: hugetlb: Cleanup setup_hugepagesz Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-22 14:14   ` Mark Rutland
  2017-03-21 18:04 ` [PATCH 4/7] arm64: hugetlb: Introduce pte_pgprot helper Punit Agrawal
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

As the huge accessors already check for contiguous ptes, remove
this redundant check from find_num_contig.

Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index aacbe18e6640..81bc6e4cf714 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -42,15 +42,13 @@ int pud_huge(pud_t pud)
 }
 
 static int find_num_contig(struct mm_struct *mm, unsigned long addr,
-			   pte_t *ptep, pte_t pte, size_t *pgsize)
+			   pte_t *ptep, size_t *pgsize)
 {
 	pgd_t *pgd = pgd_offset(mm, addr);
 	pud_t *pud;
 	pmd_t *pmd;
 
 	*pgsize = PAGE_SIZE;
-	if (!pte_cont(pte))
-		return 1;
 	pud = pud_offset(pgd, addr);
 	pmd = pmd_offset(pud, addr);
 	if ((pte_t *)pmd == ptep) {
@@ -65,7 +63,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 {
 	size_t pgsize;
 	int i;
-	int ncontig = find_num_contig(mm, addr, ptep, pte, &pgsize);
+	int ncontig;
 	unsigned long pfn;
 	pgprot_t hugeprot;
 
@@ -75,11 +73,12 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	 */
 	WARN_ON(!pte_present(pte));
 
-	if (ncontig == 1) {
+	if (!pte_cont(pte)) {
 		set_pte_at(mm, addr, ptep, pte);
 		return;
 	}
 
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	pfn = pte_pfn(pte);
 	hugeprot = __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
 	for (i = 0; i < ncontig; i++) {
@@ -203,7 +202,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 		bool is_dirty = false;
 
 		cpte = huge_pte_offset(mm, addr);
-		ncontig = find_num_contig(mm, addr, cpte, *cpte, &pgsize);
+		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
 		/* save the 1st pte to return */
 		pte = ptep_get_and_clear(mm, addr, cpte);
 		for (i = 1, addr += pgsize; i < ncontig; ++i, addr += pgsize) {
@@ -243,7 +242,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 		cpte = huge_pte_offset(vma->vm_mm, addr);
 		pfn = pte_pfn(*cpte);
 		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
-					  *cpte, &pgsize);
+					  &pgsize);
 		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize) {
 			changed |= ptep_set_access_flags(vma, addr, cpte,
 							pfn_pte(pfn,
@@ -266,7 +265,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 		size_t pgsize = 0;
 
 		cpte = huge_pte_offset(mm, addr);
-		ncontig = find_num_contig(mm, addr, cpte, *cpte, &pgsize);
+		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
 		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
 			ptep_set_wrprotect(mm, addr, cpte);
 	} else {
@@ -284,7 +283,7 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
 
 		cpte = huge_pte_offset(vma->vm_mm, addr);
 		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
-					  *cpte, &pgsize);
+					  &pgsize);
 		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
 			ptep_clear_flush(vma, addr, cpte);
 	} else {
-- 
2.11.0


* [PATCH 4/7] arm64: hugetlb: Introduce pte_pgprot helper
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
                   ` (2 preceding siblings ...)
  2017-03-21 18:04 ` [PATCH 3/7] arm64: hugetlb: Refactor find_num_contig Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-22 14:16   ` Mark Rutland
  2017-03-21 18:04 ` [PATCH 5/7] arm64: hugetlb: Remove spurious calls to huge_ptep_offset Punit Agrawal
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

Rather than open-coding an xor of pte bits in various places, add a
helper function that selects all pte bits other than the pfn.

Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 81bc6e4cf714..44d637eb32a7 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -41,6 +41,16 @@ int pud_huge(pud_t pud)
 #endif
 }
 
+/*
+ * Select all bits except the pfn
+ */
+static inline pgprot_t pte_pgprot(pte_t pte)
+{
+	unsigned long pfn = pte_pfn(pte);
+
+	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
+}
+
 static int find_num_contig(struct mm_struct *mm, unsigned long addr,
 			   pte_t *ptep, size_t *pgsize)
 {
@@ -80,7 +90,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	pfn = pte_pfn(pte);
-	hugeprot = __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
+	hugeprot = pte_pgprot(pte);
 	for (i = 0; i < ncontig; i++) {
 		pr_debug("%s: set pte %p to 0x%llx\n", __func__, ptep,
 			 pte_val(pfn_pte(pfn, hugeprot)));
@@ -235,9 +245,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 		size_t pgsize = 0;
 		unsigned long pfn = pte_pfn(pte);
 		/* Select all bits except the pfn */
-		pgprot_t hugeprot =
-			__pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^
-				 pte_val(pte));
+		pgprot_t hugeprot = pte_pgprot(pte);
 
 		cpte = huge_pte_offset(vma->vm_mm, addr);
 		pfn = pte_pfn(*cpte);
-- 
2.11.0


* [PATCH 5/7] arm64: hugetlb: Remove spurious calls to huge_ptep_offset
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
                   ` (3 preceding siblings ...)
  2017-03-21 18:04 ` [PATCH 4/7] arm64: hugetlb: Introduce pte_pgprot helper Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-22 14:18   ` Mark Rutland
  2017-03-21 18:04 ` [PATCH 6/7] arm64: hugetlb: Spring clean huge pte accessors Punit Agrawal
  2017-03-21 18:04 ` [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries Punit Agrawal
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

We don't need to call huge_ptep_offset as our accessors are already
supplied with the pte_t *. This patch removes those spurious calls.

Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 37 ++++++++++++++-----------------------
 1 file changed, 14 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 44d637eb32a7..c868c80a9d6b 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -208,21 +208,19 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 	if (pte_cont(*ptep)) {
 		int ncontig, i;
 		size_t pgsize;
-		pte_t *cpte;
 		bool is_dirty = false;
 
-		cpte = huge_pte_offset(mm, addr);
-		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
+		ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 		/* save the 1st pte to return */
-		pte = ptep_get_and_clear(mm, addr, cpte);
+		pte = ptep_get_and_clear(mm, addr, ptep);
 		for (i = 1, addr += pgsize; i < ncontig; ++i, addr += pgsize) {
 			/*
 			 * If HW_AFDBM is enabled, then the HW could
 			 * turn on the dirty bit for any of the page
 			 * in the set, so check them all.
 			 */
-			++cpte;
-			if (pte_dirty(ptep_get_and_clear(mm, addr, cpte)))
+			++ptep;
+			if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
 				is_dirty = true;
 		}
 		if (is_dirty)
@@ -238,8 +236,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty)
 {
-	pte_t *cpte;
-
 	if (pte_cont(pte)) {
 		int ncontig, i, changed = 0;
 		size_t pgsize = 0;
@@ -247,12 +243,11 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 		/* Select all bits except the pfn */
 		pgprot_t hugeprot = pte_pgprot(pte);
 
-		cpte = huge_pte_offset(vma->vm_mm, addr);
-		pfn = pte_pfn(*cpte);
-		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
+		pfn = pte_pfn(pte);
+		ncontig = find_num_contig(vma->vm_mm, addr, ptep,
 					  &pgsize);
-		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize) {
-			changed |= ptep_set_access_flags(vma, addr, cpte,
+		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize) {
+			changed |= ptep_set_access_flags(vma, addr, ptep,
 							pfn_pte(pfn,
 								hugeprot),
 							dirty);
@@ -269,13 +264,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 {
 	if (pte_cont(*ptep)) {
 		int ncontig, i;
-		pte_t *cpte;
 		size_t pgsize = 0;
 
-		cpte = huge_pte_offset(mm, addr);
-		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
-		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
-			ptep_set_wrprotect(mm, addr, cpte);
+		ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize)
+			ptep_set_wrprotect(mm, addr, ptep);
 	} else {
 		ptep_set_wrprotect(mm, addr, ptep);
 	}
@@ -286,14 +279,12 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
 {
 	if (pte_cont(*ptep)) {
 		int ncontig, i;
-		pte_t *cpte;
 		size_t pgsize = 0;
 
-		cpte = huge_pte_offset(vma->vm_mm, addr);
-		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
+		ncontig = find_num_contig(vma->vm_mm, addr, ptep,
 					  &pgsize);
-		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
-			ptep_clear_flush(vma, addr, cpte);
+		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize)
+			ptep_clear_flush(vma, addr, ptep);
 	} else {
 		ptep_clear_flush(vma, addr, ptep);
 	}
-- 
2.11.0


* [PATCH 6/7] arm64: hugetlb: Spring clean huge pte accessors
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
                   ` (4 preceding siblings ...)
  2017-03-21 18:04 ` [PATCH 5/7] arm64: hugetlb: Remove spurious calls to huge_ptep_offset Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-22 14:24   ` Mark Rutland
  2017-03-21 18:04 ` [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries Punit Agrawal
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

This patch restructures the huge pte accessors without affecting
their functionality. Control flow is changed to reduce indentation,
and greater use is made of the for loops' post-iteration expressions
to update loop variables.

It is then much easier to add break-before-make semantics in a subsequent
patch.

Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 121 ++++++++++++++++++++------------------------
 1 file changed, 55 insertions(+), 66 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index c868c80a9d6b..ef85d0656039 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -74,7 +74,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	size_t pgsize;
 	int i;
 	int ncontig;
-	unsigned long pfn;
+	unsigned long pfn, dpfn;
 	pgprot_t hugeprot;
 
 	/*
@@ -90,14 +90,13 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	pfn = pte_pfn(pte);
+	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
-	for (i = 0; i < ncontig; i++) {
+
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
 		pr_debug("%s: set pte %p to 0x%llx\n", __func__, ptep,
 			 pte_val(pfn_pte(pfn, hugeprot)));
 		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
-		ptep++;
-		pfn += pgsize >> PAGE_SHIFT;
-		addr += pgsize;
 	}
 }
 
@@ -203,91 +202,81 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr, pte_t *ptep)
 {
-	pte_t pte;
-
-	if (pte_cont(*ptep)) {
-		int ncontig, i;
-		size_t pgsize;
-		bool is_dirty = false;
-
-		ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-		/* save the 1st pte to return */
-		pte = ptep_get_and_clear(mm, addr, ptep);
-		for (i = 1, addr += pgsize; i < ncontig; ++i, addr += pgsize) {
-			/*
-			 * If HW_AFDBM is enabled, then the HW could
-			 * turn on the dirty bit for any of the page
-			 * in the set, so check them all.
-			 */
-			++ptep;
-			if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
-				is_dirty = true;
-		}
-		if (is_dirty)
-			return pte_mkdirty(pte);
-		else
-			return pte;
-	} else {
+	int ncontig, i;
+	size_t pgsize;
+	pte_t orig_pte = huge_ptep_get(ptep);
+
+	if (!pte_cont(orig_pte))
 		return ptep_get_and_clear(mm, addr, ptep);
+
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
+		/*
+		 * If HW_AFDBM is enabled, then the HW could
+		 * turn on the dirty bit for any of the page
+		 * in the set, so check them all.
+		 */
+		if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
+			orig_pte = pte_mkdirty(orig_pte);
 	}
+
+	return orig_pte;
 }
 
 int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty)
 {
-	if (pte_cont(pte)) {
-		int ncontig, i, changed = 0;
-		size_t pgsize = 0;
-		unsigned long pfn = pte_pfn(pte);
-		/* Select all bits except the pfn */
-		pgprot_t hugeprot = pte_pgprot(pte);
-
-		pfn = pte_pfn(pte);
-		ncontig = find_num_contig(vma->vm_mm, addr, ptep,
-					  &pgsize);
-		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize) {
-			changed |= ptep_set_access_flags(vma, addr, ptep,
-							pfn_pte(pfn,
-								hugeprot),
-							dirty);
-			pfn += pgsize >> PAGE_SHIFT;
-		}
-		return changed;
-	} else {
+	int ncontig, i, changed = 0;
+	size_t pgsize = 0;
+	unsigned long pfn = pte_pfn(pte), dpfn;
+	pgprot_t hugeprot;
+
+	if (!pte_cont(pte))
 		return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
+
+	ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
+	dpfn = pgsize >> PAGE_SHIFT;
+	hugeprot = pte_pgprot(pte);
+
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
+		changed |= ptep_set_access_flags(vma, addr, ptep,
+				pfn_pte(pfn, hugeprot), dirty);
 	}
+
+	return changed;
 }
 
 void huge_ptep_set_wrprotect(struct mm_struct *mm,
 			     unsigned long addr, pte_t *ptep)
 {
-	if (pte_cont(*ptep)) {
-		int ncontig, i;
-		size_t pgsize = 0;
-
-		ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize)
-			ptep_set_wrprotect(mm, addr, ptep);
-	} else {
+	int ncontig, i;
+	size_t pgsize;
+
+	if (!pte_cont(*ptep)) {
 		ptep_set_wrprotect(mm, addr, ptep);
+		return;
 	}
+
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
+		ptep_set_wrprotect(mm, addr, ptep);
 }
 
 void huge_ptep_clear_flush(struct vm_area_struct *vma,
 			   unsigned long addr, pte_t *ptep)
 {
-	if (pte_cont(*ptep)) {
-		int ncontig, i;
-		size_t pgsize = 0;
-
-		ncontig = find_num_contig(vma->vm_mm, addr, ptep,
-					  &pgsize);
-		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize)
-			ptep_clear_flush(vma, addr, ptep);
-	} else {
+	int ncontig, i;
+	size_t pgsize;
+
+	if (!pte_cont(*ptep)) {
 		ptep_clear_flush(vma, addr, ptep);
+		return;
 	}
+
+	ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
+		ptep_clear_flush(vma, addr, ptep);
 }
 
 static __init int setup_hugepagesz(char *opt)
-- 
2.11.0


* [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries
  2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
                   ` (5 preceding siblings ...)
  2017-03-21 18:04 ` [PATCH 6/7] arm64: hugetlb: Spring clean huge pte accessors Punit Agrawal
@ 2017-03-21 18:04 ` Punit Agrawal
  2017-03-22 14:38   ` Mark Rutland
  6 siblings, 1 reply; 15+ messages in thread
From: Punit Agrawal @ 2017-03-21 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Capper <steve.capper@arm.com>

It has become apparent that one has to take special care when modifying
attributes of memory mappings that employ the contiguous bit.

Both the requirement and the architecturally correct "Break-Before-Make"
technique for updating contiguous entries are described in:
ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
page D4-1762.

The huge pte accessors currently replace the attributes of contiguous
pte entries in place and can thus, on certain platforms, lead to TLB
conflict aborts or even to erroneous results being returned from TLB lookups.

This patch adds a helper function, get_clear_flush(), that clears a
contiguous set of entries and returns the head pte (whilst taking
care to retain any dirty bit information that hardware DBM may have
set). A TLB invalidation is then performed to ensure that no stale
entries can remain for the region.

Cc: David Woods <dwoods@mellanox.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
(Fixed indentation and nuked stale comment)
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 73 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 56 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index ef85d0656039..a35c5c392198 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -68,6 +68,39 @@ static int find_num_contig(struct mm_struct *mm, unsigned long addr,
 	return CONT_PTES;
 }
 
+static pte_t get_clear_flush(struct mm_struct *mm,
+			     unsigned long addr,
+			     pte_t *ptep,
+			     unsigned long pgsize,
+			     unsigned long ncontig)
+{
+	unsigned long i, saddr = addr;
+	struct vm_area_struct vma = { .vm_mm = mm };
+	pte_t orig_pte = huge_ptep_get(ptep);
+
+	/*
+	 * If we already have a faulting entry then we don't need
+	 * to break before make (there won't be a tlb entry cached).
+	 */
+	if (!pte_present(orig_pte))
+		return orig_pte;
+
+	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
+		pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+
+		/*
+		 * If HW_AFDBM is enabled, then the HW could turn on the dirty
+		 * bit for any of the page in the set, so check them all.
+		 * All hugetlb entries are already young.
+		 */
+		if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM) && pte_dirty(pte))
+			orig_pte = pte_mkdirty(orig_pte);
+	}
+
+	flush_tlb_range(&vma, saddr, addr);
+	return orig_pte;
+}
+
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, pte_t pte)
 {
@@ -93,6 +126,8 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
+	get_clear_flush(mm, addr, ptep, pgsize, ncontig);
+
 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
 		pr_debug("%s: set pte %p to 0x%llx\n", __func__, ptep,
 			 pte_val(pfn_pte(pfn, hugeprot)));
@@ -202,7 +237,7 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr, pte_t *ptep)
 {
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize;
 	pte_t orig_pte = huge_ptep_get(ptep);
 
@@ -210,17 +245,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 		return ptep_get_and_clear(mm, addr, ptep);
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-		/*
-		 * If HW_AFDBM is enabled, then the HW could
-		 * turn on the dirty bit for any of the page
-		 * in the set, so check them all.
-		 */
-		if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
-			orig_pte = pte_mkdirty(orig_pte);
-	}
 
-	return orig_pte;
+	return get_clear_flush(mm, addr, ptep, pgsize, ncontig);
 }
 
 int huge_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -230,6 +256,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	int ncontig, i, changed = 0;
 	size_t pgsize = 0;
 	unsigned long pfn = pte_pfn(pte), dpfn;
+	pte_t orig_pte;
 	pgprot_t hugeprot;
 
 	if (!pte_cont(pte))
@@ -239,10 +266,12 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
-		changed |= ptep_set_access_flags(vma, addr, ptep,
-				pfn_pte(pfn, hugeprot), dirty);
-	}
+	orig_pte = get_clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
+	if (!pte_same(orig_pte, pte))
+		changed = 1;
+
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
 
 	return changed;
 }
@@ -252,6 +281,9 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 {
 	int ncontig, i;
 	size_t pgsize;
+	pte_t pte = pte_wrprotect(huge_ptep_get(ptep)), orig_pte;
+	unsigned long pfn = pte_pfn(pte), dpfn;
+	pgprot_t hugeprot;
 
 	if (!pte_cont(*ptep)) {
 		ptep_set_wrprotect(mm, addr, ptep);
@@ -259,8 +291,15 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	}
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
-		ptep_set_wrprotect(mm, addr, ptep);
+	dpfn = pgsize >> PAGE_SHIFT;
+
+	orig_pte = get_clear_flush(mm, addr, ptep, pgsize, ncontig);
+	if (pte_dirty(orig_pte))
+		pte = pte_mkdirty(pte);
+
+	hugeprot = pte_pgprot(pte);
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 }
 
 void huge_ptep_clear_flush(struct vm_area_struct *vma,
-- 
2.11.0


* [PATCH 2/7] arm64: hugetlb: Cleanup setup_hugepagesz
  2017-03-21 18:04 ` [PATCH 2/7] arm64: hugetlb: Cleanup setup_hugepagesz Punit Agrawal
@ 2017-03-22 14:01   ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2017-03-22 14:01 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 21, 2017 at 06:04:16PM +0000, Punit Agrawal wrote:
> From: Steve Capper <steve.capper@arm.com>
> 
> Replace a lot of if statements with switch and case labels to make it
> much clearer which huge page sizes are supported.
> 
> Also, we prevent PUD_SIZE from being used on systems not running with
> 4KB PAGE_SIZE. Previously, if one supplied PUD_SIZE in those
> circumstances, an unusable huge page size would be put into use.
> 
> Fixes: 084bd29810a5 ("ARM64: mm: HugeTLB support.")
> Cc: David Woods <dwoods@mellanox.com>
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

I assume this'll get a cc stable to ensure backport.

Thanks,
Mark.

> ---
>  arch/arm64/mm/hugetlbpage.c | 26 +++++++++++++-------------
>  1 file changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index a686f8705ef3..aacbe18e6640 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -296,20 +296,20 @@ static __init int setup_hugepagesz(char *opt)
>  {
>  	unsigned long ps = memparse(opt, &opt);
>  
> -	if (ps == PMD_SIZE) {
> -		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
> -	} else if (ps == PUD_SIZE) {
> -		hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
> -	} else if (ps == (PAGE_SIZE * CONT_PTES)) {
> -		hugetlb_add_hstate(CONT_PTE_SHIFT);
> -	} else if (ps == (PMD_SIZE * CONT_PMDS)) {
> -		hugetlb_add_hstate((PMD_SHIFT + CONT_PMD_SHIFT) - PAGE_SHIFT);
> -	} else {
> -		hugetlb_bad_size();
> -		pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
> -		return 0;
> +	switch (ps) {
> +#ifdef CONFIG_ARM64_4K_PAGES
> +	case PUD_SIZE:
> +#endif
> +	case PMD_SIZE * CONT_PMDS:
> +	case PMD_SIZE:
> +	case PAGE_SIZE * CONT_PTES:
> +		hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
> +		return 1;
>  	}
> -	return 1;
> +
> +	hugetlb_bad_size();
> +	pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
> +	return 0;
>  }
>  __setup("hugepagesz=", setup_hugepagesz);
>  
> -- 
> 2.11.0
> 


* [PATCH 3/7] arm64: hugetlb: Refactor find_num_contig
  2017-03-21 18:04 ` [PATCH 3/7] arm64: hugetlb: Refactor find_num_contig Punit Agrawal
@ 2017-03-22 14:14   ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2017-03-22 14:14 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 21, 2017 at 06:04:17PM +0000, Punit Agrawal wrote:
> From: Steve Capper <steve.capper@arm.com>
> 
> As the huge accessors already check for contiguous ptes, remove
> this redundant check from find_num_contig.
> 
> Cc: David Woods <dwoods@mellanox.com>
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/mm/hugetlbpage.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index aacbe18e6640..81bc6e4cf714 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -42,15 +42,13 @@ int pud_huge(pud_t pud)
>  }
>  
>  static int find_num_contig(struct mm_struct *mm, unsigned long addr,
> -			   pte_t *ptep, pte_t pte, size_t *pgsize)
> +			   pte_t *ptep, size_t *pgsize)
>  {
>  	pgd_t *pgd = pgd_offset(mm, addr);
>  	pud_t *pud;
>  	pmd_t *pmd;
>  
>  	*pgsize = PAGE_SIZE;
> -	if (!pte_cont(pte))
> -		return 1;
>  	pud = pud_offset(pgd, addr);
>  	pmd = pmd_offset(pud, addr);
>  	if ((pte_t *)pmd == ptep) {
> @@ -65,7 +63,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  {
>  	size_t pgsize;
>  	int i;
> -	int ncontig = find_num_contig(mm, addr, ptep, pte, &pgsize);
> +	int ncontig;
>  	unsigned long pfn;
>  	pgprot_t hugeprot;
>  
> @@ -75,11 +73,12 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  	 */
>  	WARN_ON(!pte_present(pte));
>  
> -	if (ncontig == 1) {
> +	if (!pte_cont(pte)) {
>  		set_pte_at(mm, addr, ptep, pte);
>  		return;
>  	}
>  
> +	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
>  	pfn = pte_pfn(pte);
>  	hugeprot = __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
>  	for (i = 0; i < ncontig; i++) {
> @@ -203,7 +202,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  		bool is_dirty = false;
>  
>  		cpte = huge_pte_offset(mm, addr);
> -		ncontig = find_num_contig(mm, addr, cpte, *cpte, &pgsize);
> +		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
>  		/* save the 1st pte to return */
>  		pte = ptep_get_and_clear(mm, addr, cpte);
>  		for (i = 1, addr += pgsize; i < ncontig; ++i, addr += pgsize) {
> @@ -243,7 +242,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  		cpte = huge_pte_offset(vma->vm_mm, addr);
>  		pfn = pte_pfn(*cpte);
>  		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
> -					  *cpte, &pgsize);
> +					  &pgsize);
>  		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize) {
>  			changed |= ptep_set_access_flags(vma, addr, cpte,
>  							pfn_pte(pfn,
> @@ -266,7 +265,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  		size_t pgsize = 0;
>  
>  		cpte = huge_pte_offset(mm, addr);
> -		ncontig = find_num_contig(mm, addr, cpte, *cpte, &pgsize);
> +		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
>  		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
>  			ptep_set_wrprotect(mm, addr, cpte);
>  	} else {
> @@ -284,7 +283,7 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
>  
>  		cpte = huge_pte_offset(vma->vm_mm, addr);
>  		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
> -					  *cpte, &pgsize);
> +					  &pgsize);
>  		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
>  			ptep_clear_flush(vma, addr, cpte);
>  	} else {
> -- 
> 2.11.0
> 


* [PATCH 4/7] arm64: hugetlb: Introduce pte_pgprot helper
  2017-03-21 18:04 ` [PATCH 4/7] arm64: hugetlb: Introduce pte_pgprot helper Punit Agrawal
@ 2017-03-22 14:16   ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2017-03-22 14:16 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 21, 2017 at 06:04:18PM +0000, Punit Agrawal wrote:
> From: Steve Capper <steve.capper@arm.com>
> 
> Rather than xor-ing pte bits in various places, use this helper function.
> 
> Cc: David Woods <dwoods@mellanox.com>
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>

Much nicer!

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/mm/hugetlbpage.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 81bc6e4cf714..44d637eb32a7 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -41,6 +41,16 @@ int pud_huge(pud_t pud)
>  #endif
>  }
>  
> +/*
> + * Select all bits except the pfn
> + */
> +static inline pgprot_t pte_pgprot(pte_t pte)
> +{
> +	unsigned long pfn = pte_pfn(pte);
> +
> +	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
> +}
> +
>  static int find_num_contig(struct mm_struct *mm, unsigned long addr,
>  			   pte_t *ptep, size_t *pgsize)
>  {
> @@ -80,7 +90,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  
>  	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
>  	pfn = pte_pfn(pte);
> -	hugeprot = __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
> +	hugeprot = pte_pgprot(pte);
>  	for (i = 0; i < ncontig; i++) {
>  		pr_debug("%s: set pte %p to 0x%llx\n", __func__, ptep,
>  			 pte_val(pfn_pte(pfn, hugeprot)));
> @@ -235,9 +245,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  		size_t pgsize = 0;
>  		unsigned long pfn = pte_pfn(pte);
>  		/* Select all bits except the pfn */
> -		pgprot_t hugeprot =
> -			__pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^
> -				 pte_val(pte));
> +		pgprot_t hugeprot = pte_pgprot(pte);
>  
>  		cpte = huge_pte_offset(vma->vm_mm, addr);
>  		pfn = pte_pfn(*cpte);
> -- 
> 2.11.0
> 


* [PATCH 5/7] arm64: hugetlb: Remove spurious calls to huge_ptep_offset
  2017-03-21 18:04 ` [PATCH 5/7] arm64: hugetlb: Remove spurious calls to huge_ptep_offset Punit Agrawal
@ 2017-03-22 14:18   ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2017-03-22 14:18 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 21, 2017 at 06:04:19PM +0000, Punit Agrawal wrote:
> From: Steve Capper <steve.capper@arm.com>
> 
> We don't need to call huge_ptep_offset as our accessors are already
> supplied with the pte_t *. This patch removes those spurious calls.
> 
> Cc: David Woods <dwoods@mellanox.com>
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/mm/hugetlbpage.c | 37 ++++++++++++++-----------------------
>  1 file changed, 14 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 44d637eb32a7..c868c80a9d6b 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -208,21 +208,19 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  	if (pte_cont(*ptep)) {
>  		int ncontig, i;
>  		size_t pgsize;
> -		pte_t *cpte;
>  		bool is_dirty = false;
>  
> -		cpte = huge_pte_offset(mm, addr);
> -		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
> +		ncontig = find_num_contig(mm, addr, ptep, &pgsize);
>  		/* save the 1st pte to return */
> -		pte = ptep_get_and_clear(mm, addr, cpte);
> +		pte = ptep_get_and_clear(mm, addr, ptep);
>  		for (i = 1, addr += pgsize; i < ncontig; ++i, addr += pgsize) {
>  			/*
>  			 * If HW_AFDBM is enabled, then the HW could
>  			 * turn on the dirty bit for any of the page
>  			 * in the set, so check them all.
>  			 */
> -			++cpte;
> -			if (pte_dirty(ptep_get_and_clear(mm, addr, cpte)))
> +			++ptep;
> +			if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
>  				is_dirty = true;
>  		}
>  		if (is_dirty)
> @@ -238,8 +236,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  			       unsigned long addr, pte_t *ptep,
>  			       pte_t pte, int dirty)
>  {
> -	pte_t *cpte;
> -
>  	if (pte_cont(pte)) {
>  		int ncontig, i, changed = 0;
>  		size_t pgsize = 0;
> @@ -247,12 +243,11 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  		/* Select all bits except the pfn */
>  		pgprot_t hugeprot = pte_pgprot(pte);
>  
> -		cpte = huge_pte_offset(vma->vm_mm, addr);
> -		pfn = pte_pfn(*cpte);
> -		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
> +		pfn = pte_pfn(pte);
> +		ncontig = find_num_contig(vma->vm_mm, addr, ptep,
>  					  &pgsize);
> -		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize) {
> -			changed |= ptep_set_access_flags(vma, addr, cpte,
> +		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize) {
> +			changed |= ptep_set_access_flags(vma, addr, ptep,
>  							pfn_pte(pfn,
>  								hugeprot),
>  							dirty);
> @@ -269,13 +264,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  {
>  	if (pte_cont(*ptep)) {
>  		int ncontig, i;
> -		pte_t *cpte;
>  		size_t pgsize = 0;
>  
> -		cpte = huge_pte_offset(mm, addr);
> -		ncontig = find_num_contig(mm, addr, cpte, &pgsize);
> -		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
> -			ptep_set_wrprotect(mm, addr, cpte);
> +		ncontig = find_num_contig(mm, addr, ptep, &pgsize);
> +		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize)
> +			ptep_set_wrprotect(mm, addr, ptep);
>  	} else {
>  		ptep_set_wrprotect(mm, addr, ptep);
>  	}
> @@ -286,14 +279,12 @@ void huge_ptep_clear_flush(struct vm_area_struct *vma,
>  {
>  	if (pte_cont(*ptep)) {
>  		int ncontig, i;
> -		pte_t *cpte;
>  		size_t pgsize = 0;
>  
> -		cpte = huge_pte_offset(vma->vm_mm, addr);
> -		ncontig = find_num_contig(vma->vm_mm, addr, cpte,
> +		ncontig = find_num_contig(vma->vm_mm, addr, ptep,
>  					  &pgsize);
> -		for (i = 0; i < ncontig; ++i, ++cpte, addr += pgsize)
> -			ptep_clear_flush(vma, addr, cpte);
> +		for (i = 0; i < ncontig; ++i, ++ptep, addr += pgsize)
> +			ptep_clear_flush(vma, addr, ptep);
>  	} else {
>  		ptep_clear_flush(vma, addr, ptep);
>  	}
> -- 
> 2.11.0
> 


* [PATCH 6/7] arm64: hugetlb: Spring clean huge pte accessors
  2017-03-21 18:04 ` [PATCH 6/7] arm64: hugetlb: Spring clean huge pte accessors Punit Agrawal
@ 2017-03-22 14:24   ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2017-03-22 14:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 21, 2017 at 06:04:20PM +0000, Punit Agrawal wrote:
> +		/*
> +		 * If HW_AFDBM is enabled, then the HW could
> +		 * turn on the dirty bit for any of the page
> +		 * in the set, so check them all.
> +		 */

Nit: we've carried over a grammar bug from the existing comment, which
would be nice to fix up. Can we either s/page/pages/ or s/of the // ?

Either way, the logic looks good.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.


* [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries
  2017-03-21 18:04 ` [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries Punit Agrawal
@ 2017-03-22 14:38   ` Mark Rutland
  2017-03-22 18:14     ` Punit Agrawal
  0 siblings, 1 reply; 15+ messages in thread
From: Mark Rutland @ 2017-03-22 14:38 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 21, 2017 at 06:04:21PM +0000, Punit Agrawal wrote:
> From: Steve Capper <steve.capper@arm.com>
> 
> It has become apparent that one has to take special care when modifying
> attributes of memory mappings that employ the contiguous bit.
> 
> Both the requirement and the architecturally correct "Break-Before-Make"
> technique of updating contiguous entries can be found described in:
> ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
> page D4-1762.
> 
> The huge pte accessors currently replace the attributes of contiguous
> pte entries in place and thus can, on certain platforms, lead to TLB
> conflict aborts or even erroneous results returned from TLB lookups.
> 
> This patch adds a helper function get_clear_flush(.) that clears a
> contiguous entry and returns the head pte (whilst taking care to
> retain dirty bit information that could have been modified by DBM).
> A TLB invalidate is then performed to ensure that there is no
> possibility of multiple tlb entries being present for the same
> region.

Since it's evidently easy to miss, can we please add a comment above
get_clear_flush() regarding the BBM requirement, e.g.

/*
 * Changing some bits of contiguous entries requires us to follow a
 * Break-Before-Make approach, breaking the whole contiguous set before
 * we can change any entries. See ARM DDI 0487A.k_iss10775,
 * "Misprogramming of the Contiguous bit", page D4-1762.
 *
 * This helper performs the break step.
 */

Otherwise, this looks good to me, and to the best of my knowledge avoids
the issue described above.

FWIW:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.


* [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries
  2017-03-22 14:38   ` Mark Rutland
@ 2017-03-22 18:14     ` Punit Agrawal
  0 siblings, 0 replies; 15+ messages in thread
From: Punit Agrawal @ 2017-03-22 18:14 UTC (permalink / raw)
  To: linux-arm-kernel



On 22/03/17 14:38, Mark Rutland wrote:
> On Tue, Mar 21, 2017 at 06:04:21PM +0000, Punit Agrawal wrote:
>> From: Steve Capper <steve.capper@arm.com>
>>
>> It has become apparent that one has to take special care when modifying
>> attributes of memory mappings that employ the contiguous bit.
>>
>> Both the requirement and the architecturally correct "Break-Before-Make"
>> technique of updating contiguous entries can be found described in:
>> ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
>> page D4-1762.
>>
>> The huge pte accessors currently replace the attributes of contiguous
>> pte entries in place and thus can, on certain platforms, lead to TLB
>> conflict aborts or even erroneous results returned from TLB lookups.
>>
>> This patch adds a helper function get_clear_flush(.) that clears a
>> contiguous entry and returns the head pte (whilst taking care to
>> retain dirty bit information that could have been modified by DBM).
>> A TLB invalidate is then performed to ensure that there is no
>> possibility of multiple tlb entries being present for the same
>> region.
>
> Since it's evidently easy to miss, can we please add a comment above
> get_clear_flush() regarding the BBM requirement, e.g.
>
> /*
>  * Changing some bits of contiguous entries requires us to follow a
>  * Break-Before-Make approach, breaking the whole contiguous set before
>  * we can change any entries. See ARM DDI 0487A.k_iss10775,
>  * "Misprogramming of the Contiguous bit", page D4-1762.
>  *
>  * This helper performs the break step.
>  */
>
> Otherwise, this looks good to me, and to the best of my knowledge avoids
> the issue described above.
>
> FWIW:
>
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Hi Mark,

Thanks for reviewing the patches. I'll address your comments and post a
new version in a day or two.

Thanks,
Punit

>
> Mark.
>


end of thread, other threads:[~2017-03-22 18:14 UTC | newest]

Thread overview: 15+ messages
-- links below jump to the message on this page --
2017-03-21 18:04 [PATCH 0/7] arm64: hugetlb cleanup + break-before-make fixes Punit Agrawal
2017-03-21 18:04 ` [PATCH 1/7] arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present Punit Agrawal
2017-03-21 18:04 ` [PATCH 2/7] arm64: hugetlb: Cleanup setup_hugepagesz Punit Agrawal
2017-03-22 14:01   ` Mark Rutland
2017-03-21 18:04 ` [PATCH 3/7] arm64: hugetlb: Refactor find_num_contig Punit Agrawal
2017-03-22 14:14   ` Mark Rutland
2017-03-21 18:04 ` [PATCH 4/7] arm64: hugetlb: Introduce pte_pgprot helper Punit Agrawal
2017-03-22 14:16   ` Mark Rutland
2017-03-21 18:04 ` [PATCH 5/7] arm64: hugetlb: Remove spurious calls to huge_ptep_offset Punit Agrawal
2017-03-22 14:18   ` Mark Rutland
2017-03-21 18:04 ` [PATCH 6/7] arm64: hugetlb: Spring clean huge pte accessors Punit Agrawal
2017-03-22 14:24   ` Mark Rutland
2017-03-21 18:04 ` [PATCH 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries Punit Agrawal
2017-03-22 14:38   ` Mark Rutland
2017-03-22 18:14     ` Punit Agrawal
