* [PATCH v3 00/13] TLB/XPA fixes & cleanups
@ 2016-04-19  8:24 ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:24 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, Joshua Kinard,
	Huacai Chen, Maciej W. Rozycki, Paul Gortmaker, Aneesh Kumar K.V,
	linux-kernel, Peter Zijlstra (Intel),
	David Hildenbrand, Andrew Morton, David Daney, Jonas Gorski,
	Markos Chandras, Ingo Molnar, Jerome Marchand, Alex Smith,
	Kirill A. Shutemov

This series fixes up a number of issues introduced by commit
c5b367835cfc ("MIPS: Add support for XPA."), including breakage of the
MIPS32 with 36-bit physical addressing case & clobbering of $1 upon TLB
refill exceptions. Along the way a number of cleanups are made, which
leaves pgtable-bits.h in particular much more readable than before.

The series applies atop v4.6-rc4.

James Hogan (4):
  MIPS: Separate XPA CPU feature into LPA and MVH
  MIPS: Fix HTW config on XPA kernel without LPA enabled
  MIPS: mm: Don't clobber $1 on XPA TLB refill
  MIPS: mm: Don't do MTHC0 if XPA not present

Paul Burton (9):
  MIPS: Remove redundant asm/pgtable-bits.h inclusions
  MIPS: Use enums to make asm/pgtable-bits.h readable
  MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ
  MIPS: mm: Unify pte_page definition
  MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic)
  MIPS: mm: Pass scratch register through to iPTE_SW
  MIPS: mm: Be more explicit about PTE mode bit handling
  MIPS: mm: Simplify build_update_entries
  MIPS: mm: Panic if an XPA kernel is run without RIXI

 arch/mips/include/asm/cpu-features.h |   8 +-
 arch/mips/include/asm/cpu.h          |   3 +-
 arch/mips/include/asm/pgtable-32.h   |  33 +++++-
 arch/mips/include/asm/pgtable-bits.h | 211 ++++++++++++++++-------------------
 arch/mips/include/asm/pgtable.h      |  78 +++++++++----
 arch/mips/kernel/cpu-probe.c         |   6 +-
 arch/mips/kernel/head.S              |   1 -
 arch/mips/kernel/r4k_switch.S        |   1 -
 arch/mips/mm/init.c                  |  12 +-
 arch/mips/mm/tlb-r4k.c               |   6 +-
 arch/mips/mm/tlbex.c                 | 116 +++++++++----------
 11 files changed, 263 insertions(+), 212 deletions(-)

-- 
2.8.0

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH v3 01/13] MIPS: Separate XPA CPU feature into LPA and MVH
@ 2016-04-19  8:24   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:24 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, Joshua Kinard,
	linux-kernel, Markos Chandras

From: James Hogan <james.hogan@imgtec.com>

XPA (eXtended Physical Addressing) should be detected as a combination
of two architectural features:
- Large Physical Address (as per Config3.LPA). With XPA this will be set
  on MIPS32r5 cores, but it may also be set for MIPS64r2 cores.
- MTHC0/MFHC0 instructions (as per Config5.MVH). With XPA this will be
  set, but it may also be set in VZ guest context even when Config3.LPA
  in the guest context has been cleared by the hypervisor.

As such, XPA is only usable if both bits are set. Update CPU features to
separate these two features, with cpu_has_xpa requiring both to be set.
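
As an illustrative sketch (hypothetical usage, not part of this patch),
code which enables extended addressing would now gate on the combined
feature test rather than on Config5.MVH alone:

  /* Sketch only: turn on PageGrain.ELPA solely when XPA is usable,
   * i.e. when both Config3.LPA and Config5.MVH were detected. */
  if (cpu_has_xpa)
          write_c0_pagegrain(read_c0_pagegrain() | PG_ELPA);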

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

Changes in v3: None
Changes in v2: None

 arch/mips/include/asm/cpu-features.h | 8 +++++++-
 arch/mips/include/asm/cpu.h          | 3 ++-
 arch/mips/kernel/cpu-probe.c         | 6 +++---
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index eeec8c8..3993a35 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -142,8 +142,14 @@
 # endif
 #endif
 
+#ifndef cpu_has_lpa
+#define cpu_has_lpa		(cpu_data[0].options & MIPS_CPU_LPA)
+#endif
+#ifndef cpu_has_mvh
+#define cpu_has_mvh		(cpu_data[0].options & MIPS_CPU_MVH)
+#endif
 #ifndef cpu_has_xpa
-#define cpu_has_xpa		(cpu_data[0].options & MIPS_CPU_XPA)
+#define cpu_has_xpa		(cpu_has_lpa && cpu_has_mvh)
 #endif
 #ifndef cpu_has_vtag_icache
 #define cpu_has_vtag_icache	(cpu_data[0].icache.flags & MIPS_CACHE_VTAG)
diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
index a97ca97..32fa061 100644
--- a/arch/mips/include/asm/cpu.h
+++ b/arch/mips/include/asm/cpu.h
@@ -381,13 +381,14 @@ enum cpu_type_enum {
 #define MIPS_CPU_MAAR		0x400000000ull /* MAAR(I) registers are present */
 #define MIPS_CPU_FRE		0x800000000ull /* FRE & UFE bits implemented */
 #define MIPS_CPU_RW_LLB		0x1000000000ull /* LLADDR/LLB writes are allowed */
-#define MIPS_CPU_XPA		0x2000000000ull /* CPU supports Extended Physical Addressing */
+#define MIPS_CPU_LPA		0x2000000000ull /* CPU supports Large Physical Addressing */
 #define MIPS_CPU_CDMM		0x4000000000ull	/* CPU has Common Device Memory Map */
 #define MIPS_CPU_BP_GHIST	0x8000000000ull /* R12K+ Branch Prediction Global History */
 #define MIPS_CPU_SP		0x10000000000ull /* Small (1KB) page support */
 #define MIPS_CPU_FTLB		0x20000000000ull /* CPU has Fixed-page-size TLB */
 #define MIPS_CPU_NAN_LEGACY	0x40000000000ull /* Legacy NaN implemented */
 #define MIPS_CPU_NAN_2008	0x80000000000ull /* 2008 NaN implemented */
+#define MIPS_CPU_MVH		0x100000000000ull /* CPU supports MFHC0/MTHC0 */
 
 /*
  * CPU ASE encodings
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index b725b71..da21fbe 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -685,6 +685,8 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
 		c->options |= MIPS_CPU_VINT;
 	if (config3 & MIPS_CONF3_VEIC)
 		c->options |= MIPS_CPU_VEIC;
+	if (config3 & MIPS_CONF3_LPA)
+		c->options |= MIPS_CPU_LPA;
 	if (config3 & MIPS_CONF3_MT)
 		c->ases |= MIPS_ASE_MIPSMT;
 	if (config3 & MIPS_CONF3_ULRI)
@@ -792,10 +794,8 @@ static inline unsigned int decode_config5(struct cpuinfo_mips *c)
 		c->options |= MIPS_CPU_MAAR;
 	if (config5 & MIPS_CONF5_LLB)
 		c->options |= MIPS_CPU_RW_LLB;
-#ifdef CONFIG_XPA
 	if (config5 & MIPS_CONF5_MVH)
-		c->options |= MIPS_CPU_XPA;
-#endif
+		c->options |= MIPS_CPU_MVH;
 
 	return config5 & MIPS_CONF_M;
 }
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 02/13] MIPS: Fix HTW config on XPA kernel without LPA enabled
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Steven J. Hill, Paul Burton, Paul Gortmaker, linux-kernel

From: James Hogan <james.hogan@imgtec.com>

The hardware page table walker (HTW) configuration is broken on XPA
kernels where XPA couldn't be enabled (either nolpa is passed on the
command line, or the hardware doesn't support it). This is because the
PWSize.PTEW field (PTE width)
was only set to 8 bytes (an extra shift of 1) in config_htw_params() if
PageGrain.ELPA (enable large physical addressing) is set. On an XPA
kernel though the size of PTEs is fixed at 8 bytes regardless of whether
XPA could actually be enabled.

Fix the initialisation of this field based on sizeof(pte_t) instead.
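
For illustration, the new PTEW calculation (shown in the hunk below)
reduces to the same "extra shift of 1" whenever PTEs are 8 bytes wide,
and to 0 for the native 4 byte case:

  /* sizeof(pte_t) == 8: ilog2(8 / 4) == 1  ->  PTEW field = 1
   * sizeof(pte_t) == 4: ilog2(4 / 4) == 0  ->  PTEW field = 0 */
  pwsize |= ilog2(sizeof(pte_t) / 4) << MIPS_PWSIZE_PTEW_SHIFT;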

Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

Changes in v3: None
Changes in v2: None

 arch/mips/mm/tlbex.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 84c6e3f..86aa7c2 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -2311,9 +2311,7 @@ static void config_htw_params(void)
 	if (CONFIG_PGTABLE_LEVELS >= 3)
 		pwsize |= ilog2(PTRS_PER_PMD) << MIPS_PWSIZE_MDW_SHIFT;
 
-	/* If XPA has been enabled, PTEs are 64-bit in size. */
-	if (config_enabled(CONFIG_64BITS) || (read_c0_pagegrain() & PG_ELPA))
-		pwsize |= 1;
+	pwsize |= ilog2(sizeof(pte_t)/4) << MIPS_PWSIZE_PTEW_SHIFT;
 
 	write_c0_pwsize(pwsize);
 
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 03/13] MIPS: Remove redundant asm/pgtable-bits.h inclusions
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, linux-kernel,
	Jonas Gorski, Markos Chandras, Alex Smith, Kirill A. Shutemov,
	Andrew Morton

asm/pgtable-bits.h is included in 2 assembly files and thus has to
ifdef around C code; however, nothing defined by the header is used
in either of the assembly files that include it.

Remove the redundant inclusions such that asm/pgtable-bits.h doesn't
need to #ifdef around C code, for cleanliness and in preparation for
later patches which will add more C.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>

---

Changes in v3:
- Really fixup commit message (don't start a line with #ifdef...).
- Drop extraneous ampersand "& and"...

Changes in v2:
- Fixup commit message that was missing a line.

 arch/mips/include/asm/pgtable-bits.h | 2 --
 arch/mips/kernel/head.S              | 1 -
 arch/mips/kernel/r4k_switch.S        | 1 -
 3 files changed, 4 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 97b3138..2f40312 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -191,7 +191,6 @@
  */
 
 
-#ifndef __ASSEMBLY__
 /*
  * pte_to_entrylo converts a page table entry (PTE) into a Mips
  * entrylo0/1 value.
@@ -218,7 +217,6 @@ static inline uint64_t pte_to_entrylo(unsigned long pte_val)
 
 	return pte_val >> _PAGE_GLOBAL_SHIFT;
 }
-#endif
 
 /*
  * Cache attributes
diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index 4e4cc5b..b8fb0ba 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -21,7 +21,6 @@
 #include <asm/asmmacro.h>
 #include <asm/irqflags.h>
 #include <asm/regdef.h>
-#include <asm/pgtable-bits.h>
 #include <asm/mipsregs.h>
 #include <asm/stackframe.h>
 
diff --git a/arch/mips/kernel/r4k_switch.S b/arch/mips/kernel/r4k_switch.S
index 92cd051..2f0a3b2 100644
--- a/arch/mips/kernel/r4k_switch.S
+++ b/arch/mips/kernel/r4k_switch.S
@@ -15,7 +15,6 @@
 #include <asm/fpregdef.h>
 #include <asm/mipsregs.h>
 #include <asm/asm-offsets.h>
-#include <asm/pgtable-bits.h>
 #include <asm/regdef.h>
 #include <asm/stackframe.h>
 #include <asm/thread_info.h>
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 04/13] MIPS: Use enums to make asm/pgtable-bits.h readable
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, linux-kernel,
	Markos Chandras, Alex Smith, Kirill A. Shutemov

asm/pgtable-bits.h has grown to become an unreadable mess of #ifdef
directives defining bits conditionally upon other bits all at the
preprocessing stage, for no good reason.

Instead of having quite so many #ifdefs, simply use enums to provide
sequential numbering for bit shifts, without having to manually keep
track of what the last bit defined was. Masks are defined separately,
after the shifts, which allows for most of their definitions to be
reused for all systems rather than duplicated.

This patch is not intended to make any behavioural change to the code -
all bits should be used in the same way they were before this patch.
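
To illustrate the technique in miniature (a standalone sketch with made
up _EX_* names, not code from this patch), enum members number
themselves sequentially, so conditionally omitting a shift no longer
requires rewriting every subsequent "previous + 1" definition:

  /* Shifts self-number from 0; only the masks need explicit defines. */
  enum example_bits {
          _EX_PRESENT_SHIFT,      /* 0 */
          _EX_WRITE_SHIFT,        /* 1 */
          _EX_DIRTY_SHIFT,        /* 2 */
  };
  #define _EX_PRESENT     (1 << _EX_PRESENT_SHIFT)
  #define _EX_WRITE       (1 << _EX_WRITE_SHIFT)
  #define _EX_DIRTY       (1 << _EX_DIRTY_SHIFT)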

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
---

Changes in v3: None
Changes in v2: None

 arch/mips/include/asm/pgtable-bits.h | 189 +++++++++++++++--------------------
 1 file changed, 81 insertions(+), 108 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 2f40312..c81fc17 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -35,36 +35,25 @@
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 
 /*
- * The following bits are implemented by the TLB hardware
+ * Page table bit offsets used for 64 bit physical addressing on MIPS32,
+ * for example with Alchemy, Netlogic XLP/XLR or XPA.
  */
-#define _PAGE_NO_EXEC_SHIFT	0
-#define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
-#define _PAGE_NO_READ_SHIFT	(_PAGE_NO_EXEC_SHIFT + 1)
-#define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_NO_READ_SHIFT + 1)
-#define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
-#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
-#define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
-#define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_SHIFT		(_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_MASK		(7 << _CACHE_SHIFT)
-
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT	(24)
-#define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
-#define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
-
-#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+enum pgtable_bits {
+	/* Used by TLB hardware (placed in EntryLo*) */
+	_PAGE_NO_EXEC_SHIFT,
+	_PAGE_NO_READ_SHIFT,
+	_PAGE_GLOBAL_SHIFT,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_SHIFT,
+
+	/* Used only by software (masked out before writing EntryLo*) */
+	_PAGE_PRESENT_SHIFT = 24,
+	_PAGE_READ_SHIFT,
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+};
 
 /*
  * Bits for extended EntryLo0/EntryLo1 registers
@@ -73,101 +62,85 @@
 
 #elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT	(0)
-#define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
-#define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
+/* Page table bits used for r3k systems */
+enum pgtable_bits {
+	/* Used only by software (writes to EntryLo ignored) */
+	_PAGE_PRESENT_SHIFT,
+	_PAGE_READ_SHIFT,
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+
+	/* Used by TLB hardware (placed in EntryLo) */
+	_PAGE_GLOBAL_SHIFT = 8,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_UNCACHED_SHIFT,
+};
 
-/*
- * The following bits are implemented by the TLB hardware
- */
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_MODIFIED_SHIFT + 4)
-#define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
-#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
-#define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
-#define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_UNCACHED_SHIFT	(_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_UNCACHED		(1 << _CACHE_UNCACHED_SHIFT)
-#define _CACHE_MASK		_CACHE_UNCACHED
+#else
 
-#define _PFN_SHIFT		PAGE_SHIFT
+/* Page table bits used for r4k systems */
+enum pgtable_bits {
+	/* Used only by software (masked out before writing EntryLo*) */
+	_PAGE_PRESENT_SHIFT,
+#if !defined(CONFIG_CPU_MIPSR2) && !defined(CONFIG_CPU_MIPSR6)
+	_PAGE_READ_SHIFT,
+#endif
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+#if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
+	_PAGE_HUGE_SHIFT,
+#endif
 
-#else
-/*
- * Below are the "Normal" R4K cases
- */
+	/* Used by TLB hardware (placed in EntryLo*) */
+#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+	_PAGE_NO_EXEC_SHIFT,
+	_PAGE_NO_READ_SHIFT,
+	_PAGE_READ_SHIFT = _PAGE_NO_READ_SHIFT,
+#endif
+	_PAGE_GLOBAL_SHIFT,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_SHIFT,
+};
 
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT	0
+#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
+
+/* Used only by software */
 #define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-/* R2 or later cores check for RI/XI support to determine _PAGE_READ */
 #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-#define _PAGE_WRITE_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
+# define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
 #else
-#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
+# define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
 #endif
-#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
+#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
 #define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
 #define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
-
 #if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
-/* Huge TLB page */
-#define _PAGE_HUGE_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
-#define _PAGE_HUGE		(1 << _PAGE_HUGE_SHIFT)
-#endif	/* CONFIG_64BIT && CONFIG_MIPS_HUGE_TLB_SUPPORT */
-
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-/* XI - page cannot be executed */
-#ifdef _PAGE_HUGE_SHIFT
-#define _PAGE_NO_EXEC_SHIFT	(_PAGE_HUGE_SHIFT + 1)
-#else
-#define _PAGE_NO_EXEC_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
+# define _PAGE_HUGE		(1 << _PAGE_HUGE_SHIFT)
 #endif
-#define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
-
-/* RI - page cannot be read */
-#define _PAGE_READ_SHIFT	(_PAGE_NO_EXEC_SHIFT + 1)
-#define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
-#define _PAGE_NO_READ_SHIFT	_PAGE_READ_SHIFT
-#define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_READ_SHIFT) : 0)
-#endif	/* defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) */
-
-#if defined(_PAGE_NO_READ_SHIFT)
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_NO_READ_SHIFT + 1)
-#elif defined(_PAGE_HUGE_SHIFT)
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_HUGE_SHIFT + 1)
-#else
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
+
+/* Used by TLB hardware (placed in EntryLo*) */
+#if (defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32))
+# define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
+# define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
+#elif defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+# define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
+# define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_NO_READ_SHIFT) : 0)
 #endif
 #define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
-
-#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
 #define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
 #define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_SHIFT		(_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_MASK		(7 << _CACHE_SHIFT)
-
-#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
-
-#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
+#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
+# define _CACHE_UNCACHED	(1 << _CACHE_UNCACHED_SHIFT)
+# define _CACHE_MASK		_CACHE_UNCACHED
+# define _PFN_SHIFT		PAGE_SHIFT
+#else
+# define _CACHE_MASK		(7 << _CACHE_SHIFT)
+# define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+#endif
 
 #ifndef _PAGE_NO_EXEC
 #define _PAGE_NO_EXEC		0
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 05/13] MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, David Daney, Huacai Chen,
	Maciej W. Rozycki, Paul Gortmaker, Aneesh Kumar K.V,
	linux-kernel, Andrew Morton, Markos Chandras, Alex Smith,
	Kirill A. Shutemov

Ever since support for RI/XI was implemented by commit 6dd9344cfc41
("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of
_PAGE_READ & _PAGE_NO_READ bits. Rather than keep both around, switch
away from using _PAGE_READ to determine page presence & instead invert
the use to _PAGE_NO_READ. Wherever we formerly had no definition for
_PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end
result is that we consistently use _PAGE_NO_READ to determine whether a
page is readable, regardless of whether RI/XI is implemented.
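
As a small worked example (a sketch using the existing pte_val()
accessor, not code added by this patch), a "may this page be read?"
test flips from requiring a bit to be set to requiring one to be clear:

  /* Before: readable iff _PAGE_READ is set (non-RI/XI configurations). */
  readable = pte_val(pte) & _PAGE_READ;

  /* After: readable iff _PAGE_NO_READ is clear, on every configuration. */
  readable = !(pte_val(pte) & _PAGE_NO_READ);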

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
---

Changes in v3: None
Changes in v2: None

 arch/mips/include/asm/pgtable-bits.h | 19 ++++---------------
 arch/mips/include/asm/pgtable.h      | 23 +++++++----------------
 arch/mips/mm/tlbex.c                 | 13 ++++---------
 3 files changed, 15 insertions(+), 40 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index c81fc17..5bc663d 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -49,7 +49,6 @@ enum pgtable_bits {
 
 	/* Used only by software (masked out before writing EntryLo*) */
 	_PAGE_PRESENT_SHIFT = 24,
-	_PAGE_READ_SHIFT,
 	_PAGE_WRITE_SHIFT,
 	_PAGE_ACCESSED_SHIFT,
 	_PAGE_MODIFIED_SHIFT,
@@ -66,7 +65,7 @@ enum pgtable_bits {
 enum pgtable_bits {
 	/* Used only by software (writes to EntryLo ignored) */
 	_PAGE_PRESENT_SHIFT,
-	_PAGE_READ_SHIFT,
+	_PAGE_NO_READ_SHIFT,
 	_PAGE_WRITE_SHIFT,
 	_PAGE_ACCESSED_SHIFT,
 	_PAGE_MODIFIED_SHIFT,
@@ -85,7 +84,7 @@ enum pgtable_bits {
 	/* Used only by software (masked out before writing EntryLo*) */
 	_PAGE_PRESENT_SHIFT,
 #if !defined(CONFIG_CPU_MIPSR2) && !defined(CONFIG_CPU_MIPSR6)
-	_PAGE_READ_SHIFT,
+	_PAGE_NO_READ_SHIFT,
 #endif
 	_PAGE_WRITE_SHIFT,
 	_PAGE_ACCESSED_SHIFT,
@@ -98,7 +97,6 @@ enum pgtable_bits {
 #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 	_PAGE_NO_EXEC_SHIFT,
 	_PAGE_NO_READ_SHIFT,
-	_PAGE_READ_SHIFT = _PAGE_NO_READ_SHIFT,
 #endif
 	_PAGE_GLOBAL_SHIFT,
 	_PAGE_VALID_SHIFT,
@@ -110,11 +108,6 @@ enum pgtable_bits {
 
 /* Used only by software */
 #define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-# define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
-#else
-# define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#endif
 #define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
 #define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
 #define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
@@ -125,11 +118,10 @@ enum pgtable_bits {
 /* Used by TLB hardware (placed in EntryLo*) */
 #if (defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32))
 # define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
-# define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
 #elif defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 # define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
-# define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_NO_READ_SHIFT) : 0)
 #endif
+#define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
 #define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
 #define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
 #define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
@@ -145,9 +137,6 @@ enum pgtable_bits {
 #ifndef _PAGE_NO_EXEC
 #define _PAGE_NO_EXEC		0
 #endif
-#ifndef _PAGE_NO_READ
-#define _PAGE_NO_READ		0
-#endif
 
 #define _PAGE_SILENT_READ	_PAGE_VALID
 #define _PAGE_SILENT_WRITE	_PAGE_DIRTY
@@ -245,7 +234,7 @@ static inline uint64_t pte_to_entrylo(unsigned long pte_val)
 #define _CACHE_UNCACHED_ACCELERATED	(7<<_CACHE_SHIFT)
 #endif
 
-#define __READABLE	(_PAGE_SILENT_READ | _PAGE_READ | _PAGE_ACCESSED)
+#define __READABLE	(_PAGE_SILENT_READ | _PAGE_ACCESSED)
 #define __WRITEABLE	(_PAGE_SILENT_WRITE | _PAGE_WRITE | _PAGE_MODIFIED)
 
 #define _PAGE_CHG_MASK	(_PAGE_ACCESSED | _PAGE_MODIFIED |	\
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 9a4fe01..1459ee9 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -23,18 +23,19 @@
 struct mm_struct;
 struct vm_area_struct;
 
-#define PAGE_NONE	__pgprot(_PAGE_PRESENT | _CACHE_CACHABLE_NONCOHERENT)
-#define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | \
+#define PAGE_NONE	__pgprot(_PAGE_PRESENT | _PAGE_NO_READ | \
+				 _CACHE_CACHABLE_NONCOHERENT)
+#define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_WRITE | \
 				 _page_cachable_default)
-#define PAGE_COPY	__pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_NO_EXEC | \
+#define PAGE_COPY	__pgprot(_PAGE_PRESENT | _PAGE_NO_EXEC | \
 				 _page_cachable_default)
-#define PAGE_READONLY	__pgprot(_PAGE_PRESENT | _PAGE_READ | \
+#define PAGE_READONLY	__pgprot(_PAGE_PRESENT | \
 				 _page_cachable_default)
 #define PAGE_KERNEL	__pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
 				 _PAGE_GLOBAL | _page_cachable_default)
 #define PAGE_KERNEL_NC	__pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
 				 _PAGE_GLOBAL | _CACHE_CACHABLE_NONCOHERENT)
-#define PAGE_USERIO	__pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+#define PAGE_USERIO	__pgprot(_PAGE_PRESENT | _PAGE_WRITE | \
 				 _page_cachable_default)
 #define PAGE_KERNEL_UNCACHED __pgprot(_PAGE_PRESENT | __READABLE | \
 			__WRITEABLE | _PAGE_GLOBAL | _CACHE_UNCACHED)
@@ -307,7 +308,7 @@ static inline pte_t pte_mkdirty(pte_t pte)
 static inline pte_t pte_mkyoung(pte_t pte)
 {
 	pte.pte_low |= _PAGE_ACCESSED;
-	if (pte.pte_low & _PAGE_READ)
+	if (!(pte.pte_low & _PAGE_NO_READ))
 		pte.pte_high |= _PAGE_SILENT_READ;
 	return pte;
 }
@@ -353,13 +354,8 @@ static inline pte_t pte_mkdirty(pte_t pte)
 static inline pte_t pte_mkyoung(pte_t pte)
 {
 	pte_val(pte) |= _PAGE_ACCESSED;
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 	if (!(pte_val(pte) & _PAGE_NO_READ))
 		pte_val(pte) |= _PAGE_SILENT_READ;
-	else
-#endif
-	if (pte_val(pte) & _PAGE_READ)
-		pte_val(pte) |= _PAGE_SILENT_READ;
 	return pte;
 }
 
@@ -542,13 +538,8 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
 {
 	pmd_val(pmd) |= _PAGE_ACCESSED;
 
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 	if (!(pmd_val(pmd) & _PAGE_NO_READ))
 		pmd_val(pmd) |= _PAGE_SILENT_READ;
-	else
-#endif
-	if (pmd_val(pmd) & _PAGE_READ)
-		pmd_val(pmd) |= _PAGE_SILENT_READ;
 
 	return pmd;
 }
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 86aa7c2..7e3272f 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -234,20 +234,16 @@ static void output_pgtable_bits_defines(void)
 	pr_debug("\n");
 
 	pr_define("_PAGE_PRESENT_SHIFT %d\n", _PAGE_PRESENT_SHIFT);
-	pr_define("_PAGE_READ_SHIFT %d\n", _PAGE_READ_SHIFT);
+	pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
 	pr_define("_PAGE_WRITE_SHIFT %d\n", _PAGE_WRITE_SHIFT);
 	pr_define("_PAGE_ACCESSED_SHIFT %d\n", _PAGE_ACCESSED_SHIFT);
 	pr_define("_PAGE_MODIFIED_SHIFT %d\n", _PAGE_MODIFIED_SHIFT);
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 	pr_define("_PAGE_HUGE_SHIFT %d\n", _PAGE_HUGE_SHIFT);
 #endif
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-	if (cpu_has_rixi) {
 #ifdef _PAGE_NO_EXEC_SHIFT
+	if (cpu_has_rixi)
 		pr_define("_PAGE_NO_EXEC_SHIFT %d\n", _PAGE_NO_EXEC_SHIFT);
-		pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
-#endif
-	}
 #endif
 	pr_define("_PAGE_GLOBAL_SHIFT %d\n", _PAGE_GLOBAL_SHIFT);
 	pr_define("_PAGE_VALID_SHIFT %d\n", _PAGE_VALID_SHIFT);
@@ -1615,9 +1611,8 @@ build_pte_present(u32 **p, struct uasm_reloc **r,
 			cur = t;
 		}
 		uasm_i_andi(p, t, cur,
-			(_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT);
-		uasm_i_xori(p, t, t,
-			(_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT);
+			(_PAGE_PRESENT | _PAGE_NO_READ) >> _PAGE_PRESENT_SHIFT);
+		uasm_i_xori(p, t, t, _PAGE_PRESENT >> _PAGE_PRESENT_SHIFT);
 		uasm_il_bnez(p, r, t, lid);
 		if (pte == t)
 			/* You lose the SMP race :-(*/
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 06/13] MIPS: mm: Unify pte_page definition
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

The same definition for pte_page is duplicated for the MIPS32
PHYS_ADDR_T_64BIT case & the generic case. Unify them by moving a single
definition outside of preprocessor conditionals.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
---

Changes in v3: None
Changes in v2: None

 arch/mips/include/asm/pgtable-32.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 832e216..181bd8e 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -104,7 +104,7 @@ static inline void pmd_clear(pmd_t *pmdp)
 }
 
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
-#define pte_page(x)		pfn_to_page(pte_pfn(x))
+
 #define pte_pfn(x)		(((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
 static inline pte_t
 pfn_pte(unsigned long pfn, pgprot_t prot)
@@ -120,8 +120,6 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 
 #else
 
-#define pte_page(x)		pfn_to_page(pte_pfn(x))
-
 #ifdef CONFIG_CPU_VR41XX
 #define pte_pfn(x)		((unsigned long)((x).pte >> (PAGE_SHIFT + 2)))
 #define pfn_pte(pfn, prot)	__pte(((pfn) << (PAGE_SHIFT + 2)) | pgprot_val(prot))
@@ -131,6 +129,8 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 #endif
 #endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
 
+#define pte_page(x)		pfn_to_page(pte_pfn(x))
+
 #define __pgd_offset(address)	pgd_index(address)
 #define __pud_offset(address)	(((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
 #define __pmd_offset(address)	(((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 07/13] MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic)
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, # v4 . 1+,
	David Daney, Huacai Chen, Maciej W. Rozycki, Paul Gortmaker,
	Aneesh Kumar K.V, linux-kernel, Peter Zijlstra (Intel),
	David Hildenbrand, Andrew Morton, Markos Chandras, Ingo Molnar,
	Alex Smith, Kirill A. Shutemov

There are 2 distinct cases in which a kernel for a MIPS32 CPU
(CONFIG_CPU_MIPS32=y) may use 64 bit physical addresses
(CONFIG_PHYS_ADDR_T_64BIT=y):

  - 36 bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR
    CPUs.

  - MIPS32r5 eXtended Physical Addressing (XPA).

These 2 cases are distinct in that they require different behaviour from
the kernel - the EntryLo registers have different formats. Until Linux
v4.1 we only supported the first case, with code conditional upon the 2
aforementioned Kconfig variables being set. Commit c5b367835cfc ("MIPS:
Add support for XPA.") added support for the second case, but did so by
modifying the code that existed for the first case rather than treating
the 2 cases as distinct. Since the EntryLo registers have different
formats this breaks the 36 bit Alchemy/XLP/XLR case. Fix this by
splitting the 2 cases, with the XPA case now being conditional upon
CONFIG_XPA and the non-XPA case matching the code as it existed prior to
commit c5b367835cfc ("MIPS: Add support for XPA.").
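
As a rough illustration of the non-XPA layout this restores, here is a
sketch based on the pgtable-32.h & init.c hunks below (make_io_pte() and
"pa" are illustrative names only, not part of the patch):

  /* Build a PTE for a 36 bit physical address on Alchemy/XLP/XLR. */
  static inline pte_t make_io_pte(phys_addr_t pa, pgprot_t prot)
  {
  	unsigned long pfn = pa >> PAGE_SHIFT;

  	/*
  	 * The non-XPA pfn_pte() below places (pfn << 6) together with
  	 * the low 6 hardware bits of the protection value in pte_high,
  	 * which is exactly what gets written to EntryLo* (see the
  	 * init.c hunk), while pte_low carries the full protection value.
  	 */
  	return pfn_pte(pfn, prot);
  }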

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: Manuel Lauss <manuel.lauss@gmail.com>
Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Cc: <stable@vger.kernel.org> # v4.1+

---

Changes in v3: None
Changes in v2:
- Catch some extra pte_low manipulations (thanks James!).

 arch/mips/include/asm/pgtable-32.h   | 27 +++++++++++++++--
 arch/mips/include/asm/pgtable-bits.h | 29 +++++++++++++++---
 arch/mips/include/asm/pgtable.h      | 57 +++++++++++++++++++++++++++++++-----
 arch/mips/mm/init.c                  |  4 ++-
 arch/mips/mm/tlbex.c                 | 35 ++++++++++++++--------
 5 files changed, 125 insertions(+), 27 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 181bd8e..d21f3da 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -103,7 +103,7 @@ static inline void pmd_clear(pmd_t *pmdp)
 	pmd_val(*pmdp) = ((unsigned long) invalid_pte_table);
 }
 
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
 
 #define pte_pfn(x)		(((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
 static inline pte_t
@@ -118,6 +118,20 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 	return pte;
 }
 
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+
+#define pte_pfn(x)		((unsigned long)((x).pte_high >> 6))
+
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte;
+
+	pte.pte_high = (pfn << 6) | (pgprot_val(prot) & 0x3f);
+	pte.pte_low = pgprot_val(prot);
+
+	return pte;
+}
+
 #else
 
 #ifdef CONFIG_CPU_VR41XX
@@ -166,7 +180,7 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 
 #else
 
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
 
 /* Swap entries must have VALID and GLOBAL bits cleared. */
 #define __swp_type(x)			(((x).val >> 4) & 0x1f)
@@ -175,6 +189,15 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 #define __pte_to_swp_entry(pte)		((swp_entry_t) { (pte).pte_high })
 #define __swp_entry_to_pte(x)		((pte_t) { 0, (x).val })
 
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+
+/* Swap entries must have VALID and GLOBAL bits cleared. */
+#define __swp_type(x)			(((x).val >> 2) & 0x1f)
+#define __swp_offset(x)			 ((x).val >> 7)
+#define __swp_entry(type, offset)	((swp_entry_t)  { ((type) << 2) | ((offset) << 7) })
+#define __pte_to_swp_entry(pte)		((swp_entry_t) { (pte).pte_high })
+#define __swp_entry_to_pte(x)		((pte_t) { 0, (x).val })
+
 #else
 /*
  * Constraints:
diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 5bc663d..58e8bf8 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -32,11 +32,11 @@
  * unpredictable things.  The code (when it is written) to deal with
  * this problem will be in the update_mmu_cache() code for the r4k.
  */
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
 
 /*
- * Page table bit offsets used for 64 bit physical addressing on MIPS32,
- * for example with Alchemy, Netlogic XLP/XLR or XPA.
+ * Page table bit offsets used for 64 bit physical addressing on
+ * MIPS32r5 with XPA.
  */
 enum pgtable_bits {
 	/* Used by TLB hardware (placed in EntryLo*) */
@@ -59,6 +59,27 @@ enum pgtable_bits {
  */
 #define _PFNX_MASK		0xffffff
 
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+
+/*
+ * Page table bit offsets used for 36 bit physical addressing on MIPS32,
+ * for example with Alchemy or Netlogic XLP/XLR.
+ */
+enum pgtable_bits {
+	/* Used by TLB hardware (placed in EntryLo*) */
+	_PAGE_GLOBAL_SHIFT,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_SHIFT,
+
+	/* Used only by software (masked out before writing EntryLo*) */
+	_PAGE_PRESENT_SHIFT = _CACHE_SHIFT + 3,
+	_PAGE_NO_READ_SHIFT,
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+};
+
 #elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
 /* Page table bits used for r3k systems */
@@ -116,7 +137,7 @@ enum pgtable_bits {
 #endif
 
 /* Used by TLB hardware (placed in EntryLo*) */
-#if (defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32))
+#if defined(CONFIG_XPA)
 # define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
 #elif defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 # define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 1459ee9..3822d7d 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -130,7 +130,12 @@ do {									\
 
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 
-#define pte_none(pte)		(!(((pte).pte_high) & ~_PAGE_GLOBAL))
+#ifdef CONFIG_XPA
+# define pte_none(pte)		(!(((pte).pte_high) & ~_PAGE_GLOBAL))
+#else
+# define pte_none(pte)		(!(((pte).pte_low | (pte).pte_high) & ~_PAGE_GLOBAL))
+#endif
+
 #define pte_present(pte)	((pte).pte_low & _PAGE_PRESENT)
 
 static inline void set_pte(pte_t *ptep, pte_t pte)
@@ -139,14 +144,21 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
 	smp_wmb();
 	ptep->pte_low = pte.pte_low;
 
+#ifdef CONFIG_XPA
 	if (pte.pte_high & _PAGE_GLOBAL) {
+#else
+	if (pte.pte_low & _PAGE_GLOBAL) {
+#endif
 		pte_t *buddy = ptep_buddy(ptep);
 		/*
 		 * Make sure the buddy is global too (if it's !none,
 		 * it better already be global)
 		 */
-		if (pte_none(*buddy))
+		if (pte_none(*buddy)) {
+			if (!config_enabled(CONFIG_XPA))
+				buddy->pte_low |= _PAGE_GLOBAL;
 			buddy->pte_high |= _PAGE_GLOBAL;
+		}
 	}
 }
 #define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
@@ -157,8 +169,13 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 
 	htw_stop();
 	/* Preserve global status for the pair */
-	if (ptep_buddy(ptep)->pte_high & _PAGE_GLOBAL)
-		null.pte_high = _PAGE_GLOBAL;
+	if (config_enabled(CONFIG_XPA)) {
+		if (ptep_buddy(ptep)->pte_high & _PAGE_GLOBAL)
+			null.pte_high = _PAGE_GLOBAL;
+	} else {
+		if (ptep_buddy(ptep)->pte_low & _PAGE_GLOBAL)
+			null.pte_low = null.pte_high = _PAGE_GLOBAL;
+	}
 
 	set_pte_at(mm, addr, ptep, null);
 	htw_start();
@@ -271,6 +288,8 @@ static inline int pte_young(pte_t pte)	{ return pte.pte_low & _PAGE_ACCESSED; }
 static inline pte_t pte_wrprotect(pte_t pte)
 {
 	pte.pte_low  &= ~_PAGE_WRITE;
+	if (!config_enabled(CONFIG_XPA))
+		pte.pte_low &= ~_PAGE_SILENT_WRITE;
 	pte.pte_high &= ~_PAGE_SILENT_WRITE;
 	return pte;
 }
@@ -278,6 +297,8 @@ static inline pte_t pte_wrprotect(pte_t pte)
 static inline pte_t pte_mkclean(pte_t pte)
 {
 	pte.pte_low  &= ~_PAGE_MODIFIED;
+	if (!config_enabled(CONFIG_XPA))
+		pte.pte_low &= ~_PAGE_SILENT_WRITE;
 	pte.pte_high &= ~_PAGE_SILENT_WRITE;
 	return pte;
 }
@@ -285,6 +306,8 @@ static inline pte_t pte_mkclean(pte_t pte)
 static inline pte_t pte_mkold(pte_t pte)
 {
 	pte.pte_low  &= ~_PAGE_ACCESSED;
+	if (!config_enabled(CONFIG_XPA))
+		pte.pte_low &= ~_PAGE_SILENT_READ;
 	pte.pte_high &= ~_PAGE_SILENT_READ;
 	return pte;
 }
@@ -292,24 +315,33 @@ static inline pte_t pte_mkold(pte_t pte)
 static inline pte_t pte_mkwrite(pte_t pte)
 {
 	pte.pte_low |= _PAGE_WRITE;
-	if (pte.pte_low & _PAGE_MODIFIED)
+	if (pte.pte_low & _PAGE_MODIFIED) {
+		if (!config_enabled(CONFIG_XPA))
+			pte.pte_low |= _PAGE_SILENT_WRITE;
 		pte.pte_high |= _PAGE_SILENT_WRITE;
+	}
 	return pte;
 }
 
 static inline pte_t pte_mkdirty(pte_t pte)
 {
 	pte.pte_low |= _PAGE_MODIFIED;
-	if (pte.pte_low & _PAGE_WRITE)
+	if (pte.pte_low & _PAGE_WRITE) {
+		if (!config_enabled(CONFIG_XPA))
+			pte.pte_low |= _PAGE_SILENT_WRITE;
 		pte.pte_high |= _PAGE_SILENT_WRITE;
+	}
 	return pte;
 }
 
 static inline pte_t pte_mkyoung(pte_t pte)
 {
 	pte.pte_low |= _PAGE_ACCESSED;
-	if (!(pte.pte_low & _PAGE_NO_READ))
+	if (!(pte.pte_low & _PAGE_NO_READ)) {
+		if (!config_enabled(CONFIG_XPA))
+			pte.pte_low |= _PAGE_SILENT_READ;
 		pte.pte_high |= _PAGE_SILENT_READ;
+	}
 	return pte;
 }
 #else
@@ -407,7 +439,7 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot)
  */
 #define mk_pte(page, pgprot)	pfn_pte(page_to_pfn(page), (pgprot))
 
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	pte.pte_low  &= (_PAGE_MODIFIED | _PAGE_ACCESSED | _PFNX_MASK);
@@ -416,6 +448,15 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	pte.pte_high |= pgprot_val(newprot) & ~_PFN_MASK;
 	return pte;
 }
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	pte.pte_low  &= _PAGE_CHG_MASK;
+	pte.pte_high &= (_PFN_MASK | _CACHE_MASK);
+	pte.pte_low  |= pgprot_val(newprot);
+	pte.pte_high |= pgprot_val(newprot) & ~(_PFN_MASK | _CACHE_MASK);
+	return pte;
+}
 #else
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 7e5fa09..0e57893 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -98,8 +98,10 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	idx += in_interrupt() ? FIX_N_COLOURS : 0;
 	vaddr = __fix_to_virt(FIX_CMAP_END - idx);
 	pte = mk_pte(page, prot);
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
 	entrylo = pte_to_entrylo(pte.pte_high);
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+	entrylo = pte.pte_high;
 #else
 	entrylo = pte_to_entrylo(pte_val(pte));
 #endif
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 7e3272f..ceaee32 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1003,25 +1003,21 @@ static void build_get_ptep(u32 **p, unsigned int tmp, unsigned int ptr)
 
 static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 {
-	/*
-	 * 64bit address support (36bit on a 32bit CPU) in a 32bit
-	 * Kernel is a special case. Only a few CPUs use it.
-	 */
-	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) && !cpu_has_64bits) {
+	if (config_enabled(CONFIG_XPA)) {
 		int pte_off_even = sizeof(pte_t) / 2;
 		int pte_off_odd = pte_off_even + sizeof(pte_t);
-#ifdef CONFIG_XPA
 		const int scratch = 1; /* Our extra working register */
 
 		uasm_i_addu(p, scratch, 0, ptep);
-#endif
+
 		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
-		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
-		UASM_i_ROTR(p, ptep, ptep, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
+
+		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
+		UASM_i_ROTR(p, ptep, ptep, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
-#ifdef CONFIG_XPA
+
 		uasm_i_lw(p, tmp, 0, scratch);
 		uasm_i_lw(p, ptep, sizeof(pte_t), scratch);
 		uasm_i_lui(p, scratch, 0xff);
@@ -1030,7 +1026,22 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 		uasm_i_and(p, ptep, scratch, ptep);
 		uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
 		uasm_i_mthc0(p, ptep, C0_ENTRYLO1);
-#endif
+		return;
+	}
+
+	/*
+	 * 64bit address support (36bit on a 32bit CPU) in a 32bit
+	 * Kernel is a special case. Only a few CPUs use it.
+	 */
+	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) && !cpu_has_64bits) {
+		int pte_off_even = sizeof(pte_t) / 2;
+		int pte_off_odd = pte_off_even + sizeof(pte_t);
+
+		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
+		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
+
+		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
+		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
 		return;
 	}
 
@@ -1524,7 +1535,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 #ifdef CONFIG_PHYS_ADDR_T_64BIT
 	unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
 
-	if (!cpu_has_64bits) {
+	if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
 		const int scratch = 1; /* Our extra working register */
 
 		uasm_i_lui(p, scratch, (mode >> 16));
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 08/13] MIPS: mm: Don't clobber $1 on XPA TLB refill
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

From: James Hogan <james.hogan@imgtec.com>

For XPA kernels build_update_entries() uses $1 (at) as a scratch
register, but doesn't arrange for it to be preserved, so it will always
be clobbered by the TLB refill exception. Although this register
normally has a very short lifetime that doesn't cross memory accesses,
TLB refills due to instruction fetches (either on a page boundary or
after preemption) could clobber live data, and it's easy to reproduce
the clobber with a little bit of assembler code.

Note that the use of a hardware page table walker will partly mask the
problem, as the TLB refill handler will not always be invoked.

This is fixed by avoiding the use of the extra scratch register. The
pte_high parts (going into the lower half of the EntryLo registers) are
loaded and manipulated separately so as to keep the PTE pointer around
for the other halves (instead of storing it in the scratch register), and
the pte_low parts (going into the high half of the EntryLo registers)
are masked with 0x00ffffff using an ext instruction (instead of loading
0x00ffffff into the scratch register and AND'ing).
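
In C terms the new masking computes the following (a sketch only, not
part of the patch; the helper name is made up for illustration):

  static inline u32 entrylo_high_half(u32 pte_low)
  {
  	/*
  	 * "ext tmp, tmp, 0, 24" extracts bits 0..23, giving the same
  	 * result as the old lui/ori/and sequence with 0x00ffffff but
  	 * without needing a scratch register to hold the constant.
  	 */
  	return pte_low & 0x00ffffff;
  }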

[paul.burton@imgtec.com:
  - Rebase atop other TLB work.
  - Use ext instead of an sll, srl sequence.
  - Use cpu_has_xpa instead of #ifdefs.
  - Modify commit subject to include "mm".]

Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
---

Changes in v3: None
Changes in v2: None

 arch/mips/mm/tlbex.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index ceaee32..004cd9f 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1006,26 +1006,22 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 	if (config_enabled(CONFIG_XPA)) {
 		int pte_off_even = sizeof(pte_t) / 2;
 		int pte_off_odd = pte_off_even + sizeof(pte_t);
-		const int scratch = 1; /* Our extra working register */
-
-		uasm_i_addu(p, scratch, 0, ptep);
 
 		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
 
-		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
-		UASM_i_ROTR(p, ptep, ptep, ilog2(_PAGE_GLOBAL));
-		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
-
-		uasm_i_lw(p, tmp, 0, scratch);
-		uasm_i_lw(p, ptep, sizeof(pte_t), scratch);
-		uasm_i_lui(p, scratch, 0xff);
-		uasm_i_ori(p, scratch, scratch, 0xffff);
-		uasm_i_and(p, tmp, scratch, tmp);
-		uasm_i_and(p, ptep, scratch, ptep);
+		uasm_i_lw(p, tmp, 0, ptep);
+		uasm_i_ext(p, tmp, tmp, 0, 24);
 		uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
-		uasm_i_mthc0(p, ptep, C0_ENTRYLO1);
+
+		uasm_i_lw(p, tmp, pte_off_odd, ptep); /* odd pte */
+		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
+		UASM_i_MTC0(p, tmp, C0_ENTRYLO1);
+
+		uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
+		uasm_i_ext(p, tmp, tmp, 0, 24);
+		uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
 		return;
 	}
 
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 09/13] MIPS: mm: Pass scratch register through to iPTE_SW
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

Rather than hardcode a scratch register for the XPA case in iPTE_SW,
pass one through from the work registers allocated by the caller. This
allows the XPA path to function correctly regardless of the work
registers in use.

Without doing this there are cases (where KScratch registers are
unavailable) in which iPTE_SW will incorrectly clobber $1 despite it
already being in use for the PTE or PTE pointer.
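
The resulting calling convention, summarised from the hunks below: the
R4000-style handlers pass their third work register, while the
R3000-style handlers pass -1, since the scratch register is only needed
on the XPA path which they are not expected to take:

  build_make_valid(&p, &r, wr.r1, wr.r2, wr.r3);	/* r4k handlers */
  build_make_write(&p, &r, K0, K1, -1);			/* r3k handlers */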

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>

---

Changes in v3: None
Changes in v2:
- Mention $1 clobbering.

 arch/mips/mm/tlbex.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 004cd9f..d7a7b3d 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1526,14 +1526,12 @@ iPTE_LW(u32 **p, unsigned int pte, unsigned int ptr)
 
 static void
 iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
-	unsigned int mode)
+	unsigned int mode, unsigned int scratch)
 {
 #ifdef CONFIG_PHYS_ADDR_T_64BIT
 	unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
 
 	if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
-		const int scratch = 1; /* Our extra working register */
-
 		uasm_i_lui(p, scratch, (mode >> 16));
 		uasm_i_or(p, pte, pte, scratch);
 	} else
@@ -1630,11 +1628,11 @@ build_pte_present(u32 **p, struct uasm_reloc **r,
 /* Make PTE valid, store result in PTR. */
 static void
 build_make_valid(u32 **p, struct uasm_reloc **r, unsigned int pte,
-		 unsigned int ptr)
+		 unsigned int ptr, unsigned int scratch)
 {
 	unsigned int mode = _PAGE_VALID | _PAGE_ACCESSED;
 
-	iPTE_SW(p, r, pte, ptr, mode);
+	iPTE_SW(p, r, pte, ptr, mode, scratch);
 }
 
 /*
@@ -1670,12 +1668,12 @@ build_pte_writable(u32 **p, struct uasm_reloc **r,
  */
 static void
 build_make_write(u32 **p, struct uasm_reloc **r, unsigned int pte,
-		 unsigned int ptr)
+		 unsigned int ptr, unsigned int scratch)
 {
 	unsigned int mode = (_PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID
 			     | _PAGE_DIRTY);
 
-	iPTE_SW(p, r, pte, ptr, mode);
+	iPTE_SW(p, r, pte, ptr, mode, scratch);
 }
 
 /*
@@ -1780,7 +1778,7 @@ static void build_r3000_tlb_load_handler(void)
 	build_r3000_tlbchange_handler_head(&p, K0, K1);
 	build_pte_present(&p, &r, K0, K1, -1, label_nopage_tlbl);
 	uasm_i_nop(&p); /* load delay */
-	build_make_valid(&p, &r, K0, K1);
+	build_make_valid(&p, &r, K0, K1, -1);
 	build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
 
 	uasm_l_nopage_tlbl(&l, p);
@@ -1811,7 +1809,7 @@ static void build_r3000_tlb_store_handler(void)
 	build_r3000_tlbchange_handler_head(&p, K0, K1);
 	build_pte_writable(&p, &r, K0, K1, -1, label_nopage_tlbs);
 	uasm_i_nop(&p); /* load delay */
-	build_make_write(&p, &r, K0, K1);
+	build_make_write(&p, &r, K0, K1, -1);
 	build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
 
 	uasm_l_nopage_tlbs(&l, p);
@@ -1842,7 +1840,7 @@ static void build_r3000_tlb_modify_handler(void)
 	build_r3000_tlbchange_handler_head(&p, K0, K1);
 	build_pte_modifiable(&p, &r, K0, K1,  -1, label_nopage_tlbm);
 	uasm_i_nop(&p); /* load delay */
-	build_make_write(&p, &r, K0, K1);
+	build_make_write(&p, &r, K0, K1, -1);
 	build_r3000_pte_reload_tlbwi(&p, K0, K1);
 
 	uasm_l_nopage_tlbm(&l, p);
@@ -2010,7 +2008,7 @@ static void build_r4000_tlb_load_handler(void)
 		}
 		uasm_l_tlbl_goaround1(&l, p);
 	}
-	build_make_valid(&p, &r, wr.r1, wr.r2);
+	build_make_valid(&p, &r, wr.r1, wr.r2, wr.r3);
 	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2124,7 +2122,7 @@ static void build_r4000_tlb_store_handler(void)
 	build_pte_writable(&p, &r, wr.r1, wr.r2, wr.r3, label_nopage_tlbs);
 	if (m4kc_tlbp_war())
 		build_tlb_probe_entry(&p);
-	build_make_write(&p, &r, wr.r1, wr.r2);
+	build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
 	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2180,7 +2178,7 @@ static void build_r4000_tlb_modify_handler(void)
 	if (m4kc_tlbp_war())
 		build_tlb_probe_entry(&p);
 	/* Present and writable bits set, set accessed and dirty bits. */
-	build_make_write(&p, &r, wr.r1, wr.r2);
+	build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
 	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 10/13] MIPS: mm: Be more explicit about PTE mode bit handling
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

The XPA case in iPTE_SW ORs software mode bits into the pte_low value
(which is what actually ends up in the high 32 bits of EntryLo...). It
does this presuming that only bits in the upper 16 bits of the 32 bit
pte_low value will be set. Make this assumption explicit with a BUG_ON.

A similar assumption is made for the hardware mode bits, which are ORed
in with a single ori instruction. Make that assumption explicit with a
BUG_ON too.
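
Put differently, as a sketch of the reasoning (mirroring the variables in
the hunk below): lui can only set bits 16..31 and ori can only set bits
0..15, so each half of "mode" must stay within reach of its single
instruction:

  unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY); /* via ori */
  unsigned int swmode = mode & ~hwmode;                      /* via lui + or */

  BUG_ON(swmode & 0xffff);	/* no software mode bits below bit 16 */
  BUG_ON(hwmode & ~0xffff);	/* no hardware mode bits above bit 15 */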

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

Changes in v3: None
Changes in v2: None

 arch/mips/mm/tlbex.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index d7a7b3d..0bd3755 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1528,15 +1528,17 @@ static void
 iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 	unsigned int mode, unsigned int scratch)
 {
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
 	unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
+	unsigned int swmode = mode & ~hwmode;
 
 	if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
-		uasm_i_lui(p, scratch, (mode >> 16));
+		uasm_i_lui(p, scratch, swmode >> 16);
 		uasm_i_or(p, pte, pte, scratch);
-	} else
-#endif
-	uasm_i_ori(p, pte, pte, mode);
+		BUG_ON(swmode & 0xffff);
+	} else {
+		uasm_i_ori(p, pte, pte, mode);
+	}
+
 #ifdef CONFIG_SMP
 # ifdef CONFIG_PHYS_ADDR_T_64BIT
 	if (cpu_has_64bits)
@@ -1555,6 +1557,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 		/* no uasm_i_nop needed */
 		uasm_i_ll(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_i_ori(p, pte, pte, hwmode);
+		BUG_ON(hwmode & ~0xffff);
 		uasm_i_sc(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_il_beqz(p, r, pte, label_smp_pgtable_change);
 		/* no uasm_i_nop needed */
@@ -1576,6 +1579,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 	if (!cpu_has_64bits) {
 		uasm_i_lw(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_i_ori(p, pte, pte, hwmode);
+		BUG_ON(hwmode & ~0xffff);
 		uasm_i_sw(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_i_lw(p, pte, 0, ptr);
 	}
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 11/13] MIPS: mm: Simplify build_update_entries
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

We can simplify build_update_entries by unifying the MIPS32 with 36 bit
physical addressing case with the general case: use pte_off_ variables in
all cases & handle the trivial _PAGE_GLOBAL_SHIFT == 0 case in
build_convert_pte_to_entrylo. This leaves XPA as the only special case.
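
For context (my reading of the layouts earlier in this series, not new
code): _PAGE_GLOBAL_SHIFT is 0 in the 36 bit MIPS32 layout, where
_PAGE_GLOBAL_SHIFT is the first enumerator and pte_high is already a
valid EntryLo value:

  enum pgtable_bits {
  	/* Used by TLB hardware (placed in EntryLo*) */
  	_PAGE_GLOBAL_SHIFT,		/* == 0 */
  	_PAGE_VALID_SHIFT,
  	_PAGE_DIRTY_SHIFT,
  	_CACHE_SHIFT,

  	/* Used only by software (masked out before writing EntryLo*) */
  	_PAGE_PRESENT_SHIFT = _CACHE_SHIFT + 3,
  	_PAGE_NO_READ_SHIFT,
  	_PAGE_WRITE_SHIFT,
  	_PAGE_ACCESSED_SHIFT,
  	_PAGE_MODIFIED_SHIFT,
  };

In that configuration build_convert_pte_to_entrylo() can return early
without any rotate or shift.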

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>

---

Changes in v3: None
Changes in v2:
- #ifdef on CONFIG_CPU_MIPS32 instead of config_enabled(CONFIG_32BIT) to avoid referencing pte_high in configurations where it doesn't exist.

 arch/mips/mm/tlbex.c | 37 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 21 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 0bd3755..c7c14bd 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -626,6 +626,11 @@ static void build_tlb_write_entry(u32 **p, struct uasm_label **l,
 static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
 							unsigned int reg)
 {
+	if (_PAGE_GLOBAL_SHIFT == 0) {
+		/* pte_t is already in EntryLo format */
+		return;
+	}
+
 	if (cpu_has_rixi && _PAGE_NO_EXEC) {
 		if (fill_includes_sw_bits) {
 			UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
@@ -1003,10 +1008,16 @@ static void build_get_ptep(u32 **p, unsigned int tmp, unsigned int ptr)
 
 static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 {
-	if (config_enabled(CONFIG_XPA)) {
-		int pte_off_even = sizeof(pte_t) / 2;
-		int pte_off_odd = pte_off_even + sizeof(pte_t);
+	int pte_off_even = 0;
+	int pte_off_odd = sizeof(pte_t);
 
+#if defined(CONFIG_CPU_MIPS32) && defined(CONFIG_PHYS_ADDR_T_64BIT)
+	/* The low 32 bits of EntryLo is stored in pte_high */
+	pte_off_even += offsetof(pte_t, pte_high);
+	pte_off_odd += offsetof(pte_t, pte_high);
+#endif
+
+	if (config_enabled(CONFIG_XPA)) {
 		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
@@ -1025,24 +1036,8 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 		return;
 	}
 
-	/*
-	 * 64bit address support (36bit on a 32bit CPU) in a 32bit
-	 * Kernel is a special case. Only a few CPUs use it.
-	 */
-	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) && !cpu_has_64bits) {
-		int pte_off_even = sizeof(pte_t) / 2;
-		int pte_off_odd = pte_off_even + sizeof(pte_t);
-
-		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
-		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
-
-		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
-		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
-		return;
-	}
-
-	UASM_i_LW(p, tmp, 0, ptep); /* get even pte */
-	UASM_i_LW(p, ptep, sizeof(pte_t), ptep); /* get odd pte */
+	UASM_i_LW(p, tmp, pte_off_even, ptep); /* get even pte */
+	UASM_i_LW(p, ptep, pte_off_odd, ptep); /* get odd pte */
 	if (r45k_bvahwbug())
 		build_tlb_probe_entry(p);
 	build_convert_pte_to_entrylo(p, tmp);
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 12/13] MIPS: mm: Don't do MTHC0 if XPA not present
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, David Hildenbrand,
	linux-kernel, Andrew Morton, Jerome Marchand, Kirill A. Shutemov

From: James Hogan <james.hogan@imgtec.com>

Performing an MTHC0 instruction without XPA being present will trigger a
reserved instruction exception; therefore, conditionalise the use of this
instruction when building TLB handlers (build_update_entries()) and in
__update_tlb().

This allows an XPA kernel to run on non-XPA hardware which does not
implement that instruction, just as it can run on XPA-capable hardware
with XPA unused (via the noxpa kernel argument) or with XPA not
configured in hardware.
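
As a rough, self-contained sketch (not the kernel's uasm code) of the
behaviour this gives the XPA path: the conventional EntryLo value is
derived from pte_high, while the extension bits taken from the low bits
of pte_low are only written, via MTHC0, when the CPU actually implements
XPA (and, in the generated TLB handlers, when XPA hasn't been disabled
with noxpa). The load_entrylo() helper and the 24-bit PFNX width below
are illustrative assumptions, the latter mirroring the
uasm_i_ext(..., 0, 24) in the diff:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PFNX_BITS	24			/* illustrative width */
#define PFNX_MASK	((UINT32_C(1) << PFNX_BITS) - 1)

/* Model one EntryLo update: the MTC0 of the EntryLo-format word always
 * happens; the MTHC0 of the extension bits is gated on XPA, since the
 * instruction is reserved (and would trap) on CPUs without it. */
static void load_entrylo(uint32_t pte_high, uint32_t pte_low, bool has_xpa)
{
	printf("mtc0  EntryLo <- 0x%08" PRIx32 "\n", pte_high);

	if (has_xpa)
		printf("mthc0 EntryLo <- 0x%08" PRIx32 "\n", pte_low & PFNX_MASK);
}

int main(void)
{
	load_entrylo(0x12345678, 0x00abcdef, true);	/* XPA present */
	load_entrylo(0x12345678, 0x00abcdef, false);	/* XPA absent  */
	return 0;
}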

[paul.burton@imgtec.com:
  - Rebase atop other TLB work.
  - Add "mm" to subject.
  - Handle the __kmap_pgprot case.]

Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org

---

Changes in v3:
- Use #ifdef CONFIG_XPA to avoid use of _PFNX_MASK in __kmap_pgprot in configurations where it isn't defined.

Changes in v2: None

 arch/mips/mm/init.c    |  8 +++++---
 arch/mips/mm/tlb-r4k.c |  6 ++++--
 arch/mips/mm/tlbex.c   | 16 ++++++++++------
 3 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 0e57893..307c534 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -112,9 +112,11 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	write_c0_entrylo0(entrylo);
 	write_c0_entrylo1(entrylo);
 #ifdef CONFIG_XPA
-	entrylo = (pte.pte_low & _PFNX_MASK);
-	writex_c0_entrylo0(entrylo);
-	writex_c0_entrylo1(entrylo);
+	if (cpu_has_xpa) {
+		entrylo = (pte.pte_low & _PFNX_MASK);
+		writex_c0_entrylo0(entrylo);
+		writex_c0_entrylo1(entrylo);
+	}
 #endif
 	tlbidx = read_c0_wired();
 	write_c0_wired(tlbidx + 1);
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index c17d762..b99695c 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -336,10 +336,12 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 #ifdef CONFIG_XPA
 		write_c0_entrylo0(pte_to_entrylo(ptep->pte_high));
-		writex_c0_entrylo0(ptep->pte_low & _PFNX_MASK);
+		if (cpu_has_xpa)
+			writex_c0_entrylo0(ptep->pte_low & _PFNX_MASK);
 		ptep++;
 		write_c0_entrylo1(pte_to_entrylo(ptep->pte_high));
-		writex_c0_entrylo1(ptep->pte_low & _PFNX_MASK);
+		if (cpu_has_xpa)
+			writex_c0_entrylo1(ptep->pte_low & _PFNX_MASK);
 #else
 		write_c0_entrylo0(ptep->pte_high);
 		ptep++;
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index c7c14bd..3f1a8a2 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1022,17 +1022,21 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
 
-		uasm_i_lw(p, tmp, 0, ptep);
-		uasm_i_ext(p, tmp, tmp, 0, 24);
-		uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
+		if (cpu_has_xpa && !mips_xpa_disabled) {
+			uasm_i_lw(p, tmp, 0, ptep);
+			uasm_i_ext(p, tmp, tmp, 0, 24);
+			uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
+		}
 
 		uasm_i_lw(p, tmp, pte_off_odd, ptep); /* odd pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO1);
 
-		uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
-		uasm_i_ext(p, tmp, tmp, 0, 24);
-		uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
+		if (cpu_has_xpa && !mips_xpa_disabled) {
+			uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
+			uasm_i_ext(p, tmp, tmp, 0, 24);
+			uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
+		}
 		return;
 	}
 
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 13/13] MIPS: mm: Panic if an XPA kernel is run without RIXI
@ 2016-04-19  8:25   ` Paul Burton
  0 siblings, 0 replies; 28+ messages in thread
From: Paul Burton @ 2016-04-19  8:25 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

XPA kernels hardcode the presence of RIXI - the PTE format & its
handling presume the RI & XI bits. Make this dependence explicit by
panicking if we run on a system that violates it.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>

---

Changes in v3:
- Remove newline in panic() call.

Changes in v2:
- New patch, in response to clarification on patch 5.

 arch/mips/mm/tlbex.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 3f1a8a2..2afc710 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -2395,6 +2395,9 @@ void build_tlb_refill_handler(void)
 	 */
 	static int run_once = 0;
 
+	if (config_enabled(CONFIG_XPA) && !cpu_has_rixi)
+		panic("Kernels supporting XPA currently require CPUs with RIXI");
+
 	output_pgtable_bits_defines();
 	check_pabits();
 
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2016-04-19  8:30 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-19  8:24 [PATCH v3 00/13] TLB/XPA fixes & cleanups Paul Burton
2016-04-19  8:24 ` [PATCH v3 01/13] MIPS: Separate XPA CPU feature into LPA and MVH Paul Burton
2016-04-19  8:25 ` [PATCH v3 02/13] MIPS: Fix HTW config on XPA kernel without LPA enabled Paul Burton
2016-04-19  8:25 ` [PATCH v3 03/13] MIPS: Remove redundant asm/pgtable-bits.h inclusions Paul Burton
2016-04-19  8:25 ` [PATCH v3 04/13] MIPS: Use enums to make asm/pgtable-bits.h readable Paul Burton
2016-04-19  8:25 ` [PATCH v3 05/13] MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ Paul Burton
2016-04-19  8:25 ` [PATCH v3 06/13] MIPS: mm: Unify pte_page definition Paul Burton
2016-04-19  8:25 ` [PATCH v3 07/13] MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic) Paul Burton
2016-04-19  8:25 ` [PATCH v3 08/13] MIPS: mm: Don't clobber $1 on XPA TLB refill Paul Burton
2016-04-19  8:25 ` [PATCH v3 09/13] MIPS: mm: Pass scratch register through to iPTE_SW Paul Burton
2016-04-19  8:25 ` [PATCH v3 10/13] MIPS: mm: Be more explicit about PTE mode bit handling Paul Burton
2016-04-19  8:25 ` [PATCH v3 11/13] MIPS: mm: Simplify build_update_entries Paul Burton
2016-04-19  8:25 ` [PATCH v3 12/13] MIPS: mm: Don't do MTHC0 if XPA not present Paul Burton
2016-04-19  8:25 ` [PATCH v3 13/13] MIPS: mm: Panic if an XPA kernel is run without RIXI Paul Burton
