linux-kernel.vger.kernel.org archive mirror
* [PATCH 01/12] MIPS: Separate XPA CPU feature into LPA and MVH
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 10:36 ` [PATCH 02/12] MIPS: Fix HTW config on XPA kernel without LPA enabled Paul Burton
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, Joshua Kinard,
	linux-kernel, Markos Chandras

From: James Hogan <james.hogan@imgtec.com>

XPA (eXtended Physical Addressing) should be detected as a combination
of two architectural features:
- Large Physical Address (as per Config3.LPA). With XPA this will be set
  on MIPS32r5 cores, but it may also be set for MIPS64r2 cores.
- MTHC0/MFHC0 instructions (as per Config5.MVH). With XPA this will be
  set, but it may also be set in VZ guest context even when Config3.LPA
  in the guest context has been cleared by the hypervisor.

As such, XPA is only usable if both bits are set. Update CPU features to
separate these two features, with cpu_has_xpa requiring both to be set.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/include/asm/cpu-features.h | 8 +++++++-
 arch/mips/include/asm/cpu.h          | 3 ++-
 arch/mips/kernel/cpu-probe.c         | 6 +++---
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index eeec8c8..3993a35 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -142,8 +142,14 @@
 # endif
 #endif
 
+#ifndef cpu_has_lpa
+#define cpu_has_lpa		(cpu_data[0].options & MIPS_CPU_LPA)
+#endif
+#ifndef cpu_has_mvh
+#define cpu_has_mvh		(cpu_data[0].options & MIPS_CPU_MVH)
+#endif
 #ifndef cpu_has_xpa
-#define cpu_has_xpa		(cpu_data[0].options & MIPS_CPU_XPA)
+#define cpu_has_xpa		(cpu_has_lpa && cpu_has_mvh)
 #endif
 #ifndef cpu_has_vtag_icache
 #define cpu_has_vtag_icache	(cpu_data[0].icache.flags & MIPS_CACHE_VTAG)
diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
index a97ca97..32fa061 100644
--- a/arch/mips/include/asm/cpu.h
+++ b/arch/mips/include/asm/cpu.h
@@ -381,13 +381,14 @@ enum cpu_type_enum {
 #define MIPS_CPU_MAAR		0x400000000ull /* MAAR(I) registers are present */
 #define MIPS_CPU_FRE		0x800000000ull /* FRE & UFE bits implemented */
 #define MIPS_CPU_RW_LLB		0x1000000000ull /* LLADDR/LLB writes are allowed */
-#define MIPS_CPU_XPA		0x2000000000ull /* CPU supports Extended Physical Addressing */
+#define MIPS_CPU_LPA		0x2000000000ull /* CPU supports Large Physical Addressing */
 #define MIPS_CPU_CDMM		0x4000000000ull	/* CPU has Common Device Memory Map */
 #define MIPS_CPU_BP_GHIST	0x8000000000ull /* R12K+ Branch Prediction Global History */
 #define MIPS_CPU_SP		0x10000000000ull /* Small (1KB) page support */
 #define MIPS_CPU_FTLB		0x20000000000ull /* CPU has Fixed-page-size TLB */
 #define MIPS_CPU_NAN_LEGACY	0x40000000000ull /* Legacy NaN implemented */
 #define MIPS_CPU_NAN_2008	0x80000000000ull /* 2008 NaN implemented */
+#define MIPS_CPU_MVH		0x100000000000ull /* CPU supports MFHC0/MTHC0 */
 
 /*
  * CPU ASE encodings
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index b725b71..da21fbe 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -685,6 +685,8 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
 		c->options |= MIPS_CPU_VINT;
 	if (config3 & MIPS_CONF3_VEIC)
 		c->options |= MIPS_CPU_VEIC;
+	if (config3 & MIPS_CONF3_LPA)
+		c->options |= MIPS_CPU_LPA;
 	if (config3 & MIPS_CONF3_MT)
 		c->ases |= MIPS_ASE_MIPSMT;
 	if (config3 & MIPS_CONF3_ULRI)
@@ -792,10 +794,8 @@ static inline unsigned int decode_config5(struct cpuinfo_mips *c)
 		c->options |= MIPS_CPU_MAAR;
 	if (config5 & MIPS_CONF5_LLB)
 		c->options |= MIPS_CPU_RW_LLB;
-#ifdef CONFIG_XPA
 	if (config5 & MIPS_CONF5_MVH)
-		c->options |= MIPS_CPU_XPA;
-#endif
+		c->options |= MIPS_CPU_MVH;
 
 	return config5 & MIPS_CONF_M;
 }
-- 
2.8.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 02/12] MIPS: Fix HTW config on XPA kernel without LPA enabled
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
  2016-04-15 10:36 ` [PATCH 01/12] MIPS: Separate XPA CPU feature into LPA and MVH Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 10:36 ` [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions Paul Burton
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Steven J . Hill, Paul Burton, Adam Buchbinder,
	Huacai Chen, Paul Gortmaker, linux-kernel, Andrew Morton

From: James Hogan <james.hogan@imgtec.com>

The hardware page table walker (HTW) configuration is broken on XPA
kernels where XPA couldn't be enabled (either nohtw or the hardware
doesn't support it). This is because the PWSize.PTEW field (PTE width)
was only set to 8 bytes (an extra shift of 1) in config_htw_params() if
PageGrain.ELPA (enable large physical addressing) is set. On an XPA
kernel, though, the size of PTEs is fixed at 8 bytes regardless of whether
XPA could actually be enabled.

Fix the initialisation of this field based on sizeof(pte_t) instead.

Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/mm/tlbex.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 84c6e3f..86aa7c2 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -2311,9 +2311,7 @@ static void config_htw_params(void)
 	if (CONFIG_PGTABLE_LEVELS >= 3)
 		pwsize |= ilog2(PTRS_PER_PMD) << MIPS_PWSIZE_MDW_SHIFT;
 
-	/* If XPA has been enabled, PTEs are 64-bit in size. */
-	if (config_enabled(CONFIG_64BITS) || (read_c0_pagegrain() & PG_ELPA))
-		pwsize |= 1;
+	pwsize |= ilog2(sizeof(pte_t)/4) << MIPS_PWSIZE_PTEW_SHIFT;
 
 	write_c0_pwsize(pwsize);
 
-- 
2.8.0


* [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
  2016-04-15 10:36 ` [PATCH 01/12] MIPS: Separate XPA CPU feature into LPA and MVH Paul Burton
  2016-04-15 10:36 ` [PATCH 02/12] MIPS: Fix HTW config on XPA kernel without LPA enabled Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 19:16   ` James Hogan
  2016-04-15 10:36 ` [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable Paul Burton
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, linux-kernel,
	Jonas Gorski, Markos Chandras, Alex Smith, Kirill A. Shutemov,
	Andrew Morton

asm/pgtable-bits.h is included in 2 assembly files and thus has to
#ifdef around its C code, but nothing it defines is used in either of
the assembly files that include it.

Remove the redundant inclusions such that asm/pgtable-bits.h doesn't
need to #ifdef around C code, for cleanliness & in preparation for
later patches which will add more C.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/include/asm/pgtable-bits.h | 2 --
 arch/mips/kernel/head.S              | 1 -
 arch/mips/kernel/r4k_switch.S        | 1 -
 3 files changed, 4 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 97b3138..2f40312 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -191,7 +191,6 @@
  */
 
 
-#ifndef __ASSEMBLY__
 /*
  * pte_to_entrylo converts a page table entry (PTE) into a Mips
  * entrylo0/1 value.
@@ -218,7 +217,6 @@ static inline uint64_t pte_to_entrylo(unsigned long pte_val)
 
 	return pte_val >> _PAGE_GLOBAL_SHIFT;
 }
-#endif
 
 /*
  * Cache attributes
diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index 4e4cc5b..b8fb0ba 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -21,7 +21,6 @@
 #include <asm/asmmacro.h>
 #include <asm/irqflags.h>
 #include <asm/regdef.h>
-#include <asm/pgtable-bits.h>
 #include <asm/mipsregs.h>
 #include <asm/stackframe.h>
 
diff --git a/arch/mips/kernel/r4k_switch.S b/arch/mips/kernel/r4k_switch.S
index 92cd051..2f0a3b2 100644
--- a/arch/mips/kernel/r4k_switch.S
+++ b/arch/mips/kernel/r4k_switch.S
@@ -15,7 +15,6 @@
 #include <asm/fpregdef.h>
 #include <asm/mipsregs.h>
 #include <asm/asm-offsets.h>
-#include <asm/pgtable-bits.h>
 #include <asm/regdef.h>
 #include <asm/stackframe.h>
 #include <asm/thread_info.h>
-- 
2.8.0


* [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (2 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 20:29   ` James Hogan
  2016-04-15 10:36 ` [PATCH 06/12] MIPS: mm: Unify pte_page definition Paul Burton
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Maciej W. Rozycki, linux-kernel,
	Markos Chandras, Alex Smith, Kirill A. Shutemov

asm/pgtable-bits.h has grown to become an unreadable mess of #ifdef
directives defining bits conditionally upon other bits all at the
preprocessing stage, for no good reason.

Instead of having quite so many #ifdef's, simply use enums to provide
sequential numbering for bit shifts, without having to keep track
manually of what the last bit defined was. Masks are defined separately,
after the shifts, which allows for most of their definitions to be
reused for all systems rather than duplicated.

This patch is not intended to make any behavioural change to the code -
all bits should be used in the same way they were before this patch.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/include/asm/pgtable-bits.h | 189 +++++++++++++++--------------------
 1 file changed, 81 insertions(+), 108 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 2f40312..c81fc17 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -35,36 +35,25 @@
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 
 /*
- * The following bits are implemented by the TLB hardware
+ * Page table bit offsets used for 64 bit physical addressing on MIPS32,
+ * for example with Alchemy, Netlogic XLP/XLR or XPA.
  */
-#define _PAGE_NO_EXEC_SHIFT	0
-#define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
-#define _PAGE_NO_READ_SHIFT	(_PAGE_NO_EXEC_SHIFT + 1)
-#define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_NO_READ_SHIFT + 1)
-#define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
-#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
-#define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
-#define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_SHIFT		(_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_MASK		(7 << _CACHE_SHIFT)
-
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT	(24)
-#define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
-#define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
-
-#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+enum pgtable_bits {
+	/* Used by TLB hardware (placed in EntryLo*) */
+	_PAGE_NO_EXEC_SHIFT,
+	_PAGE_NO_READ_SHIFT,
+	_PAGE_GLOBAL_SHIFT,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_SHIFT,
+
+	/* Used only by software (masked out before writing EntryLo*) */
+	_PAGE_PRESENT_SHIFT = 24,
+	_PAGE_READ_SHIFT,
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+};
 
 /*
  * Bits for extended EntryLo0/EntryLo1 registers
@@ -73,101 +62,85 @@
 
 #elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT	(0)
-#define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
-#define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
+/* Page table bits used for r3k systems */
+enum pgtable_bits {
+	/* Used only by software (writes to EntryLo ignored) */
+	_PAGE_PRESENT_SHIFT,
+	_PAGE_READ_SHIFT,
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+
+	/* Used by TLB hardware (placed in EntryLo) */
+	_PAGE_GLOBAL_SHIFT = 8,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_UNCACHED_SHIFT,
+};
 
-/*
- * The following bits are implemented by the TLB hardware
- */
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_MODIFIED_SHIFT + 4)
-#define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
-#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
-#define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
-#define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_UNCACHED_SHIFT	(_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_UNCACHED		(1 << _CACHE_UNCACHED_SHIFT)
-#define _CACHE_MASK		_CACHE_UNCACHED
+#else
 
-#define _PFN_SHIFT		PAGE_SHIFT
+/* Page table bits used for r4k systems */
+enum pgtable_bits {
+	/* Used only by software (masked out before writing EntryLo*) */
+	_PAGE_PRESENT_SHIFT,
+#if !defined(CONFIG_CPU_MIPSR2) && !defined(CONFIG_CPU_MIPSR6)
+	_PAGE_READ_SHIFT,
+#endif
+	_PAGE_WRITE_SHIFT,
+	_PAGE_ACCESSED_SHIFT,
+	_PAGE_MODIFIED_SHIFT,
+#if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
+	_PAGE_HUGE_SHIFT,
+#endif
 
-#else
-/*
- * Below are the "Normal" R4K cases
- */
+	/* Used by TLB hardware (placed in EntryLo*) */
+#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+	_PAGE_NO_EXEC_SHIFT,
+	_PAGE_NO_READ_SHIFT,
+	_PAGE_READ_SHIFT = _PAGE_NO_READ_SHIFT,
+#endif
+	_PAGE_GLOBAL_SHIFT,
+	_PAGE_VALID_SHIFT,
+	_PAGE_DIRTY_SHIFT,
+	_CACHE_SHIFT,
+};
 
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT	0
+#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
+
+/* Used only by software */
 #define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
-/* R2 or later cores check for RI/XI support to determine _PAGE_READ */
 #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-#define _PAGE_WRITE_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
+# define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
 #else
-#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
+# define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
 #endif
-#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
+#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
 #define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
 #define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
-
 #if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
-/* Huge TLB page */
-#define _PAGE_HUGE_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
-#define _PAGE_HUGE		(1 << _PAGE_HUGE_SHIFT)
-#endif	/* CONFIG_64BIT && CONFIG_MIPS_HUGE_TLB_SUPPORT */
-
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-/* XI - page cannot be executed */
-#ifdef _PAGE_HUGE_SHIFT
-#define _PAGE_NO_EXEC_SHIFT	(_PAGE_HUGE_SHIFT + 1)
-#else
-#define _PAGE_NO_EXEC_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
+# define _PAGE_HUGE		(1 << _PAGE_HUGE_SHIFT)
 #endif
-#define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
-
-/* RI - page cannot be read */
-#define _PAGE_READ_SHIFT	(_PAGE_NO_EXEC_SHIFT + 1)
-#define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
-#define _PAGE_NO_READ_SHIFT	_PAGE_READ_SHIFT
-#define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_READ_SHIFT) : 0)
-#endif	/* defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) */
-
-#if defined(_PAGE_NO_READ_SHIFT)
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_NO_READ_SHIFT + 1)
-#elif defined(_PAGE_HUGE_SHIFT)
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_HUGE_SHIFT + 1)
-#else
-#define _PAGE_GLOBAL_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
+
+/* Used by TLB hardware (placed in EntryLo*) */
+#if (defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32))
+# define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
+# define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
+#elif defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+# define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
+# define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_NO_READ_SHIFT) : 0)
 #endif
 #define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
-
-#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
 #define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
 #define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_SHIFT		(_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_MASK		(7 << _CACHE_SHIFT)
-
-#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
-
-#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
+#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
+# define _CACHE_UNCACHED	(1 << _CACHE_UNCACHED_SHIFT)
+# define _CACHE_MASK		_CACHE_UNCACHED
+# define _PFN_SHIFT		PAGE_SHIFT
+#else
+# define _CACHE_MASK		(7 << _CACHE_SHIFT)
+# define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+#endif
 
 #ifndef _PAGE_NO_EXEC
 #define _PAGE_NO_EXEC		0
-- 
2.8.0


* [PATCH 06/12] MIPS: mm: Unify pte_page definition
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (3 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 23:16   ` James Hogan
  2016-04-15 10:36 ` [PATCH 08/12] MIPS: mm: Don't clobber $1 on XPA TLB refill Paul Burton
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Paul Gortmaker, linux-kernel

The same definition for pte_page is duplicated for the MIPS32
PHYS_ADDR_T_64BIT case & the generic case. Unify them by moving a single
definition outside of preprocessor conditionals.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/include/asm/pgtable-32.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 832e216..181bd8e 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -104,7 +104,7 @@ static inline void pmd_clear(pmd_t *pmdp)
 }
 
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
-#define pte_page(x)		pfn_to_page(pte_pfn(x))
+
 #define pte_pfn(x)		(((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
 static inline pte_t
 pfn_pte(unsigned long pfn, pgprot_t prot)
@@ -120,8 +120,6 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 
 #else
 
-#define pte_page(x)		pfn_to_page(pte_pfn(x))
-
 #ifdef CONFIG_CPU_VR41XX
 #define pte_pfn(x)		((unsigned long)((x).pte >> (PAGE_SHIFT + 2)))
 #define pfn_pte(pfn, prot)	__pte(((pfn) << (PAGE_SHIFT + 2)) | pgprot_val(prot))
@@ -131,6 +129,8 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 #endif
 #endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
 
+#define pte_page(x)		pfn_to_page(pte_pfn(x))
+
 #define __pgd_offset(address)	pgd_index(address)
 #define __pud_offset(address)	(((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
 #define __pmd_offset(address)	(((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
-- 
2.8.0


* [PATCH 08/12] MIPS: mm: Don't clobber $1 on XPA TLB refill
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (4 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 06/12] MIPS: mm: Unify pte_page definition Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 10:36 ` [PATCH 09/12] MIPS: mm: Pass scratch register through to iPTE_SW Paul Burton
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Adam Buchbinder, Huacai Chen,
	Paul Gortmaker, linux-kernel, Kirill A. Shutemov

From: James Hogan <james.hogan@imgtec.com>

For XPA kernels build_update_entries() uses $1 (at) as a scratch
register, but doesn't arrange for it to be preserved, so it will always
be clobbered by the TLB refill exception. Although this register
normally has a very short lifetime that doesn't cross memory accesses,
TLB refills due to instruction fetches (either on a page boundary or
after preemption) could clobber live data, and it's easy to reproduce
the clobber with a little bit of assembler code.

Note that the use of a hardware page table walker will partly mask the
problem, as the TLB refill handler will not always be invoked.

This is fixed by avoiding the use of the extra scratch register. The
pte_high parts (going into the lower half of the EntryLo registers) are
loaded and manipulated separately so as to keep the PTE pointer around
for the other halves (instead of storing in the scratch register), and
the pte_low parts (going into the high half of the EntryLo registers)
are masked with 0x00ffffff using an ext instruction (instead of loading
0x00ffffff into the scratch register and AND'ing).

[paul.burton@imgtec.com:
  - Rebase atop other TLB work.
  - Use ext instead of an sll, srl sequence.
  - Use cpu_has_xpa instead of #ifdefs.
  - Modify commit subject to include "mm".]

Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
---

 arch/mips/mm/tlbex.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index ceaee32..004cd9f 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1006,26 +1006,22 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 	if (config_enabled(CONFIG_XPA)) {
 		int pte_off_even = sizeof(pte_t) / 2;
 		int pte_off_odd = pte_off_even + sizeof(pte_t);
-		const int scratch = 1; /* Our extra working register */
-
-		uasm_i_addu(p, scratch, 0, ptep);
 
 		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
 
-		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
-		UASM_i_ROTR(p, ptep, ptep, ilog2(_PAGE_GLOBAL));
-		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
-
-		uasm_i_lw(p, tmp, 0, scratch);
-		uasm_i_lw(p, ptep, sizeof(pte_t), scratch);
-		uasm_i_lui(p, scratch, 0xff);
-		uasm_i_ori(p, scratch, scratch, 0xffff);
-		uasm_i_and(p, tmp, scratch, tmp);
-		uasm_i_and(p, ptep, scratch, ptep);
+		uasm_i_lw(p, tmp, 0, ptep);
+		uasm_i_ext(p, tmp, tmp, 0, 24);
 		uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
-		uasm_i_mthc0(p, ptep, C0_ENTRYLO1);
+
+		uasm_i_lw(p, tmp, pte_off_odd, ptep); /* odd pte */
+		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
+		UASM_i_MTC0(p, tmp, C0_ENTRYLO1);
+
+		uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
+		uasm_i_ext(p, tmp, tmp, 0, 24);
+		uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
 		return;
 	}
 
-- 
2.8.0


* [PATCH 09/12] MIPS: mm: Pass scratch register through to iPTE_SW
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (5 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 08/12] MIPS: mm: Don't clobber $1 on XPA TLB refill Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 22:28   ` James Hogan
  2016-04-15 10:36 ` [PATCH 10/12] MIPS: mm: Be more explicit about PTE mode bit handling Paul Burton
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Adam Buchbinder, Huacai Chen,
	Paul Gortmaker, linux-kernel, Kirill A. Shutemov

Rather than hardcode a scratch register for the XPA case in iPTE_SW,
pass one through from the work registers allocated by the caller. This
allows for the XPA path to function correctly regardless of the work
registers in use.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/mm/tlbex.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 004cd9f..d7a7b3d 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1526,14 +1526,12 @@ iPTE_LW(u32 **p, unsigned int pte, unsigned int ptr)
 
 static void
 iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
-	unsigned int mode)
+	unsigned int mode, unsigned int scratch)
 {
 #ifdef CONFIG_PHYS_ADDR_T_64BIT
 	unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
 
 	if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
-		const int scratch = 1; /* Our extra working register */
-
 		uasm_i_lui(p, scratch, (mode >> 16));
 		uasm_i_or(p, pte, pte, scratch);
 	} else
@@ -1630,11 +1628,11 @@ build_pte_present(u32 **p, struct uasm_reloc **r,
 /* Make PTE valid, store result in PTR. */
 static void
 build_make_valid(u32 **p, struct uasm_reloc **r, unsigned int pte,
-		 unsigned int ptr)
+		 unsigned int ptr, unsigned int scratch)
 {
 	unsigned int mode = _PAGE_VALID | _PAGE_ACCESSED;
 
-	iPTE_SW(p, r, pte, ptr, mode);
+	iPTE_SW(p, r, pte, ptr, mode, scratch);
 }
 
 /*
@@ -1670,12 +1668,12 @@ build_pte_writable(u32 **p, struct uasm_reloc **r,
  */
 static void
 build_make_write(u32 **p, struct uasm_reloc **r, unsigned int pte,
-		 unsigned int ptr)
+		 unsigned int ptr, unsigned int scratch)
 {
 	unsigned int mode = (_PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID
 			     | _PAGE_DIRTY);
 
-	iPTE_SW(p, r, pte, ptr, mode);
+	iPTE_SW(p, r, pte, ptr, mode, scratch);
 }
 
 /*
@@ -1780,7 +1778,7 @@ static void build_r3000_tlb_load_handler(void)
 	build_r3000_tlbchange_handler_head(&p, K0, K1);
 	build_pte_present(&p, &r, K0, K1, -1, label_nopage_tlbl);
 	uasm_i_nop(&p); /* load delay */
-	build_make_valid(&p, &r, K0, K1);
+	build_make_valid(&p, &r, K0, K1, -1);
 	build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
 
 	uasm_l_nopage_tlbl(&l, p);
@@ -1811,7 +1809,7 @@ static void build_r3000_tlb_store_handler(void)
 	build_r3000_tlbchange_handler_head(&p, K0, K1);
 	build_pte_writable(&p, &r, K0, K1, -1, label_nopage_tlbs);
 	uasm_i_nop(&p); /* load delay */
-	build_make_write(&p, &r, K0, K1);
+	build_make_write(&p, &r, K0, K1, -1);
 	build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
 
 	uasm_l_nopage_tlbs(&l, p);
@@ -1842,7 +1840,7 @@ static void build_r3000_tlb_modify_handler(void)
 	build_r3000_tlbchange_handler_head(&p, K0, K1);
 	build_pte_modifiable(&p, &r, K0, K1,  -1, label_nopage_tlbm);
 	uasm_i_nop(&p); /* load delay */
-	build_make_write(&p, &r, K0, K1);
+	build_make_write(&p, &r, K0, K1, -1);
 	build_r3000_pte_reload_tlbwi(&p, K0, K1);
 
 	uasm_l_nopage_tlbm(&l, p);
@@ -2010,7 +2008,7 @@ static void build_r4000_tlb_load_handler(void)
 		}
 		uasm_l_tlbl_goaround1(&l, p);
 	}
-	build_make_valid(&p, &r, wr.r1, wr.r2);
+	build_make_valid(&p, &r, wr.r1, wr.r2, wr.r3);
 	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2124,7 +2122,7 @@ static void build_r4000_tlb_store_handler(void)
 	build_pte_writable(&p, &r, wr.r1, wr.r2, wr.r3, label_nopage_tlbs);
 	if (m4kc_tlbp_war())
 		build_tlb_probe_entry(&p);
-	build_make_write(&p, &r, wr.r1, wr.r2);
+	build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
 	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2180,7 +2178,7 @@ static void build_r4000_tlb_modify_handler(void)
 	if (m4kc_tlbp_war())
 		build_tlb_probe_entry(&p);
 	/* Present and writable bits set, set accessed and dirty bits. */
-	build_make_write(&p, &r, wr.r1, wr.r2);
+	build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
 	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
-- 
2.8.0


* [PATCH 10/12] MIPS: mm: Be more explicit about PTE mode bit handling
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (6 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 09/12] MIPS: mm: Pass scratch register through to iPTE_SW Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 10:36 ` [PATCH 11/12] MIPS: mm: Simplify build_update_entries Paul Burton
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Adam Buchbinder, Paul Gortmaker,
	linux-kernel, Kirill A. Shutemov

The XPA case in iPTE_SW ORs software mode bits into the pte_low value
(which is what actually ends up in the high 32 bits of EntryLo...). It
does this presuming that only bits in the upper 16 bits of the 32 bit
pte_low value will be set. Make this assumption explicit with a BUG_ON.

A similar assumption is made for the hardware mode bits, which are ORed
in with a single ori instruction. Make that assumption explicit with a
BUG_ON too.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/mm/tlbex.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index d7a7b3d..0bd3755 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1528,15 +1528,17 @@ static void
 iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 	unsigned int mode, unsigned int scratch)
 {
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
 	unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
+	unsigned int swmode = mode & ~hwmode;
 
 	if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
-		uasm_i_lui(p, scratch, (mode >> 16));
+		uasm_i_lui(p, scratch, swmode >> 16);
 		uasm_i_or(p, pte, pte, scratch);
-	} else
-#endif
-	uasm_i_ori(p, pte, pte, mode);
+		BUG_ON(swmode & 0xffff);
+	} else {
+		uasm_i_ori(p, pte, pte, mode);
+	}
+
 #ifdef CONFIG_SMP
 # ifdef CONFIG_PHYS_ADDR_T_64BIT
 	if (cpu_has_64bits)
@@ -1555,6 +1557,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 		/* no uasm_i_nop needed */
 		uasm_i_ll(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_i_ori(p, pte, pte, hwmode);
+		BUG_ON(hwmode & ~0xffff);
 		uasm_i_sc(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_il_beqz(p, r, pte, label_smp_pgtable_change);
 		/* no uasm_i_nop needed */
@@ -1576,6 +1579,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
 	if (!cpu_has_64bits) {
 		uasm_i_lw(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_i_ori(p, pte, pte, hwmode);
+		BUG_ON(hwmode & ~0xffff);
 		uasm_i_sw(p, pte, sizeof(pte_t) / 2, ptr);
 		uasm_i_lw(p, pte, 0, ptr);
 	}
-- 
2.8.0

* [PATCH 11/12] MIPS: mm: Simplify build_update_entries
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (7 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 10/12] MIPS: mm: Be more explicit about PTE mode bit handling Paul Burton
@ 2016-04-15 10:36 ` Paul Burton
  2016-04-15 23:09   ` James Hogan
  2016-04-15 10:37 ` [PATCH 12/12] MIPS: mm: Don't do MTHC0 if XPA not present Paul Burton
  2016-05-10 12:44 ` [PATCH 00/12] TLB/XPA fixes & cleanups Ralf Baechle
  10 siblings, 1 reply; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:36 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Huacai Chen, Paul Gortmaker,
	linux-kernel, Kirill A. Shutemov

We can simplify build_update_entries by unifying the case of 36-bit
physical addressing on a MIPS32 kernel with the general case: use
pte_off_ variables in all cases & handle the trivial
_PAGE_GLOBAL_SHIFT == 0 case in build_convert_pte_to_entrylo. This
leaves XPA as the only special case.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
---

 arch/mips/mm/tlbex.c | 38 +++++++++++++++++---------------------
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 0bd3755..45234ad 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -626,6 +626,11 @@ static void build_tlb_write_entry(u32 **p, struct uasm_label **l,
 static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
 							unsigned int reg)
 {
+	if (_PAGE_GLOBAL_SHIFT == 0) {
+		/* pte_t is already in EntryLo format */
+		return;
+	}
+
 	if (cpu_has_rixi && _PAGE_NO_EXEC) {
 		if (fill_includes_sw_bits) {
 			UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
@@ -1003,10 +1008,17 @@ static void build_get_ptep(u32 **p, unsigned int tmp, unsigned int ptr)
 
 static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 {
-	if (config_enabled(CONFIG_XPA)) {
-		int pte_off_even = sizeof(pte_t) / 2;
-		int pte_off_odd = pte_off_even + sizeof(pte_t);
+	int pte_off_even = 0;
+	int pte_off_odd = sizeof(pte_t);
+
+	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) &&
+	    config_enabled(CONFIG_32BIT)) {
+		/* The low 32 bits of EntryLo is stored in pte_high */
+		pte_off_even += offsetof(pte_t, pte_high);
+		pte_off_odd += offsetof(pte_t, pte_high);
+	}
 
+	if (config_enabled(CONFIG_XPA)) {
 		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
@@ -1025,24 +1037,8 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 		return;
 	}
 
-	/*
-	 * 64bit address support (36bit on a 32bit CPU) in a 32bit
-	 * Kernel is a special case. Only a few CPUs use it.
-	 */
-	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) && !cpu_has_64bits) {
-		int pte_off_even = sizeof(pte_t) / 2;
-		int pte_off_odd = pte_off_even + sizeof(pte_t);
-
-		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
-		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
-
-		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
-		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
-		return;
-	}
-
-	UASM_i_LW(p, tmp, 0, ptep); /* get even pte */
-	UASM_i_LW(p, ptep, sizeof(pte_t), ptep); /* get odd pte */
+	UASM_i_LW(p, tmp, pte_off_even, ptep); /* get even pte */
+	UASM_i_LW(p, ptep, pte_off_odd, ptep); /* get odd pte */
 	if (r45k_bvahwbug())
 		build_tlb_probe_entry(p);
 	build_convert_pte_to_entrylo(p, tmp);
-- 
2.8.0

* [PATCH 12/12] MIPS: mm: Don't do MTHC0 if XPA not present
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (8 preceding siblings ...)
  2016-04-15 10:36 ` [PATCH 11/12] MIPS: mm: Simplify build_update_entries Paul Burton
@ 2016-04-15 10:37 ` Paul Burton
  2016-05-10 12:44 ` [PATCH 00/12] TLB/XPA fixes & cleanups Ralf Baechle
  10 siblings, 0 replies; 21+ messages in thread
From: Paul Burton @ 2016-04-15 10:37 UTC (permalink / raw)
  To: linux-mips, Ralf Baechle
  Cc: James Hogan, Paul Burton, Adam Buchbinder, Huacai Chen,
	Paul Gortmaker, linux-kernel, Kirill A. Shutemov,
	David Hildenbrand

From: James Hogan <james.hogan@imgtec.com>

Executing an MTHC0 instruction without XPA being present will trigger a
reserved instruction exception, so conditionalise the use of this
instruction when building TLB handlers (build_update_entries()), and in
__update_tlb().

This allows an XPA kernel to run on non-XPA hardware without that
instruction implemented, just like it can run on XPA-capable hardware
without XPA in use (with the noxpa kernel argument) or with XPA not
configured in hardware.

[paul.burton@imgtec.com:
  - Rebase atop other TLB work.
  - Add "mm" to subject.
  - Handle the __kmap_pgprot case.]

Fixes: c5b367835cfc ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
---

 arch/mips/mm/init.c    | 12 +++++++-----
 arch/mips/mm/tlb-r4k.c |  6 ++++--
 arch/mips/mm/tlbex.c   | 16 ++++++++++------
 3 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 0e57893..1588409 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -111,11 +111,13 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	write_c0_entryhi(vaddr & (PAGE_MASK << 1));
 	write_c0_entrylo0(entrylo);
 	write_c0_entrylo1(entrylo);
-#ifdef CONFIG_XPA
-	entrylo = (pte.pte_low & _PFNX_MASK);
-	writex_c0_entrylo0(entrylo);
-	writex_c0_entrylo1(entrylo);
-#endif
+
+	if (config_enabled(CONFIG_XPA) && cpu_has_xpa) {
+		entrylo = (pte.pte_low & _PFNX_MASK);
+		writex_c0_entrylo0(entrylo);
+		writex_c0_entrylo1(entrylo);
+	}
+
 	tlbidx = read_c0_wired();
 	write_c0_wired(tlbidx + 1);
 	write_c0_index(tlbidx);
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index c17d762..b99695c 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -336,10 +336,12 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 #ifdef CONFIG_XPA
 		write_c0_entrylo0(pte_to_entrylo(ptep->pte_high));
-		writex_c0_entrylo0(ptep->pte_low & _PFNX_MASK);
+		if (cpu_has_xpa)
+			writex_c0_entrylo0(ptep->pte_low & _PFNX_MASK);
 		ptep++;
 		write_c0_entrylo1(pte_to_entrylo(ptep->pte_high));
-		writex_c0_entrylo1(ptep->pte_low & _PFNX_MASK);
+		if (cpu_has_xpa)
+			writex_c0_entrylo1(ptep->pte_low & _PFNX_MASK);
 #else
 		write_c0_entrylo0(ptep->pte_high);
 		ptep++;
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 45234ad..4b41859 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1023,17 +1023,21 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
 
-		uasm_i_lw(p, tmp, 0, ptep);
-		uasm_i_ext(p, tmp, tmp, 0, 24);
-		uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
+		if (cpu_has_xpa && !mips_xpa_disabled) {
+			uasm_i_lw(p, tmp, 0, ptep);
+			uasm_i_ext(p, tmp, tmp, 0, 24);
+			uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
+		}
 
 		uasm_i_lw(p, tmp, pte_off_odd, ptep); /* odd pte */
 		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
 		UASM_i_MTC0(p, tmp, C0_ENTRYLO1);
 
-		uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
-		uasm_i_ext(p, tmp, tmp, 0, 24);
-		uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
+		if (cpu_has_xpa && !mips_xpa_disabled) {
+			uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
+			uasm_i_ext(p, tmp, tmp, 0, 24);
+			uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
+		}
 		return;
 	}
 
-- 
2.8.0

* Re: [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions
  2016-04-15 10:36 ` [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions Paul Burton
@ 2016-04-15 19:16   ` James Hogan
  2016-04-15 21:19     ` Paul Burton
  0 siblings, 1 reply; 21+ messages in thread
From: James Hogan @ 2016-04-15 19:16 UTC (permalink / raw)
  To: Paul Burton
  Cc: linux-mips, Ralf Baechle, Maciej W. Rozycki, linux-kernel,
	Jonas Gorski, Markos Chandras, Alex Smith, Kirill A. Shutemov,
	Andrew Morton

On Fri, Apr 15, 2016 at 11:36:51AM +0100, Paul Burton wrote:
> asm/pgtable-bits.h is included in 2 assembly files and thus has to
> in either of the assembly files that include it.

That could do with rewording :-)

Otherwise
Reviewed-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

> 
> Remove the redundant inclusions such that asm/pgtable-bits.h doesn't
> need to #ifdef around C code, for cleanliness & and in preparation for
> later patches which will add more C.
> 
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> ---
> 
>  arch/mips/include/asm/pgtable-bits.h | 2 --
>  arch/mips/kernel/head.S              | 1 -
>  arch/mips/kernel/r4k_switch.S        | 1 -
>  3 files changed, 4 deletions(-)
> 
> diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
> index 97b3138..2f40312 100644
> --- a/arch/mips/include/asm/pgtable-bits.h
> +++ b/arch/mips/include/asm/pgtable-bits.h
> @@ -191,7 +191,6 @@
>   */
>  
>  
> -#ifndef __ASSEMBLY__
>  /*
>   * pte_to_entrylo converts a page table entry (PTE) into a Mips
>   * entrylo0/1 value.
> @@ -218,7 +217,6 @@ static inline uint64_t pte_to_entrylo(unsigned long pte_val)
>  
>  	return pte_val >> _PAGE_GLOBAL_SHIFT;
>  }
> -#endif
>  
>  /*
>   * Cache attributes
> diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
> index 4e4cc5b..b8fb0ba 100644
> --- a/arch/mips/kernel/head.S
> +++ b/arch/mips/kernel/head.S
> @@ -21,7 +21,6 @@
>  #include <asm/asmmacro.h>
>  #include <asm/irqflags.h>
>  #include <asm/regdef.h>
> -#include <asm/pgtable-bits.h>
>  #include <asm/mipsregs.h>
>  #include <asm/stackframe.h>
>  
> diff --git a/arch/mips/kernel/r4k_switch.S b/arch/mips/kernel/r4k_switch.S
> index 92cd051..2f0a3b2 100644
> --- a/arch/mips/kernel/r4k_switch.S
> +++ b/arch/mips/kernel/r4k_switch.S
> @@ -15,7 +15,6 @@
>  #include <asm/fpregdef.h>
>  #include <asm/mipsregs.h>
>  #include <asm/asm-offsets.h>
> -#include <asm/pgtable-bits.h>
>  #include <asm/regdef.h>
>  #include <asm/stackframe.h>
>  #include <asm/thread_info.h>
> -- 
> 2.8.0
> 

* Re: [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable
  2016-04-15 10:36 ` [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable Paul Burton
@ 2016-04-15 20:29   ` James Hogan
  2016-05-11  9:38     ` Ralf Baechle
  0 siblings, 1 reply; 21+ messages in thread
From: James Hogan @ 2016-04-15 20:29 UTC (permalink / raw)
  To: Paul Burton
  Cc: linux-mips, Ralf Baechle, Maciej W. Rozycki, linux-kernel,
	Markos Chandras, Alex Smith, Kirill A. Shutemov

On Fri, Apr 15, 2016 at 11:36:52AM +0100, Paul Burton wrote:
> asm/pgtable-bits.h has grown to become an unreadable mess of #ifdef
> directives defining bits conditionally upon other bits all at the
> preprocessing stage, for no good reason.
> 
> Instead of having quite so many #ifdef's, simply use enums to provide
> sequential numbering for bit shifts, without having to keep track
> manually of what the last bit defined was. Masks are defined separately,
> after the shifts, which allows for most of their definitions to be
> reused for all systems rather than duplicated.
> 
> This patch is not intended to make any behavioural change to the code -
> all bits should be used in the same way they were before this patch.
> 
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> ---
> 
>  arch/mips/include/asm/pgtable-bits.h | 189 +++++++++++++++--------------------
>  1 file changed, 81 insertions(+), 108 deletions(-)

Having had to work my way through some of this file to manually walk
page tables only this week, I really do think this is an excellent
cleanup (if nothing else, look at that diffstat :-D ).

Reviewed-by: James Hogan <james.hogan@imgtec.com>

Thanks!
James

> 
> diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
> index 2f40312..c81fc17 100644
> --- a/arch/mips/include/asm/pgtable-bits.h
> +++ b/arch/mips/include/asm/pgtable-bits.h
> @@ -35,36 +35,25 @@
>  #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
>  
>  /*
> - * The following bits are implemented by the TLB hardware
> + * Page table bit offsets used for 64 bit physical addressing on MIPS32,
> + * for example with Alchemy, Netlogic XLP/XLR or XPA.
>   */
> -#define _PAGE_NO_EXEC_SHIFT	0
> -#define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
> -#define _PAGE_NO_READ_SHIFT	(_PAGE_NO_EXEC_SHIFT + 1)
> -#define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
> -#define _PAGE_GLOBAL_SHIFT	(_PAGE_NO_READ_SHIFT + 1)
> -#define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
> -#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
> -#define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
> -#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
> -#define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
> -#define _CACHE_SHIFT		(_PAGE_DIRTY_SHIFT + 1)
> -#define _CACHE_MASK		(7 << _CACHE_SHIFT)
> -
> -/*
> - * The following bits are implemented in software
> - */
> -#define _PAGE_PRESENT_SHIFT	(24)
> -#define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
> -#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
> -#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
> -#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
> -#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
> -#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
> -#define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
> -#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
> -#define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
> -
> -#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
> +enum pgtable_bits {
> +	/* Used by TLB hardware (placed in EntryLo*) */
> +	_PAGE_NO_EXEC_SHIFT,
> +	_PAGE_NO_READ_SHIFT,
> +	_PAGE_GLOBAL_SHIFT,
> +	_PAGE_VALID_SHIFT,
> +	_PAGE_DIRTY_SHIFT,
> +	_CACHE_SHIFT,
> +
> +	/* Used only by software (masked out before writing EntryLo*) */
> +	_PAGE_PRESENT_SHIFT = 24,
> +	_PAGE_READ_SHIFT,
> +	_PAGE_WRITE_SHIFT,
> +	_PAGE_ACCESSED_SHIFT,
> +	_PAGE_MODIFIED_SHIFT,
> +};
>  
>  /*
>   * Bits for extended EntryLo0/EntryLo1 registers
> @@ -73,101 +62,85 @@
>  
>  #elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
>  
> -/*
> - * The following bits are implemented in software
> - */
> -#define _PAGE_PRESENT_SHIFT	(0)
> -#define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
> -#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
> -#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
> -#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
> -#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
> -#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
> -#define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
> -#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
> -#define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
> +/* Page table bits used for r3k systems */
> +enum pgtable_bits {
> +	/* Used only by software (writes to EntryLo ignored) */
> +	_PAGE_PRESENT_SHIFT,
> +	_PAGE_READ_SHIFT,
> +	_PAGE_WRITE_SHIFT,
> +	_PAGE_ACCESSED_SHIFT,
> +	_PAGE_MODIFIED_SHIFT,
> +
> +	/* Used by TLB hardware (placed in EntryLo) */
> +	_PAGE_GLOBAL_SHIFT = 8,
> +	_PAGE_VALID_SHIFT,
> +	_PAGE_DIRTY_SHIFT,
> +	_CACHE_UNCACHED_SHIFT,
> +};
>  
> -/*
> - * The following bits are implemented by the TLB hardware
> - */
> -#define _PAGE_GLOBAL_SHIFT	(_PAGE_MODIFIED_SHIFT + 4)
> -#define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
> -#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
> -#define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
> -#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
> -#define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
> -#define _CACHE_UNCACHED_SHIFT	(_PAGE_DIRTY_SHIFT + 1)
> -#define _CACHE_UNCACHED		(1 << _CACHE_UNCACHED_SHIFT)
> -#define _CACHE_MASK		_CACHE_UNCACHED
> +#else
>  
> -#define _PFN_SHIFT		PAGE_SHIFT
> +/* Page table bits used for r4k systems */
> +enum pgtable_bits {
> +	/* Used only by software (masked out before writing EntryLo*) */
> +	_PAGE_PRESENT_SHIFT,
> +#if !defined(CONFIG_CPU_MIPSR2) && !defined(CONFIG_CPU_MIPSR6)
> +	_PAGE_READ_SHIFT,
> +#endif
> +	_PAGE_WRITE_SHIFT,
> +	_PAGE_ACCESSED_SHIFT,
> +	_PAGE_MODIFIED_SHIFT,
> +#if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
> +	_PAGE_HUGE_SHIFT,
> +#endif
>  
> -#else
> -/*
> - * Below are the "Normal" R4K cases
> - */
> +	/* Used by TLB hardware (placed in EntryLo*) */
> +#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
> +	_PAGE_NO_EXEC_SHIFT,
> +	_PAGE_NO_READ_SHIFT,
> +	_PAGE_READ_SHIFT = _PAGE_NO_READ_SHIFT,
> +#endif
> +	_PAGE_GLOBAL_SHIFT,
> +	_PAGE_VALID_SHIFT,
> +	_PAGE_DIRTY_SHIFT,
> +	_CACHE_SHIFT,
> +};
>  
> -/*
> - * The following bits are implemented in software
> - */
> -#define _PAGE_PRESENT_SHIFT	0
> +#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
> +
> +/* Used only by software */
>  #define _PAGE_PRESENT		(1 << _PAGE_PRESENT_SHIFT)
> -/* R2 or later cores check for RI/XI support to determine _PAGE_READ */
>  #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
> -#define _PAGE_WRITE_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
> -#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
> +# define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
>  #else
> -#define _PAGE_READ_SHIFT	(_PAGE_PRESENT_SHIFT + 1)
> -#define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
> -#define _PAGE_WRITE_SHIFT	(_PAGE_READ_SHIFT + 1)
> -#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
> +# define _PAGE_READ		(1 << _PAGE_READ_SHIFT)
>  #endif
> -#define _PAGE_ACCESSED_SHIFT	(_PAGE_WRITE_SHIFT + 1)
> +#define _PAGE_WRITE		(1 << _PAGE_WRITE_SHIFT)
>  #define _PAGE_ACCESSED		(1 << _PAGE_ACCESSED_SHIFT)
> -#define _PAGE_MODIFIED_SHIFT	(_PAGE_ACCESSED_SHIFT + 1)
>  #define _PAGE_MODIFIED		(1 << _PAGE_MODIFIED_SHIFT)
> -
>  #if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
> -/* Huge TLB page */
> -#define _PAGE_HUGE_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
> -#define _PAGE_HUGE		(1 << _PAGE_HUGE_SHIFT)
> -#endif	/* CONFIG_64BIT && CONFIG_MIPS_HUGE_TLB_SUPPORT */
> -
> -#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
> -/* XI - page cannot be executed */
> -#ifdef _PAGE_HUGE_SHIFT
> -#define _PAGE_NO_EXEC_SHIFT	(_PAGE_HUGE_SHIFT + 1)
> -#else
> -#define _PAGE_NO_EXEC_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
> +# define _PAGE_HUGE		(1 << _PAGE_HUGE_SHIFT)
>  #endif
> -#define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
> -
> -/* RI - page cannot be read */
> -#define _PAGE_READ_SHIFT	(_PAGE_NO_EXEC_SHIFT + 1)
> -#define _PAGE_READ		(cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
> -#define _PAGE_NO_READ_SHIFT	_PAGE_READ_SHIFT
> -#define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_READ_SHIFT) : 0)
> -#endif	/* defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) */
> -
> -#if defined(_PAGE_NO_READ_SHIFT)
> -#define _PAGE_GLOBAL_SHIFT	(_PAGE_NO_READ_SHIFT + 1)
> -#elif defined(_PAGE_HUGE_SHIFT)
> -#define _PAGE_GLOBAL_SHIFT	(_PAGE_HUGE_SHIFT + 1)
> -#else
> -#define _PAGE_GLOBAL_SHIFT	(_PAGE_MODIFIED_SHIFT + 1)
> +
> +/* Used by TLB hardware (placed in EntryLo*) */
> +#if (defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32))
> +# define _PAGE_NO_EXEC		(1 << _PAGE_NO_EXEC_SHIFT)
> +# define _PAGE_NO_READ		(1 << _PAGE_NO_READ_SHIFT)
> +#elif defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
> +# define _PAGE_NO_EXEC		(cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
> +# define _PAGE_NO_READ		(cpu_has_rixi ? (1 << _PAGE_NO_READ_SHIFT) : 0)
>  #endif
>  #define _PAGE_GLOBAL		(1 << _PAGE_GLOBAL_SHIFT)
> -
> -#define _PAGE_VALID_SHIFT	(_PAGE_GLOBAL_SHIFT + 1)
>  #define _PAGE_VALID		(1 << _PAGE_VALID_SHIFT)
> -#define _PAGE_DIRTY_SHIFT	(_PAGE_VALID_SHIFT + 1)
>  #define _PAGE_DIRTY		(1 << _PAGE_DIRTY_SHIFT)
> -#define _CACHE_SHIFT		(_PAGE_DIRTY_SHIFT + 1)
> -#define _CACHE_MASK		(7 << _CACHE_SHIFT)
> -
> -#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
> -
> -#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
> +#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
> +# define _CACHE_UNCACHED	(1 << _CACHE_UNCACHED_SHIFT)
> +# define _CACHE_MASK		_CACHE_UNCACHED
> +# define _PFN_SHIFT		PAGE_SHIFT
> +#else
> +# define _CACHE_MASK		(7 << _CACHE_SHIFT)
> +# define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
> +#endif
>  
>  #ifndef _PAGE_NO_EXEC
>  #define _PAGE_NO_EXEC		0
> -- 
> 2.8.0
> 

* Re: [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions
  2016-04-15 19:16   ` James Hogan
@ 2016-04-15 21:19     ` Paul Burton
  0 siblings, 0 replies; 21+ messages in thread
From: Paul Burton @ 2016-04-15 21:19 UTC (permalink / raw)
  To: James Hogan, Ralf Baechle
  Cc: linux-mips, Maciej W. Rozycki, linux-kernel, Jonas Gorski,
	Markos Chandras, Alex Smith, Kirill A. Shutemov, Andrew Morton

On Fri, Apr 15, 2016 at 08:16:40PM +0100, James Hogan wrote:
> On Fri, Apr 15, 2016 at 11:36:51AM +0100, Paul Burton wrote:
> > asm/pgtable-bits.h is included in 2 assembly files and thus has to
> > in either of the assembly files that include it.
> 
> That could do with rewording :-)

Oops, it appears that I accidentally some words.

Originally there was an extra line in the middle so that it read
something like:

  asm/pgtable-bits.h is included in 2 assembly files and thus has to
  #ifdef around C code, however nothing defined by the header is used
  in either of the assembly files that include it.

Ralf: Would you be ok with adding back that line to the commit message
      if this gets merged without needing a v2?

> Otherwise
> Reviewed-by: James Hogan <james.hogan@imgtec.com>

Thanks James :)

Paul

* Re: [PATCH 09/12] MIPS: mm: Pass scratch register through to iPTE_SW
  2016-04-15 10:36 ` [PATCH 09/12] MIPS: mm: Pass scratch register through to iPTE_SW Paul Burton
@ 2016-04-15 22:28   ` James Hogan
  0 siblings, 0 replies; 21+ messages in thread
From: James Hogan @ 2016-04-15 22:28 UTC (permalink / raw)
  To: Paul Burton
  Cc: linux-mips, Ralf Baechle, Adam Buchbinder, Huacai Chen,
	Paul Gortmaker, linux-kernel, Kirill A. Shutemov

On Fri, Apr 15, 2016 at 11:36:57AM +0100, Paul Burton wrote:
> Rather than hardcode a scratch register for the XPA case in iPTE_SW,
> pass one through from the work registers allocated by the caller. This
> allows for the XPA path to function correctly regardless of the work
> registers in use.

Looks good to me, although probably worth mentioning that the current
behaviour uses the $1 register, so this fixes another case of $1
clobbering.

Reviewed-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

> 
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> ---
> 
>  arch/mips/mm/tlbex.c | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
> index 004cd9f..d7a7b3d 100644
> --- a/arch/mips/mm/tlbex.c
> +++ b/arch/mips/mm/tlbex.c
> @@ -1526,14 +1526,12 @@ iPTE_LW(u32 **p, unsigned int pte, unsigned int ptr)
>  
>  static void
>  iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
> -	unsigned int mode)
> +	unsigned int mode, unsigned int scratch)
>  {
>  #ifdef CONFIG_PHYS_ADDR_T_64BIT
>  	unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
>  
>  	if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
> -		const int scratch = 1; /* Our extra working register */
> -
>  		uasm_i_lui(p, scratch, (mode >> 16));
>  		uasm_i_or(p, pte, pte, scratch);
>  	} else
> @@ -1630,11 +1628,11 @@ build_pte_present(u32 **p, struct uasm_reloc **r,
>  /* Make PTE valid, store result in PTR. */
>  static void
>  build_make_valid(u32 **p, struct uasm_reloc **r, unsigned int pte,
> -		 unsigned int ptr)
> +		 unsigned int ptr, unsigned int scratch)
>  {
>  	unsigned int mode = _PAGE_VALID | _PAGE_ACCESSED;
>  
> -	iPTE_SW(p, r, pte, ptr, mode);
> +	iPTE_SW(p, r, pte, ptr, mode, scratch);
>  }
>  
>  /*
> @@ -1670,12 +1668,12 @@ build_pte_writable(u32 **p, struct uasm_reloc **r,
>   */
>  static void
>  build_make_write(u32 **p, struct uasm_reloc **r, unsigned int pte,
> -		 unsigned int ptr)
> +		 unsigned int ptr, unsigned int scratch)
>  {
>  	unsigned int mode = (_PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID
>  			     | _PAGE_DIRTY);
>  
> -	iPTE_SW(p, r, pte, ptr, mode);
> +	iPTE_SW(p, r, pte, ptr, mode, scratch);
>  }
>  
>  /*
> @@ -1780,7 +1778,7 @@ static void build_r3000_tlb_load_handler(void)
>  	build_r3000_tlbchange_handler_head(&p, K0, K1);
>  	build_pte_present(&p, &r, K0, K1, -1, label_nopage_tlbl);
>  	uasm_i_nop(&p); /* load delay */
> -	build_make_valid(&p, &r, K0, K1);
> +	build_make_valid(&p, &r, K0, K1, -1);
>  	build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
>  
>  	uasm_l_nopage_tlbl(&l, p);
> @@ -1811,7 +1809,7 @@ static void build_r3000_tlb_store_handler(void)
>  	build_r3000_tlbchange_handler_head(&p, K0, K1);
>  	build_pte_writable(&p, &r, K0, K1, -1, label_nopage_tlbs);
>  	uasm_i_nop(&p); /* load delay */
> -	build_make_write(&p, &r, K0, K1);
> +	build_make_write(&p, &r, K0, K1, -1);
>  	build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
>  
>  	uasm_l_nopage_tlbs(&l, p);
> @@ -1842,7 +1840,7 @@ static void build_r3000_tlb_modify_handler(void)
>  	build_r3000_tlbchange_handler_head(&p, K0, K1);
>  	build_pte_modifiable(&p, &r, K0, K1,  -1, label_nopage_tlbm);
>  	uasm_i_nop(&p); /* load delay */
> -	build_make_write(&p, &r, K0, K1);
> +	build_make_write(&p, &r, K0, K1, -1);
>  	build_r3000_pte_reload_tlbwi(&p, K0, K1);
>  
>  	uasm_l_nopage_tlbm(&l, p);
> @@ -2010,7 +2008,7 @@ static void build_r4000_tlb_load_handler(void)
>  		}
>  		uasm_l_tlbl_goaround1(&l, p);
>  	}
> -	build_make_valid(&p, &r, wr.r1, wr.r2);
> +	build_make_valid(&p, &r, wr.r1, wr.r2, wr.r3);
>  	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
>  
>  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
> @@ -2124,7 +2122,7 @@ static void build_r4000_tlb_store_handler(void)
>  	build_pte_writable(&p, &r, wr.r1, wr.r2, wr.r3, label_nopage_tlbs);
>  	if (m4kc_tlbp_war())
>  		build_tlb_probe_entry(&p);
> -	build_make_write(&p, &r, wr.r1, wr.r2);
> +	build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
>  	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
>  
>  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
> @@ -2180,7 +2178,7 @@ static void build_r4000_tlb_modify_handler(void)
>  	if (m4kc_tlbp_war())
>  		build_tlb_probe_entry(&p);
>  	/* Present and writable bits set, set accessed and dirty bits. */
> -	build_make_write(&p, &r, wr.r1, wr.r2);
> +	build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
>  	build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
>  
>  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
> -- 
> 2.8.0
> 

* Re: [PATCH 11/12] MIPS: mm: Simplify build_update_entries
  2016-04-15 10:36 ` [PATCH 11/12] MIPS: mm: Simplify build_update_entries Paul Burton
@ 2016-04-15 23:09   ` James Hogan
  0 siblings, 0 replies; 21+ messages in thread
From: James Hogan @ 2016-04-15 23:09 UTC (permalink / raw)
  To: Paul Burton
  Cc: linux-mips, Ralf Baechle, Huacai Chen, Paul Gortmaker,
	linux-kernel, Kirill A. Shutemov

On Fri, Apr 15, 2016 at 11:36:59AM +0100, Paul Burton wrote:
> We can simplify build_update_entries by unifying the code for the 36 bit
> physical addressing with MIPS32 case with the general case, by using
> pte_off_ variables in all cases & handling the trivial
> _PAGE_GLOBAL_SHIFT == 0 case in build_convert_pte_to_entrylo. This
> leaves XPA as the only special case.
> 
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> ---
> 
>  arch/mips/mm/tlbex.c | 38 +++++++++++++++++---------------------
>  1 file changed, 17 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
> index 0bd3755..45234ad 100644
> --- a/arch/mips/mm/tlbex.c
> +++ b/arch/mips/mm/tlbex.c
> @@ -626,6 +626,11 @@ static void build_tlb_write_entry(u32 **p, struct uasm_label **l,
>  static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
>  							unsigned int reg)
>  {
> +	if (_PAGE_GLOBAL_SHIFT == 0) {
> +		/* pte_t is already in EntryLo format */
> +		return;
> +	}
> +
>  	if (cpu_has_rixi && _PAGE_NO_EXEC) {
>  		if (fill_includes_sw_bits) {
>  			UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
> @@ -1003,10 +1008,17 @@ static void build_get_ptep(u32 **p, unsigned int tmp, unsigned int ptr)
>  
>  static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
>  {
> -	if (config_enabled(CONFIG_XPA)) {
> -		int pte_off_even = sizeof(pte_t) / 2;
> -		int pte_off_odd = pte_off_even + sizeof(pte_t);
> +	int pte_off_even = 0;
> +	int pte_off_odd = sizeof(pte_t);
> +
> +	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) &&
> +	    config_enabled(CONFIG_32BIT)) {
> +		/* The low 32 bits of EntryLo is stored in pte_high */
> +		pte_off_even += offsetof(pte_t, pte_high);
> +		pte_off_odd += offsetof(pte_t, pte_high);

pte_high doesn't exist unless CONFIG_CPU_MIPS32=y (e.g. you
can set CONFIG_CPU_MIPS64=y, CONFIG_CPU_MIPS32=n and CONFIG_32BIT=y).

With that fixed it looks good to me.
Reviewed-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

> +	}
>  
> +	if (config_enabled(CONFIG_XPA)) {
>  		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
>  		UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
>  		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
> @@ -1025,24 +1037,8 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
>  		return;
>  	}
>  
> -	/*
> -	 * 64bit address support (36bit on a 32bit CPU) in a 32bit
> -	 * Kernel is a special case. Only a few CPUs use it.
> -	 */
> -	if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) && !cpu_has_64bits) {
> -		int pte_off_even = sizeof(pte_t) / 2;
> -		int pte_off_odd = pte_off_even + sizeof(pte_t);
> -
> -		uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
> -		UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
> -
> -		uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
> -		UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
> -		return;
> -	}
> -
> -	UASM_i_LW(p, tmp, 0, ptep); /* get even pte */
> -	UASM_i_LW(p, ptep, sizeof(pte_t), ptep); /* get odd pte */
> +	UASM_i_LW(p, tmp, pte_off_even, ptep); /* get even pte */
> +	UASM_i_LW(p, ptep, pte_off_odd, ptep); /* get odd pte */
>  	if (r45k_bvahwbug())
>  		build_tlb_probe_entry(p);
>  	build_convert_pte_to_entrylo(p, tmp);
> -- 
> 2.8.0
> 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 06/12] MIPS: mm: Unify pte_page definition
  2016-04-15 10:36 ` [PATCH 06/12] MIPS: mm: Unify pte_page definition Paul Burton
@ 2016-04-15 23:16   ` James Hogan
  0 siblings, 0 replies; 21+ messages in thread
From: James Hogan @ 2016-04-15 23:16 UTC (permalink / raw)
  To: Paul Burton; +Cc: linux-mips, Ralf Baechle, Paul Gortmaker, linux-kernel

On Fri, Apr 15, 2016 at 11:36:54AM +0100, Paul Burton wrote:
> The same definition for pte_page is duplicated for the MIPS32
> PHYS_ADDR_T_64BIT case & the generic case. Unify them by moving a single
> definition outside of preprocessor conditionals.
> 
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>

Reviewed-by: James Hogan <james.hogan@imgtec.com>

Cheers
James

> ---
> 
>  arch/mips/include/asm/pgtable-32.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
> index 832e216..181bd8e 100644
> --- a/arch/mips/include/asm/pgtable-32.h
> +++ b/arch/mips/include/asm/pgtable-32.h
> @@ -104,7 +104,7 @@ static inline void pmd_clear(pmd_t *pmdp)
>  }
>  
>  #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
> -#define pte_page(x)		pfn_to_page(pte_pfn(x))
> +
>  #define pte_pfn(x)		(((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
>  static inline pte_t
>  pfn_pte(unsigned long pfn, pgprot_t prot)
> @@ -120,8 +120,6 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
>  
>  #else
>  
> -#define pte_page(x)		pfn_to_page(pte_pfn(x))
> -
>  #ifdef CONFIG_CPU_VR41XX
>  #define pte_pfn(x)		((unsigned long)((x).pte >> (PAGE_SHIFT + 2)))
>  #define pfn_pte(pfn, prot)	__pte(((pfn) << (PAGE_SHIFT + 2)) | pgprot_val(prot))
> @@ -131,6 +129,8 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
>  #endif
>  #endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
>  
> +#define pte_page(x)		pfn_to_page(pte_pfn(x))
> +
>  #define __pgd_offset(address)	pgd_index(address)
>  #define __pud_offset(address)	(((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
>  #define __pmd_offset(address)	(((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
> -- 
> 2.8.0
> 


* Re: [PATCH 00/12] TLB/XPA fixes & cleanups
       [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
                   ` (9 preceding siblings ...)
  2016-04-15 10:37 ` [PATCH 12/12] MIPS: mm: Don't do MTHC0 if XPA not present Paul Burton
@ 2016-05-10 12:44 ` Ralf Baechle
  2016-05-10 17:47   ` Florian Fainelli
  10 siblings, 1 reply; 21+ messages in thread
From: Ralf Baechle @ 2016-05-10 12:44 UTC (permalink / raw)
  To: Paul Burton
  Cc: linux-mips, James Hogan, Adam Buchbinder, Maciej W. Rozycki,
	Joshua Kinard, Huacai Chen, Maciej W. Rozycki, Paul Gortmaker,
	Aneesh Kumar K.V, linux-kernel, Peter Zijlstra (Intel),
	David Hildenbrand, Andrew Morton, David Daney, Jonas Gorski,
	Markos Chandras, Ingo Molnar, Alex Smith, Kirill A. Shutemov,
	Florian Fainelli

On Fri, Apr 15, 2016 at 11:36:48AM +0100, Paul Burton wrote:

> This series fixes up a number of issues introduced by commit
> c5b367835cfc ("MIPS: Add support for XPA."), including breakage of the
> MIPS32 with 36 bit physical addressing case & clobbering of $1 upon TLB
> refill exceptions. Along the way a number of cleanups are made, which
> leaves pgtable-bits.h in particular much more readable than before.
> 
> The series applies atop v4.6-rc3.
> 
> James Hogan (4):
>   MIPS: Separate XPA CPU feature into LPA and MVH
>   MIPS: Fix HTW config on XPA kernel without LPA enabled
>   MIPS: mm: Don't clobber $1 on XPA TLB refill
>   MIPS: mm: Don't do MTHC0 if XPA not present
> 
> Paul Burton (8):
>   MIPS: Remove redundant asm/pgtable-bits.h inclusions
>   MIPS: Use enums to make asm/pgtable-bits.h readable
>   MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ
>   MIPS: mm: Unify pte_page definition
>   MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic)
>   MIPS: mm: Pass scratch register through to iPTE_SW
>   MIPS: mm: Be more explicit about PTE mode bit handling
>   MIPS: mm: Simplify build_update_entries

Applied - but "MIPS: Separate XPA CPU feature into LPA and MVH" causes
a massive conflict with Florian's RIXI patches

  [3/6] MIPS: Allow RIXI to be used on non-R2 or R6 core
  [4/6] MIPS: Move RIXI exception enabling after vendor-specific cpu_probe
  [5/6] MIPS: BMIPS: BMIPS4380 and BMIPS5000 support RIXI

I figured unapplying those three, applying Paul's series, then re-applying
Florian's patches on top of the whole series would be the easier path,
leaving me with smaller rejects to manage.

  Ralf

* Re: [PATCH 00/12] TLB/XPA fixes & cleanups
  2016-05-10 12:44 ` [PATCH 00/12] TLB/XPA fixes & cleanups Ralf Baechle
@ 2016-05-10 17:47   ` Florian Fainelli
  2016-05-11 10:03     ` Ralf Baechle
  0 siblings, 1 reply; 21+ messages in thread
From: Florian Fainelli @ 2016-05-10 17:47 UTC (permalink / raw)
  To: Ralf Baechle, Paul Burton
  Cc: linux-mips, James Hogan, Adam Buchbinder, Maciej W. Rozycki,
	Joshua Kinard, Huacai Chen, Maciej W. Rozycki, Paul Gortmaker,
	Aneesh Kumar K.V, linux-kernel, Peter Zijlstra (Intel),
	David Hildenbrand, Andrew Morton, David Daney, Jonas Gorski,
	Markos Chandras, Ingo Molnar, Alex Smith, Kirill A. Shutemov

On 05/10/2016 05:44 AM, Ralf Baechle wrote:
> On Fri, Apr 15, 2016 at 11:36:48AM +0100, Paul Burton wrote:
> 
>> This series fixes up a number of issues introduced by commit
>> c5b367835cfc ("MIPS: Add support for XPA."), including breakage of the
>> MIPS32 with 36 bit physical addressing case & clobbering of $1 upon TLB
>> refill exceptions. Along the way a number of cleanups are made, which
>> leaves pgtable-bits.h in particular much more readable than before.
>>
>> The series applies atop v4.6-rc3.
>>
>> James Hogan (4):
>>   MIPS: Separate XPA CPU feature into LPA and MVH
>>   MIPS: Fix HTW config on XPA kernel without LPA enabled
>>   MIPS: mm: Don't clobber $1 on XPA TLB refill
>>   MIPS: mm: Don't do MTHC0 if XPA not present
>>
>> Paul Burton (8):
>>   MIPS: Remove redundant asm/pgtable-bits.h inclusions
>>   MIPS: Use enums to make asm/pgtable-bits.h readable
>>   MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ
>>   MIPS: mm: Unify pte_page definition
>>   MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic)
>>   MIPS: mm: Pass scratch register through to iPTE_SW
>>   MIPS: mm: Be more explicit about PTE mode bit handling
>>   MIPS: mm: Simplify build_update_entries
> 
> Applied - but "MIPS: Separate XPA CPU feature into LPA and MVH" causes
> a massive conflict with Florian's RIXI patches
> 
>   [3/6] MIPS: Allow RIXI to be used on non-R2 or R6 core
>   [4/6] MIPS: Move RIXI exception enabling after vendor-specific cpu_probe
>   [5/6] MIPS: BMIPS: BMIPS4380 and BMIPS5000 support RIXI
> 
> I figured unapplying those three, applying Paul's series, then re-applying
> Florian's patches on top of the whole series would be the easier path,
> leaving me with smaller rejects to manage.

Did you already push that to mips-for-linux-next? I can give it a quick
spin once you do so.
-- 
Florian

* Re: [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable
  2016-04-15 20:29   ` James Hogan
@ 2016-05-11  9:38     ` Ralf Baechle
  0 siblings, 0 replies; 21+ messages in thread
From: Ralf Baechle @ 2016-05-11  9:38 UTC (permalink / raw)
  To: James Hogan
  Cc: Paul Burton, linux-mips, Maciej W. Rozycki, linux-kernel,
	Markos Chandras, Alex Smith, Kirill A. Shutemov

On Fri, Apr 15, 2016 at 09:29:06PM +0100, James Hogan wrote:

> Having had to work my way through some of this file to manually walk
> page tables only this week, I really do think this is an excellent
> cleanup (if nothing else, look at that diffstat :-D ).

I agree.  Lots of history in this file.  It uses #define because, well,
i386 was using #define back in the '90s when Elvis was still alive.
Much of the page table code has been rewritten for simplicity, performance
and ease of maintenance, but somehow this file has escaped so far.

And I'm wondering if eventually the rewrite should be taken even further,
making things fully dynamic.

  Ralf

* Re: [PATCH 00/12] TLB/XPA fixes & cleanups
  2016-05-10 17:47   ` Florian Fainelli
@ 2016-05-11 10:03     ` Ralf Baechle
  2016-05-11 19:17       ` Florian Fainelli
  0 siblings, 1 reply; 21+ messages in thread
From: Ralf Baechle @ 2016-05-11 10:03 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: Paul Burton, linux-mips, James Hogan, Adam Buchbinder,
	Maciej W. Rozycki, Joshua Kinard, Huacai Chen, Maciej W. Rozycki,
	Paul Gortmaker, Aneesh Kumar K.V, linux-kernel,
	Peter Zijlstra (Intel),
	David Hildenbrand, Andrew Morton, David Daney, Jonas Gorski,
	Markos Chandras, Ingo Molnar, Alex Smith, Kirill A. Shutemov

On Tue, May 10, 2016 at 10:47:48AM -0700, Florian Fainelli wrote:

> > Applied - but "MIPS: Separate XPA CPU feature into LPA and MVH" causes
> > a massive conflict with Florian's RIXI patches
> > 
> >   [3/6] MIPS: Allow RIXI to be used on non-R2 or R6 core
> >   [4/6] MIPS: Move RIXI exception enabling after vendor-specific cpu_probe
> >   [5/6] MIPS: BMIPS: BMIPS4380 and BMIPS5000 support RIXI
> > 
> > I figured unapplying those three, applying Paul's series then re-applying
> > Florian's patch on top of the whole series will be the easier path as in
> > leaving me with the smaller rejects to manage.
> 
> Did you already push that to mips-for-linux-next? I can give it a quick
> spin once you do so.

I just pushed a tree with everything applied.  HEAD of tree is
22702a86997c5aed2e479bfe0b24d10d66b09604 dated May 11 11:58:06; a version
from earlier today was broken.

  Ralf

* Re: [PATCH 00/12] TLB/XPA fixes & cleanups
  2016-05-11 10:03     ` Ralf Baechle
@ 2016-05-11 19:17       ` Florian Fainelli
  0 siblings, 0 replies; 21+ messages in thread
From: Florian Fainelli @ 2016-05-11 19:17 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, linux-mips, James Hogan, Adam Buchbinder,
	Maciej W. Rozycki, Joshua Kinard, Huacai Chen, Maciej W. Rozycki,
	Paul Gortmaker, Aneesh Kumar K.V, linux-kernel,
	Peter Zijlstra (Intel),
	David Hildenbrand, Andrew Morton, David Daney, Jonas Gorski,
	Markos Chandras, Ingo Molnar, Alex Smith, Kirill A. Shutemov

On 05/11/2016 03:03 AM, Ralf Baechle wrote:
> On Tue, May 10, 2016 at 10:47:48AM -0700, Florian Fainelli wrote:
> 
>>> Applied - but "MIPS: Separate XPA CPU feature into LPA and MVH" causes
>>> a massive conflict with Florian's RIXI patches
>>>
>>>   [3/6] MIPS: Allow RIXI to be used on non-R2 or R6 core
>>>   [4/6] MIPS: Move RIXI exception enabling after vendor-specific cpu_probe
>>>   [5/6] MIPS: BMIPS: BMIPS4380 and BMIPS5000 support RIXI
>>>
>>> I figured unapplying those three, applying Paul's series, then re-applying
>>> Florian's patches on top of the whole series would be the easier path,
>>> leaving me with smaller rejects to manage.
>>
>> Did you already push that to mips-for-linux-next? I can give it a quick
>> spin once you do so.
> 
> I just pushed a tree with everything applied.  HEAD of tree is
> 22702a86997c5aed2e479bfe0b24d10d66b09604 dated May 11 11:58:06; a version
> from earlier today was broken.

Boot tested on BMIPS5000 (BCM7425):

Tested-by: Florian Fainelli <f.fainelli@gmail.com>

Thanks!
-- 
Florian

Thread overview: 21+ messages
     [not found] <1460716620-13382-1-git-send-email-paul.burton@imgtec.com>
2016-04-15 10:36 ` [PATCH 01/12] MIPS: Separate XPA CPU feature into LPA and MVH Paul Burton
2016-04-15 10:36 ` [PATCH 02/12] MIPS: Fix HTW config on XPA kernel without LPA enabled Paul Burton
2016-04-15 10:36 ` [PATCH 03/12] MIPS: Remove redundant asm/pgtable-bits.h inclusions Paul Burton
2016-04-15 19:16   ` James Hogan
2016-04-15 21:19     ` Paul Burton
2016-04-15 10:36 ` [PATCH 04/12] MIPS: Use enums to make asm/pgtable-bits.h readable Paul Burton
2016-04-15 20:29   ` James Hogan
2016-05-11  9:38     ` Ralf Baechle
2016-04-15 10:36 ` [PATCH 06/12] MIPS: mm: Unify pte_page definition Paul Burton
2016-04-15 23:16   ` James Hogan
2016-04-15 10:36 ` [PATCH 08/12] MIPS: mm: Don't clobber $1 on XPA TLB refill Paul Burton
2016-04-15 10:36 ` [PATCH 09/12] MIPS: mm: Pass scratch register through to iPTE_SW Paul Burton
2016-04-15 22:28   ` James Hogan
2016-04-15 10:36 ` [PATCH 10/12] MIPS: mm: Be more explicit about PTE mode bit handling Paul Burton
2016-04-15 10:36 ` [PATCH 11/12] MIPS: mm: Simplify build_update_entries Paul Burton
2016-04-15 23:09   ` James Hogan
2016-04-15 10:37 ` [PATCH 12/12] MIPS: mm: Don't do MTHC0 if XPA not present Paul Burton
2016-05-10 12:44 ` [PATCH 00/12] TLB/XPA fixes & cleanups Ralf Baechle
2016-05-10 17:47   ` Florian Fainelli
2016-05-11 10:03     ` Ralf Baechle
2016-05-11 19:17       ` Florian Fainelli
