* [PATCH 00/10] Optimise TLB miss handlers on 603/e300
@ 2019-01-25 12:34 Christophe Leroy
  2019-01-25 12:34 ` [PATCH 01/10] powerpc: simplify BDI switch Christophe Leroy
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

The purpose of this series is to optimise the handling of
TLB misses on the 603/e300.

Today, the TLB miss handlers are implemented by more or less
copying the actions performed by the hash page handlers used
on processors that have a hash page table.

This series brings some simplification.

Christophe Leroy (10):
  powerpc: simplify BDI switch
  powerpc/603: Store PGDIR physical address in a SPRG
  powerpc/603: use physical address directly in TLB miss handlers.
  powerpc/hash32: use physical address directly in hash handlers.
  powerpc/603: Don't handle kernel page TLB misses when not needed
  powerpc/603: Don't handle _PAGE_RW and _PAGE_DIRTY on ITLB misses
  powerpc/603: let's handle PAGE_DIRTY directly
  powerpc/603: Don't worry about _PAGE_USER in TLB miss handlers
  powerpc/603: don't handle PAGE_ACCESSED in TLB miss handlers.
  powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling

 arch/powerpc/include/asm/book3s/32/hash.h |  8 +--
 arch/powerpc/include/asm/mmu.h            |  2 +
 arch/powerpc/include/asm/reg.h            |  1 +
 arch/powerpc/kernel/cpu_setup_6xx.S       |  4 ++
 arch/powerpc/kernel/head_32.S             | 97 ++++++++++++++-----------------
 arch/powerpc/kernel/head_40x.S            |  5 +-
 arch/powerpc/kernel/head_8xx.S            |  1 +
 arch/powerpc/mm/8xx_mmu.c                 |  7 +--
 arch/powerpc/mm/hash_low_32.S             | 68 +++++++++-------------
 arch/powerpc/mm/ppc_mmu_32.c              |  6 +-
 10 files changed, 93 insertions(+), 106 deletions(-)

-- 
2.13.3



* [PATCH 01/10] powerpc: simplify BDI switch
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 02/10] powerpc/603: Store PGDIR physical address in a SPRG Christophe Leroy
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

There is no reason to re-read the pointer at location 0xf0 each
time, as it is fixed and known.
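
Ignoring the CONFIG_BDI_SWITCH guards, the C-level change (see the
8xx_mmu.c hunk below) boils down to dropping the double dereference
through the hardcoded location in favour of a direct store:

	/* Before: read the pointer stored at KERNELBASE + 0xf0,
	 * then write through it.
	 */
	pgd_t **ptr = *(pgd_t ***)(KERNELBASE + 0xf0);
	*(ptr + 1) = pgd;

	/* After: abatron_pteptrs is that fixed, known location. */
	abatron_pteptrs[1] = pgd;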

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/mmu.h | 2 ++
 arch/powerpc/kernel/head_32.S  | 5 ++---
 arch/powerpc/kernel/head_40x.S | 5 ++---
 arch/powerpc/kernel/head_8xx.S | 1 +
 arch/powerpc/mm/8xx_mmu.c      | 7 ++-----
 5 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 25607604a7a5..6d22a8e78fe2 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -356,6 +356,8 @@ extern void early_init_mmu_secondary(void);
 extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 				       phys_addr_t first_memblock_size);
 static inline void mmu_early_init_devtree(void) { }
+
+extern void *abatron_pteptrs[2];
 #endif /* __ASSEMBLY__ */
 #endif
 
diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 05b08db3901d..c2f564690778 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -1027,9 +1027,8 @@ _ENTRY(switch_mmu_context)
 	 * The PGDIR is passed as second argument.
 	 */
 	lwz	r4,MM_PGD(r4)
-	lis	r5, KERNELBASE@h
-	lwz	r5, 0xf0(r5)
-	stw	r4, 0x4(r5)
+	lis	r5, abatron_pteptrs@ha
+	stw	r4, abatron_pteptrs@l + 0x4(r5)
 #endif
 	li	r4,0
 	isync
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index b19d78410511..11dd09d0ce1a 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -953,9 +953,8 @@ _GLOBAL(set_context)
 	/* Context switch the PTE pointer for the Abatron BDI2000.
 	 * The PGDIR is the second parameter.
 	 */
-	lis	r5, KERNELBASE@h
-	lwz	r5, 0xf0(r5)
-	stw	r4, 0x4(r5)
+	lis	r5, abatron_pteptrs@ha
+	stw	r4, abatron_pteptrs@l + 0x4(r5)
 #endif
 	sync
 	mtspr	SPRN_PID,r3
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index c3f776fda984..4a2e3ffdb5bb 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -1012,5 +1012,6 @@ swapper_pg_dir:
 /* Room for two PTE table pointers, usually the kernel and current user
  * pointer to their respective root page table (pgdir).
  */
+	.globl	abatron_pteptrs
 abatron_pteptrs:
 	.space	8
diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
index b5f6d794281d..e2c32bdb6023 100644
--- a/arch/powerpc/mm/8xx_mmu.c
+++ b/arch/powerpc/mm/8xx_mmu.c
@@ -159,14 +159,11 @@ void set_context(unsigned long id, pgd_t *pgd)
 {
 	s16 offset = (s16)(__pa(swapper_pg_dir));
 
-#ifdef CONFIG_BDI_SWITCH
-	pgd_t	**ptr = *(pgd_t ***)(KERNELBASE + 0xf0);
-
 	/* Context switch the PTE pointer for the Abatron BDI2000.
 	 * The PGDIR is passed as second argument.
 	 */
-	*(ptr + 1) = pgd;
-#endif
+	if (IS_ENABLED(CONFIG_BDI_SWITCH))
+		abatron_pteptrs[1] = pgd;
 
 	/* Register M_TWB will contain base address of level 1 table minus the
 	 * lower part of the kernel PGDIR base address, so that all accesses to
-- 
2.13.3



* [PATCH 02/10] powerpc/603: Store PGDIR physical address in a SPRG
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
  2019-01-25 12:34 ` [PATCH 01/10] powerpc: simplify BDI switch Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-02-20 17:39   ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 03/10] powerpc/603: use physical address directly in TLB miss handlers Christophe Leroy
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

Use SPRN_SPRG5 to store the current thread's PGDIR and
avoid reading thread_struct->pgdir at every TLB miss.
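
A minimal C-level sketch of the idea, using the kernel's mtspr()/__pa()
helpers (hypothetical; the actual change is assembly, in cpu_setup_6xx.S
and switch_mmu_context below):

	/* On context switch, stash the physical address of the new
	 * PGDIR in the SPRG. 'next' stands for the mm being switched
	 * to (sketch only).
	 */
	mtspr(SPRN_SPRG_603_PGDIR, __pa(next->pgd));

Each TLB miss handler then recovers the physical PGDIR with a single
mfspr, where it previously needed an mfspr of SPRN_SPRG_THREAD plus a
dependent lwz of thread_struct->pgdir.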

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/reg.h      |  1 +
 arch/powerpc/kernel/cpu_setup_6xx.S |  4 ++++
 arch/powerpc/kernel/head_32.S       | 28 ++++++++++++++++------------
 3 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 1c98ef1f2d5b..ba0ab1a1431b 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -1169,6 +1169,7 @@
 #define SPRN_SPRG_SCRATCH1	SPRN_SPRG1
 #define SPRN_SPRG_RTAS		SPRN_SPRG2
 #define SPRN_SPRG_603_LRU	SPRN_SPRG4
+#define SPRN_SPRG_603_PGDIR	SPRN_SPRG5
 #endif
 
 #ifdef CONFIG_40x
diff --git a/arch/powerpc/kernel/cpu_setup_6xx.S b/arch/powerpc/kernel/cpu_setup_6xx.S
index 8c069e96c478..4c91d1f640fe 100644
--- a/arch/powerpc/kernel/cpu_setup_6xx.S
+++ b/arch/powerpc/kernel/cpu_setup_6xx.S
@@ -24,6 +24,10 @@ BEGIN_MMU_FTR_SECTION
 	li	r10,0
 	mtspr	SPRN_SPRG_603_LRU,r10		/* init SW LRU tracking */
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_NEED_DTLB_SW_LRU)
+	lis	r10, (swapper_pg_dir - PAGE_OFFSET)@h
+	ori	r10, r10, (swapper_pg_dir - PAGE_OFFSET)@l
+	mtspr	SPRN_SPRG_603_PGDIR, r10
+
 BEGIN_FTR_SECTION
 	bl	__init_fpu_registers
 END_FTR_SECTION_IFCLR(CPU_FTR_FPU_UNAVAILABLE)
diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index c2f564690778..dbd15e03952a 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -502,16 +502,15 @@ InstructionTLBMiss:
 	mfspr	r3,SPRN_IMISS
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
-	mfspr	r2,SPRN_SPRG_THREAD
+	mfspr	r2, SPRN_SPRG_603_PGDIR
 	li	r1,_PAGE_USER|_PAGE_PRESENT|_PAGE_EXEC /* low addresses tested as user */
-	lwz	r2,PGDIR(r2)
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
 	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
-112:	tophys(r2,r2)
-	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
+	tophys(r2,r2)
+112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
 	beq-	InstructionAddressInvalid	/* return if no mapping */
@@ -576,16 +575,15 @@ DataLoadTLBMiss:
 	mfspr	r3,SPRN_DMISS
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
-	mfspr	r2,SPRN_SPRG_THREAD
+	mfspr	r2, SPRN_SPRG_603_PGDIR
 	li	r1,_PAGE_USER|_PAGE_PRESENT /* low addresses tested as user */
-	lwz	r2,PGDIR(r2)
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
 	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
-112:	tophys(r2,r2)
-	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
+	tophys(r2,r2)
+112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
 	beq-	DataAddressInvalid	/* return if no mapping */
@@ -660,16 +658,15 @@ DataStoreTLBMiss:
 	mfspr	r3,SPRN_DMISS
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
-	mfspr	r2,SPRN_SPRG_THREAD
+	mfspr	r2, SPRN_SPRG_603_PGDIR
 	li	r1,_PAGE_RW|_PAGE_USER|_PAGE_PRESENT /* access flags */
-	lwz	r2,PGDIR(r2)
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
 	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
-112:	tophys(r2,r2)
-	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
+	tophys(r2,r2)
+112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
 	beq-	DataAddressInvalid	/* return if no mapping */
@@ -1030,6 +1027,13 @@ _ENTRY(switch_mmu_context)
 	lis	r5, abatron_pteptrs@ha
 	stw	r4, abatron_pteptrs@l + 0x4(r5)
 #endif
+BEGIN_MMU_FTR_SECTION
+#ifndef CONFIG_BDI_SWITCH
+	lwz	r4, MM_PGD(r4)
+#endif
+	tophys(r4, r4)
+	mtspr	SPRN_SPRG_603_PGDIR, r4
+END_MMU_FTR_SECTION_IFCLR(MMU_FTR_HPTE_TABLE)
 	li	r4,0
 	isync
 3:
-- 
2.13.3



* [PATCH 03/10] powerpc/603: use physical address directly in TLB miss handlers.
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
  2019-01-25 12:34 ` [PATCH 01/10] powerpc: simplify BDI switch Christophe Leroy
  2019-01-25 12:34 ` [PATCH 02/10] powerpc/603: Store PGDIR physical address in a SPRG Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 04/10] powerpc/hash32: use physical address directly in hash handlers Christophe Leroy
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

Since commit c62ce9ef97ba ("powerpc: remove remaining bits from
CONFIG_APUS"), tophys() has become a pure constant operation.
PAGE_OFFSET is known at compile time, so the physical address
can be built into the instructions directly.
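
As a minimal standalone illustration of the fold (example values only;
PAGE_OFFSET and the address of swapper_pg_dir are configuration- and
build-dependent):

#include <stdio.h>

#define PAGE_OFFSET	0xc0000000UL	/* typical ppc32 kernel base */
/* tophys(): virtual -> physical is now a plain constant subtraction */
#define TOPHYS(va)	((unsigned long)(va) - PAGE_OFFSET)

int main(void)
{
	unsigned long swapper_pg_dir = 0xc0150000UL;	/* example VA */

	/* The assembler folds the same arithmetic at build time, as
	 * lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha does below. */
	printf("%#lx -> %#lx\n", swapper_pg_dir, TOPHYS(swapper_pg_dir));
	return 0;
}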

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index dbd15e03952a..53f65124edd5 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -507,9 +507,8 @@ InstructionTLBMiss:
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
-	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
-	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
-	tophys(r2,r2)
+	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
@@ -580,9 +579,8 @@ DataLoadTLBMiss:
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
-	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
-	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
-	tophys(r2,r2)
+	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
@@ -663,9 +661,8 @@ DataStoreTLBMiss:
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
-	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
-	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
-	tophys(r2,r2)
+	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
-- 
2.13.3



* [PATCH 04/10] powerpc/hash32: use physical address directly in hash handlers.
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (2 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 03/10] powerpc/603: use physical address directly in TLB miss handlers Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 05/10] powerpc/603: Don't handle kernel page TLB misses when not needed Christophe Leroy
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

Since commit c62ce9ef97ba ("powerpc: remove remaining bits from
CONFIG_APUS"), tophys() has become a pure constant operation.
PAGE_OFFSET is known at compile time, so the physical address
can be built into the instructions directly.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/hash_low_32.S | 62 +++++++++++++++++++------------------------
 arch/powerpc/mm/ppc_mmu_32.c  |  6 +++--
 2 files changed, 31 insertions(+), 37 deletions(-)

diff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S
index 1e2df3e9f9ea..2afd0dce6d6a 100644
--- a/arch/powerpc/mm/hash_low_32.S
+++ b/arch/powerpc/mm/hash_low_32.S
@@ -47,14 +47,13 @@ mmu_hash_lock:
  * Returns to the caller if the access is illegal or there is no
  * mapping for the address.  Otherwise it places an appropriate PTE
  * in the hash table and returns from the exception.
- * Uses r0, r3 - r8, r10, ctr, lr.
+ * Uses r0, r3 - r6, r8, r10, ctr, lr.
  */
 	.text
 _GLOBAL(hash_page)
-	tophys(r7,0)			/* gets -KERNELBASE into r7 */
 #ifdef CONFIG_SMP
-	addis	r8,r7,mmu_hash_lock@h
-	ori	r8,r8,mmu_hash_lock@l
+	lis	r8, (mmu_hash_lock - PAGE_OFFSET)@h
+	ori	r8, r8, (mmu_hash_lock - PAGE_OFFSET)@l
 	lis	r0,0x0fff
 	b	10f
 11:	lwz	r6,0(r8)
@@ -77,7 +76,7 @@ _GLOBAL(hash_page)
 	lis	r5,swapper_pg_dir@ha	/* if kernel address, use */
 	addi	r5,r5,swapper_pg_dir@l	/* kernel page table */
 	rlwimi	r3,r9,32-12,29,29	/* MSR_PR -> _PAGE_USER */
-112:	add	r5,r5,r7		/* convert to phys addr */
+112:	tophys(r5, r5)
 #ifndef CONFIG_PTE_64BIT
 	rlwimi	r5,r4,12,20,29		/* insert top 10 bits of address */
 	lwz	r8,0(r5)		/* get pmd entry */
@@ -144,25 +143,24 @@ retry:
 
 #ifdef CONFIG_SMP
 	eieio
-	addis	r8,r7,mmu_hash_lock@ha
+	lis	r8, (mmu_hash_lock - PAGE_OFFSET)@ha
 	li	r0,0
-	stw	r0,mmu_hash_lock@l(r8)
+	stw	r0, (mmu_hash_lock - PAGE_OFFSET)@l(r8)
 #endif
 
 	/* Return from the exception */
 	lwz	r5,_CTR(r11)
 	mtctr	r5
 	lwz	r0,GPR0(r11)
-	lwz	r7,GPR7(r11)
 	lwz	r8,GPR8(r11)
 	b	fast_exception_return
 
 #ifdef CONFIG_SMP
 hash_page_out:
 	eieio
-	addis	r8,r7,mmu_hash_lock@ha
+	lis	r8, (mmu_hash_lock - PAGE_OFFSET)@ha
 	li	r0,0
-	stw	r0,mmu_hash_lock@l(r8)
+	stw	r0, (mmu_hash_lock - PAGE_OFFSET)@l(r8)
 	blr
 #endif /* CONFIG_SMP */
 
@@ -208,11 +206,9 @@ _GLOBAL(add_hash_page)
 	SYNC_601
 	isync
 
-	tophys(r7,0)
-
 #ifdef CONFIG_SMP
-	addis	r6,r7,mmu_hash_lock@ha
-	addi	r6,r6,mmu_hash_lock@l
+	lis	r6, (mmu_hash_lock - PAGE_OFFSET)@ha
+	addi	r6, r6, (mmu_hash_lock - PAGE_OFFSET)@l
 10:	lwarx	r0,0,r6			/* take the mmu_hash_lock */
 	cmpi	0,r0,0
 	bne-	11f
@@ -257,8 +253,8 @@ _GLOBAL(add_hash_page)
 
 9:
 #ifdef CONFIG_SMP
-	addis	r6,r7,mmu_hash_lock@ha
-	addi	r6,r6,mmu_hash_lock@l
+	lis	r6, (mmu_hash_lock - PAGE_OFFSET)@ha
+	addi	r6, r6, (mmu_hash_lock - PAGE_OFFSET)@l
 	eieio
 	li	r0,0
 	stw	r0,0(r6)		/* clear mmu_hash_lock */
@@ -278,10 +274,8 @@ _GLOBAL(add_hash_page)
  * It is designed to be called with the MMU either on or off.
  * r3 contains the VSID, r4 contains the virtual address,
  * r5 contains the linux PTE, r6 contains the old value of the
- * linux PTE (before setting _PAGE_HASHPTE) and r7 contains the
- * offset to be added to addresses (0 if the MMU is on,
- * -KERNELBASE if it is off).  r10 contains the upper half of
- * the PTE if CONFIG_PTE_64BIT.
+ * linux PTE (before setting _PAGE_HASHPTE). r10 contains the
+ * upper half of the PTE if CONFIG_PTE_64BIT.
  * On SMP, the caller should have the mmu_hash_lock held.
  * We assume that the caller has (or will) set the _PAGE_HASHPTE
  * bit in the linux PTE in memory.  The value passed in r6 should
@@ -342,7 +336,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
 	patch_site	1f, patch__hash_page_A1
 	patch_site	2f, patch__hash_page_A2
 	/* Get the address of the primary PTE group in the hash table (r3) */
-0:	addis	r0,r7,Hash_base@h	/* base address of hash table */
+0:	lis	r0, (Hash_base - PAGE_OFFSET)@h	/* base address of hash table */
 1:	rlwimi	r0,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT    /* VSID -> hash */
 2:	rlwinm	r3,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */
 	xor	r3,r3,r0		/* make primary hash */
@@ -356,10 +350,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
 	beq+	10f			/* no PTE: go look for an empty slot */
 	tlbie	r4
 
-	addis	r4,r7,htab_hash_searches@ha
-	lwz	r6,htab_hash_searches@l(r4)
+	lis	r4, (htab_hash_searches - PAGE_OFFSET)@ha
+	lwz	r6, (htab_hash_searches - PAGE_OFFSET)@l(r4)
 	addi	r6,r6,1			/* count how many searches we do */
-	stw	r6,htab_hash_searches@l(r4)
+	stw	r6, (htab_hash_searches - PAGE_OFFSET)@l(r4)
 
 	/* Search the primary PTEG for a PTE whose 1st (d)word matches r5 */
 	mtctr	r0
@@ -391,10 +385,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
 	beq+	found_empty
 
 	/* update counter of times that the primary PTEG is full */
-	addis	r4,r7,primary_pteg_full@ha
-	lwz	r6,primary_pteg_full@l(r4)
+	lis	r4, (primary_pteg_full - PAGE_OFFSET)@ha
+	lwz	r6, (primary_pteg_full - PAGE_OFFSET)@l(r4)
 	addi	r6,r6,1
-	stw	r6,primary_pteg_full@l(r4)
+	stw	r6, (primary_pteg_full - PAGE_OFFSET)@l(r4)
 
 	patch_site	0f, patch__hash_page_C
 	/* Search the secondary PTEG for an empty slot */
@@ -428,8 +422,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
 	 * lockup here but that shouldn't happen
 	 */
 
-1:	addis	r4,r7,next_slot@ha		/* get next evict slot */
-	lwz	r6,next_slot@l(r4)
+1:	lis	r4, (next_slot - PAGE_OFFSET)@ha	/* get next evict slot */
+	lwz	r6, (next_slot - PAGE_OFFSET)@l(r4)
 	addi	r6,r6,HPTE_SIZE			/* search for candidate */
 	andi.	r6,r6,7*HPTE_SIZE
 	stw	r6,next_slot@l(r4)
@@ -501,8 +495,6 @@ htab_hash_searches:
  * We assume that there is a hash table in use (Hash != 0).
  */
 _GLOBAL(flush_hash_pages)
-	tophys(r7,0)
-
 	/*
 	 * We disable interrupts here, even on UP, because we want
 	 * the _PAGE_HASHPTE bit to be a reliable indication of
@@ -547,10 +539,10 @@ _GLOBAL(flush_hash_pages)
 	SET_V(r11)			/* set V (valid) bit */
 
 #ifdef CONFIG_SMP
-	addis	r9,r7,mmu_hash_lock@ha
-	addi	r9,r9,mmu_hash_lock@l
+	lis	r9, (mmu_hash_lock - PAGE_OFFSET)@ha
+	addi	r9, r9, (mmu_hash_lock - PAGE_OFFSET)@l
 	CURRENT_THREAD_INFO(r8, r1)
-	add	r8,r8,r7
+	tophys(r8, r8)
 	lwz	r8,TI_CPU(r8)
 	oris	r8,r8,9
 10:	lwarx	r0,0,r9
@@ -584,7 +576,7 @@ _GLOBAL(flush_hash_pages)
 	patch_site	1f, patch__flush_hash_A1
 	patch_site	2f, patch__flush_hash_A2
 	/* Get the address of the primary PTE group in the hash table (r3) */
-0:	addis	r8,r7,Hash_base@h	/* base address of hash table */
+0:	lis	r8, (Hash_base - PAGE_OFFSET)@h	/* base address of hash table */
 1:	rlwimi	r8,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT    /* VSID -> hash */
 2:	rlwinm	r0,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */
 	xor	r8,r0,r8		/* make primary hash */
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 3f4193201ee7..fb747bb0b3e4 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -231,7 +231,8 @@ void __init MMU_init_hw(void)
 	if (lg_n_hpteg > 16)
 		mb2 = 16 - LG_HPTEG_SIZE;
 
-	modify_instruction_site(&patch__hash_page_A0, 0xffff, (unsigned int)Hash >> 16);
+	modify_instruction_site(&patch__hash_page_A0, 0xffff,
+				((unsigned int)Hash - PAGE_OFFSET) >> 16);
 	modify_instruction_site(&patch__hash_page_A1, 0x7c0, mb << 6);
 	modify_instruction_site(&patch__hash_page_A2, 0x7c0, mb2 << 6);
 	modify_instruction_site(&patch__hash_page_B, 0xffff, hmask);
@@ -240,7 +241,8 @@ void __init MMU_init_hw(void)
 	/*
 	 * Patch up the instructions in hashtable.S:flush_hash_page
 	 */
-	modify_instruction_site(&patch__flush_hash_A0, 0xffff, (unsigned int)Hash >> 16);
+	modify_instruction_site(&patch__flush_hash_A0, 0xffff,
+				((unsigned int)Hash - PAGE_OFFSET) >> 16);
 	modify_instruction_site(&patch__flush_hash_A1, 0x7c0, mb << 6);
 	modify_instruction_site(&patch__flush_hash_A2, 0x7c0, mb2 << 6);
 	modify_instruction_site(&patch__flush_hash_B, 0xffff, hmask);
-- 
2.13.3



* [PATCH 05/10] powerpc/603: Don't handle kernel page TLB misses when not needed
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (3 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 04/10] powerpc/hash32: use physical address directly in hash handlers Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 06/10] powerpc/603: Don't handle _PAGE_RW and _PAGE_DIRTY on ITLB misses Christophe Leroy
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

ITLB misses on kernel pages only occur with CONFIG_MODULES or
CONFIG_DEBUG_PAGEALLOC: otherwise kernel text is covered by the BAT
mappings and never takes an ITLB miss.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 53f65124edd5..3bbf937ad634 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -500,15 +500,19 @@ InstructionTLBMiss:
  */
 	/* Get PTE (linux-style) and check access */
 	mfspr	r3,SPRN_IMISS
+#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
+#endif
 	mfspr	r2, SPRN_SPRG_603_PGDIR
 	li	r1,_PAGE_USER|_PAGE_PRESENT|_PAGE_EXEC /* low addresses tested as user */
+#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	bge-	112f
 	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
 	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
+#endif
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
-- 
2.13.3



* [PATCH 06/10] powerpc/603: Don't handle _PAGE_RW and _PAGE_DIRTY on ITLB misses
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (4 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 05/10] powerpc/603: Don't handle kernel page TLB misses when not needed Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 07/10] powerpc/603: let's handle PAGE_DIRTY directly Christophe Leroy
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

_PAGE_RW and _PAGE_DIRTY do not matter for ITLB misses: instruction
fetches never write, so the handler has no write permission to compute.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 3bbf937ad634..2aec3f91c9f5 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -528,13 +528,9 @@ InstructionTLBMiss:
 	 */
 	stw	r0,0(r2)		/* update PTE (accessed bit) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwinm	r1,r0,32-10,31,31	/* _PAGE_RW -> PP lsb */
-	rlwinm	r2,r0,32-7,31,31	/* _PAGE_DIRTY -> PP lsb */
-	and	r1,r1,r2		/* writable if _RW and _DIRTY */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
-	rlwimi	r0,r0,32-1,31,31	/* _PAGE_USER -> PP lsb */
-	ori	r1,r1,0xe04		/* clear out reserved bits */
-	andc	r1,r0,r1		/* PP = user? (rw&dirty? 2: 3): 0 */
+	ori	r1, r1, 0xe05		/* clear out reserved bits */
+	andc	r1, r0, r1		/* PP = user? 2 : 0 */
 BEGIN_FTR_SECTION
 	rlwinm	r1,r1,0,~_PAGE_COHERENT	/* clear M (coherence not required) */
 END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
-- 
2.13.3



* [PATCH 07/10] powerpc/603: let's handle PAGE_DIRTY directly
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (5 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 06/10] powerpc/603: Don't handle _PAGE_RW and _PAGE_DIRTY on ITLB misses Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 08/10] powerpc/603: Don't worry about _PAGE_USER in TLB miss handlers Christophe Leroy
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

_PAGE_DIRTY corresponds to the C (changed) bit. Writing to a page
whose C bit is not set generates a DataStoreTLBMiss, so there is no
need to check it in DataLoadTLBMiss.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 2aec3f91c9f5..abbaf51b6f58 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -597,12 +597,10 @@ DataLoadTLBMiss:
 	stw	r0,0(r2)		/* update PTE (accessed bit) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
 	rlwinm	r1,r0,32-10,31,31	/* _PAGE_RW -> PP lsb */
-	rlwinm	r2,r0,32-7,31,31	/* _PAGE_DIRTY -> PP lsb */
-	and	r1,r1,r2		/* writable if _RW and _DIRTY */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	rlwimi	r0,r0,32-1,31,31	/* _PAGE_USER -> PP lsb */
 	ori	r1,r1,0xe04		/* clear out reserved bits */
-	andc	r1,r0,r1		/* PP = user? (rw&dirty? 2: 3): 0 */
+	andc	r1,r0,r1		/* PP = user? rw? 2: 3: 0 */
 BEGIN_FTR_SECTION
 	rlwinm	r1,r1,0,~_PAGE_COHERENT	/* clear M (coherence not required) */
 END_FTR_SECTION_IFCLR(CPU_FTR_NEED_COHERENT)
@@ -671,7 +669,7 @@ DataStoreTLBMiss:
 	lwz	r0,0(r2)		/* get linux-style pte */
 	andc.	r1,r1,r0		/* check access & ~permission */
 	bne-	DataAddressInvalid	/* return if access not permitted */
-	ori	r0,r0,_PAGE_ACCESSED|_PAGE_DIRTY
+	ori	r0,r0,_PAGE_ACCESSED
 	/*
 	 * NOTE! We are assuming this is not an SMP system, otherwise
 	 * we would need to update the pte atomically with lwarx/stwcx.
-- 
2.13.3



* [PATCH 08/10] powerpc/603: Don't worry about _PAGE_USER in TLB miss handlers
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (6 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 07/10] powerpc/603: let's handle PAGE_DIRTY directly Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 09/10] powerpc/603: don't handle PAGE_ACCESSED " Christophe Leroy
  2019-01-25 12:34 ` [PATCH 10/10] powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling Christophe Leroy
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

The PP bits take user access into account, so there is no need to
check _PAGE_USER here. If the access is not permitted, a DSI or ISI
will be generated.
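
For reference, the key/PP protection table (assuming the Ks = 0 /
Kp = 1 segment keys Linux sets up, so the key is 0 for supervisor
accesses and 1 for user accesses):

	PP	supervisor	user
	00	read/write	no access
	01	read/write	read only
	10	read/write	read/write
	11	read only	read only

A user access to a PP = 00 (kernel) page therefore faults by itself,
without the handler having to test _PAGE_USER.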

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index abbaf51b6f58..44cc84857520 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -505,11 +505,9 @@ InstructionTLBMiss:
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SPRG_603_PGDIR
-	li	r1,_PAGE_USER|_PAGE_PRESENT|_PAGE_EXEC /* low addresses tested as user */
+	li	r1,_PAGE_PRESENT | _PAGE_EXEC
 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	bge-	112f
-	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
-	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 #endif
@@ -575,10 +573,8 @@ DataLoadTLBMiss:
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_603_PGDIR
-	li	r1,_PAGE_USER|_PAGE_PRESENT /* low addresses tested as user */
+	li	r1, _PAGE_PRESENT
 	bge-	112f
-	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
-	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
@@ -655,10 +651,8 @@ DataStoreTLBMiss:
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_603_PGDIR
-	li	r1,_PAGE_RW|_PAGE_USER|_PAGE_PRESENT /* access flags */
+	li	r1, _PAGE_RW | _PAGE_PRESENT /* access flags */
 	bge-	112f
-	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
-	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
-- 
2.13.3



* [PATCH 09/10] powerpc/603: don't handle PAGE_ACCESSED in TLB miss handlers.
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (7 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 08/10] powerpc/603: Don't worry about _PAGE_USER in TLB miss handlers Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-01-25 12:34 ` [PATCH 10/10] powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling Christophe Leroy
  9 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

_PAGE_ACCESSED is only needed for CONFIG_SWAP. When CONFIG_SWAP
is not set, just ignore it. If CONFIG_SWAP is set and _PAGE_ACCESSED
is not, let's take a minor fault.
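
For illustration, a rough C rendering of the test the handlers now
perform (a sketch with a hypothetical helper name; the real code is
the andc./bne- pair in head_32.S). A miss on a page lacking a required
bit falls through to the ISI/DSI path, where do_page_fault() sets the
referenced bit as a minor fault:

static int miss_needs_fault(unsigned long pte)
{
#ifdef CONFIG_SWAP
	unsigned long required = _PAGE_PRESENT | _PAGE_ACCESSED;
#else
	unsigned long required = _PAGE_PRESENT;
#endif
	/* andc. r1,r1,r0 ; bne- {Instruction,Data}AddressInvalid */
	return (required & ~pte) != 0;
}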

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 44cc84857520..9f25f44b6448 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -505,7 +505,11 @@ InstructionTLBMiss:
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SPRG_603_PGDIR
+#ifdef CONFIG_SWAP
+	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+#else
 	li	r1,_PAGE_PRESENT | _PAGE_EXEC
+#endif
 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	bge-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
@@ -519,12 +523,6 @@ InstructionTLBMiss:
 	lwz	r0,0(r2)		/* get linux-style pte */
 	andc.	r1,r1,r0		/* check access & ~permission */
 	bne-	InstructionAddressInvalid /* return if access not permitted */
-	ori	r0,r0,_PAGE_ACCESSED	/* set _PAGE_ACCESSED in pte */
-	/*
-	 * NOTE! We are assuming this is not an SMP system, otherwise
-	 * we would need to update the pte atomically with lwarx/stwcx.
-	 */
-	stw	r0,0(r2)		/* update PTE (accessed bit) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	ori	r1, r1, 0xe05		/* clear out reserved bits */
@@ -573,7 +571,11 @@ DataLoadTLBMiss:
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_603_PGDIR
+#ifdef CONFIG_SWAP
+	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
+#else
 	li	r1, _PAGE_PRESENT
+#endif
 	bge-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
@@ -585,12 +587,10 @@ DataLoadTLBMiss:
 	lwz	r0,0(r2)		/* get linux-style pte */
 	andc.	r1,r1,r0		/* check access & ~permission */
 	bne-	DataAddressInvalid	/* return if access not permitted */
-	ori	r0,r0,_PAGE_ACCESSED	/* set _PAGE_ACCESSED in pte */
 	/*
 	 * NOTE! We are assuming this is not an SMP system, otherwise
 	 * we would need to update the pte atomically with lwarx/stwcx.
 	 */
-	stw	r0,0(r2)		/* update PTE (accessed bit) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
 	rlwinm	r1,r0,32-10,31,31	/* _PAGE_RW -> PP lsb */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
@@ -651,7 +651,11 @@ DataStoreTLBMiss:
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_603_PGDIR
-	li	r1, _PAGE_RW | _PAGE_PRESENT /* access flags */
+#ifdef CONFIG_SWAP
+	li	r1, _PAGE_RW | _PAGE_PRESENT | _PAGE_ACCESSED
+#else
+	li	r1, _PAGE_RW | _PAGE_PRESENT
+#endif
 	bge-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
@@ -663,12 +667,10 @@ DataStoreTLBMiss:
 	lwz	r0,0(r2)		/* get linux-style pte */
 	andc.	r1,r1,r0		/* check access & ~permission */
 	bne-	DataAddressInvalid	/* return if access not permitted */
-	ori	r0,r0,_PAGE_ACCESSED
 	/*
 	 * NOTE! We are assuming this is not an SMP system, otherwise
 	 * we would need to update the pte atomically with lwarx/stwcx.
 	 */
-	stw	r0,0(r2)		/* update PTE (accessed/dirty bits) */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
 	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	li	r1,0xe05		/* clear out reserved bits & PP lsb */
-- 
2.13.3



* [PATCH 10/10] powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling
  2019-01-25 12:34 [PATCH 00/10] Optimise TLB miss handlers on 603/e300 Christophe Leroy
                   ` (8 preceding siblings ...)
  2019-01-25 12:34 ` [PATCH 09/10] powerpc/603: don't handle PAGE_ACCESSED " Christophe Leroy
@ 2019-01-25 12:34 ` Christophe Leroy
  2019-02-22  9:47   ` [10/10] " Michael Ellerman
  9 siblings, 1 reply; 13+ messages in thread
From: Christophe Leroy @ 2019-01-25 12:34 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linux-kernel, linuxppc-dev

For pages without _PAGE_USER, the PP field is 00.
For pages with _PAGE_USER, the PP field is 10 for RW and 11 for RO.

This patch sets _PAGE_USER to 0x002 and _PAGE_RW to 0x001
in order to simplify TLB handling by reducing the amount of shifting.

The location of _PAGE_PRESENT and _PAGE_HASHPTE doesn't matter,
as they are software-only flags.
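
For quick reference, the low-bit layout before and after this patch
(taken from the hash.h hunk below):

	bit	before		after
	0x001	_PAGE_PRESENT	_PAGE_RW
	0x002	_PAGE_HASHPTE	_PAGE_USER
	0x004	_PAGE_USER	_PAGE_HASHPTE
	0x400	_PAGE_RW	_PAGE_PRESENT

With _PAGE_USER and _PAGE_RW in bits 1 and 0, they already sit where
the PP field goes in the low word of the hardware PTE, so the handlers
can mask them in place instead of shifting them around.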

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/32/hash.h | 8 ++++----
 arch/powerpc/kernel/head_32.S             | 5 +----
 arch/powerpc/mm/hash_low_32.S             | 6 ++----
 3 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/hash.h b/arch/powerpc/include/asm/book3s/32/hash.h
index 2a0a467d2985..a5907ea4fb40 100644
--- a/arch/powerpc/include/asm/book3s/32/hash.h
+++ b/arch/powerpc/include/asm/book3s/32/hash.h
@@ -17,9 +17,9 @@
  * updating the accessed and modified bits in the page table tree.
  */
 
-#define _PAGE_PRESENT	0x001	/* software: pte contains a translation */
-#define _PAGE_HASHPTE	0x002	/* hash_page has made an HPTE for this pte */
-#define _PAGE_USER	0x004	/* usermode access allowed */
+#define _PAGE_RW	0x001	/* PP = x1: user write access allowed */
+#define _PAGE_USER	0x002	/* PP = 1x: usermode access allowed */
+#define _PAGE_HASHPTE	0x004	/* software: hash_page has made an HPTE for this pte */
 #define _PAGE_GUARDED	0x008	/* G: prohibit speculative access */
 #define _PAGE_COHERENT	0x010	/* M: enforce memory coherence (SMP systems) */
 #define _PAGE_NO_CACHE	0x020	/* I: cache inhibit */
@@ -27,7 +27,7 @@
 #define _PAGE_DIRTY	0x080	/* C: page changed */
 #define _PAGE_ACCESSED	0x100	/* R: page referenced */
 #define _PAGE_EXEC	0x200	/* software: exec allowed */
-#define _PAGE_RW	0x400	/* software: user write access allowed */
+#define _PAGE_PRESENT	0x400	/* software: pte contains a translation */
 #define _PAGE_SPECIAL	0x800	/* software: Special page */
 
 #ifdef CONFIG_PTE_64BIT
diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 9f25f44b6448..3131fbd9e7f5 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -524,7 +524,6 @@ InstructionTLBMiss:
 	andc.	r1,r1,r0		/* check access & ~permission */
 	bne-	InstructionAddressInvalid /* return if access not permitted */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	ori	r1, r1, 0xe05		/* clear out reserved bits */
 	andc	r1, r0, r1		/* PP = user? 2 : 0 */
 BEGIN_FTR_SECTION
@@ -592,8 +591,7 @@ DataLoadTLBMiss:
 	 * we would need to update the pte atomically with lwarx/stwcx.
 	 */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwinm	r1,r0,32-10,31,31	/* _PAGE_RW -> PP lsb */
-	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
+	rlwinm	r1, r0, 0, 31, 31	/* _PAGE_RW -> PP lsb */
 	rlwimi	r0,r0,32-1,31,31	/* _PAGE_USER -> PP lsb */
 	ori	r1,r1,0xe04		/* clear out reserved bits */
 	andc	r1,r0,r1		/* PP = user? rw? 2: 3: 0 */
@@ -672,7 +670,6 @@ DataStoreTLBMiss:
 	 * we would need to update the pte atomically with lwarx/stwcx.
 	 */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
 	li	r1,0xe05		/* clear out reserved bits & PP lsb */
 	andc	r1,r0,r1		/* PP = user? 2: 0 */
 BEGIN_FTR_SECTION
diff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S
index 2afd0dce6d6a..ab60f541c4d2 100644
--- a/arch/powerpc/mm/hash_low_32.S
+++ b/arch/powerpc/mm/hash_low_32.S
@@ -311,11 +311,9 @@ Hash_msk = (((1 << Hash_bits) - 1) * 64)
 
 _GLOBAL(create_hpte)
 	/* Convert linux-style PTE (r5) to low word of PPC-style PTE (r8) */
-	rlwinm	r8,r5,32-10,31,31	/* _PAGE_RW -> PP lsb */
 	rlwinm	r0,r5,32-7,31,31	/* _PAGE_DIRTY -> PP lsb */
-	and	r8,r8,r0		/* writable if _RW & _DIRTY */
-	rlwimi	r5,r5,32-1,30,30	/* _PAGE_USER -> PP msb */
-	rlwimi	r5,r5,32-2,31,31	/* _PAGE_USER -> PP lsb */
+	and	r8, r5, r0		/* writable if _RW & _DIRTY */
+	rlwimi	r5, r5, 32 - 1, 31, 31	/* _PAGE_USER -> PP lsb */
 	ori	r8,r8,0xe04		/* clear out reserved bits */
 	andc	r8,r5,r8		/* PP = user? (rw&dirty? 2: 3): 0 */
 BEGIN_FTR_SECTION
-- 
2.13.3



* Re: [PATCH 02/10] powerpc/603: Store PGDIR physical address in a SPRG
  2019-01-25 12:34 ` [PATCH 02/10] powerpc/603: Store PGDIR physical address in a SPRG Christophe Leroy
@ 2019-02-20 17:39   ` Christophe Leroy
  0 siblings, 0 replies; 13+ messages in thread
From: Christophe Leroy @ 2019-02-20 17:39 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	joakim.tjernlund
  Cc: linuxppc-dev, linux-kernel



On 25/01/2019 at 13:34, Christophe Leroy wrote:
> Use SPRN_SPRG5 to store the current thread's PGDIR and
> avoid reading thread_struct->pgdir at every TLB miss.

I'll send out v2 with an additional patch getting rid of SPRN_SPRG_RTAS,
hence freeing SPRN_SPRG2, which I will use here instead of SPRN_SPRG5 so
that all 6xx will benefit.

Christophe

> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>   arch/powerpc/include/asm/reg.h      |  1 +
>   arch/powerpc/kernel/cpu_setup_6xx.S |  4 ++++
>   arch/powerpc/kernel/head_32.S       | 28 ++++++++++++++++------------
>   3 files changed, 21 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> index 1c98ef1f2d5b..ba0ab1a1431b 100644
> --- a/arch/powerpc/include/asm/reg.h
> +++ b/arch/powerpc/include/asm/reg.h
> @@ -1169,6 +1169,7 @@
>   #define SPRN_SPRG_SCRATCH1	SPRN_SPRG1
>   #define SPRN_SPRG_RTAS		SPRN_SPRG2
>   #define SPRN_SPRG_603_LRU	SPRN_SPRG4
> +#define SPRN_SPRG_603_PGDIR	SPRN_SPRG5
>   #endif
>   
>   #ifdef CONFIG_40x
> diff --git a/arch/powerpc/kernel/cpu_setup_6xx.S b/arch/powerpc/kernel/cpu_setup_6xx.S
> index 8c069e96c478..4c91d1f640fe 100644
> --- a/arch/powerpc/kernel/cpu_setup_6xx.S
> +++ b/arch/powerpc/kernel/cpu_setup_6xx.S
> @@ -24,6 +24,10 @@ BEGIN_MMU_FTR_SECTION
>   	li	r10,0
>   	mtspr	SPRN_SPRG_603_LRU,r10		/* init SW LRU tracking */
>   END_MMU_FTR_SECTION_IFSET(MMU_FTR_NEED_DTLB_SW_LRU)
> +	lis	r10, (swapper_pg_dir - PAGE_OFFSET)@h
> +	ori	r10, r10, (swapper_pg_dir - PAGE_OFFSET)@l
> +	mtspr	SPRN_SPRG_603_PGDIR, r10
> +
>   BEGIN_FTR_SECTION
>   	bl	__init_fpu_registers
>   END_FTR_SECTION_IFCLR(CPU_FTR_FPU_UNAVAILABLE)
> diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
> index c2f564690778..dbd15e03952a 100644
> --- a/arch/powerpc/kernel/head_32.S
> +++ b/arch/powerpc/kernel/head_32.S
> @@ -502,16 +502,15 @@ InstructionTLBMiss:
>   	mfspr	r3,SPRN_IMISS
>   	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
>   	cmplw	0,r1,r3
> -	mfspr	r2,SPRN_SPRG_THREAD
> +	mfspr	r2, SPRN_SPRG_603_PGDIR
>   	li	r1,_PAGE_USER|_PAGE_PRESENT|_PAGE_EXEC /* low addresses tested as user */
> -	lwz	r2,PGDIR(r2)
>   	bge-	112f
>   	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
>   	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
>   	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
>   	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
> -112:	tophys(r2,r2)
> -	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
> +	tophys(r2,r2)
> +112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
>   	lwz	r2,0(r2)		/* get pmd entry */
>   	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
>   	beq-	InstructionAddressInvalid	/* return if no mapping */
> @@ -576,16 +575,15 @@ DataLoadTLBMiss:
>   	mfspr	r3,SPRN_DMISS
>   	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
>   	cmplw	0,r1,r3
> -	mfspr	r2,SPRN_SPRG_THREAD
> +	mfspr	r2, SPRN_SPRG_603_PGDIR
>   	li	r1,_PAGE_USER|_PAGE_PRESENT /* low addresses tested as user */
> -	lwz	r2,PGDIR(r2)
>   	bge-	112f
>   	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
>   	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
>   	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
>   	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
> -112:	tophys(r2,r2)
> -	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
> +	tophys(r2,r2)
> +112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
>   	lwz	r2,0(r2)		/* get pmd entry */
>   	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
>   	beq-	DataAddressInvalid	/* return if no mapping */
> @@ -660,16 +658,15 @@ DataStoreTLBMiss:
>   	mfspr	r3,SPRN_DMISS
>   	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
>   	cmplw	0,r1,r3
> -	mfspr	r2,SPRN_SPRG_THREAD
> +	mfspr	r2, SPRN_SPRG_603_PGDIR
>   	li	r1,_PAGE_RW|_PAGE_USER|_PAGE_PRESENT /* access flags */
> -	lwz	r2,PGDIR(r2)
>   	bge-	112f
>   	mfspr	r2,SPRN_SRR1		/* and MSR_PR bit from SRR1 */
>   	rlwimi	r1,r2,32-12,29,29	/* shift MSR_PR to _PAGE_USER posn */
>   	lis	r2,swapper_pg_dir@ha	/* if kernel address, use */
>   	addi	r2,r2,swapper_pg_dir@l	/* kernel page table */
> -112:	tophys(r2,r2)
> -	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
> +	tophys(r2,r2)
> +112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
>   	lwz	r2,0(r2)		/* get pmd entry */
>   	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
>   	beq-	DataAddressInvalid	/* return if no mapping */
> @@ -1030,6 +1027,13 @@ _ENTRY(switch_mmu_context)
>   	lis	r5, abatron_pteptrs@ha
>   	stw	r4, abatron_pteptrs@l + 0x4(r5)
>   #endif
> +BEGIN_MMU_FTR_SECTION
> +#ifndef CONFIG_BDI_SWITCH
> +	lwz	r4, MM_PGD(r4)
> +#endif
> +	tophys(r4, r4)
> +	mtspr	SPRN_SPRG_603_PGDIR, r4
> +END_MMU_FTR_SECTION_IFCLR(MMU_FTR_HPTE_TABLE)
>   	li	r4,0
>   	isync
>   3:
> 


* Re: [10/10] powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling
  2019-01-25 12:34 ` [PATCH 10/10] powerpc/book3s32: Reorder _PAGE_XXX flags to simplify TLB handling Christophe Leroy
@ 2019-02-22  9:47   ` Michael Ellerman
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2019-02-22  9:47 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	joakim.tjernlund
  Cc: linuxppc-dev, linux-kernel

On Fri, 2019-01-25 at 12:34:20 UTC, Christophe Leroy wrote:
> For pages without _PAGE_USER, the PP field is 00.
> For pages with _PAGE_USER, the PP field is 10 for RW and 11 for RO.
> 
> This patch sets _PAGE_USER to 0x002 and _PAGE_RW to 0x001
> in order to simplify TLB handling by reducing the amount of shifting.
> 
> The location of _PAGE_PRESENT and _PAGE_HASHPTE doesn't matter,
> as they are software-only flags.
> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/78ca1108b10927b3d068c8da91352b0f

cheers

