linux-snps-arc.lists.infradead.org archive mirror
* [PATCH 0/6] ARC MMU code updates
@ 2019-09-16 21:32 Vineet Gupta
  2019-09-16 21:32 ` [PATCH 1/6] ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg Vineet Gupta
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

Hi,

This is a set of patches that was almost lost in one of my older branches. I
decided to clean them up and post them, given the ongoing work on the newer MMU.

Thx,
-Vineet

Vineet Gupta (6):
  ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu
    scratch reg
  ARCv2: mm: TLB Miss optim: Use double word load/stores LDD/STD
  ARC: mm: TLB Miss optim: avoid re-reading ECR
  ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not
    available
  ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop
  ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3

 arch/arc/include/asm/entry-compact.h |  4 +-
 arch/arc/include/asm/mmu.h           |  6 +++
 arch/arc/include/asm/mmu_context.h   |  2 +-
 arch/arc/include/asm/pgtable.h       |  2 +-
 arch/arc/mm/tlb.c                    | 81 +++++++++++-----------------
 arch/arc/mm/tlbex.S                  | 18 ++++---
 6 files changed, 51 insertions(+), 62 deletions(-)

-- 
2.20.1

* [PATCH 1/6] ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg
  2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
@ 2019-09-16 21:32 ` Vineet Gupta
  2019-09-16 21:32 ` [PATCH 2/6] ARCv2: mm: TLB Miss optim: Use double word load/stores LDD/STD Vineet Gupta
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

ARC700 exception (and interrupt) handling lacks auto stack switching and
thus has to rely on stashing a register temporarily (to free it up) at a
known place in memory, which allows the low level stack switching to be
coded up. This however is not re-entrant in SMP, so SMP builds had to
repurpose the per-cpu MMU SCRATCH DATA register otherwise used to "cache"
the task pgd pointer (vs. reading it from the mm struct).

The newer HS cores do have auto-stack switching and thus even SMP builds
can use the MMU SCRATCH reg as originally intended.

This patch narrows that restriction to ARC700 SMP builds only.
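
As a rough C sketch of what the cached pgd buys (the helper names below
are only for illustration; the real users are pgd_offset_fast() and the
TLB miss handler):

  /* With the scratch reg: a single aux register read gets the pgd;
   * switch_mm() keeps ARC_REG_SCRATCH_DATA0 loaded with next->pgd. */
  static inline pgd_t *pgd_from_scratch_reg(void)
  {
          return (pgd_t *)read_aux_reg(ARC_REG_SCRATCH_DATA0);
  }

  /* Without it (ARC700 SMP until now): 3 dependent memory lookups */
  static inline pgd_t *pgd_from_task(void)
  {
          return current->mm->pgd;        /* task -> mm -> pgd */
  }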

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
---
 arch/arc/include/asm/entry-compact.h | 4 ++--
 arch/arc/include/asm/mmu.h           | 4 ++++
 arch/arc/include/asm/mmu_context.h   | 2 +-
 arch/arc/include/asm/pgtable.h       | 2 +-
 arch/arc/mm/tlb.c                    | 2 +-
 arch/arc/mm/tlbex.S                  | 2 +-
 6 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arc/include/asm/entry-compact.h b/arch/arc/include/asm/entry-compact.h
index 66a292335ee6..c3aa775878dc 100644
--- a/arch/arc/include/asm/entry-compact.h
+++ b/arch/arc/include/asm/entry-compact.h
@@ -130,7 +130,7 @@
  * to be saved again on kernel mode stack, as part of pt_regs.
  *-------------------------------------------------------------*/
 .macro PROLOG_FREEUP_REG	reg, mem
-#ifdef CONFIG_SMP
+#ifndef ARC_USE_SCRATCH_REG
 	sr  \reg, [ARC_REG_SCRATCH_DATA0]
 #else
 	st  \reg, [\mem]
@@ -138,7 +138,7 @@
 .endm
 
 .macro PROLOG_RESTORE_REG	reg, mem
-#ifdef CONFIG_SMP
+#ifndef ARC_USE_SCRATCH_REG
 	lr  \reg, [ARC_REG_SCRATCH_DATA0]
 #else
 	ld  \reg, [\mem]
diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h
index 98cadf1a09ac..0abacb82a72b 100644
--- a/arch/arc/include/asm/mmu.h
+++ b/arch/arc/include/asm/mmu.h
@@ -40,6 +40,10 @@
 #define ARC_REG_SCRATCH_DATA0	0x46c
 #endif
 
+#if defined(CONFIG_ISA_ARCV2) || !defined(CONFIG_SMP)
+#define	ARC_USE_SCRATCH_REG
+#endif
+
 /* Bits in MMU PID register */
 #define __TLB_ENABLE		(1 << 31)
 #define __PROG_ENABLE		(1 << 30)
diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h
index 035470816be5..3a5e6a5b9ed6 100644
--- a/arch/arc/include/asm/mmu_context.h
+++ b/arch/arc/include/asm/mmu_context.h
@@ -144,7 +144,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	 */
 	cpumask_set_cpu(cpu, mm_cpumask(next));
 
-#ifndef CONFIG_SMP
+#ifdef ARC_USE_SCRATCH_REG
 	/* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */
 	write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd);
 #endif
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 1d87c18a2976..210eb1df17df 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -351,7 +351,7 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
  * Thus use this macro only when you are certain that "current" is current
  * e.g. when dealing with signal frame setup code etc
  */
-#ifndef CONFIG_SMP
+#ifdef ARC_USE_SCRATCH_REG
 #define pgd_offset_fast(mm, addr)	\
 ({					\
 	pgd_t *pgd_base = (pgd_t *) read_aux_reg(ARC_REG_SCRATCH_DATA0);  \
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 10025e199353..417f05ac4397 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -868,7 +868,7 @@ void arc_mmu_init(void)
 	write_aux_reg(ARC_REG_PID, MMU_ENABLE);
 
 	/* In smp we use this reg for interrupt 1 scratch */
-#ifndef CONFIG_SMP
+#ifdef ARC_USE_SCRATCH_REG
 	/* swapper_pg_dir is the pgd for the kernel, used by vmalloc */
 	write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir);
 #endif
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index c55d95dd2f39..d6fbdeda400a 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -193,7 +193,7 @@ ex_saved_reg1:
 
 	lr  r2, [efa]
 
-#ifndef CONFIG_SMP
+#ifdef ARC_USE_SCRATCH_REG
 	lr  r1, [ARC_REG_SCRATCH_DATA0] ; current pgd
 #else
 	GET_CURR_TASK_ON_CPU  r1
-- 
2.20.1

* [PATCH 2/6] ARCv2: mm: TLB Miss optim: Use double word load/stores LDD/STD
  2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
  2019-09-16 21:32 ` [PATCH 1/6] ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg Vineet Gupta
@ 2019-09-16 21:32 ` Vineet Gupta
  2019-09-16 21:32 ` [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR Vineet Gupta
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
---
 arch/arc/mm/tlbex.S | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index d6fbdeda400a..110c72536e8b 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -122,17 +122,27 @@ ex_saved_reg1:
 #else	/* ARCv2 */
 
 .macro TLBMISS_FREEUP_REGS
+#ifdef CONFIG_ARC_HAS_LL64
+	std   r0, [sp, -16]
+	std   r2, [sp, -8]
+#else
 	PUSH  r0
 	PUSH  r1
 	PUSH  r2
 	PUSH  r3
+#endif
 .endm
 
 .macro TLBMISS_RESTORE_REGS
+#ifdef CONFIG_ARC_HAS_LL64
+	ldd   r0, [sp, -16]
+	ldd   r2, [sp, -8]
+#else
 	POP   r3
 	POP   r2
 	POP   r1
 	POP   r0
+#endif
 .endm
 
 #endif
-- 
2.20.1

* [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR
  2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
  2019-09-16 21:32 ` [PATCH 1/6] ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg Vineet Gupta
  2019-09-16 21:32 ` [PATCH 2/6] ARCv2: mm: TLB Miss optim: Use double word load/stores LDD/STD Vineet Gupta
@ 2019-09-16 21:32 ` Vineet Gupta
  2019-09-16 23:36   ` Alexey Brodkin
  2019-09-16 21:32 ` [PATCH 4/6] ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available Vineet Gupta
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

For setting the PTE Dirty bit, reuse the prior test for an ST miss.

There is no need to reload ECR and test for the ST cause code, as the
previous condition code is still valid (unclobbered).
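
In C terms the UPDATE_PTE step amounts to roughly the following
(illustrative helper only; in the assembly below "is_write" is simply the
condition flag left behind by the earlier ST-miss test, which is why ECR
need not be re-read):

  static inline unsigned int update_pte_bits(unsigned int pte_val, int is_write)
  {
          pte_val |= _PAGE_ACCESSED;      /* Accessed bit always */
          if (is_write)                   /* flags from the ST-miss check */
                  pte_val |= _PAGE_DIRTY; /* Write access also sets Dirty */
          return pte_val;
  }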

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
---
 arch/arc/mm/tlbex.S | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index 110c72536e8b..4c88148d4cd1 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -380,9 +380,7 @@ ENTRY(EV_TLBMissD)
 
 	;----------------------------------------------------------------
 	; UPDATE_PTE: Let Linux VM know that page was accessed/dirty
-	lr      r3, [ecr]
 	or      r0, r0, _PAGE_ACCESSED        ; Accessed bit always
-	btst_s  r3,  ECR_C_BIT_DTLB_ST_MISS   ; See if it was a Write Access ?
 	or.nz   r0, r0, _PAGE_DIRTY           ; if Write, set Dirty bit as well
 	st_s    r0, [r1]                      ; Write back PTE
 
-- 
2.20.1

* [PATCH 4/6] ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available
  2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
                   ` (2 preceding siblings ...)
  2019-09-16 21:32 ` [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR Vineet Gupta
@ 2019-09-16 21:32 ` Vineet Gupta
  2019-09-16 21:32 ` [PATCH 5/6] ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop Vineet Gupta
  2019-09-16 21:32 ` [PATCH 6/6] ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3 Vineet Gupta
  5 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

TLBWriteNI was introduced in MMUv2 (to not invalidate uTLBs in the Fast
Path TLB Refill Handler). To avoid #ifdef'ery, make it fall back to
TLBWrite, which is available on all MMUs. This will also help with the
next change.
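
A minimal sketch of the fallback pattern (mirroring the mmu.h hunk below;
commit_tlb_write() is a made-up helper just to show the call site):

  #if (CONFIG_ARC_MMU_VER >= 2)
  #define TLBWriteNI  0x5                 /* write JTLB without inv uTLBs */
  #else
  #define TLBWriteNI  TLBWrite            /* pre-v2 MMU: falls back to TLBWrite */
  #endif

  static inline void commit_tlb_write(void)
  {
          /* same line works on every MMU version, no #ifdef at the call site */
          write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
  }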

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
---
 arch/arc/include/asm/mmu.h | 2 ++
 arch/arc/mm/tlbex.S        | 4 ----
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h
index 0abacb82a72b..26b731d32a2b 100644
--- a/arch/arc/include/asm/mmu.h
+++ b/arch/arc/include/asm/mmu.h
@@ -67,6 +67,8 @@
 #if (CONFIG_ARC_MMU_VER >= 2)
 #define TLBWriteNI  0x5		/* write JTLB without inv uTLBs */
 #define TLBIVUTLB   0x6		/* explicitly inv uTLBs */
+#else
+#define TLBWriteNI  TLBWrite	/* Not present in hardware, fallback */
 #endif
 
 #if (CONFIG_ARC_MMU_VER >= 4)
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index 4c88148d4cd1..2efaf6ca0c06 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -292,11 +292,7 @@ ex_saved_reg1:
 	sr  TLBGetIndex, [ARC_REG_TLBCOMMAND]
 
 	/* Commit the Write */
-#if (CONFIG_ARC_MMU_VER >= 2)   /* introduced in v2 */
 	sr TLBWriteNI, [ARC_REG_TLBCOMMAND]
-#else
-	sr TLBWrite, [ARC_REG_TLBCOMMAND]
-#endif
 
 #else
 	sr TLBInsertEntry, [ARC_REG_TLBCOMMAND]
-- 
2.20.1

* [PATCH 5/6] ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop
  2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
                   ` (3 preceding siblings ...)
  2019-09-16 21:32 ` [PATCH 4/6] ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available Vineet Gupta
@ 2019-09-16 21:32 ` Vineet Gupta
  2019-09-16 21:32 ` [PATCH 6/6] ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3 Vineet Gupta
  5 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

The unconditional full TLB flush (on, say, ASID rollover) iterates over each
entry and uses TLBWrite to zero it out. TLBWrite by design also invalidates
the uTLBs, so we end up invalidating them as many times as there are
entries (512 or 1K).

Optimize this by using the weaker TLBWriteNI cmd in the loop, which doesn't
tinker with the uTLBs, plus an explicit one-time IVUTLB outside the loop to
invalidate them all at once.

And given this optimization, the IVUTLB is now needed on MMUv4 too, where
the uTLBs and JTLBs are otherwise kept coherent by the TLBInsertEntry /
TLBDeleteEntry commands.
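
A condensed sketch of the resulting local_flush_tlb_all() loop (see the
hunks below for the actual change):

  /* zero out each JTLB entry without disturbing the uTLBs ... */
  for (entry = 0; entry < num_tlb; entry++) {
          write_aux_reg(ARC_REG_TLBINDEX, entry);
          write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
  }

  /* ... then invalidate the uTLBs exactly once */
  utlb_invalidate();                      /* issues IVUTLB */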

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
---
 arch/arc/mm/tlb.c | 74 +++++++++++++++++++----------------------------
 1 file changed, 29 insertions(+), 45 deletions(-)

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 417f05ac4397..210d807983dd 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -118,6 +118,33 @@ static inline void __tlb_entry_erase(void)
 	write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
 }
 
+static void utlb_invalidate(void)
+{
+#if (CONFIG_ARC_MMU_VER >= 2)
+
+#if (CONFIG_ARC_MMU_VER == 2)
+	/* MMU v2 introduced the uTLB Flush command.
+	 * There was however an obscure hardware bug, where uTLB flush would
+	 * fail when a prior probe for J-TLB (both totally unrelated) would
+	 * return lkup err - because the entry didn't exist in MMU.
+	 * The Workround was to set Index reg with some valid value, prior to
+	 * flush. This was fixed in MMU v3
+	 */
+	unsigned int idx;
+
+	/* make sure INDEX Reg is valid */
+	idx = read_aux_reg(ARC_REG_TLBINDEX);
+
+	/* If not write some dummy val */
+	if (unlikely(idx & TLB_LKUP_ERR))
+		write_aux_reg(ARC_REG_TLBINDEX, 0xa);
+#endif
+
+	write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);
+#endif
+
+}
+
 #if (CONFIG_ARC_MMU_VER < 4)
 
 static inline unsigned int tlb_entry_lkup(unsigned long vaddr_n_asid)
@@ -149,44 +176,6 @@ static void tlb_entry_erase(unsigned int vaddr_n_asid)
 	}
 }
 
-/****************************************************************************
- * ARC700 MMU caches recently used J-TLB entries (RAM) as uTLBs (FLOPs)
- *
- * New IVUTLB cmd in MMU v2 explictly invalidates the uTLB
- *
- * utlb_invalidate ( )
- *  -For v2 MMU calls Flush uTLB Cmd
- *  -For v1 MMU does nothing (except for Metal Fix v1 MMU)
- *      This is because in v1 TLBWrite itself invalidate uTLBs
- ***************************************************************************/
-
-static void utlb_invalidate(void)
-{
-#if (CONFIG_ARC_MMU_VER >= 2)
-
-#if (CONFIG_ARC_MMU_VER == 2)
-	/* MMU v2 introduced the uTLB Flush command.
-	 * There was however an obscure hardware bug, where uTLB flush would
-	 * fail when a prior probe for J-TLB (both totally unrelated) would
-	 * return lkup err - because the entry didn't exist in MMU.
-	 * The Workround was to set Index reg with some valid value, prior to
-	 * flush. This was fixed in MMU v3 hence not needed any more
-	 */
-	unsigned int idx;
-
-	/* make sure INDEX Reg is valid */
-	idx = read_aux_reg(ARC_REG_TLBINDEX);
-
-	/* If not write some dummy val */
-	if (unlikely(idx & TLB_LKUP_ERR))
-		write_aux_reg(ARC_REG_TLBINDEX, 0xa);
-#endif
-
-	write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);
-#endif
-
-}
-
 static void tlb_entry_insert(unsigned int pd0, pte_t pd1)
 {
 	unsigned int idx;
@@ -219,11 +208,6 @@ static void tlb_entry_insert(unsigned int pd0, pte_t pd1)
 
 #else	/* CONFIG_ARC_MMU_VER >= 4) */
 
-static void utlb_invalidate(void)
-{
-	/* No need since uTLB is always in sync with JTLB */
-}
-
 static void tlb_entry_erase(unsigned int vaddr_n_asid)
 {
 	write_aux_reg(ARC_REG_TLBPD0, vaddr_n_asid | _PAGE_PRESENT);
@@ -267,7 +251,7 @@ noinline void local_flush_tlb_all(void)
 	for (entry = 0; entry < num_tlb; entry++) {
 		/* write this entry to the TLB */
 		write_aux_reg(ARC_REG_TLBINDEX, entry);
-		write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
+		write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
 	}
 
 	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
@@ -278,7 +262,7 @@ noinline void local_flush_tlb_all(void)
 
 		for (entry = stlb_idx; entry < stlb_idx + 16; entry++) {
 			write_aux_reg(ARC_REG_TLBINDEX, entry);
-			write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
+			write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
 		}
 	}
 
-- 
2.20.1

* [PATCH 6/6] ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3
  2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
                   ` (4 preceding siblings ...)
  2019-09-16 21:32 ` [PATCH 5/6] ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop Vineet Gupta
@ 2019-09-16 21:32 ` Vineet Gupta
  5 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-16 21:32 UTC (permalink / raw)
  To: linux-snps-arc

For MMUv3 (and prior) the flush_tlb_{range,mm,page} APIs use the MMU
TLBWrite cmd, which already nukes the entire uTLB, so there is NO need for
the additional IVUTLB cmd issued by utlb_invalidate() - hence this patch.

local_flush_tlb_all() is special since it uses the weaker TLBWriteNI
cmd (previous commit) to shoot down the JTLB, hence we retain its explicit
uTLB flush.
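
For reference, the pre-MMUv4 per-entry erase already ends in a TLBWrite
(roughly as below), which is what makes the extra IVUTLB redundant on
those paths:

  static inline void __tlb_entry_erase(void)
  {
          write_aux_reg(ARC_REG_TLBPD1, 0);
          write_aux_reg(ARC_REG_TLBPD0, 0);
          write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);  /* also invalidates uTLBs */
  }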

Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
---
 arch/arc/mm/tlb.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 210d807983dd..c340acd989a0 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -339,8 +339,6 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		}
 	}
 
-	utlb_invalidate();
-
 	local_irq_restore(flags);
 }
 
@@ -369,8 +367,6 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		start += PAGE_SIZE;
 	}
 
-	utlb_invalidate();
-
 	local_irq_restore(flags);
 }
 
@@ -391,7 +387,6 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 
 	if (asid_mm(vma->vm_mm, cpu) != MM_CTXT_NO_ASID) {
 		tlb_entry_erase((page & PAGE_MASK) | hw_pid(vma->vm_mm, cpu));
-		utlb_invalidate();
 	}
 
 	local_irq_restore(flags);
-- 
2.20.1

* [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR
  2019-09-16 21:32 ` [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR Vineet Gupta
@ 2019-09-16 23:36   ` Alexey Brodkin
  2019-09-17 21:08     ` Vineet Gupta
  0 siblings, 1 reply; 9+ messages in thread
From: Alexey Brodkin @ 2019-09-16 23:36 UTC (permalink / raw)
  To: linux-snps-arc

Hi Vineet,

> -----Original Message-----
> From: Vineet Gupta <vgupta at synopsys.com>
> Sent: Monday, September 16, 2019 2:32 PM
> To: linux-snps-arc at lists.infradead.org
> Cc: Alexey Brodkin <abrodkin at synopsys.com>; Vineet Gupta <vgupta at synopsys.com>
> Subject: [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR
> 
> For setting PTE Dirty bit, reuse the prior test for ST miss.
> 
> No need to reload ECR and test for ST cause code as the prev
> condition code is still valid (uncloberred)
> 
> Signed-off-by: Vineet Gupta <vgupta at synopsys.com>
> ---
>  arch/arc/mm/tlbex.S | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
> index 110c72536e8b..4c88148d4cd1 100644
> --- a/arch/arc/mm/tlbex.S
> +++ b/arch/arc/mm/tlbex.S
> @@ -380,9 +380,7 @@ ENTRY(EV_TLBMissD)
> 
>  	;----------------------------------------------------------------
>  	; UPDATE_PTE: Let Linux VM know that page was accessed/dirty

I'd suggest putting a BOLD comment here saying that we rely on the previously
set condition flags, so that whoever reads or (even worse) modifies this or the
previous code keeps in mind that we shouldn't clobber a particular flag.

> -	lr      r3, [ecr]
>  	or      r0, r0, _PAGE_ACCESSED        ; Accessed bit always
> -	btst_s  r3,  ECR_C_BIT_DTLB_ST_MISS   ; See if it was a Write Access ?
>  	or.nz   r0, r0, _PAGE_DIRTY           ; if Write, set Dirty bit as well
>  	st_s    r0, [r1]                      ; Write back PTE

-Alexey

* [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR
  2019-09-16 23:36   ` Alexey Brodkin
@ 2019-09-17 21:08     ` Vineet Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2019-09-17 21:08 UTC (permalink / raw)
  To: linux-snps-arc

On 9/16/19 4:36 PM, Alexey Brodkin wrote:
>>
>>  	;----------------------------------------------------------------
>>  	; UPDATE_PTE: Let Linux VM know that page was accessed/dirty
> 
> I'd suggest putting a BOLD comment here saying that we rely on the previously
> set condition flags, so that whoever reads or (even worse) modifies this or the
> previous code keeps in mind that we shouldn't clobber a particular flag.

The flag setting code is only a few lines prior. It would be messy to annotate
a flag checking instruction with where the flags get set or clobbered. This is
low level assembly code - not for the faint hearted.

end of thread

Thread overview: 9+ messages
2019-09-16 21:32 [PATCH 0/6] ARC MMU code updates Vineet Gupta
2019-09-16 21:32 ` [PATCH 1/6] ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg Vineet Gupta
2019-09-16 21:32 ` [PATCH 2/6] ARCv2: mm: TLB Miss optim: Use double word load/stores LDD/STD Vineet Gupta
2019-09-16 21:32 ` [PATCH 3/6] ARC: mm: TLB Miss optim: avoid re-reading ECR Vineet Gupta
2019-09-16 23:36   ` Alexey Brodkin
2019-09-17 21:08     ` Vineet Gupta
2019-09-16 21:32 ` [PATCH 4/6] ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available Vineet Gupta
2019-09-16 21:32 ` [PATCH 5/6] ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop Vineet Gupta
2019-09-16 21:32 ` [PATCH 6/6] ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3 Vineet Gupta
