* [PATCH 0/5] ARMv6 and ARMv7 mm fixes
@ 2011-05-20 11:19 Will Deacon
  2011-05-20 11:19 ` [PATCH 1/5] ARM: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area Will Deacon
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-20 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

There are a few issues with ASID handling and cache flushing on v6/v7
CPUs that have been identified when running Linux on the Cortex-A15.

These patches solve the problems for the classic page tables. Additional
LPAE changes will be posted separately.

Tested on a Realview-PBX platform with a dual-core Cortex-A9.

Thanks,

Will


Catalin Marinas (1):
  ARM: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7

Will Deacon (4):
  ARM: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area
  ARM: mm: use TTBR1 instead of reserved context ID
  ARM: mm: fix racy ASID rollover broadcast on SMP platforms
  ARM: mm: allow ASID 0 to be allocated to tasks

 arch/arm/include/asm/proc-fns.h |   14 ++++++++++++++
 arch/arm/include/asm/smp.h      |    1 +
 arch/arm/kernel/head.S          |    7 +++++--
 arch/arm/kernel/smp.c           |    1 +
 arch/arm/mm/cache-v6.S          |    1 +
 arch/arm/mm/cache-v7.S          |    2 ++
 arch/arm/mm/context.c           |   20 ++++++++++----------
 arch/arm/mm/proc-v6.S           |    4 +++-
 arch/arm/mm/proc-v7.S           |   14 +++++++-------
 9 files changed, 44 insertions(+), 20 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/5] ARM: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area
  2011-05-20 11:19 [PATCH 0/5] ARMv6 and ARMv7 mm fixes Will Deacon
@ 2011-05-20 11:19 ` Will Deacon
  2011-05-20 11:19 ` [PATCH 2/5] ARM: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7 Will Deacon
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-20 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

The v6 and v7 implementations of flush_kern_dcache_area do not align
the passed MVA to the size of a cacheline in the data cache. If a
misaligned address is used, only a subset of the requested area will
be flushed. This has been observed to cause failures in SMP boot where
the secondary_data initialised by the primary CPU is not cacheline
aligned, causing the secondary CPUs to read incorrect values for their
pgd and stack pointers.

This patch ensures that the base address is cacheline aligned before
flushing the d-cache.
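For illustration, the effect of the alignment fix can be modelled in plain C (a hypothetical userspace sketch, not kernel code; the 32-byte line size is an assumption chosen for the example, whereas v7 reads the real size from CTR via dcache_line_size):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of the flush loop in v6_flush_kern_dcache_area: it walks
 * [base, base + len) one cacheline at a time, cleaning+invalidating the
 * line containing the current address on each iteration. */
#define D_CACHE_LINE_SIZE 32	/* assumed line size, for illustration */

/* bic r0, r0, #D_CACHE_LINE_SIZE - 1: round down to a line boundary */
static uintptr_t cacheline_align(uintptr_t addr)
{
	return addr & ~((uintptr_t)D_CACHE_LINE_SIZE - 1);
}

/* Returns the highest address actually covered by the flush loop, with
 * and without first aligning the base address as the patch does. */
static uintptr_t last_byte_flushed(uintptr_t base, size_t len, int fixed)
{
	uintptr_t cur = fixed ? cacheline_align(base) : base;
	uintptr_t end = base + len;
	uintptr_t last = 0;

	while (cur < end) {
		/* mcr p15, 0, cur, c7, c14, 1: clean+invalidate D line */
		last = cacheline_align(cur) + D_CACHE_LINE_SIZE - 1;
		cur += D_CACHE_LINE_SIZE;	/* add r0, r0, r2 */
	}
	return last;
}
```

With a misaligned base (e.g. 0x1004) and a 32-byte area, the unfixed loop stops after one line and never touches the tail of the buffer in the next line; the fixed loop covers the whole area.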

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mm/cache-v6.S |    1 +
 arch/arm/mm/cache-v7.S |    2 ++
 2 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mm/cache-v6.S b/arch/arm/mm/cache-v6.S
index c96fa1b..73b4a8b 100644
--- a/arch/arm/mm/cache-v6.S
+++ b/arch/arm/mm/cache-v6.S
@@ -176,6 +176,7 @@ ENDPROC(v6_coherent_kern_range)
  */
 ENTRY(v6_flush_kern_dcache_area)
 	add	r1, r0, r1
+	bic	r0, r0, #D_CACHE_LINE_SIZE - 1
 1:
 #ifdef HARVARD_CACHE
 	mcr	p15, 0, r0, c7, c14, 1		@ clean & invalidate D line
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index dc18d81..d32f02b 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -221,6 +221,8 @@ ENDPROC(v7_coherent_user_range)
 ENTRY(v7_flush_kern_dcache_area)
 	dcache_line_size r2, r3
 	add	r1, r0, r1
+	sub	r3, r2, #1
+	bic	r0, r0, r3
 1:
 	mcr	p15, 0, r0, c7, c14, 1		@ clean & invalidate D line / unified line
 	add	r0, r0, r2
-- 
1.7.0.4


* [PATCH 2/5] ARM: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7
  2011-05-20 11:19 [PATCH 0/5] ARMv6 and ARMv7 mm fixes Will Deacon
  2011-05-20 11:19 ` [PATCH 1/5] ARM: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area Will Deacon
@ 2011-05-20 11:19 ` Will Deacon
  2011-05-20 11:19 ` [PATCH 3/5] ARM: mm: use TTBR1 instead of reserved context ID Will Deacon
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-20 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

From: Catalin Marinas <catalin.marinas@arm.com>

This patch makes TTBR1 point to swapper_pg_dir so that global, kernel
mappings can be used exclusively on v6 and v7 cores where they are
needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/smp.h |    1 +
 arch/arm/kernel/head.S     |    7 +++++--
 arch/arm/kernel/smp.c      |    1 +
 arch/arm/mm/proc-v6.S      |    4 +++-
 arch/arm/mm/proc-v7.S      |    4 +++-
 5 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/smp.h b/arch/arm/include/asm/smp.h
index 96ed521..be4b588 100644
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -78,6 +78,7 @@ extern void platform_smp_prepare_cpus(unsigned int);
  */
 struct secondary_data {
 	unsigned long pgdir;
+	unsigned long swapper_pg_dir;
 	void *stack;
 };
 extern struct secondary_data secondary_data;
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index c9173cf..8224b1d 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -113,6 +113,7 @@ ENTRY(stext)
 	ldr	r13, =__mmap_switched		@ address to jump to after
 						@ mmu has been enabled
 	adr	lr, BSYM(1f)			@ return (PIC) address
+	mov	r8, r4				@ set TTBR1 to swapper_pg_dir
  ARM(	add	pc, r10, #PROCINFO_INITFUNC	)
  THUMB(	add	r12, r10, #PROCINFO_INITFUNC	)
  THUMB(	mov	pc, r12				)
@@ -302,8 +303,10 @@ ENTRY(secondary_startup)
 	 */
 	adr	r4, __secondary_data
 	ldmia	r4, {r5, r7, r12}		@ address to jump to after
-	sub	r4, r4, r5			@ mmu has been enabled
-	ldr	r4, [r7, r4]			@ get secondary_data.pgdir
+	sub	lr, r4, r5			@ mmu has been enabled
+	ldr	r4, [r7, lr]			@ get secondary_data.pgdir
+	add	r7, r7, #4
+	ldr	r8, [r7, lr]			@ get secondary_data.swapper_pg_dir
 	adr	lr, BSYM(__enable_mmu)		@ return address
 	mov	r13, r12			@ __secondary_switched address
  ARM(	add	pc, r10, #PROCINFO_INITFUNC	) @ initialise processor
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index f29b8a2..10b5d55 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -105,6 +105,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
 	 */
 	secondary_data.stack = task_stack_page(idle) + THREAD_START_SP;
 	secondary_data.pgdir = virt_to_phys(pgd);
+	secondary_data.swapper_pg_dir = virt_to_phys(swapper_pg_dir);
 	__cpuc_flush_dcache_area(&secondary_data, sizeof(secondary_data));
 	outer_clean_range(__pa(&secondary_data), __pa(&secondary_data + 1));
 
diff --git a/arch/arm/mm/proc-v6.S b/arch/arm/mm/proc-v6.S
index 7c99cb4..55ca716 100644
--- a/arch/arm/mm/proc-v6.S
+++ b/arch/arm/mm/proc-v6.S
@@ -218,7 +218,9 @@ __v6_setup:
 	mcr	p15, 0, r0, c2, c0, 2		@ TTB control register
 	ALT_SMP(orr	r4, r4, #TTB_FLAGS_SMP)
 	ALT_UP(orr	r4, r4, #TTB_FLAGS_UP)
-	mcr	p15, 0, r4, c2, c0, 1		@ load TTB1
+	ALT_SMP(orr	r8, r8, #TTB_FLAGS_SMP)
+	ALT_UP(orr	r8, r8, #TTB_FLAGS_UP)
+	mcr	p15, 0, r8, c2, c0, 1		@ load TTB1
 #endif /* CONFIG_MMU */
 	adr	r5, v6_crval
 	ldmia	r5, {r5, r6}
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index babfba0..3c38678 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -368,7 +368,9 @@ __v7_setup:
 	mcr	p15, 0, r10, c2, c0, 2		@ TTB control register
 	ALT_SMP(orr	r4, r4, #TTB_FLAGS_SMP)
 	ALT_UP(orr	r4, r4, #TTB_FLAGS_UP)
-	mcr	p15, 0, r4, c2, c0, 1		@ load TTB1
+	ALT_SMP(orr	r8, r8, #TTB_FLAGS_SMP)
+	ALT_UP(orr	r8, r8, #TTB_FLAGS_UP)
+	mcr	p15, 0, r8, c2, c0, 1		@ load TTB1
 	ldr	r5, =PRRR			@ PRRR
 	ldr	r6, =NMRR			@ NMRR
 	mcr	p15, 0, r5, c10, c2, 0		@ write PRRR
-- 
1.7.0.4


* [PATCH 3/5] ARM: mm: use TTBR1 instead of reserved context ID
  2011-05-20 11:19 [PATCH 0/5] ARMv6 and ARMv7 mm fixes Will Deacon
  2011-05-20 11:19 ` [PATCH 1/5] ARM: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area Will Deacon
  2011-05-20 11:19 ` [PATCH 2/5] ARM: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7 Will Deacon
@ 2011-05-20 11:19 ` Will Deacon
  2011-05-20 11:19 ` [PATCH 4/5] ARM: mm: fix racy ASID rollover broadcast on SMP platforms Will Deacon
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-20 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.

This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.

This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.
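The TLB-match rule at the heart of this can be sketched as a toy model in userspace C (illustration only, not how the hardware is implemented): a global entry matches regardless of the live ASID, which is why bogus "global" entries speculatively fetched from freed page tables pollute every address space, and why entries fetched from swapper_pg_dir, which are genuinely global and kernel-only, are always safe to have cached during the switch.

```c
/* Toy TLB lookup model: an entry hits when the VA matches and it is
 * either global or tagged with the currently live ASID. */
struct tlb_entry {
	unsigned va;
	unsigned asid;	/* ignored when global */
	int global;
};

static int tlb_matches(const struct tlb_entry *e, unsigned va,
		       unsigned cur_asid)
{
	return e->va == va && (e->global || e->asid == cur_asid);
}
```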

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mm/context.c |   11 ++++++-----
 arch/arm/mm/proc-v7.S |   10 ++++------
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index b0ee9ba..0d86298 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -24,9 +24,7 @@ DEFINE_PER_CPU(struct mm_struct *, current_mm);
 
 /*
  * We fork()ed a process, and we need a new context for the child
- * to run in.  We reserve version 0 for initial tasks so we will
- * always allocate an ASID. The ASID 0 is reserved for the TTBR
- * register changing sequence.
+ * to run in.
  */
 void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -36,8 +34,11 @@ void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 
 static void flush_context(void)
 {
-	/* set the reserved ASID before flushing the TLB */
-	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (0));
+	u32 ttb;
+	/* Copy TTBR1 into TTBR0 */
+	asm volatile("mrc	p15, 0, %0, c2, c0, 1\n"
+		     "mcr	p15, 0, %0, c2, c0, 0"
+		     : "=r" (ttb));
 	isb();
 	local_flush_tlb_all();
 	if (icache_is_vivt_asid_tagged()) {
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 3c38678..b3b566e 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -108,18 +108,16 @@ ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_ARM_ERRATA_430973
 	mcr	p15, 0, r2, c7, c5, 6		@ flush BTAC/BTB
 #endif
-#ifdef CONFIG_ARM_ERRATA_754322
-	dsb
-#endif
-	mcr	p15, 0, r2, c13, c0, 1		@ set reserved context ID
-	isb
-1:	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
+	mrc	p15, 0, r2, c2, c0, 1		@ load TTB 1
+	mcr	p15, 0, r2, c2, c0, 0		@ into TTB 0
 	isb
 #ifdef CONFIG_ARM_ERRATA_754322
 	dsb
 #endif
 	mcr	p15, 0, r1, c13, c0, 1		@ set context ID
 	isb
+	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
+	isb
 #endif
 	mov	pc, lr
 ENDPROC(cpu_v7_switch_mm)
-- 
1.7.0.4


* [PATCH 4/5] ARM: mm: fix racy ASID rollover broadcast on SMP platforms
  2011-05-20 11:19 [PATCH 0/5] ARMv6 and ARMv7 mm fixes Will Deacon
                   ` (2 preceding siblings ...)
  2011-05-20 11:19 ` [PATCH 3/5] ARM: mm: use TTBR1 instead of reserved context ID Will Deacon
@ 2011-05-20 11:19 ` Will Deacon
  2011-05-20 11:19 ` [PATCH 5/5] ARM: mm: allow ASID 0 to be allocated to tasks Will Deacon
  2011-05-24 21:59 ` [PATCH 0/5] ARMv6 and ARMv7 mm fixes Stephen Boyd
  5 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-20 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

If ASID rollover is detected on a CPU in an SMP system, a synchronous
IPI call is made to force the secondaries to reallocate their current
ASIDs.

There is a problem where a CPU may be interrupted in the cpu_switch_mm
code with the context ID held in r1. After servicing the IPI, the
context ID register will be updated with an ASID from the previous
generation, polluting the TLB for when that ASID becomes valid in the
new generation.

This patch disables interrupts during cpu_switch_mm for SMP systems,
preventing incoming rollover broadcasts from being serviced while the
register state is inconsistent. Additionally, the context resetting code
is modified to call cpu_switch_mm, rather than setting the context ID
register directly, so that the TTBR always agrees with the ASID.
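The window being closed can be sketched with a deterministic userspace toy model (hypothetical, not kernel code: two variables stand in for the CONTEXTIDR register and the IPI machinery). If the rollover IPI lands between entering cpu_switch_mm and the CONTEXTIDR write, the final write puts the stale, previous-generation ASID back; disabling IRQs defers the IPI until the sequence is complete, so the fresh ASID wins.

```c
/* Toy model of the patch-4 race. */
static unsigned mock_contextidr;	/* stands in for CONTEXTIDR */
static int irqs_enabled = 1, ipi_pending;

/* reset_context() runs from the IPI and installs a fresh,
 * new-generation ASID for the current mm. */
static void service_ipi(unsigned new_gen_asid)
{
	if (irqs_enabled && ipi_pending) {
		mock_contextidr = new_gen_asid;
		ipi_pending = 0;
	}
}

/* cpu_switch_mm with the old-generation ASID already loaded ("held in
 * r1"). Returns the ASID left live in the register afterwards. */
static unsigned switch_mm_model(unsigned old_asid, unsigned new_gen_asid,
				int fixed)
{
	if (fixed)
		irqs_enabled = 0;	/* local_irq_save() */
	service_ipi(new_gen_asid);	/* IPI may be taken here ... */
	mock_contextidr = old_asid;	/* ... before this register write */
	if (fixed)
		irqs_enabled = 1;	/* local_irq_restore() */
	service_ipi(new_gen_asid);	/* a deferred IPI is taken now */
	return mock_contextidr;
}
```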

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/proc-fns.h |   14 ++++++++++++++
 arch/arm/mm/context.c           |    3 +--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8ec535e..233dccd 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -97,8 +97,22 @@ extern void cpu_resume(void);
 
 #ifdef CONFIG_MMU
 
+#ifdef CONFIG_SMP
+
+#define cpu_switch_mm(pgd,mm)	\
+	({						\
+		unsigned long flags;			\
+		local_irq_save(flags);			\
+		cpu_do_switch_mm(virt_to_phys(pgd),mm);	\
+		local_irq_restore(flags);		\
+	})
+
+#else /* SMP */
+
 #define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
 
+#endif
+
 #define cpu_get_pgd()	\
 	({						\
 		unsigned long pg;			\
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 0d86298..55041c1 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -100,8 +100,7 @@ static void reset_context(void *info)
 	set_mm_context(mm, asid);
 
 	/* set the new ASID */
-	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (mm->context.id));
-	isb();
+	cpu_switch_mm(mm->pgd, mm);
 }
 
 #else
-- 
1.7.0.4


* [PATCH 5/5] ARM: mm: allow ASID 0 to be allocated to tasks
  2011-05-20 11:19 [PATCH 0/5] ARMv6 and ARMv7 mm fixes Will Deacon
                   ` (3 preceding siblings ...)
  2011-05-20 11:19 ` [PATCH 4/5] ARM: mm: fix racy ASID rollover broadcast on SMP platforms Will Deacon
@ 2011-05-20 11:19 ` Will Deacon
  2011-05-24 21:59 ` [PATCH 0/5] ARMv6 and ARMv7 mm fixes Stephen Boyd
  5 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-20 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

Now that ASID 0 is no longer used as a reserved value, allow it to be
allocated to tasks.
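The allocator arithmetic after this change can be modelled in a few lines (a simplified userspace sketch of __new_context() built from the diff; locking, the rollover IPI and per-mm state are ignored, and the helper name is made up). The hardware ASID is the low 8 bits of the context ID, the upper bits a generation number; at rollover, CPU n now takes base + n rather than base + n + 1, so the value whose low byte is 0 becomes a legal allocation.

```c
/* Simplified model of the post-patch ASID allocation arithmetic. */
#define ASID_BITS	8
#define ASID_MASK	(~0u << ASID_BITS)

static unsigned cpu_last_asid = 1u << ASID_BITS;	/* first generation */

static unsigned new_context_model(unsigned this_cpu, unsigned nr_cpus)
{
	unsigned asid = ++cpu_last_asid;

	if ((asid & ~ASID_MASK) == 0) {	  /* wrapped into a new generation */
		asid = cpu_last_asid + this_cpu; /* CPU n takes base + n */
		cpu_last_asid += nr_cpus - 1;	 /* others claim the rest */
	}
	return asid;
}
```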

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mm/context.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 55041c1..2352395 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -94,7 +94,7 @@ static void reset_context(void *info)
 		return;
 
 	smp_rmb();
-	asid = cpu_last_asid + cpu + 1;
+	asid = cpu_last_asid + cpu;
 
 	flush_context();
 	set_mm_context(mm, asid);
@@ -143,13 +143,13 @@ void __new_context(struct mm_struct *mm)
 	 * to start a new version and flush the TLB.
 	 */
 	if (unlikely((asid & ~ASID_MASK) == 0)) {
-		asid = cpu_last_asid + smp_processor_id() + 1;
+		asid = cpu_last_asid + smp_processor_id();
 		flush_context();
 #ifdef CONFIG_SMP
 		smp_wmb();
 		smp_call_function(reset_context, NULL, 1);
 #endif
-		cpu_last_asid += NR_CPUS;
+		cpu_last_asid += NR_CPUS - 1;
 	}
 
 	set_mm_context(mm, asid);
-- 
1.7.0.4


* [PATCH 0/5] ARMv6 and ARMv7 mm fixes
  2011-05-20 11:19 [PATCH 0/5] ARMv6 and ARMv7 mm fixes Will Deacon
                   ` (4 preceding siblings ...)
  2011-05-20 11:19 ` [PATCH 5/5] ARM: mm: allow ASID 0 to be allocated to tasks Will Deacon
@ 2011-05-24 21:59 ` Stephen Boyd
  2011-05-25 12:50   ` Will Deacon
  5 siblings, 1 reply; 11+ messages in thread
From: Stephen Boyd @ 2011-05-24 21:59 UTC (permalink / raw)
  To: linux-arm-kernel

On 05/20/2011 04:19 AM, Will Deacon wrote:
> Hello,
>
> There are a few issues with ASID handling and cache flushing on v6/v7
> CPUs that have been identified when running Linux on the Cortex-A15.
>
> These patches solve the problems for the classic page tables. Additional
> LPAE changes will be posted separately.
>
> Tested on a Realview-PBX platform with a dual-core Cortex-A9.

Should these patches be sent to the stable tree? Or do the problems only
manifest on Cortex-A15?

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* [PATCH 0/5] ARMv6 and ARMv7 mm fixes
  2011-05-24 21:59 ` [PATCH 0/5] ARMv6 and ARMv7 mm fixes Stephen Boyd
@ 2011-05-25 12:50   ` Will Deacon
  2011-05-25 18:11     ` Stephen Boyd
  0 siblings, 1 reply; 11+ messages in thread
From: Will Deacon @ 2011-05-25 12:50 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Stephen,

> On 05/20/2011 04:19 AM, Will Deacon wrote:
> > Hello,
> >
> > There are a few issues with ASID handling and cache flushing on v6/v7
> > CPUs that have been identified when running Linux on the Cortex-A15.
> >
> > These patches solve the problems for the classic page tables. Additional
> > LPAE changes will be posted separately.
> >
> > Tested on a Realview-PBX platform with a dual-core Cortex-A9.
> 
> Should these patches be sent to the stable tree? Or do the problems only
> manifest on Cortex-A15?

I was planning to CC stable for patches 1 ("ARM: cache: ensure MVA is cacheline
aligned in flush_kern_dcache_area") and 4 ("ARM: mm: fix racy ASID rollover
broadcast on SMP platforms") as these affect existing v6 and v7 cores. The
remainder of the patches, although nice to have, only kick in on A15 as far as
I'm aware (due to aggressive caching of speculative level 1 entries).

I was hoping for some acks/tested-bys before then since these changes affect a
lot of platforms and the code is fairly scary.

Will


* [PATCH 0/5] ARMv6 and ARMv7 mm fixes
  2011-05-25 12:50   ` Will Deacon
@ 2011-05-25 18:11     ` Stephen Boyd
  2011-05-25 20:52       ` Russell King - ARM Linux
  0 siblings, 1 reply; 11+ messages in thread
From: Stephen Boyd @ 2011-05-25 18:11 UTC (permalink / raw)
  To: linux-arm-kernel

On 05/25/2011 05:50 AM, Will Deacon wrote:
> I was planning to CC stable for patches 1 ("ARM: cache: ensure MVA is cacheline
> aligned in flush_kern_dcache_area") and 4 ("ARM: mm: fix racy ASID rollover
> broadcast on SMP platforms") as these affect existing v6 and v7 cores. The
> remainder of the patches, although nice to have, only kick in on A15 as far as
> I'm aware (due to aggressive caching of speculative level 1 entries).

Would it be appropriate to reorder the series then so patches 1 and 4
come first?

> I was hoping for some acks/tested-bys before then since these changes affect a
> lot of platforms and the code is fairly scary.

Yes the patches look scary. I could give it a test on MSM but I'm not
even sure that will help much. Why didn't you Cc Russell on these patches?

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* [PATCH 0/5] ARMv6 and ARMv7 mm fixes
  2011-05-25 18:11     ` Stephen Boyd
@ 2011-05-25 20:52       ` Russell King - ARM Linux
  2011-05-26 10:15         ` Will Deacon
  0 siblings, 1 reply; 11+ messages in thread
From: Russell King - ARM Linux @ 2011-05-25 20:52 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, May 25, 2011 at 11:11:27AM -0700, Stephen Boyd wrote:
> On 05/25/2011 05:50 AM, Will Deacon wrote:
> > I was planning to CC stable for patches 1 ("ARM: cache: ensure MVA is cacheline
> > aligned in flush_kern_dcache_area") and 4 ("ARM: mm: fix racy ASID rollover
> > broadcast on SMP platforms") as these affect existing v6 and v7 cores. The
> > remainder of the patches, although nice to have, only kick in on A15 as far as
> > I'm aware (due to aggressive caching of speculative level 1 entries).
> 
> Would it be appropriate to reorder the series then so patches 1 and 4
> come first?

I'm not entirely convinced (4) is the best solution yet, but that's
mainly because I've not thought about it enough yet.  1-3 look fine
though.

> > I was hoping for some acks/tested-bys before then since these changes affect a
> > lot of platforms and the code is fairly scary.
> 
> Yes the patches look scary. I could give it a test on MSM but I'm not
> even sure that will help much. Why didn't you Cc Russell on these patches?

I do tend to read about 90% of linux-arm-kernel, whether it's cc'd to
me or not.  The cc doesn't really affect whether I read it or not.


* [PATCH 0/5] ARMv6 and ARMv7 mm fixes
  2011-05-25 20:52       ` Russell King - ARM Linux
@ 2011-05-26 10:15         ` Will Deacon
  0 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2011-05-26 10:15 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Russell,

> On Wed, May 25, 2011 at 11:11:27AM -0700, Stephen Boyd wrote:
> > On 05/25/2011 05:50 AM, Will Deacon wrote:
> > > I was planning to CC stable for patches 1 ("ARM: cache: ensure MVA is cacheline
> > > aligned in flush_kern_dcache_area") and 4 ("ARM: mm: fix racy ASID rollover
> > > broadcast on SMP platforms") as these affect existing v6 and v7 cores. The
> > > remainder of the patches, although nice to have, only kick in on A15 as far as
> > > I'm aware (due to aggressive caching of speculative level 1 entries).
> >
> > Would it be appropriate to reorder the series then so patches 1 and 4
> > come first?
> 
> I'm not entirely convinced (4) is the best solution yet, but that's
> mainly becaues I've not thought about it enough yet.  1-3 look fine
> though.

Ok, thanks for looking at those. I'll put 1-3 and 5 into the patch system
since these can be applied irrespective of the other guy. Once you've had
a think about 4, I'd be happy to discuss it on the list if you can think
of a better way to solve the problem.
 
> > > I was hoping for some acks/tested-bys before then since these changes affect a
> > > lot of platforms and the code is fairly scary.
> >
> > Yes the patches look scary. I could give it a test on MSM but I'm not
> > even sure that will help much. Why didn't you Cc Russell on these patches?
> 
> I do tend to read about 90% of linux-arm-kernel, whether it's cc'd to
> me or not.  The cc doesn't really affect whether I read it or not.

I figured as much and it saves me having to remember to remove the CC
line when I send to the patch system.

Cheers,

Will

