Hi all,

Today's linux-next merge of the mips tree got conflicts in:

  arch/mips/include/asm/barrier.h
  arch/mips/include/asm/pgtable.h

between commit:

  e02e07e3127d ("MIPS: Loongson: Introduce and use loongson_llsc_mb()")

from the mips-fixes tree and commits:

  535113896e80 ("MIPS: Add GINVT instruction helpers")
  82f4f66ddf11 ("MIPS: Remove open-coded cmpxchg() in set_pte()")

from the mips tree.

I fixed it up (see below; for the latter file, I used the mips tree
version but wondered if cmpxchg() needs the fixups from the mips-fixes
tree commit) and can carry the fix as necessary. This is now fixed as
far as linux-next is concerned, but any non trivial conflicts should be
mentioned to your upstream maintainer when your tree is submitted for
merging.  You may also want to consider cooperating with the maintainer
of the conflicting tree to minimise any particularly complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc arch/mips/include/asm/barrier.h
index b7f6ac5e513c,b48ee2caf78d..000000000000
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@@ -222,42 -236,11 +236,47 @@@
  #define __smp_mb__before_atomic()	__smp_mb__before_llsc()
  #define __smp_mb__after_atomic()	smp_llsc_mb()
  
 +/*
 + * Some Loongson 3 CPUs have a bug wherein execution of a memory access (load,
 + * store or pref) in between an ll & sc can cause the sc instruction to
 + * erroneously succeed, breaking atomicity. Whilst it's unusual to write code
 + * containing such sequences, this bug bites harder than we might otherwise
 + * expect due to reordering & speculation:
 + *
 + * 1) A memory access appearing prior to the ll in program order may actually
 + *    be executed after the ll - this is the reordering case.
 + *
 + *    In order to avoid this we need to place a memory barrier (ie. a sync
 + *    instruction) prior to every ll instruction, in between it & any earlier
 + *    memory access instructions. Many of these cases are already covered by
 + *    smp_mb__before_llsc() but for the remaining cases, typically ones in
 + *    which multiple CPUs may operate on a memory location but ordering is not
 + *    usually guaranteed, we use loongson_llsc_mb() below.
 + *
 + *    This reordering case is fixed by 3A R2 CPUs, ie. 3A2000 models and later.
 + *
 + * 2) If a conditional branch exists between an ll & sc with a target outside
 + *    of the ll-sc loop, for example an exit upon value mismatch in cmpxchg()
 + *    or similar, then misprediction of the branch may allow speculative
 + *    execution of memory accesses from outside of the ll-sc loop.
 + *
 + *    In order to avoid this we need a memory barrier (ie. a sync instruction)
 + *    at each affected branch target, for which we also use loongson_llsc_mb()
 + *    defined below.
 + *
 + *    This case affects all current Loongson 3 CPUs.
 + */
 +#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
 +#define loongson_llsc_mb()	__asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
 +#else
 +#define loongson_llsc_mb()	do { } while (0)
 +#endif
 +
+ static inline void sync_ginv(void)
+ {
+ 	asm volatile("sync\t%0" :: "i"(STYPE_GINV));
+ }
+ 
  #include <asm-generic/barrier.h>
  
  #endif /* __ASM_BARRIER_H */

diff --cc arch/mips/include/asm/pgtable.h
index 910851c62db3,6c13c1d44045..000000000000
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h