* [PATCH 0/6] arm64: inline assembly fixes + cleanup
@ 2017-05-03 15:09 Mark Rutland
  2017-05-03 15:09 ` [PATCH 1/6] arm64: xchg: hazard against entire exchange variable Mark Rutland
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

Recent attempts to make our inline assembly more clang-friendly [1,2]
made it clear that we have some latent problems. I've reviewed all the
inline assembly under arch/arm64/, and this series fixes the issues that
I noted.

The series is based on the arm64 for-next/core branch. I've built the
series with a Linaro 15.08 GCC 5.1.1 toolchain. I see no new warnings,
and the result boots happily on Juno R1.

The first four patches address latent bugs, with the final two patches
improving consistency and compatibility with clang. I believe that this
supersedes [2], with the GIC accessor having been fixed up by the recent
sysreg rework.

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-April/503535.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-May/504072.html

Mark Rutland (6):
  arm64: xchg: hazard against entire exchange variable
  arm64: ensure extension of smp_store_release value
  arm64: uaccess: ensure extension of access_ok() addr
  arm64: armv8_deprecated: ensure extension of addr
  arm64: atomic_lse: match asm register sizes
  arm64: uaccess: suppress spurious clang warning

 arch/arm64/include/asm/atomic_lse.h  |  4 ++--
 arch/arm64/include/asm/barrier.h     | 20 +++++++++++++++-----
 arch/arm64/include/asm/cmpxchg.h     |  2 +-
 arch/arm64/include/asm/uaccess.h     |  7 ++++---
 arch/arm64/kernel/armv8_deprecated.c |  3 ++-
 5 files changed, 24 insertions(+), 12 deletions(-)

-- 
1.9.1


* [PATCH 1/6] arm64: xchg: hazard against entire exchange variable
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
@ 2017-05-03 15:09 ` Mark Rutland
  2017-05-03 15:09 ` [PATCH 2/6] arm64: ensure extension of smp_store_release value Mark Rutland
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard
against other accesses to the memory location being exchanged. However,
the pointer passed to the constraint is a u8 pointer, and thus the
hazard only applies to the first byte of the location.

GCC can take advantage of this, assuming that other portions of the
location are unchanged, as demonstrated with the following test case:

union u {
	unsigned long l;
	unsigned int i[2];
};

unsigned long update_char_hazard(union u *u)
{
	unsigned int a, b;

	a = u->i[1];
	asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
	b = u->i[1];

	return a ^ b;
}

unsigned long update_long_hazard(union u *u)
{
	unsigned int a, b;

	a = u->i[1];
	asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
	b = u->i[1];

	return a ^ b;
}

The Linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when
using -O2 or above:

0000000000000000 <update_char_hazard>:
   0:	d2800001 	mov	x1, #0x0                   	// #0
   4:	f9000001 	str	x1, [x0]
   8:	d2800000 	mov	x0, #0x0                   	// #0
   c:	d65f03c0 	ret

0000000000000010 <update_long_hazard>:
  10:	b9400401 	ldr	w1, [x0,#4]
  14:	d2800002 	mov	x2, #0x0                   	// #0
  18:	f9000002 	str	x2, [x0]
  1c:	b9400400 	ldr	w0, [x0,#4]
  20:	4a000020 	eor	w0, w1, w0
  24:	d65f03c0 	ret

This patch fixes the issue by passing an unsigned long pointer into the
+Q constraint, as we do for our cmpxchg code. This may hazard against
more than is necessary, but this is better than missing a necessary
hazard.

Fixes: 305d454aaa292be3 ("arm64: atomics: implement native {relaxed, acquire, release} atomics")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cmpxchg.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index 91b26d2..ae852ad 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -46,7 +46,7 @@
 	"	swp" #acq_lse #rel #sz "\t%" #w "3, %" #w "0, %2\n"	\
 		__nops(3)						\
 	"	" #nop_lse)						\
-	: "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)			\
+	: "=&r" (ret), "=&r" (tmp), "+Q" (*(unsigned long *)ptr)	\
 	: "r" (x)							\
 	: cl);								\
 									\
-- 
1.9.1


* [PATCH 2/6] arm64: ensure extension of smp_store_release value
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
  2017-05-03 15:09 ` [PATCH 1/6] arm64: xchg: hazard against entire exchange variable Mark Rutland
@ 2017-05-03 15:09 ` Mark Rutland
  2017-05-03 15:09 ` [PATCH 3/6] arm64: uaccess: ensure extension of access_ok() addr Mark Rutland
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

When an inline assembly operand's type is narrower than the register it
is allocated to, the least significant bits of the register (up to the
operand type's width) are valid, and any other bits are permitted to
contain any arbitrary value. This aligns with the AAPCS64 parameter
passing rules.

Our __smp_store_release() implementation does not account for this, and
implicitly assumes that operands have been zero-extended to the width of
the type being stored to. Thus, we may store unknown values to memory
when the value type is narrower than the pointer type (e.g. when storing
a char to a long).
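
For illustration (a minimal sketch, not part of the patch; the helper
name is made up), the problem case looks like this:

void store_release_byte_to_long(unsigned long *p, char v)
{
	/*
	 * 'v' occupies bits [7:0] of the register allocated to %1;
	 * bits [63:8] may hold arbitrary junk, yet the unqualified
	 * 64-bit store writes all of them to *p.
	 */
	asm volatile ("stlr %1, %0" : "=Q" (*p) : "r" (v) : "memory");
}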

This patch fixes the issue by casting the value operand to the same
width as the pointer operand in all cases, which ensures that the value
is zero-extended as we expect. We use the same union trickery as
__smp_load_acquire and {READ,WRITE}_ONCE() to avoid GCC complaining that
pointers are potentially cast to narrower width integers in unreachable
paths.

A whitespace issue at the top of __smp_store_release() is also
corrected.

No changes are necessary for __smp_load_acquire(). Load instructions
implicitly clear any upper bits of the register, and the compiler will
only consider the least significant bits of the register as valid
regardless.
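
For example (a sketch, not taken from the kernel sources):

unsigned long load_acquire_byte(unsigned char *p)
{
	unsigned long v;

	/*
	 * LDARB zero-extends the byte into the W register, and any
	 * write to a W register zeroes bits [63:32], so bits [63:8]
	 * of 'v' are guaranteed to be zero.
	 */
	asm volatile ("ldarb %w0, %1" : "=r" (v) : "Q" (*p) : "memory");
	return v;
}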

Fixes: 47933ad41a86a4a9 ("arch: Introduce smp_load_acquire(), smp_store_release()")
Fixes: 878a84d5a8a18a4a ("arm64: add missing data types in smp_load_acquire/smp_store_release")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Matthias Kaehlcke <mka@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/barrier.h | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 4e0497f..0fe7e43 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -42,25 +42,35 @@
 #define __smp_rmb()	dmb(ishld)
 #define __smp_wmb()	dmb(ishst)
 
-#define __smp_store_release(p, v)						\
+#define __smp_store_release(p, v)					\
 do {									\
+	union { typeof(*p) __val; char __c[1]; } __u =			\
+		{ .__val = (__force typeof(*p)) (v) }; 			\
 	compiletime_assert_atomic_type(*p);				\
 	switch (sizeof(*p)) {						\
 	case 1:								\
 		asm volatile ("stlrb %w1, %0"				\
-				: "=Q" (*p) : "r" (v) : "memory");	\
+				: "=Q" (*p)				\
+				: "r" (*(__u8 *)__u.__c)		\
+				: "memory");				\
 		break;							\
 	case 2:								\
 		asm volatile ("stlrh %w1, %0"				\
-				: "=Q" (*p) : "r" (v) : "memory");	\
+				: "=Q" (*p)				\
+				: "r" (*(__u16 *)__u.__c)		\
+				: "memory");				\
 		break;							\
 	case 4:								\
 		asm volatile ("stlr %w1, %0"				\
-				: "=Q" (*p) : "r" (v) : "memory");	\
+				: "=Q" (*p)				\
+				: "r" (*(__u32 *)__u.__c)		\
+				: "memory");				\
 		break;							\
 	case 8:								\
 		asm volatile ("stlr %1, %0"				\
-				: "=Q" (*p) : "r" (v) : "memory");	\
+				: "=Q" (*p)				\
+				: "r" (*(__u64 *)__u.__c)		\
+				: "memory");				\
 		break;							\
 	}								\
 } while (0)
-- 
1.9.1


* [PATCH 3/6] arm64: uaccess: ensure extension of access_ok() addr
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
  2017-05-03 15:09 ` [PATCH 1/6] arm64: xchg: hazard against entire exchange variable Mark Rutland
  2017-05-03 15:09 ` [PATCH 2/6] arm64: ensure extension of smp_store_release value Mark Rutland
@ 2017-05-03 15:09 ` Mark Rutland
  2017-05-03 15:09 ` [PATCH 4/6] arm64: armv8_deprecated: ensure extension of addr Mark Rutland
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

Our access_ok() simply hands its arguments over to __range_ok(), which
implicitly assumes that the addr parameter is 64 bits wide. This isn't
necessarily true for compat code, which might pass down a 32-bit address
parameter.

In these cases, we don't have a guarantee that the address has been
zero-extended to 64 bits, and the upper bits of the register may contain
unknown values, potentially resulting in a spurious failure.

Avoid this by explicitly casting the addr parameter to an unsigned long
(as is done on other architectures), ensuring that the parameter is
widened appropriately.
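
For illustration, a sketch modelled on __range_ok() (the wrapper name
is made up) showing how the spurious failure can arise:

unsigned long range_check_32bit_addr(unsigned int addr,
				     unsigned long size,
				     unsigned long limit)
{
	unsigned long flag, roksum;

	/*
	 * 'addr' is only 32 bits wide, so bits [63:32] of the
	 * register tied to %1 are unknown, and the 64-bit adds may
	 * compute a bogus sum, spuriously failing the check.
	 */
	asm("adds %1, %1, %3; ccmp %1, %4, #2, cc; cset %0, ls"
		: "=&r" (flag), "=&r" (roksum)
		: "1" (addr), "Ir" (size), "r" (limit)
		: "cc");
	return flag;
}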

Fixes: 0aea86a2176c2264 ("arm64: User access library functions")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/uaccess.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5308d69..ed3ecf1 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -95,11 +95,12 @@ static inline void set_fs(mm_segment_t fs)
  */
 #define __range_ok(addr, size)						\
 ({									\
+	unsigned long __addr = (unsigned long __force)(addr);		\
 	unsigned long flag, roksum;					\
 	__chk_user_ptr(addr);						\
 	asm("adds %1, %1, %3; ccmp %1, %4, #2, cc; cset %0, ls"		\
 		: "=&r" (flag), "=&r" (roksum)				\
-		: "1" (addr), "Ir" (size),				\
+		: "1" (__addr), "Ir" (size),				\
 		  "r" (current_thread_info()->addr_limit)		\
 		: "cc");						\
 	flag;								\
-- 
1.9.1


* [PATCH 4/6] arm64: armv8_deprecated: ensure extension of addr
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
                   ` (2 preceding siblings ...)
  2017-05-03 15:09 ` [PATCH 3/6] arm64: uaccess: ensure extension of access_ok() addr Mark Rutland
@ 2017-05-03 15:09 ` Mark Rutland
  2017-05-05 14:51   ` Punit Agrawal
  2017-05-03 15:09 ` [PATCH 5/6] arm64: atomic_lse: match asm register sizes Mark Rutland
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

Our compat swp emulation holds the compat user address in an unsigned
int, which it passes to __user_swpX_asm(). When a 32-bit value is passed
in a register, the upper 32 bits of the register are unknown, and we
must extend the value to 64 bits before we can use it as a base address.

This patch casts the address to unsigned long to ensure it has been
suitably extended, avoiding the potential issue, and silencing a related
warning from clang.
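
For illustration (a hedged sketch, not the emulation code itself), the
effect of the cast:

void swp_addr_sketch(unsigned int uaddr, unsigned long data)
{
	/*
	 * The (unsigned long) cast forces the compiler to materialise
	 * an explicit zero-extension (e.g. 'mov wN, wN') before the
	 * asm runs, so %0 is a well-defined 64-bit base address.
	 */
	asm volatile ("str %1, [%0]"
		: : "r" ((unsigned long)uaddr), "r" (data)
		: "memory");
}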

Fixes: bd35a4adc4131c53 ("arm64: Port SWP/SWPB emulation support from arm")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Matthias Kaehlcke <mka@chromium.org>
Cc: Punit Agrawal <punit.agrawal@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/armv8_deprecated.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 657977e..f0e6d71 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -306,7 +306,8 @@ static void __init register_insn_emulation_sysctl(struct ctl_table *table)
 	_ASM_EXTABLE(0b, 4b)					\
 	_ASM_EXTABLE(1b, 4b)					\
 	: "=&r" (res), "+r" (data), "=&r" (temp), "=&r" (temp2)	\
-	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT),		\
+	: "r" ((unsigned long)addr), "i" (-EAGAIN),		\
+	  "i" (-EFAULT),					\
 	  "i" (__SWP_LL_SC_LOOPS)				\
 	: "memory");						\
 	uaccess_disable();					\
-- 
1.9.1


* [PATCH 5/6] arm64: atomic_lse: match asm register sizes
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
                   ` (3 preceding siblings ...)
  2017-05-03 15:09 ` [PATCH 4/6] arm64: armv8_deprecated: ensure extension of addr Mark Rutland
@ 2017-05-03 15:09 ` Mark Rutland
  2017-05-03 15:09 ` [PATCH 6/6] arm64: uaccess: suppress spurious clang warning Mark Rutland
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

The LSE atomic code uses asm register variables to ensure that
parameters are allocated in specific registers. In the majority of cases
we specifically ask for an x register when using 64-bit values, but in a
couple of cases we use a w register for a 64-bit value.

For asm register variables, the compiler only cares about the register
index, with wN and xN having the same meaning. The compiler determines
the register size to use based on the type of the variable. Thus, this
inconsistency is merely confusing, and not harmful to code generation.
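
For illustration (a minimal sketch, not from the kernel), both
spellings bind the variable to register index 0, and the compiler uses
x0 in either case because the variable is 64 bits wide:

void reg_alias_sketch(long i)
{
	/*
	 * "w0" names the same register index as "x0"; since 'v' is a
	 * 64-bit long, the compiler treats it as x0 regardless.
	 */
	register long v asm ("w0") = i;

	asm volatile ("" : : "r" (v));
}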

For consistency, this patch updates those cases to use the x register
alias. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/atomic_lse.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 7457ce0..99fa69c 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -322,7 +322,7 @@ static inline void atomic64_and(long i, atomic64_t *v)
 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
 static inline long atomic64_fetch_and##name(long i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("w0") = i;				\
+	register long x0 asm ("x0") = i;				\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -394,7 +394,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
 #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
 static inline long atomic64_fetch_sub##name(long i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("w0") = i;				\
+	register long x0 asm ("x0") = i;				\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
-- 
1.9.1


* [PATCH 6/6] arm64: uaccess: suppress spurious clang warning
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
                   ` (4 preceding siblings ...)
  2017-05-03 15:09 ` [PATCH 5/6] arm64: atomic_lse: match asm register sizes Mark Rutland
@ 2017-05-03 15:09 ` Mark Rutland
  2017-05-09 15:24 ` [PATCH 0/6] arm64: inline assembly fixes + cleanup Will Deacon
  2017-05-10  8:24 ` Catalin Marinas
  7 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2017-05-03 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

Clang tries to warn when there's a mismatch between an operand's size,
and the size of the register it is held in, as this may indicate a bug.
Specifically, clang warns when the operand's type is less than 64 bits
wide, and the register is used unqualified (i.e. %N rather than %xN or
%wN).

Unfortunately clang can generate these warnings for unreachable code.
For example, for code like:

do {                                            \
        typeof(*(ptr)) __v = (v);               \
        switch(sizeof(*(ptr))) {                \
        case 1:                                 \
                // assume __v is 1 byte wide    \
                asm ("{op}b %w0" : : "r" (__v)); \
                break;                          \
        case 8:                                 \
                // assume __v is 8 bytes wide   \
                asm ("{op} %0" : : "r" (__v));  \
                break;                          \
        }                                       \
} while (0)

... if op() were passed a char value and pointer to char, clang may
produce a warning for the unreachable case where sizeof(*(ptr)) is 8.

For the same reasons, clang produces warnings when __put_user_err() is
used for types that are less than 64 bits wide.

We could avoid this with a cast to a fixed-width type in each of the
cases. However, GCC will then warn that pointer types are being cast to
mismatched integer sizes (in unreachable paths).

Another option would be to use the same union trickery as we do for
__smp_store_release() and __smp_load_acquire(), but this is fairly
invasive.

Instead, this patch suppresses the clang warning by using an x modifier
in the assembly for the 8-byte case of __put_user_err(). No additional
work is necessary as the value has been cast to typeof(*(ptr)), so the
compiler will have performed any necessary extension for the reachable
case.

For consistency, __get_user_err() is also updated to use the x modifier
for its 8-byte case.
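
For illustration (a sketch, not from the patch; extension of 'v' is
deliberately ignored here so the warning itself is visible):

void operand_width_sketch(unsigned long *p, char v)
{
	/* clang warns: 'v' is 8 bits, but "%1" implies a 64-bit x
	 * register: */
	asm volatile ("str %1, [%0]" : : "r" (p), "r" (v) : "memory");

	/* the explicit x modifier documents the intent and suppresses
	 * the warning: */
	asm volatile ("str %x1, [%0]" : : "r" (p), "r" (v) : "memory");
}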

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Matthias Kaehlcke <mka@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/uaccess.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index ed3ecf1..2c7822c 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -257,7 +257,7 @@ static inline void uaccess_enable_not_uao(void)
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__get_user_asm("ldr", "ldtr", "%",  __gu_val, (ptr),	\
+		__get_user_asm("ldr", "ldtr", "%x",  __gu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
@@ -324,7 +324,7 @@ static inline void uaccess_enable_not_uao(void)
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	case 8:								\
-		__put_user_asm("str", "sttr", "%", __pu_val, (ptr),	\
+		__put_user_asm("str", "sttr", "%x", __pu_val, (ptr),	\
 			       (err), ARM64_HAS_UAO);			\
 		break;							\
 	default:							\
-- 
1.9.1


* [PATCH 4/6] arm64: armv8_deprecated: ensure extension of addr
  2017-05-03 15:09 ` [PATCH 4/6] arm64: armv8_deprecated: ensure extension of addr Mark Rutland
@ 2017-05-05 14:51   ` Punit Agrawal
  0 siblings, 0 replies; 10+ messages in thread
From: Punit Agrawal @ 2017-05-05 14:51 UTC (permalink / raw)
  To: linux-arm-kernel

Mark Rutland <mark.rutland@arm.com> writes:

> Our compat swp emulation holds the compat user address in an unsigned
> int, which it passes to __user_swpX_asm(). When a 32-bit value is passed
> in a register, the upper 32 bits of the register are unknown, and we
> must extend the value to 64 bits before we can use it as a base address.
>
> This patch casts the address to unsigned long to ensure it has been
> suitably extended, avoiding the potential issue, and silencing a related
> warning from clang.
>
> Fixes: bd35a4adc4131c53 ("arm64: Port SWP/SWPB emulation support from arm")
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Matthias Kaehlcke <mka@chromium.org>
> Cc: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

From the description, the problem looks quite serious. I'm surprised
this hasn't exploded before.

FWIW,

        Acked-by: Punit Agrawal <punit.agrawal@arm.com>

Thanks!


[...]


* [PATCH 0/6] arm64: inline assembly fixes + cleanup
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
                   ` (5 preceding siblings ...)
  2017-05-03 15:09 ` [PATCH 6/6] arm64: uaccess: suppress spurious clang warning Mark Rutland
@ 2017-05-09 15:24 ` Will Deacon
  2017-05-10  8:24 ` Catalin Marinas
  7 siblings, 0 replies; 10+ messages in thread
From: Will Deacon @ 2017-05-09 15:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, May 03, 2017 at 04:09:32PM +0100, Mark Rutland wrote:
> Recent attempts to make our inline assembly more clang-friendly [1,2]
> made it clear that we have some latent problems. I've reviewed all the
> inline assembly under arch/arm64/, and this series fixes the issues that
> I noted.
> 
> The series is based on the arm64 for-next/core branch. I've built the
> series with a Linaro 15.08 GCC 5.1.1 toolchain. I see no new warnings,
> and the result boots happily on Juno R1.
> 
> The first four patches address latent bugs, with the final two patches
> improving consistency and compatibility with clang. I believe that this
> supersedes [2], with the GIC accessor having been fixed up by the recent
> sysreg rework.

For the series:

Acked-by: Will Deacon <will.deacon@arm.com>

It's a pity that we always cast to (unsigned long) for xchg, but I doubt
it actually makes a performance difference in practice.

Will


* [PATCH 0/6] arm64: inline assembly fixes + cleanup
  2017-05-03 15:09 [PATCH 0/6] arm64: inline assembly fixes + cleanup Mark Rutland
                   ` (6 preceding siblings ...)
  2017-05-09 15:24 ` [PATCH 0/6] arm64: inline assembly fixes + cleanup Will Deacon
@ 2017-05-10  8:24 ` Catalin Marinas
  7 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2017-05-10  8:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, May 03, 2017 at 04:09:32PM +0100, Mark Rutland wrote:
> Mark Rutland (6):
>   arm64: xchg: hazard against entire exchange variable
>   arm64: ensure extension of smp_store_release value
>   arm64: uaccess: ensure extension of access_ok() addr
>   arm64: armv8_deprecated: ensure extension of addr
>   arm64: atomic_lse: match asm register sizes
>   arm64: uaccess: suppress spurious clang warning

Queued for 4.12.

-- 
Catalin


