From: Vineet Gupta <Vineet.Gupta1@synopsys.com>
To: Peter Zijlstra <peterz@infradead.org>, Will Deacon <will.deacon@arm.com>
Cc: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>,
"mingo@kernel.org" <mingo@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Alexey Brodkin <Alexey.Brodkin@synopsys.com>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"linux-snps-arc@lists.infradead.org"
<linux-snps-arc@lists.infradead.org>,
"yamada.masahiro@socionext.com" <yamada.masahiro@socionext.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>
Subject: Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash
Date: Tue, 14 Apr 2020 01:19:06 +0000 [thread overview]
Message-ID: <d9b26292-4b40-f282-b1f6-5ee238358f0e@synopsys.com> (raw)
In-Reply-To: <20180830144344.GW24142@hirez.programming.kicks-ass.net>
On 8/30/18 7:43 AM, Peter Zijlstra wrote:
> On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote:
>
>> Also, once it all works, they should look at switching to _relaxed
>> atomics for LL/SC.
> A little something like so.. should save a few smp_mb().
Finally got to this - time for some spring cleaning ;-)
> ---
>
> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
> index 4e0072730241..714b54c308b0 100644
> --- a/arch/arc/include/asm/atomic.h
> +++ b/arch/arc/include/asm/atomic.h
> @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \
> } \
>
> #define ATOMIC_OP_RETURN(op, c_op, asm_op) \
> -static inline int atomic_##op##_return(int i, atomic_t *v) \
> +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \
> { \
This being relaxed, shouldn't it also drop the smp_mb() before the operation, and
leave the generic code to add smp_mb() as appropriate for the fully ordered,
acquire and release variants?
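
For reference, a minimal user-space sketch of the pattern I have in mind, using
C11 atomics rather than the kernel primitives (the my_* names are made up for
illustration): the bare _relaxed op carries no ordering, and the wrappers add
the fences, which is roughly what the generic fallbacks do when an arch only
provides the _relaxed form.

/* Sketch only, not kernel code: my_* names are illustrative. */
#include <stdatomic.h>

static inline int my_fetch_add_relaxed(atomic_int *v, int i)
{
	return atomic_fetch_add_explicit(v, i, memory_order_relaxed);
}

/* Fully ordered: a full fence on both sides of the relaxed op. */
static inline int my_fetch_add(atomic_int *v, int i)
{
	int ret;

	atomic_thread_fence(memory_order_seq_cst);	/* like smp_mb() before */
	ret = my_fetch_add_relaxed(v, i);
	atomic_thread_fence(memory_order_seq_cst);	/* like smp_mb() after */
	return ret;
}

/* Acquire: fence only after the op. */
static inline int my_fetch_add_acquire(atomic_int *v, int i)
{
	int ret = my_fetch_add_relaxed(v, i);

	atomic_thread_fence(memory_order_acquire);
	return ret;
}

/* Release: fence only before the op. */
static inline int my_fetch_add_release(atomic_int *v, int i)
{
	atomic_thread_fence(memory_order_release);
	return my_fetch_add_relaxed(v, i);
}
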
> unsigned int val; \
> \
> @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \
> return val; \
> }
>
> +#define atomic_add_return_relaxed atomic_add_return_relaxed
> +#define atomic_sub_return_relaxed atomic_sub_return_relaxed
> +
> #define ATOMIC_FETCH_OP(op, c_op, asm_op) \
> -static inline int atomic_fetch_##op(int i, atomic_t *v) \
> +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
> { \
> unsigned int val, orig; \
> \
> @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \
> return orig; \
> }
>
> +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
> +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
> +
> +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
> +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
> +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
> +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
> +
> #else /* !CONFIG_ARC_HAS_LLSC */
>
> #ifndef CONFIG_SMP
> @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \
> } \
>
> #define ATOMIC64_OP_RETURN(op, op1, op2) \
> -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \
> +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \
> { \
> unsigned long long val; \
> \
> @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \
> return val; \
> }
>
> +#define atomic64_add_return_relaxed atomic64_add_return_relaxed
> +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
> +
> #define ATOMIC64_FETCH_OP(op, op1, op2) \
> -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \
> +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \
> { \
> unsigned long long val, orig; \
> \
> @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \
> return orig; \
> }
>
> +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
> +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
> +
> +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
> +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
> +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
> +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
> +
> #define ATOMIC64_OPS(op, op1, op2) \
> ATOMIC64_OP(op, op1, op2) \
> ATOMIC64_OP_RETURN(op, op1, op2) \
> @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \
>
> ATOMIC64_OPS(add, add.f, adc)
> ATOMIC64_OPS(sub, sub.f, sbc)
> +
> +#undef ATOMIC64_OPS
> +#define ATOMIC64_OPS(op, op1, op2) \
> + ATOMIC64_OP(op, op1, op2) \
> + ATOMIC64_FETCH_OP(op, op1, op2)
> +
For clarity I split this hunk off into a separate patch, as it elides generation of
the unused *_return variants for the bitwise ops.
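
If it helps review, here is a tiny self-contained illustration of the
undef/redefine trick, outside the kernel; all names below are made up for the
example. The first ATOMIC_OPS expansion generates the op, *_return and fetch_*
variants for add/sub, and the redefinition drops *_return generation for the
bitwise ops so no unused functions are emitted.

/* Illustration only -- not the ARC code. */
#include <stdio.h>

#define ATOMIC_OP(op, c_op) \
static inline long atomic_##op(long i, long *v) { return *v = *v c_op i; }

#define ATOMIC_OP_RETURN(op, c_op) \
static inline long atomic_##op##_return(long i, long *v) { return atomic_##op(i, v); }

#define ATOMIC_FETCH_OP(op, c_op) \
static inline long atomic_fetch_##op(long i, long *v) \
{ long orig = *v; atomic_##op(i, v); return orig; }

/* add/sub get all three variants... */
#define ATOMIC_OPS(op, c_op) \
	ATOMIC_OP(op, c_op) \
	ATOMIC_OP_RETURN(op, c_op) \
	ATOMIC_FETCH_OP(op, c_op)

ATOMIC_OPS(add, +)
ATOMIC_OPS(sub, -)

/* ...the bitwise ops only get two: *_return would be unused. */
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op) \
	ATOMIC_OP(op, c_op) \
	ATOMIC_FETCH_OP(op, c_op)

ATOMIC_OPS(and, &)
ATOMIC_OPS(or, |)

int main(void)
{
	long v = 1;

	printf("%ld %ld\n", atomic_add_return(2, &v), atomic_fetch_or(4, &v));
	return 0;
}
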
> ATOMIC64_OPS(and, and, and)
> ATOMIC64_OPS(andnot, bic, bic)
> ATOMIC64_OPS(or, or, or)
>