From: Ingo Molnar <mingo@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
	catalin.marinas@arm.com, dvyukov@google.com, will.deacon@arm.com
Subject: [PATCH] locking/atomics: Shorten the __atomic_op() defines to __op()
Date: Sat, 5 May 2018 12:48:58 +0200	[thread overview]
Message-ID: <20180505104858.ap4bfv6ip2vprzyj@gmail.com> (raw)
In-Reply-To: <20180505092911.GC12217@hirez.programming.kicks-ass.net>

* Peter Zijlstra <peterz@infradead.org> wrote:

> On Sat, May 05, 2018 at 11:09:03AM +0200, Ingo Molnar wrote:
> > > > #  ifndef atomic_fetch_dec_acquire
> > > > #   define atomic_fetch_dec_acquire(...)	__atomic_op_acquire(atomic_fetch_dec, __VA_ARGS__)
> > > > #  endif
> > > > #  ifndef atomic_fetch_dec_release
> > > > #   define atomic_fetch_dec_release(...)	__atomic_op_release(atomic_fetch_dec, __VA_ARGS__)
> > > > #  endif
> > > > #  ifndef atomic_fetch_dec
> > > > #   define atomic_fetch_dec(...)		__atomic_op_fence(atomic_fetch_dec, __VA_ARGS__)
> > > > #  endif
> > > > #endif
> > > >
> > > > The new variant is readable at a glance, and the hierarchy of defines is very
> > > > obvious as well.
> > >
> > > It wraps and looks hideous in my normal setup. And I do detest that indent
> > > after # thing.
> >
> > You should use wider terminals if you take a look at such code - there's already
> > numerous areas of the kernel that are not readable on 80x25 terminals.
> >
> > _Please_ try the following experiment, for me:
> >
> > Enter the 21st century temporarily and widen two of your terminals from 80 cols to
> > 100 cols - it's only ~20% wider.
>
> Doesn't work that way. The only way I get more columns is if I shrink my
> font further. I work with tiles per monitor (left/right obv.) and use
> two columns per editor. This gets me a total of 4 columns.
>
> On my desktop that is slightly over 100 characters per column, on my
> laptop that is slightly below 100 -- mostly because I'm pixel limited on
> fontsize on that thing (FullHD sucks).
>
> If it wraps it wraps.

Out of the 707 lines in atomic.h only 25 are wider than 100 chars - and the
max length is 104 chars.

If that's still too long then there are a few more things we could do - for
example the attached patch renames a (very minor) misnomer to a shorter name
and thus shortens the longest lines; the column histogram now looks like this:

     79      4
     80      7
     81      3
     82      9
     84      4
     85      2
     86      3
     87      1
     88      4
     89     13
     90      7
     91     20
     92     18
     93     12
     94     11
     96      5

I.e. the longest line is down to 96 columns, and 99% of the file is 94 cols
or shorter.

Is this still too long?

Thanks,

	Ingo

============================>

From: Ingo Molnar <mingo@kernel.org>
Date: Sat, 5 May 2018 12:41:57 +0200
Subject: [PATCH] locking/atomics: Shorten the __atomic_op() defines to __op()

The __atomic prefix is somewhat of a misnomer, because not all APIs we use
with these macros have an atomic_ prefix.

This also reduces the length of the longest lines in the header, making
them more readable on PeterZ's terminals.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@us.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aryabinin@virtuozzo.com
Cc: boqun.feng@gmail.com
Cc: catalin.marinas@arm.com
Cc: dvyukov@google.com
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/atomic.h | 204 +++++++++++++++++++++++++------------------------
 1 file changed, 103 insertions(+), 101 deletions(-)

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 1176cf7c6f03..f32ff6d9e4d2 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -37,33 +37,35 @@
  * variant is already fully ordered, no additional barriers are needed.
  *
  * Besides, if an arch has a special barrier for acquire/release, it could
- * implement its own __atomic_op_* and use the same framework for building
+ * implement its own __op_* and use the same framework for building
  * variants
  *
- * If an architecture overrides __atomic_op_acquire() it will probably want
+ * If an architecture overrides __op_acquire() it will probably want
  * to define smp_mb__after_spinlock().
  */
-#ifndef __atomic_op_acquire
-#define __atomic_op_acquire(op, args...)				\
+#ifndef __op_acquire
+#define __op_acquire(op, args...)					\
 ({									\
 	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
+									\
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
 #endif
 
-#ifndef __atomic_op_release
-#define __atomic_op_release(op, args...)				\
+#ifndef __op_release
+#define __op_release(op, args...)					\
 ({									\
 	smp_mb__before_atomic();					\
 	op##_relaxed(args);						\
 })
 #endif
 
-#ifndef __atomic_op_fence
-#define __atomic_op_fence(op, args...)					\
+#ifndef __op_fence
+#define __op_fence(op, args...)						\
 ({									\
 	typeof(op##_relaxed(args)) __ret;				\
+									\
 	smp_mb__before_atomic();					\
 	__ret = op##_relaxed(args);					\
 	smp_mb__after_atomic();						\
@@ -77,9 +79,9 @@
 # define atomic_add_return_release		atomic_add_return
 #else
 # ifndef atomic_add_return
-# define atomic_add_return(...)			__atomic_op_fence(atomic_add_return, __VA_ARGS__)
-# define atomic_add_return_acquire(...)		__atomic_op_acquire(atomic_add_return, __VA_ARGS__)
-# define atomic_add_return_release(...)		__atomic_op_release(atomic_add_return, __VA_ARGS__)
+# define atomic_add_return(...)			__op_fence(atomic_add_return, __VA_ARGS__)
+# define atomic_add_return_acquire(...)		__op_acquire(atomic_add_return, __VA_ARGS__)
+# define atomic_add_return_release(...)		__op_release(atomic_add_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -89,9 +91,9 @@
 # define atomic_inc_return_release		atomic_inc_return
 #else
 # ifndef atomic_inc_return
-# define atomic_inc_return(...)			__atomic_op_fence(atomic_inc_return, __VA_ARGS__)
-# define atomic_inc_return_acquire(...)		__atomic_op_acquire(atomic_inc_return, __VA_ARGS__)
-# define atomic_inc_return_release(...)		__atomic_op_release(atomic_inc_return, __VA_ARGS__)
+# define atomic_inc_return(...)			__op_fence(atomic_inc_return, __VA_ARGS__)
+# define atomic_inc_return_acquire(...)		__op_acquire(atomic_inc_return, __VA_ARGS__)
+# define atomic_inc_return_release(...)		__op_release(atomic_inc_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -101,9 +103,9 @@
 # define atomic_sub_return_release		atomic_sub_return
 #else
 # ifndef atomic_sub_return
-# define atomic_sub_return(...)			__atomic_op_fence(atomic_sub_return, __VA_ARGS__)
-# define atomic_sub_return_acquire(...)		__atomic_op_acquire(atomic_sub_return, __VA_ARGS__)
-# define atomic_sub_return_release(...)		__atomic_op_release(atomic_sub_return, __VA_ARGS__)
+# define atomic_sub_return(...)			__op_fence(atomic_sub_return, __VA_ARGS__)
+# define atomic_sub_return_acquire(...)		__op_acquire(atomic_sub_return, __VA_ARGS__)
+# define atomic_sub_return_release(...)		__op_release(atomic_sub_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -113,9 +115,9 @@
 # define atomic_dec_return_release		atomic_dec_return
 #else
 # ifndef atomic_dec_return
-# define atomic_dec_return(...)			__atomic_op_fence(atomic_dec_return, __VA_ARGS__)
-# define atomic_dec_return_acquire(...)		__atomic_op_acquire(atomic_dec_return, __VA_ARGS__)
-# define atomic_dec_return_release(...)		__atomic_op_release(atomic_dec_return, __VA_ARGS__)
+# define atomic_dec_return(...)			__op_fence(atomic_dec_return, __VA_ARGS__)
+# define atomic_dec_return_acquire(...)		__op_acquire(atomic_dec_return, __VA_ARGS__)
+# define atomic_dec_return_release(...)		__op_release(atomic_dec_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -125,9 +127,9 @@
 # define atomic_fetch_add_release		atomic_fetch_add
 #else
 # ifndef atomic_fetch_add
-# define atomic_fetch_add(...)			__atomic_op_fence(atomic_fetch_add, __VA_ARGS__)
-# define atomic_fetch_add_acquire(...)		__atomic_op_acquire(atomic_fetch_add, __VA_ARGS__)
-# define atomic_fetch_add_release(...)		__atomic_op_release(atomic_fetch_add, __VA_ARGS__)
+# define atomic_fetch_add(...)			__op_fence(atomic_fetch_add, __VA_ARGS__)
+# define atomic_fetch_add_acquire(...)		__op_acquire(atomic_fetch_add, __VA_ARGS__)
+# define atomic_fetch_add_release(...)		__op_release(atomic_fetch_add, __VA_ARGS__)
 # endif
 #endif
 
@@ -144,9 +146,9 @@
 # endif
 #else
 # ifndef atomic_fetch_inc
-# define atomic_fetch_inc(...)			__atomic_op_fence(atomic_fetch_inc, __VA_ARGS__)
-# define atomic_fetch_inc_acquire(...)		__atomic_op_acquire(atomic_fetch_inc, __VA_ARGS__)
-# define atomic_fetch_inc_release(...)		__atomic_op_release(atomic_fetch_inc, __VA_ARGS__)
+# define atomic_fetch_inc(...)			__op_fence(atomic_fetch_inc, __VA_ARGS__)
+# define atomic_fetch_inc_acquire(...)		__op_acquire(atomic_fetch_inc, __VA_ARGS__)
+# define atomic_fetch_inc_release(...)		__op_release(atomic_fetch_inc, __VA_ARGS__)
 # endif
 #endif
 
@@ -156,9 +158,9 @@
 # define atomic_fetch_sub_release		atomic_fetch_sub
 #else
 # ifndef atomic_fetch_sub
-# define atomic_fetch_sub(...)			__atomic_op_fence(atomic_fetch_sub, __VA_ARGS__)
-# define atomic_fetch_sub_acquire(...)		__atomic_op_acquire(atomic_fetch_sub, __VA_ARGS__)
-# define atomic_fetch_sub_release(...)		__atomic_op_release(atomic_fetch_sub, __VA_ARGS__)
+# define atomic_fetch_sub(...)			__op_fence(atomic_fetch_sub, __VA_ARGS__)
+# define atomic_fetch_sub_acquire(...)		__op_acquire(atomic_fetch_sub, __VA_ARGS__)
+# define atomic_fetch_sub_release(...)		__op_release(atomic_fetch_sub, __VA_ARGS__)
 # endif
 #endif
 
@@ -175,9 +177,9 @@
 # endif
 #else
 # ifndef atomic_fetch_dec
-# define atomic_fetch_dec(...)			__atomic_op_fence(atomic_fetch_dec, __VA_ARGS__)
-# define atomic_fetch_dec_acquire(...)		__atomic_op_acquire(atomic_fetch_dec, __VA_ARGS__)
-# define atomic_fetch_dec_release(...)		__atomic_op_release(atomic_fetch_dec, __VA_ARGS__)
+# define atomic_fetch_dec(...)			__op_fence(atomic_fetch_dec, __VA_ARGS__)
+# define atomic_fetch_dec_acquire(...)		__op_acquire(atomic_fetch_dec, __VA_ARGS__)
+# define atomic_fetch_dec_release(...)		__op_release(atomic_fetch_dec, __VA_ARGS__)
 # endif
 #endif
 
@@ -187,9 +189,9 @@
 # define atomic_fetch_or_release		atomic_fetch_or
 #else
 # ifndef atomic_fetch_or
-# define atomic_fetch_or(...)			__atomic_op_fence(atomic_fetch_or, __VA_ARGS__)
-# define atomic_fetch_or_acquire(...)		__atomic_op_acquire(atomic_fetch_or, __VA_ARGS__)
-# define atomic_fetch_or_release(...)		__atomic_op_release(atomic_fetch_or, __VA_ARGS__)
+# define atomic_fetch_or(...)			__op_fence(atomic_fetch_or, __VA_ARGS__)
+# define atomic_fetch_or_acquire(...)		__op_acquire(atomic_fetch_or, __VA_ARGS__)
+# define atomic_fetch_or_release(...)		__op_release(atomic_fetch_or, __VA_ARGS__)
 # endif
 #endif
 
@@ -199,9 +201,9 @@
 # define atomic_fetch_and_release		atomic_fetch_and
 #else
 # ifndef atomic_fetch_and
-# define atomic_fetch_and(...)			__atomic_op_fence(atomic_fetch_and, __VA_ARGS__)
-# define atomic_fetch_and_acquire(...)		__atomic_op_acquire(atomic_fetch_and, __VA_ARGS__)
-# define atomic_fetch_and_release(...)		__atomic_op_release(atomic_fetch_and, __VA_ARGS__)
+# define atomic_fetch_and(...)			__op_fence(atomic_fetch_and, __VA_ARGS__)
+# define atomic_fetch_and_acquire(...)		__op_acquire(atomic_fetch_and, __VA_ARGS__)
+# define atomic_fetch_and_release(...)		__op_release(atomic_fetch_and, __VA_ARGS__)
 # endif
 #endif
 
@@ -211,9 +213,9 @@
 # define atomic_fetch_xor_release		atomic_fetch_xor
 #else
 # ifndef atomic_fetch_xor
-# define atomic_fetch_xor(...)			__atomic_op_fence(atomic_fetch_xor, __VA_ARGS__)
-# define atomic_fetch_xor_acquire(...)		__atomic_op_acquire(atomic_fetch_xor, __VA_ARGS__)
-# define atomic_fetch_xor_release(...)		__atomic_op_release(atomic_fetch_xor, __VA_ARGS__)
+# define atomic_fetch_xor(...)			__op_fence(atomic_fetch_xor, __VA_ARGS__)
+# define atomic_fetch_xor_acquire(...)		__op_acquire(atomic_fetch_xor, __VA_ARGS__)
+# define atomic_fetch_xor_release(...)		__op_release(atomic_fetch_xor, __VA_ARGS__)
 # endif
 #endif
 
@@ -223,9 +225,9 @@
 #define atomic_xchg_release			atomic_xchg
 #else
 # ifndef atomic_xchg
-# define atomic_xchg(...)			__atomic_op_fence(atomic_xchg, __VA_ARGS__)
-# define atomic_xchg_acquire(...)		__atomic_op_acquire(atomic_xchg, __VA_ARGS__)
-# define atomic_xchg_release(...)		__atomic_op_release(atomic_xchg, __VA_ARGS__)
+# define atomic_xchg(...)			__op_fence(atomic_xchg, __VA_ARGS__)
+# define atomic_xchg_acquire(...)		__op_acquire(atomic_xchg, __VA_ARGS__)
+# define atomic_xchg_release(...)		__op_release(atomic_xchg, __VA_ARGS__)
 # endif
 #endif
 
@@ -235,9 +237,9 @@
 # define atomic_cmpxchg_release			atomic_cmpxchg
 #else
 # ifndef atomic_cmpxchg
-# define atomic_cmpxchg(...)			__atomic_op_fence(atomic_cmpxchg, __VA_ARGS__)
-# define atomic_cmpxchg_acquire(...)		__atomic_op_acquire(atomic_cmpxchg, __VA_ARGS__)
-# define atomic_cmpxchg_release(...)		__atomic_op_release(atomic_cmpxchg, __VA_ARGS__)
+# define atomic_cmpxchg(...)			__op_fence(atomic_cmpxchg, __VA_ARGS__)
+# define atomic_cmpxchg_acquire(...)		__op_acquire(atomic_cmpxchg, __VA_ARGS__)
+# define atomic_cmpxchg_release(...)		__op_release(atomic_cmpxchg, __VA_ARGS__)
 # endif
 #endif
 
@@ -267,9 +269,9 @@
 # define cmpxchg_release			cmpxchg
 #else
 # ifndef cmpxchg
-# define cmpxchg(...)				__atomic_op_fence(cmpxchg, __VA_ARGS__)
-# define cmpxchg_acquire(...)			__atomic_op_acquire(cmpxchg, __VA_ARGS__)
-# define cmpxchg_release(...)			__atomic_op_release(cmpxchg, __VA_ARGS__)
+# define cmpxchg(...)				__op_fence(cmpxchg, __VA_ARGS__)
+# define cmpxchg_acquire(...)			__op_acquire(cmpxchg, __VA_ARGS__)
+# define cmpxchg_release(...)			__op_release(cmpxchg, __VA_ARGS__)
 # endif
 #endif
 
@@ -279,9 +281,9 @@
 # define cmpxchg64_release			cmpxchg64
 #else
 # ifndef cmpxchg64
-# define cmpxchg64(...)				__atomic_op_fence(cmpxchg64, __VA_ARGS__)
-# define cmpxchg64_acquire(...)			__atomic_op_acquire(cmpxchg64, __VA_ARGS__)
-# define cmpxchg64_release(...)			__atomic_op_release(cmpxchg64, __VA_ARGS__)
+# define cmpxchg64(...)				__op_fence(cmpxchg64, __VA_ARGS__)
+# define cmpxchg64_acquire(...)			__op_acquire(cmpxchg64, __VA_ARGS__)
+# define cmpxchg64_release(...)			__op_release(cmpxchg64, __VA_ARGS__)
 # endif
 #endif
 
@@ -291,9 +293,9 @@
 # define xchg_release				xchg
 #else
 # ifndef xchg
-# define xchg(...)				__atomic_op_fence(xchg, __VA_ARGS__)
-# define xchg_acquire(...)			__atomic_op_acquire(xchg, __VA_ARGS__)
-# define xchg_release(...)			__atomic_op_release(xchg, __VA_ARGS__)
+# define xchg(...)				__op_fence(xchg, __VA_ARGS__)
+# define xchg_acquire(...)			__op_acquire(xchg, __VA_ARGS__)
+# define xchg_release(...)			__op_release(xchg, __VA_ARGS__)
 # endif
 #endif
 
@@ -330,9 +332,9 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 # define atomic_fetch_andnot_release		atomic_fetch_andnot
 #else
 # ifndef atomic_fetch_andnot
-# define atomic_fetch_andnot(...)		__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
-# define atomic_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
-# define atomic_fetch_andnot_release(...)	__atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
+# define atomic_fetch_andnot(...)		__op_fence(atomic_fetch_andnot, __VA_ARGS__)
+# define atomic_fetch_andnot_acquire(...)	__op_acquire(atomic_fetch_andnot, __VA_ARGS__)
+# define atomic_fetch_andnot_release(...)	__op_release(atomic_fetch_andnot, __VA_ARGS__)
 # endif
 #endif
 
@@ -472,9 +474,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_add_return_release		atomic64_add_return
 #else
 # ifndef atomic64_add_return
-# define atomic64_add_return(...)		__atomic_op_fence(atomic64_add_return, __VA_ARGS__)
-# define atomic64_add_return_acquire(...)	__atomic_op_acquire(atomic64_add_return, __VA_ARGS__)
-# define atomic64_add_return_release(...)	__atomic_op_release(atomic64_add_return, __VA_ARGS__)
+# define atomic64_add_return(...)		__op_fence(atomic64_add_return, __VA_ARGS__)
+# define atomic64_add_return_acquire(...)	__op_acquire(atomic64_add_return, __VA_ARGS__)
+# define atomic64_add_return_release(...)	__op_release(atomic64_add_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -484,9 +486,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_inc_return_release		atomic64_inc_return
 #else
 # ifndef atomic64_inc_return
-# define atomic64_inc_return(...)		__atomic_op_fence(atomic64_inc_return, __VA_ARGS__)
-# define atomic64_inc_return_acquire(...)	__atomic_op_acquire(atomic64_inc_return, __VA_ARGS__)
-# define atomic64_inc_return_release(...)	__atomic_op_release(atomic64_inc_return, __VA_ARGS__)
+# define atomic64_inc_return(...)		__op_fence(atomic64_inc_return, __VA_ARGS__)
+# define atomic64_inc_return_acquire(...)	__op_acquire(atomic64_inc_return, __VA_ARGS__)
+# define atomic64_inc_return_release(...)	__op_release(atomic64_inc_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -496,9 +498,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_sub_return_release		atomic64_sub_return
 #else
 # ifndef atomic64_sub_return
-# define atomic64_sub_return(...)		__atomic_op_fence(atomic64_sub_return, __VA_ARGS__)
-# define atomic64_sub_return_acquire(...)	__atomic_op_acquire(atomic64_sub_return, __VA_ARGS__)
-# define atomic64_sub_return_release(...)	__atomic_op_release(atomic64_sub_return, __VA_ARGS__)
+# define atomic64_sub_return(...)		__op_fence(atomic64_sub_return, __VA_ARGS__)
+# define atomic64_sub_return_acquire(...)	__op_acquire(atomic64_sub_return, __VA_ARGS__)
+# define atomic64_sub_return_release(...)	__op_release(atomic64_sub_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -508,9 +510,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_dec_return_release		atomic64_dec_return
 #else
 # ifndef atomic64_dec_return
-# define atomic64_dec_return(...)		__atomic_op_fence(atomic64_dec_return, __VA_ARGS__)
-# define atomic64_dec_return_acquire(...)	__atomic_op_acquire(atomic64_dec_return, __VA_ARGS__)
-# define atomic64_dec_return_release(...)	__atomic_op_release(atomic64_dec_return, __VA_ARGS__)
+# define atomic64_dec_return(...)		__op_fence(atomic64_dec_return, __VA_ARGS__)
+# define atomic64_dec_return_acquire(...)	__op_acquire(atomic64_dec_return, __VA_ARGS__)
+# define atomic64_dec_return_release(...)	__op_release(atomic64_dec_return, __VA_ARGS__)
 # endif
 #endif
 
@@ -520,9 +522,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_fetch_add_release		atomic64_fetch_add
 #else
 # ifndef atomic64_fetch_add
-# define atomic64_fetch_add(...)		__atomic_op_fence(atomic64_fetch_add, __VA_ARGS__)
-# define atomic64_fetch_add_acquire(...)	__atomic_op_acquire(atomic64_fetch_add, __VA_ARGS__)
-# define atomic64_fetch_add_release(...)	__atomic_op_release(atomic64_fetch_add, __VA_ARGS__)
+# define atomic64_fetch_add(...)		__op_fence(atomic64_fetch_add, __VA_ARGS__)
+# define atomic64_fetch_add_acquire(...)	__op_acquire(atomic64_fetch_add, __VA_ARGS__)
+# define atomic64_fetch_add_release(...)	__op_release(atomic64_fetch_add, __VA_ARGS__)
 # endif
 #endif
 
@@ -539,9 +541,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # endif
 #else
 # ifndef atomic64_fetch_inc
-# define atomic64_fetch_inc(...)		__atomic_op_fence(atomic64_fetch_inc, __VA_ARGS__)
-# define atomic64_fetch_inc_acquire(...)	__atomic_op_acquire(atomic64_fetch_inc, __VA_ARGS__)
-# define atomic64_fetch_inc_release(...)	__atomic_op_release(atomic64_fetch_inc, __VA_ARGS__)
+# define atomic64_fetch_inc(...)		__op_fence(atomic64_fetch_inc, __VA_ARGS__)
+# define atomic64_fetch_inc_acquire(...)	__op_acquire(atomic64_fetch_inc, __VA_ARGS__)
+# define atomic64_fetch_inc_release(...)	__op_release(atomic64_fetch_inc, __VA_ARGS__)
 # endif
 #endif
 
@@ -551,9 +553,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_fetch_sub_release		atomic64_fetch_sub
 #else
 # ifndef atomic64_fetch_sub
-# define atomic64_fetch_sub(...)		__atomic_op_fence(atomic64_fetch_sub, __VA_ARGS__)
-# define atomic64_fetch_sub_acquire(...)	__atomic_op_acquire(atomic64_fetch_sub, __VA_ARGS__)
-# define atomic64_fetch_sub_release(...)	__atomic_op_release(atomic64_fetch_sub, __VA_ARGS__)
+# define atomic64_fetch_sub(...)		__op_fence(atomic64_fetch_sub, __VA_ARGS__)
+# define atomic64_fetch_sub_acquire(...)	__op_acquire(atomic64_fetch_sub, __VA_ARGS__)
+# define atomic64_fetch_sub_release(...)	__op_release(atomic64_fetch_sub, __VA_ARGS__)
 # endif
 #endif
 
@@ -570,9 +572,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # endif
 #else
 # ifndef atomic64_fetch_dec
-# define atomic64_fetch_dec(...)		__atomic_op_fence(atomic64_fetch_dec, __VA_ARGS__)
-# define atomic64_fetch_dec_acquire(...)	__atomic_op_acquire(atomic64_fetch_dec, __VA_ARGS__)
-# define atomic64_fetch_dec_release(...)	__atomic_op_release(atomic64_fetch_dec, __VA_ARGS__)
+# define atomic64_fetch_dec(...)		__op_fence(atomic64_fetch_dec, __VA_ARGS__)
+# define atomic64_fetch_dec_acquire(...)	__op_acquire(atomic64_fetch_dec, __VA_ARGS__)
+# define atomic64_fetch_dec_release(...)	__op_release(atomic64_fetch_dec, __VA_ARGS__)
 # endif
 #endif
 
@@ -582,9 +584,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_fetch_or_release		atomic64_fetch_or
 #else
 # ifndef atomic64_fetch_or
-# define atomic64_fetch_or(...)			__atomic_op_fence(atomic64_fetch_or, __VA_ARGS__)
-# define atomic64_fetch_or_acquire(...)		__atomic_op_acquire(atomic64_fetch_or, __VA_ARGS__)
-# define atomic64_fetch_or_release(...)		__atomic_op_release(atomic64_fetch_or, __VA_ARGS__)
+# define atomic64_fetch_or(...)			__op_fence(atomic64_fetch_or, __VA_ARGS__)
+# define atomic64_fetch_or_acquire(...)		__op_acquire(atomic64_fetch_or, __VA_ARGS__)
+# define atomic64_fetch_or_release(...)		__op_release(atomic64_fetch_or, __VA_ARGS__)
 # endif
 #endif
 
@@ -594,9 +596,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_fetch_and_release		atomic64_fetch_and
 #else
 # ifndef atomic64_fetch_and
-# define atomic64_fetch_and(...)		__atomic_op_fence(atomic64_fetch_and, __VA_ARGS__)
-# define atomic64_fetch_and_acquire(...)	__atomic_op_acquire(atomic64_fetch_and, __VA_ARGS__)
-# define atomic64_fetch_and_release(...)	__atomic_op_release(atomic64_fetch_and, __VA_ARGS__)
+# define atomic64_fetch_and(...)		__op_fence(atomic64_fetch_and, __VA_ARGS__)
+# define atomic64_fetch_and_acquire(...)	__op_acquire(atomic64_fetch_and, __VA_ARGS__)
+# define atomic64_fetch_and_release(...)	__op_release(atomic64_fetch_and, __VA_ARGS__)
 # endif
 #endif
 
@@ -606,9 +608,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_fetch_xor_release		atomic64_fetch_xor
 #else
 # ifndef atomic64_fetch_xor
-# define atomic64_fetch_xor(...)		__atomic_op_fence(atomic64_fetch_xor, __VA_ARGS__)
-# define atomic64_fetch_xor_acquire(...)	__atomic_op_acquire(atomic64_fetch_xor, __VA_ARGS__)
-# define atomic64_fetch_xor_release(...)	__atomic_op_release(atomic64_fetch_xor, __VA_ARGS__)
+# define atomic64_fetch_xor(...)		__op_fence(atomic64_fetch_xor, __VA_ARGS__)
+# define atomic64_fetch_xor_acquire(...)	__op_acquire(atomic64_fetch_xor, __VA_ARGS__)
+# define atomic64_fetch_xor_release(...)	__op_release(atomic64_fetch_xor, __VA_ARGS__)
 # endif
 #endif
 
@@ -618,9 +620,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_xchg_release			atomic64_xchg
 #else
 # ifndef atomic64_xchg
-# define atomic64_xchg(...)			__atomic_op_fence(atomic64_xchg, __VA_ARGS__)
-# define atomic64_xchg_acquire(...)		__atomic_op_acquire(atomic64_xchg, __VA_ARGS__)
-# define atomic64_xchg_release(...)		__atomic_op_release(atomic64_xchg, __VA_ARGS__)
+# define atomic64_xchg(...)			__op_fence(atomic64_xchg, __VA_ARGS__)
+# define atomic64_xchg_acquire(...)		__op_acquire(atomic64_xchg, __VA_ARGS__)
+# define atomic64_xchg_release(...)		__op_release(atomic64_xchg, __VA_ARGS__)
 # endif
 #endif
 
@@ -630,9 +632,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_cmpxchg_release		atomic64_cmpxchg
 #else
 # ifndef atomic64_cmpxchg
-# define atomic64_cmpxchg(...)			__atomic_op_fence(atomic64_cmpxchg, __VA_ARGS__)
-# define atomic64_cmpxchg_acquire(...)		__atomic_op_acquire(atomic64_cmpxchg, __VA_ARGS__)
-# define atomic64_cmpxchg_release(...)		__atomic_op_release(atomic64_cmpxchg, __VA_ARGS__)
+# define atomic64_cmpxchg(...)			__op_fence(atomic64_cmpxchg, __VA_ARGS__)
+# define atomic64_cmpxchg_acquire(...)		__op_acquire(atomic64_cmpxchg, __VA_ARGS__)
+# define atomic64_cmpxchg_release(...)		__op_release(atomic64_cmpxchg, __VA_ARGS__)
 # endif
 #endif
 
@@ -664,9 +666,9 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_fetch_andnot_release		atomic64_fetch_andnot
 #else
 # ifndef atomic64_fetch_andnot
-# define atomic64_fetch_andnot(...)		__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
-# define atomic64_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
-# define atomic64_fetch_andnot_release(...)	__atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
+# define atomic64_fetch_andnot(...)		__op_fence(atomic64_fetch_andnot, __VA_ARGS__)
+# define atomic64_fetch_andnot_acquire(...)	__op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
+# define atomic64_fetch_andnot_release(...)	__op_release(atomic64_fetch_andnot, __VA_ARGS__)
 # endif
 #endif
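[ Editor's note: the composition pattern this patch renames - building acquire,
  release and fully-fenced variants out of a single _relaxed primitive plus
  explicit barriers - can be sketched in user space with C11 atomics. This is a
  hypothetical analogue, not kernel code: op_acquire()/op_release()/op_fence()
  and fetch_add_relaxed() are made-up names, C11 fences stand in for the
  kernel's smp_mb__{before,after}_atomic(), and the ({ ... }) statement
  expressions assume GCC or Clang. ]

#include <assert.h>
#include <stdatomic.h>

/* Hypothetical "arch-provided" primitive: only the relaxed variant exists. */
static atomic_int counter;

static int fetch_add_relaxed(atomic_int *v, int i)
{
	return atomic_fetch_add_explicit(v, i, memory_order_relaxed);
}

/* Acquire variant: relaxed op first, then an acquire fence. */
#define op_acquire(op, ...)						\
({									\
	__typeof__(op(__VA_ARGS__)) __ret = op(__VA_ARGS__);		\
	atomic_thread_fence(memory_order_acquire);			\
	__ret;								\
})

/* Release variant: release fence first, then the relaxed op. */
#define op_release(op, ...)						\
({									\
	atomic_thread_fence(memory_order_release);			\
	op(__VA_ARGS__);						\
})

/* Fully ordered variant: fences on both sides of the relaxed op. */
#define op_fence(op, ...)						\
({									\
	__typeof__(op(__VA_ARGS__)) __ret;				\
	atomic_thread_fence(memory_order_seq_cst);			\
	__ret = op(__VA_ARGS__);					\
	atomic_thread_fence(memory_order_seq_cst);			\
	__ret;								\
})

int main(void)
{
	/* fetch_add returns the old value in all three orderings. */
	assert(op_fence(fetch_add_relaxed, &counter, 1) == 0);
	assert(op_acquire(fetch_add_relaxed, &counter, 1) == 1);
	assert(op_release(fetch_add_relaxed, &counter, 1) == 2);
	assert(atomic_load(&counter) == 3);
	return 0;
}

The point of the two-fence shape in op_fence() is that the middle operation
only has to be atomic; all ordering comes from the surrounding barriers, which
is exactly what lets atomic.h generate every ordering variant from one
_relaxed implementation.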
__op_release(atomic_fetch_xor, __VA_ARGS__) # endif #endif @@ -223,9 +225,9 @@ #define atomic_xchg_release atomic_xchg #else # ifndef atomic_xchg -# define atomic_xchg(...) __atomic_op_fence(atomic_xchg, __VA_ARGS__) -# define atomic_xchg_acquire(...) __atomic_op_acquire(atomic_xchg, __VA_ARGS__) -# define atomic_xchg_release(...) __atomic_op_release(atomic_xchg, __VA_ARGS__) +# define atomic_xchg(...) __op_fence(atomic_xchg, __VA_ARGS__) +# define atomic_xchg_acquire(...) __op_acquire(atomic_xchg, __VA_ARGS__) +# define atomic_xchg_release(...) __op_release(atomic_xchg, __VA_ARGS__) # endif #endif @@ -235,9 +237,9 @@ # define atomic_cmpxchg_release atomic_cmpxchg #else # ifndef atomic_cmpxchg -# define atomic_cmpxchg(...) __atomic_op_fence(atomic_cmpxchg, __VA_ARGS__) -# define atomic_cmpxchg_acquire(...) __atomic_op_acquire(atomic_cmpxchg, __VA_ARGS__) -# define atomic_cmpxchg_release(...) __atomic_op_release(atomic_cmpxchg, __VA_ARGS__) +# define atomic_cmpxchg(...) __op_fence(atomic_cmpxchg, __VA_ARGS__) +# define atomic_cmpxchg_acquire(...) __op_acquire(atomic_cmpxchg, __VA_ARGS__) +# define atomic_cmpxchg_release(...) __op_release(atomic_cmpxchg, __VA_ARGS__) # endif #endif @@ -267,9 +269,9 @@ # define cmpxchg_release cmpxchg #else # ifndef cmpxchg -# define cmpxchg(...) __atomic_op_fence(cmpxchg, __VA_ARGS__) -# define cmpxchg_acquire(...) __atomic_op_acquire(cmpxchg, __VA_ARGS__) -# define cmpxchg_release(...) __atomic_op_release(cmpxchg, __VA_ARGS__) +# define cmpxchg(...) __op_fence(cmpxchg, __VA_ARGS__) +# define cmpxchg_acquire(...) __op_acquire(cmpxchg, __VA_ARGS__) +# define cmpxchg_release(...) __op_release(cmpxchg, __VA_ARGS__) # endif #endif @@ -279,9 +281,9 @@ # define cmpxchg64_release cmpxchg64 #else # ifndef cmpxchg64 -# define cmpxchg64(...) __atomic_op_fence(cmpxchg64, __VA_ARGS__) -# define cmpxchg64_acquire(...) __atomic_op_acquire(cmpxchg64, __VA_ARGS__) -# define cmpxchg64_release(...) 
__atomic_op_release(cmpxchg64, __VA_ARGS__) +# define cmpxchg64(...) __op_fence(cmpxchg64, __VA_ARGS__) +# define cmpxchg64_acquire(...) __op_acquire(cmpxchg64, __VA_ARGS__) +# define cmpxchg64_release(...) __op_release(cmpxchg64, __VA_ARGS__) # endif #endif @@ -291,9 +293,9 @@ # define xchg_release xchg #else # ifndef xchg -# define xchg(...) __atomic_op_fence(xchg, __VA_ARGS__) -# define xchg_acquire(...) __atomic_op_acquire(xchg, __VA_ARGS__) -# define xchg_release(...) __atomic_op_release(xchg, __VA_ARGS__) +# define xchg(...) __op_fence(xchg, __VA_ARGS__) +# define xchg_acquire(...) __op_acquire(xchg, __VA_ARGS__) +# define xchg_release(...) __op_release(xchg, __VA_ARGS__) # endif #endif @@ -330,9 +332,9 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u) # define atomic_fetch_andnot_release atomic_fetch_andnot #else # ifndef atomic_fetch_andnot -# define atomic_fetch_andnot(...) __atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__) -# define atomic_fetch_andnot_acquire(...) __atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__) -# define atomic_fetch_andnot_release(...) __atomic_op_release(atomic_fetch_andnot, __VA_ARGS__) +# define atomic_fetch_andnot(...) __op_fence(atomic_fetch_andnot, __VA_ARGS__) +# define atomic_fetch_andnot_acquire(...) __op_acquire(atomic_fetch_andnot, __VA_ARGS__) +# define atomic_fetch_andnot_release(...) __op_release(atomic_fetch_andnot, __VA_ARGS__) # endif #endif @@ -472,9 +474,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_add_return_release atomic64_add_return #else # ifndef atomic64_add_return -# define atomic64_add_return(...) __atomic_op_fence(atomic64_add_return, __VA_ARGS__) -# define atomic64_add_return_acquire(...) __atomic_op_acquire(atomic64_add_return, __VA_ARGS__) -# define atomic64_add_return_release(...) __atomic_op_release(atomic64_add_return, __VA_ARGS__) +# define atomic64_add_return(...) 
__op_fence(atomic64_add_return, __VA_ARGS__) +# define atomic64_add_return_acquire(...) __op_acquire(atomic64_add_return, __VA_ARGS__) +# define atomic64_add_return_release(...) __op_release(atomic64_add_return, __VA_ARGS__) # endif #endif @@ -484,9 +486,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_inc_return_release atomic64_inc_return #else # ifndef atomic64_inc_return -# define atomic64_inc_return(...) __atomic_op_fence(atomic64_inc_return, __VA_ARGS__) -# define atomic64_inc_return_acquire(...) __atomic_op_acquire(atomic64_inc_return, __VA_ARGS__) -# define atomic64_inc_return_release(...) __atomic_op_release(atomic64_inc_return, __VA_ARGS__) +# define atomic64_inc_return(...) __op_fence(atomic64_inc_return, __VA_ARGS__) +# define atomic64_inc_return_acquire(...) __op_acquire(atomic64_inc_return, __VA_ARGS__) +# define atomic64_inc_return_release(...) __op_release(atomic64_inc_return, __VA_ARGS__) # endif #endif @@ -496,9 +498,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_sub_return_release atomic64_sub_return #else # ifndef atomic64_sub_return -# define atomic64_sub_return(...) __atomic_op_fence(atomic64_sub_return, __VA_ARGS__) -# define atomic64_sub_return_acquire(...) __atomic_op_acquire(atomic64_sub_return, __VA_ARGS__) -# define atomic64_sub_return_release(...) __atomic_op_release(atomic64_sub_return, __VA_ARGS__) +# define atomic64_sub_return(...) __op_fence(atomic64_sub_return, __VA_ARGS__) +# define atomic64_sub_return_acquire(...) __op_acquire(atomic64_sub_return, __VA_ARGS__) +# define atomic64_sub_return_release(...) __op_release(atomic64_sub_return, __VA_ARGS__) # endif #endif @@ -508,9 +510,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_dec_return_release atomic64_dec_return #else # ifndef atomic64_dec_return -# define atomic64_dec_return(...) __atomic_op_fence(atomic64_dec_return, __VA_ARGS__) -# define atomic64_dec_return_acquire(...) 
__atomic_op_acquire(atomic64_dec_return, __VA_ARGS__) -# define atomic64_dec_return_release(...) __atomic_op_release(atomic64_dec_return, __VA_ARGS__) +# define atomic64_dec_return(...) __op_fence(atomic64_dec_return, __VA_ARGS__) +# define atomic64_dec_return_acquire(...) __op_acquire(atomic64_dec_return, __VA_ARGS__) +# define atomic64_dec_return_release(...) __op_release(atomic64_dec_return, __VA_ARGS__) # endif #endif @@ -520,9 +522,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_fetch_add_release atomic64_fetch_add #else # ifndef atomic64_fetch_add -# define atomic64_fetch_add(...) __atomic_op_fence(atomic64_fetch_add, __VA_ARGS__) -# define atomic64_fetch_add_acquire(...) __atomic_op_acquire(atomic64_fetch_add, __VA_ARGS__) -# define atomic64_fetch_add_release(...) __atomic_op_release(atomic64_fetch_add, __VA_ARGS__) +# define atomic64_fetch_add(...) __op_fence(atomic64_fetch_add, __VA_ARGS__) +# define atomic64_fetch_add_acquire(...) __op_acquire(atomic64_fetch_add, __VA_ARGS__) +# define atomic64_fetch_add_release(...) __op_release(atomic64_fetch_add, __VA_ARGS__) # endif #endif @@ -539,9 +541,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # endif #else # ifndef atomic64_fetch_inc -# define atomic64_fetch_inc(...) __atomic_op_fence(atomic64_fetch_inc, __VA_ARGS__) -# define atomic64_fetch_inc_acquire(...) __atomic_op_acquire(atomic64_fetch_inc, __VA_ARGS__) -# define atomic64_fetch_inc_release(...) __atomic_op_release(atomic64_fetch_inc, __VA_ARGS__) +# define atomic64_fetch_inc(...) __op_fence(atomic64_fetch_inc, __VA_ARGS__) +# define atomic64_fetch_inc_acquire(...) __op_acquire(atomic64_fetch_inc, __VA_ARGS__) +# define atomic64_fetch_inc_release(...) __op_release(atomic64_fetch_inc, __VA_ARGS__) # endif #endif @@ -551,9 +553,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_fetch_sub_release atomic64_fetch_sub #else # ifndef atomic64_fetch_sub -# define atomic64_fetch_sub(...) 
__atomic_op_fence(atomic64_fetch_sub, __VA_ARGS__) -# define atomic64_fetch_sub_acquire(...) __atomic_op_acquire(atomic64_fetch_sub, __VA_ARGS__) -# define atomic64_fetch_sub_release(...) __atomic_op_release(atomic64_fetch_sub, __VA_ARGS__) +# define atomic64_fetch_sub(...) __op_fence(atomic64_fetch_sub, __VA_ARGS__) +# define atomic64_fetch_sub_acquire(...) __op_acquire(atomic64_fetch_sub, __VA_ARGS__) +# define atomic64_fetch_sub_release(...) __op_release(atomic64_fetch_sub, __VA_ARGS__) # endif #endif @@ -570,9 +572,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # endif #else # ifndef atomic64_fetch_dec -# define atomic64_fetch_dec(...) __atomic_op_fence(atomic64_fetch_dec, __VA_ARGS__) -# define atomic64_fetch_dec_acquire(...) __atomic_op_acquire(atomic64_fetch_dec, __VA_ARGS__) -# define atomic64_fetch_dec_release(...) __atomic_op_release(atomic64_fetch_dec, __VA_ARGS__) +# define atomic64_fetch_dec(...) __op_fence(atomic64_fetch_dec, __VA_ARGS__) +# define atomic64_fetch_dec_acquire(...) __op_acquire(atomic64_fetch_dec, __VA_ARGS__) +# define atomic64_fetch_dec_release(...) __op_release(atomic64_fetch_dec, __VA_ARGS__) # endif #endif @@ -582,9 +584,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_fetch_or_release atomic64_fetch_or #else # ifndef atomic64_fetch_or -# define atomic64_fetch_or(...) __atomic_op_fence(atomic64_fetch_or, __VA_ARGS__) -# define atomic64_fetch_or_acquire(...) __atomic_op_acquire(atomic64_fetch_or, __VA_ARGS__) -# define atomic64_fetch_or_release(...) __atomic_op_release(atomic64_fetch_or, __VA_ARGS__) +# define atomic64_fetch_or(...) __op_fence(atomic64_fetch_or, __VA_ARGS__) +# define atomic64_fetch_or_acquire(...) __op_acquire(atomic64_fetch_or, __VA_ARGS__) +# define atomic64_fetch_or_release(...) 
__op_release(atomic64_fetch_or, __VA_ARGS__) # endif #endif @@ -594,9 +596,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_fetch_and_release atomic64_fetch_and #else # ifndef atomic64_fetch_and -# define atomic64_fetch_and(...) __atomic_op_fence(atomic64_fetch_and, __VA_ARGS__) -# define atomic64_fetch_and_acquire(...) __atomic_op_acquire(atomic64_fetch_and, __VA_ARGS__) -# define atomic64_fetch_and_release(...) __atomic_op_release(atomic64_fetch_and, __VA_ARGS__) +# define atomic64_fetch_and(...) __op_fence(atomic64_fetch_and, __VA_ARGS__) +# define atomic64_fetch_and_acquire(...) __op_acquire(atomic64_fetch_and, __VA_ARGS__) +# define atomic64_fetch_and_release(...) __op_release(atomic64_fetch_and, __VA_ARGS__) # endif #endif @@ -606,9 +608,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_fetch_xor_release atomic64_fetch_xor #else # ifndef atomic64_fetch_xor -# define atomic64_fetch_xor(...) __atomic_op_fence(atomic64_fetch_xor, __VA_ARGS__) -# define atomic64_fetch_xor_acquire(...) __atomic_op_acquire(atomic64_fetch_xor, __VA_ARGS__) -# define atomic64_fetch_xor_release(...) __atomic_op_release(atomic64_fetch_xor, __VA_ARGS__) +# define atomic64_fetch_xor(...) __op_fence(atomic64_fetch_xor, __VA_ARGS__) +# define atomic64_fetch_xor_acquire(...) __op_acquire(atomic64_fetch_xor, __VA_ARGS__) +# define atomic64_fetch_xor_release(...) __op_release(atomic64_fetch_xor, __VA_ARGS__) # endif #endif @@ -618,9 +620,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_xchg_release atomic64_xchg #else # ifndef atomic64_xchg -# define atomic64_xchg(...) __atomic_op_fence(atomic64_xchg, __VA_ARGS__) -# define atomic64_xchg_acquire(...) __atomic_op_acquire(atomic64_xchg, __VA_ARGS__) -# define atomic64_xchg_release(...) __atomic_op_release(atomic64_xchg, __VA_ARGS__) +# define atomic64_xchg(...) __op_fence(atomic64_xchg, __VA_ARGS__) +# define atomic64_xchg_acquire(...) 
__op_acquire(atomic64_xchg, __VA_ARGS__) +# define atomic64_xchg_release(...) __op_release(atomic64_xchg, __VA_ARGS__) # endif #endif @@ -630,9 +632,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_cmpxchg_release atomic64_cmpxchg #else # ifndef atomic64_cmpxchg -# define atomic64_cmpxchg(...) __atomic_op_fence(atomic64_cmpxchg, __VA_ARGS__) -# define atomic64_cmpxchg_acquire(...) __atomic_op_acquire(atomic64_cmpxchg, __VA_ARGS__) -# define atomic64_cmpxchg_release(...) __atomic_op_release(atomic64_cmpxchg, __VA_ARGS__) +# define atomic64_cmpxchg(...) __op_fence(atomic64_cmpxchg, __VA_ARGS__) +# define atomic64_cmpxchg_acquire(...) __op_acquire(atomic64_cmpxchg, __VA_ARGS__) +# define atomic64_cmpxchg_release(...) __op_release(atomic64_cmpxchg, __VA_ARGS__) # endif #endif @@ -664,9 +666,9 @@ static inline int atomic_dec_if_positive(atomic_t *v) # define atomic64_fetch_andnot_release atomic64_fetch_andnot #else # ifndef atomic64_fetch_andnot -# define atomic64_fetch_andnot(...) __atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__) -# define atomic64_fetch_andnot_acquire(...) __atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__) -# define atomic64_fetch_andnot_release(...) __atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__) +# define atomic64_fetch_andnot(...) __op_fence(atomic64_fetch_andnot, __VA_ARGS__) +# define atomic64_fetch_andnot_acquire(...) __op_acquire(atomic64_fetch_andnot, __VA_ARGS__) +# define atomic64_fetch_andnot_release(...) __op_release(atomic64_fetch_andnot, __VA_ARGS__) # endif #endif