From: Boqun Feng
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Peter Zijlstra, Ingo Molnar, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Thomas Gleixner, Will Deacon, "Paul E. McKenney",
	Waiman Long, Davidlohr Bueso, Boqun Feng
Subject: [PATCH tip/locking/core v5 3/6] atomics: Allow architectures to define their own __atomic_op_* helpers
Date: Mon, 26 Oct 2015 17:50:54 +0800
Message-Id: <1445853057-20735-4-git-send-email-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.6.2
In-Reply-To: <1445853057-20735-1-git-send-email-boqun.feng@gmail.com>
References: <1445853057-20735-1-git-send-email-boqun.feng@gmail.com>

Some architectures may have special barriers for acquire, release and
fence semantics, for which the general memory barriers
(smp_mb__*_atomic()) used by the default __atomic_op_*() helpers may be
too strong.  Allow such architectures to define their own
__atomic_op_*() helpers, which override the defaults.

Signed-off-by: Boqun Feng
---
(An illustrative sketch of an architecture-specific override is
appended after the patch.)

 include/linux/atomic.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 27e580d..947c1dc 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -43,20 +43,29 @@ static inline int atomic_read_ctrl(const atomic_t *v)
  * The idea here is to build acquire/release variants by adding explicit
  * barriers on top of the relaxed variant. In the case where the relaxed
  * variant is already fully ordered, no additional barriers are needed.
+ *
+ * Besides, if an arch has a special barrier for acquire/release, it could
+ * implement its own __atomic_op_* and use the same framework for building
+ * variants
  */
+#ifndef __atomic_op_acquire
 #define __atomic_op_acquire(op, args...)				\
 ({									\
 	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
+#endif
 
+#ifndef __atomic_op_release
 #define __atomic_op_release(op, args...)				\
 ({									\
 	smp_mb__before_atomic();					\
 	op##_relaxed(args);						\
 })
+#endif
 
+#ifndef __atomic_op_fence
 #define __atomic_op_fence(op, args...)					\
 ({									\
 	typeof(op##_relaxed(args)) __ret;				\
@@ -65,6 +74,7 @@ static inline int atomic_read_ctrl(const atomic_t *v)
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
+#endif
 
 /* atomic_add_return_relaxed */
 #ifndef atomic_add_return_relaxed
-- 
2.6.2
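
For illustration only, and not part of the patch: with the #ifndef
guards above, an architecture whose acquire/release barriers are cheaper
than a full smp_mb() can supply its own helpers from its asm/atomic.h,
which linux/atomic.h includes before the generic definitions.  The names
ARCH_ACQUIRE_BARRIER and ARCH_RELEASE_BARRIER below are hypothetical
placeholders for whatever arch-specific barrier instruction string is
appropriate (on powerpc, for instance, a lighter barrier such as lwsync
can play this role); this is a sketch of the intended usage, not a
definitive implementation.

/*
 * Sketch only: ARCH_ACQUIRE_BARRIER / ARCH_RELEASE_BARRIER are assumed
 * to expand to string literals naming a barrier instruction that is
 * weaker (and cheaper) than the full barrier used by the generic
 * helpers in include/linux/atomic.h.
 */
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	__asm__ __volatile__(ARCH_ACQUIRE_BARRIER : : : "memory");	\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	__asm__ __volatile__(ARCH_RELEASE_BARRIER : : : "memory");	\
	op##_relaxed(args);						\
})

Because every generic definition is now wrapped in #ifndef, such an arch
definition simply takes precedence, while the acquire/release/fence
variants are still built by the same framework on top of the _relaxed
primitives.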