From: Randy Dunlap
To: Peter Zijlstra, Will Deacon, Paul McKenney, Boqun Feng
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Thomas Gleixner
Subject: Re: [RFC][PATCH]: documentation,atomic: Add a new atomic_t document
Date: Fri, 9 Jun 2017 11:15:20 -0700
Message-ID: <2efe9a60-cbab-c0e1-8fe2-fa96328244a7@infradead.org>
In-Reply-To: <20170609092450.jwmldgtli57ozxgq@hirez.programming.kicks-ass.net>

On 06/09/17 02:24, Peter Zijlstra wrote:
> 
> --- /dev/null	2017-05-05 13:16:22.636212333 +0200
> +++ b/Documentation/atomic_t.txt	2017-06-09 11:05:31.501599153 +0200
> @@ -0,0 +1,147 @@
> +
> +The one detail to this is that atomic_set() should be observable to the RmW
> +ops. That is:
> +
> +	CPU0						CPU1
> +
> +	val = atomic_read(&X)
> +	do {
> +						atomic_set(&X, 0)
> +		new = val + 1;
> +	} while (!atomic_try_cmpxchg(&X, &val, new));
> +
> +Should cause the cmpxchg to *FAIL* (when @val != 0). This is typically true;

should ?

> +on 'normal' platforms; a regular competing STORE will invalidate a LL/SC.

too many semi-colons above.

> +
> +The obvious case where this is not so is where we need to implement atomic ops
> +with a spinlock hashtable; the typical solution is to then implement
> +atomic_set() with atomic_xchg().
> +
> +
> +RmW ops:
> +
> +These come in various forms:
> +
> + - plain operations without return value: atomic_{}()
> +
> + - operations which return the modified value: atomic_{}_return()
> +
> +   these are limited to the arithmetic operations because those are
> +   reversible. Bitops are irreversible and therefore the modified value
> +   is of dubious utility.
> +
> + - operations which return the original value: atomic_fetch_{}()
> +
> + - swap operations: xchg(), cmpxchg() and try_cmpxchg()
> +
> + - misc; the special purpose operations that are commonly used and would,
> +   given the interface, normally be implemented using (try_)cmpxchg loops but
> +   are time critical and can, (typically) on LL/SC architectures, be more
> +   efficiently implemented.
> +
> +
> +All these operations are SMP atomic; that is, the operations (for a single
> +atomic variable) can be fully ordered and no intermediate state is lost or
> +visible.
> +
> +
> +Ordering:  (go read memory-barriers.txt first)
> +
> +The rule of thumb:
> +
> + - non-RmW operations are unordered;
> +
> + - RmW operations that have no return value are unordered;
> +
> + - RmW operations that have a return value are Sequentially Consistent;
> +
> + - RmW operations that are conditional are unordered on FAILURE, otherwise the
> +   above rules apply.
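Maybe illustrate the rule of thumb with a short example right here?
Something like this, perhaps (untested sketch; the struct and function
names are made up by me):

	#include <linux/atomic.h>
	#include <linux/slab.h>

	struct obj {
		atomic_t hits;	/* statistics counter */
		atomic_t refs;	/* object lifetime */
	};

	static void obj_hit(struct obj *o)
	{
		/* RmW without return value: atomic but unordered;
		 * fine for a statistic nobody orders against. */
		atomic_inc(&o->hits);
	}

	static void obj_put(struct obj *o)
	{
		/* RmW with return value: fully ordered, so the kfree()
		 * cannot be reordered before the final decrement. */
		if (atomic_dec_and_test(&o->refs))
			kfree(o);
	}

atomic_dec_and_test() is one of the "misc" ops, but it returns a value,
so per the rule of thumb it is fully ordered.
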
> +
> +Except of course when an operation has an explicit ordering like:
> +
> + {}_relaxed: unordered
> + {}_acquire: the R of the RmW is an ACQUIRE
> + {}_release: the W of the RmW is a RELEASE
> +
> +NOTE: our ACQUIRE/RELEASE are RCpc
> +
> +
> +The barriers:
> +
> + smp_mb__{before,after}_atomic()
> +
> +only apply to the RmW ops and can be used to augment/upgrade the ordering
> +inherit to the used atomic op. These barriers provide a full smp_mb().

inherent ?

> +
> +These helper barriers exist because architectures have varying implicit
> +ordering on their SMP atomic primitives. For example our TSO architectures
> +provide SC atomics and these barriers are no-ops.
> +
> +So while something like:
> +
> +	smp_mb__before_atomic();
> +	val = atomic_dec_return_relaxed(&X);
> +
> +is a 'typical' RELEASE pattern (please use atomic_dec_return_release()), the
> +barrier is strictly stronger than a RELEASE.


-- 
~Randy