Date: Wed, 22 Feb 2017 12:27:37 +0100
From: Peter Zijlstra
To: Stafford Horne
Cc: Jonas Bonn, Stefan Kristiansson, linux@roeck-us.net,
	openrisc@lists.librecores.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 09/25] openrisc: add optimized atomic operations
Message-ID: <20170222112737.GM6515@twins.programming.kicks-ass.net>
In-Reply-To: <1479fc4a4a18712003849341affe74b2a0da609a.1487702890.git.shorne@gmail.com>

On Wed, Feb 22, 2017 at 04:11:38AM +0900, Stafford Horne wrote:
> +#define atomic_add_return	atomic_add_return
> +#define atomic_sub_return	atomic_sub_return
> +#define atomic_fetch_add	atomic_fetch_add
> +#define atomic_fetch_sub	atomic_fetch_sub
> +#define atomic_fetch_and	atomic_fetch_and
> +#define atomic_fetch_or	atomic_fetch_or
> +#define atomic_fetch_xor	atomic_fetch_xor
> +#define atomic_and		atomic_and
> +#define atomic_or		atomic_or
> +#define atomic_xor		atomic_xor
> +

It would be good to also implement __atomic_add_unless(). Something
like so, if I got your asm right..
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
	int old, tmp;

	__asm__ __volatile__(
		"1:	l.lwa	%0, 0(%2)	\n"
		"	l.sfeq	%0, %4		\n"
		"	l.bf	2f		\n"
		"	 l.nop			\n"
		"	l.add	%1, %0, %3	\n"
		"	l.swa	0(%2), %1	\n"
		"	l.bnf	1b		\n"
		"2:	 l.nop			\n"
		: "=&r"(old), "=&r" (tmp)
		: "r"(&v->counter), "r"(a), "r"(u)
		: "cc", "memory");

	return old;
}
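For readers less familiar with the l.lwa/l.swa (load-link/store-conditional)
pair, the same add-unless semantics can be sketched in portable C11 atomics.
This is only an illustration of the contract the asm above implements, not the
kernel's code; the name atomic_add_unless_sketch is made up for this example:

```c
#include <stdatomic.h>

/*
 * Sketch: add 'a' to *v unless *v currently holds 'u'.
 * Returns the value observed before any update, matching the
 * __atomic_add_unless() return convention.
 */
static inline int atomic_add_unless_sketch(atomic_int *v, int a, int u)
{
	int old = atomic_load(v);

	do {
		if (old == u)	/* excluded value seen: do not add */
			break;
		/* on failure, 'old' is refreshed with the current value */
	} while (!atomic_compare_exchange_weak(v, &old, old + a));

	return old;
}
```

The compare-exchange loop plays the role of the l.lwa/l.swa retry: l.swa fails
(and l.bnf loops back) exactly where the weak CAS fails here, and the early
exit on old == u corresponds to the l.sfeq/l.bf branch to label 2.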