From: Vincent Chen
Date: Wed, 22 Nov 2017 11:02:03 +0800
Subject: Re: [PATCH 11/31] nds32: Atomic operations
In-Reply-To: <20171120142925.GC32488@arm.com>
To: Will Deacon
Cc: Greentime Hu, greentime@andestech.com, linux-kernel@vger.kernel.org,
 Arnd Bergmann, linux-arch@vger.kernel.org, tglx@linutronix.de,
 jason@lakedaemon.net, marc.zyngier@arm.com, robh+dt@kernel.org,
 netdev@vger.kernel.org, Vincent Chen, peterz@infradead.org,
 paulmck@linux.vnet.ibm.com

2017-11-20 22:29 GMT+08:00 Will Deacon:
> Hi Greentime,
>
> On Wed, Nov 08, 2017 at 01:54:59PM +0800, Greentime Hu wrote:
>> From: Greentime Hu
>>
>> Signed-off-by: Vincent Chen
>> Signed-off-by: Greentime Hu
>> ---
>>  arch/nds32/include/asm/futex.h    | 116 ++++++++++++++++++++++++
>>  arch/nds32/include/asm/spinlock.h | 178 +++++++++++++++++++++++++++++++++++++
>>  2 files changed, 294 insertions(+)
>>  create mode 100644 arch/nds32/include/asm/futex.h
>>  create mode 100644 arch/nds32/include/asm/spinlock.h
>
> [...]
>
>> +static inline int
>> +futex_atomic_cmpxchg_inatomic(u32 * uval, u32 __user * uaddr,
>> +                              u32 oldval, u32 newval)
>> +{
>> +	int ret = 0;
>> +	u32 val, tmp, flags;
>> +
>> +	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
>> +		return -EFAULT;
>> +
>> +	smp_mb();
>> +	asm volatile ("	movi	$ta, #0\n"
>> +		      "1:	llw	%1, [%6 + $ta]\n"
>> +		      "	sub	%3, %1, %4\n"
>> +		      "	cmovz	%2, %5, %3\n"
>> +		      "	cmovn	%2, %1, %3\n"
>> +		      "2:	scw	%2, [%6 + $ta]\n"
>> +		      "	beqz	%2, 1b\n"
>> +		      "3:\n	" __futex_atomic_ex_table("%7")
>> +		      :"+&r"(ret), "=&r"(val), "=&r"(tmp), "=&r"(flags)
>> +		      :"r"(oldval), "r"(newval), "r"(uaddr), "i"(-EFAULT)
>> +		      :"$ta", "memory");
>> +	smp_mb();
>> +
>> +	*uval = val;
>> +	return ret;
>> +}
>
> I see you rely on asm-generic/barrier.h for your barrier definitions, which
> suggests that you only need to prevent reordering by the compiler because
> you're not SMP. Is that right? If so, using smp_mb() is a little weird.

Thanks. So, is it better to replace smp_mb() with mb() for us?

> What about DMA transactions? I imagine you might need some extra
> instructions for the mandatory barriers there.

I don't get it. Do you mean before DMA transactions?
When data are moved from memory to device, we write back the data cache
before the DMA transaction. When data are moved from device to memory,
we invalidate the data cache after the DMA transaction.

> Also:
>
>> +static inline void arch_spin_lock(arch_spinlock_t * lock)
>> +{
>> +	unsigned long tmp;
>> +
>> +	__asm__ __volatile__("1:\n"
>> +			     "\tllw\t%0, [%1]\n"
>> +			     "\tbnez\t%0, 1b\n"
>> +			     "\tmovi\t%0, #0x1\n"
>> +			     "\tscw\t%0, [%1]\n"
>> +			     "\tbeqz\t%0, 1b\n"
>> +			     :"=&r"(tmp)
>> +			     :"r"(&lock->lock)
>> +			     :"memory");
>> +}
>
> Here it looks like you're eliding an explicit barrier here because you
> already have a "memory" clobber. Can't you do the same for the futex code
> above?

Thanks. OK. I will modify it in the next version patch.
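Something like the following rough, untested sketch: drop the explicit
smp_mb() pair and rely on the "memory" clobber that the asm statement
already has, since on !SMP only compiler reordering needs to be
prevented. The asm body itself is unchanged from the patch above.

static inline int
futex_atomic_cmpxchg_inatomic(u32 * uval, u32 __user * uaddr,
			      u32 oldval, u32 newval)
{
	int ret = 0;
	u32 val, tmp, flags;

	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
		return -EFAULT;

	/*
	 * No explicit barrier: we are not SMP, so only compiler
	 * reordering matters here, and the "memory" clobber on the
	 * asm below already prevents that.
	 */
	asm volatile ("	movi	$ta, #0\n"
		      "1:	llw	%1, [%6 + $ta]\n"
		      "	sub	%3, %1, %4\n"
		      "	cmovz	%2, %5, %3\n"
		      "	cmovn	%2, %1, %3\n"
		      "2:	scw	%2, [%6 + $ta]\n"
		      "	beqz	%2, 1b\n"
		      "3:\n	" __futex_atomic_ex_table("%7")
		      :"+&r"(ret), "=&r"(val), "=&r"(tmp), "=&r"(flags)
		      :"r"(oldval), "r"(newval), "r"(uaddr), "i"(-EFAULT)
		      :"$ta", "memory");

	*uval = val;
	return ret;
}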
> Will

Best regards
Vincent