Date: Sat, 15 Sep 2018 22:55:13 +0800
From: Guo Ren <ren_guo@c-sky.com>
To: Peter Zijlstra
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, daniel.lezcano@linaro.org, jason@lakedaemon.net,
	arnd@arndb.de, devicetree@vger.kernel.org,
	andrea.parri@amarulasolutions.com, c-sky_gcc_upstream@c-sky.com,
	gnu-csky@mentor.com, thomas.petazzoni@bootlin.com,
	wbx@uclibc-ng.org, green.hu@gmail.com
Subject: Re: [PATCH V3 11/27] csky: Atomic operations
Message-ID: <20180915145512.GA18355@guoren-Inspiron-7460>
References: <93e8b592e429c156ad4d4ca5d85ef48fd0ab8b70.1536757532.git.ren_guo@c-sky.com>
	<20180912155514.GV24082@hirez.programming.kicks-ass.net>
In-Reply-To: <20180912155514.GV24082@hirez.programming.kicks-ass.net>

Thanks for the review, that's very helpful.

On Wed, Sep 12, 2018 at 05:55:14PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 09:24:45PM +0800, Guo Ren wrote:
> 
> > +#define ATOMIC_OP(op, c_op)				\
> > +static inline void atomic_##op(int i, atomic_t *v)	\
> > +{							\
> > +	unsigned long tmp;				\
> > +							\
> > +	smp_mb();					\
> > +	asm volatile (					\
> > +	"1:	ldex.w		%0, (%2) \n"		\
> > +	"	" #op "		%0, %1   \n"		\
> > +	"	stex.w		%0, (%2) \n"		\
> > +	"	bez		%0, 1b   \n"		\
> > +		: "=&r" (tmp)				\
> > +		: "r" (i), "r"(&v->counter)		\
> > +		: "memory");				\
> > +	smp_mb();					\
> > +}
> 
> ATOMIC_OP doesn't need to imply any smp_mb()'s what so ever.

Ok.
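Something like this, I suppose (an untested sketch -- the same
ldex/stex loop as above, just with both smp_mb()s dropped, since
atomic_##op() returns no value and so has no ordering requirement):

#define ATOMIC_OP(op, c_op)				\
static inline void atomic_##op(int i, atomic_t *v)	\
{							\
	unsigned long tmp;				\
							\
	asm volatile (					\
	"1:	ldex.w		%0, (%2) \n"		\
	"	" #op "		%0, %1   \n"		\
	"	stex.w		%0, (%2) \n"		\
	"	bez		%0, 1b   \n"		\
		: "=&r" (tmp)				\
		: "r" (i), "r"(&v->counter)		\
		: "memory");				\
}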
> > +#define ATOMIC_OP_RETURN(op, c_op)			\
> > +static inline int atomic_##op##_return(int i, atomic_t *v) \
> > +{							\
> > +	unsigned long tmp, ret;				\
> > +							\
> > +	smp_mb();					\
> > +	asm volatile (					\
> > +	"1:	ldex.w		%0, (%3) \n"		\
> > +	"	" #op "		%0, %2   \n"		\
> > +	"	mov		%1, %0   \n"		\
> > +	"	stex.w		%0, (%3) \n"		\
> > +	"	bez		%0, 1b   \n"		\
> > +		: "=&r" (tmp), "=&r" (ret)		\
> > +		: "r" (i), "r"(&v->counter)		\
> > +		: "memory");				\
> > +	smp_mb();					\
> > +							\
> > +	return ret;					\
> > +}
> > +
> > +#define ATOMIC_FETCH_OP(op, c_op)			\
> > +static inline int atomic_fetch_##op(int i, atomic_t *v) \
> > +{							\
> > +	unsigned long tmp, ret;				\
> > +							\
> > +	smp_mb();					\
> > +	asm volatile (					\
> > +	"1:	ldex.w		%0, (%3) \n"		\
> > +	"	mov		%1, %0   \n"		\
> > +	"	" #op "		%0, %2   \n"		\
> > +	"	stex.w		%0, (%3) \n"		\
> > +	"	bez		%0, 1b   \n"		\
> > +		: "=&r" (tmp), "=&r" (ret)		\
> > +		: "r" (i), "r"(&v->counter)		\
> > +		: "memory");				\
> > +	smp_mb();					\
> > +							\
> > +	return ret;					\
> > +}
> 
> For these you could generate _relaxed variants and not provide smp_mb()
> inside them.

Ok, I'll do that in the next version; see the sketch at the end of
this mail.

> > +#else /* CONFIG_CPU_HAS_LDSTEX */
> > +
> > +#include
> > +
> > +#define ATOMIC_OP(op, c_op)				\
> > +static inline void atomic_##op(int i, atomic_t *v)	\
> > +{							\
> > +	unsigned long tmp, flags;			\
> > +							\
> > +	raw_local_irq_save(flags);			\
> > +							\
> > +	asm volatile (					\
> > +	"	ldw		%0, (%2) \n"		\
> > +	"	" #op "		%0, %1   \n"		\
> > +	"	stw		%0, (%2) \n"		\
> > +		: "=&r" (tmp)				\
> > +		: "r" (i), "r"(&v->counter)		\
> > +		: "memory");				\
> > +							\
> > +	raw_local_irq_restore(flags);			\
> > +}
> 
> Is this really 'better' than the generic UP fallback implementation?

There is a lock-irq instruction, "idly4", that lets us avoid the
irq_save/irq_restore pair, e.g. (operands as in the macro above):

	asm volatile (					\
	"	idly4			 \n"		\
	"	ldw		%0, (%2) \n"		\
	"	" #op "		%0, %1   \n"		\
	"	stw		%0, (%2) \n"		\
		: "=&r" (tmp)				\
		: "r" (i), "r"(&v->counter)		\
		: "memory");				\

I'll change to that once it's fully tested.

> > +static inline void arch_spin_lock(arch_spinlock_t *lock)
> > +{
> > +	arch_spinlock_t lockval;
> > +	u32 ticket_next = 1 << TICKET_NEXT;
> > +	u32 *p = &lock->lock;
> > +	u32 tmp;
> > +
> > +	smp_mb();
> 
> spin_lock() doesn't need smp_mb() before.

Then read_lock and write_lock don't need smp_mb() before either, right?

> > +
> > +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > +{
> > +	smp_mb();
> > +	lock->tickets.owner++;
> > +	smp_mb();
> 
> spin_unlock() doesn't need smp_mb() after.

And read_unlock and write_unlock don't need smp_mb() after either,
right?

> > +#else /* CONFIG_QUEUED_RWLOCKS */
> > +
> > +/*
> > + * Test-and-set spin-locking.
> > + */
> 
> Why retain that?
> 
> same comments; it has far too many smp_mb()s in.

I'm not sure about queued_rwlocks; for a 2-core SMP system, isn't
test-and-set faster and simpler?

Best Regards
 Guo Ren
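P.S. A rough, untested sketch of the _relaxed direction for
ATOMIC_OP_RETURN, with no smp_mb() inside; if I understand the common
atomic code correctly, it builds the fully-ordered atomic_##op##_return()
from the _relaxed variant once the arch advertises it:

#define ATOMIC_OP_RETURN(op, c_op)			\
static inline int					\
atomic_##op##_return_relaxed(int i, atomic_t *v)	\
{							\
	unsigned long tmp, ret;				\
							\
	asm volatile (					\
	"1:	ldex.w		%0, (%3) \n"		\
	"	" #op "		%0, %2   \n"		\
	"	mov		%1, %0   \n"		\
	"	stex.w		%0, (%3) \n"		\
	"	bez		%0, 1b   \n"		\
		: "=&r" (tmp), "=&r" (ret)		\
		: "r" (i), "r"(&v->counter)		\
		: "memory");				\
							\
	return ret;					\
}

ATOMIC_OP_RETURN(add, +)

/* advertise the relaxed variant to <linux/atomic.h> */
#define atomic_add_return_relaxed	atomic_add_return_relaxed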