From: Anup Patel
Date: Wed, 24 Mar 2021 17:58:58 +0530
Subject: Re: [PATCH] riscv: locks: introduce ticket-based spinlock implementation
To: Guo Ren
Cc: linux-riscv, "linux-kernel@vger.kernel.org List", Guo Ren, Catalin Marinas, Will Deacon, Peter Zijlstra, Palmer Dabbelt, Arnd Bergmann
In-Reply-To: <1616580892-80815-1-git-send-email-guoren@kernel.org>

On Wed, Mar 24, 2021 at 3:45 PM guoren@kernel.org wrote:
>
> From: Guo Ren
>
> This patch introduces a ticket lock implementation for riscv, along the
> same lines as the implementation for arch/arm & arch/csky.
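For readers following the thread: the ticket discipline the patch encodes in
lr.w/sc.w can be modeled in portable C11 atomics. The sketch below is a
hypothetical stand-alone illustration (the `ticket_lock`/`ticket_unlock` names
and the struct layout are invented here), not the kernel's arch_spinlock_t:

```c
#include <stdatomic.h>

/* Hypothetical stand-alone model of a ticket lock: 'next' hands out
 * tickets, 'owner' is the ticket currently being served. FIFO order
 * falls out of the monotonically increasing ticket numbers. */
struct ticket_lock {
	atomic_ushort next;
	atomic_ushort owner;
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
	/* Atomically take the next ticket; this fetch_add plays the role
	 * of the lr.w/addw/sc.w retry loop in the patch's arch_spin_lock(). */
	unsigned short me = atomic_fetch_add_explicit(&l->next, 1,
						      memory_order_relaxed);

	/* Spin until our ticket comes up; the acquire load pairs with the
	 * release store in ticket_lock_release(). */
	while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
		; /* cpu_relax()/WFE would go here, as the patch's FIXME notes */
}

static void ticket_lock_release(struct ticket_lock *l)
{
	/* Serve the next ticket; mirrors the smp_store_release() of
	 * tickets.owner + 1 in the patch's arch_spin_unlock(). */
	unsigned short owner = atomic_load_explicit(&l->owner,
						    memory_order_relaxed);
	atomic_store_explicit(&l->owner, owner + 1, memory_order_release);
}
```

The u16 counters wrap naturally, which is why the patch compares next and
owner for equality rather than ordering.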
>
> Signed-off-by: Guo Ren
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Peter Zijlstra
> Cc: Palmer Dabbelt
> Cc: Anup Patel
> Cc: Arnd Bergmann
> ---
>  arch/riscv/Kconfig                      |   1 +
>  arch/riscv/include/asm/Kbuild           |   1 +
>  arch/riscv/include/asm/spinlock.h       | 158 ++++++++++++--------------------
>  arch/riscv/include/asm/spinlock_types.h |  19 ++--

NACK from my side.

Linux ARM64 has moved away from ticket spinlocks to qspinlock. We
should directly go for qspinlock.

Regards,
Anup

>  4 files changed, 74 insertions(+), 105 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 87d7b52..7c56a20 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -30,6 +30,7 @@ config RISCV
>  	select ARCH_HAS_STRICT_KERNEL_RWX if MMU
>  	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
>  	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
> +	select ARCH_USE_QUEUED_RWLOCKS
>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
>  	select ARCH_WANT_FRAME_POINTERS
>  	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
> diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
> index 445ccc9..e57ef80 100644
> --- a/arch/riscv/include/asm/Kbuild
> +++ b/arch/riscv/include/asm/Kbuild
> @@ -3,5 +3,6 @@ generic-y += early_ioremap.h
>  generic-y += extable.h
>  generic-y += flat.h
>  generic-y += kvm_para.h
> +generic-y += qrwlock.h
>  generic-y += user.h
>  generic-y += vmlinux.lds.h
> diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> index f4f7fa1..2c81764 100644
> --- a/arch/riscv/include/asm/spinlock.h
> +++ b/arch/riscv/include/asm/spinlock.h
> @@ -7,129 +7,91 @@
>  #ifndef _ASM_RISCV_SPINLOCK_H
>  #define _ASM_RISCV_SPINLOCK_H
>
> -#include <linux/kernel.h>
> -#include <asm/current.h>
> -#include <asm/fence.h>
> -
>  /*
> - * Simple spin lock operations.  These provide no fairness guarantees.
> + * Ticket-based spin-locking.
>   */
> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> +	arch_spinlock_t lockval;
> +	u32 tmp;
> +
> +	asm volatile (
> +		"1:	lr.w	%0, %2		\n"
> +		"	mv	%1, %0		\n"
> +		"	addw	%0, %0, %3	\n"
> +		"	sc.w	%0, %0, %2	\n"
> +		"	bnez	%0, 1b		\n"
> +		: "=&r" (tmp), "=&r" (lockval), "+A" (lock->lock)
> +		: "r" (1 << TICKET_NEXT)
> +		: "memory");
>
> -/* FIXME: Replace this with a ticket lock, like MIPS. */
> -
> -#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
> +	while (lockval.tickets.next != lockval.tickets.owner) {
> +		/*
> +		 * FIXME - we need wfi/wfe here to prevent:
> +		 *  - cache line bouncing
> +		 *  - saving cpu pipeline in multi-harts-per-core
> +		 *    processor
> +		 */
> +		lockval.tickets.owner = READ_ONCE(lock->tickets.owner);
> +	}
>
> -static inline void arch_spin_unlock(arch_spinlock_t *lock)
> -{
> -	smp_store_release(&lock->lock, 0);
> +	__atomic_acquire_fence();
>  }
>
>  static inline int arch_spin_trylock(arch_spinlock_t *lock)
>  {
> -	int tmp = 1, busy;
> -
> -	__asm__ __volatile__ (
> -		"	amoswap.w %0, %2, %1\n"
> -		RISCV_ACQUIRE_BARRIER
> -		: "=r" (busy), "+A" (lock->lock)
> -		: "r" (tmp)
> +	u32 tmp, contended, res;
> +
> +	do {
> +		asm volatile (
> +		"	lr.w	%0, %3		\n"
> +		"	srliw	%1, %0, %5	\n"
> +		"	slliw	%2, %0, %5	\n"
> +		"	or	%1, %2, %1	\n"
> +		"	li	%2, 0		\n"
> +		"	sub	%1, %1, %0	\n"
> +		"	bnez	%1, 1f		\n"
> +		"	addw	%0, %0, %4	\n"
> +		"	sc.w	%2, %0, %3	\n"
> +		"1:				\n"
> +		: "=&r" (tmp), "=&r" (contended), "=&r" (res),
> +		  "+A" (lock->lock)
> +		: "r" (1 << TICKET_NEXT), "I" (TICKET_NEXT)
>  		: "memory");
> +	} while (res);
>
> -	return !busy;
> -}
> -
> -static inline void arch_spin_lock(arch_spinlock_t *lock)
> -{
> -	while (1) {
> -		if (arch_spin_is_locked(lock))
> -			continue;
> -
> -		if (arch_spin_trylock(lock))
> -			break;
> +	if (!contended) {
> +		__atomic_acquire_fence();
> +		return 1;
> +	} else {
> +		return 0;
>  	}
>  }
>
> -/***********************************************************/
> -
> -static inline void arch_read_lock(arch_rwlock_t *lock)
> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
> -	int tmp;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bltz	%1, 1b\n"
> -		"	addi	%1, %1, 1\n"
> -		"	sc.w	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		RISCV_ACQUIRE_BARRIER
> -		: "+A" (lock->lock), "=&r" (tmp)
> -		:: "memory");
> +	smp_store_release(&lock->tickets.owner, lock->tickets.owner + 1);
> +	/* FIXME - we need ipi/sev here to notify above */
>  }
>
> -static inline void arch_write_lock(arch_rwlock_t *lock)
> +static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>  {
> -	int tmp;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		"	li	%1, -1\n"
> -		"	sc.w	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		RISCV_ACQUIRE_BARRIER
> -		: "+A" (lock->lock), "=&r" (tmp)
> -		:: "memory");
> +	return lock.tickets.owner == lock.tickets.next;
>  }
>
> -static inline int arch_read_trylock(arch_rwlock_t *lock)
> +static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
> -	int busy;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bltz	%1, 1f\n"
> -		"	addi	%1, %1, 1\n"
> -		"	sc.w	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		RISCV_ACQUIRE_BARRIER
> -		"1:\n"
> -		: "+A" (lock->lock), "=&r" (busy)
> -		:: "memory");
> -
> -	return !busy;
> +	return !arch_spin_value_unlocked(READ_ONCE(*lock));
>  }
>
> -static inline int arch_write_trylock(arch_rwlock_t *lock)
> +static inline int arch_spin_is_contended(arch_spinlock_t *lock)
>  {
> -	int busy;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bnez	%1, 1f\n"
> -		"	li	%1, -1\n"
> -		"	sc.w	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		RISCV_ACQUIRE_BARRIER
> -		"1:\n"
> -		: "+A" (lock->lock), "=&r" (busy)
> -		:: "memory");
> +	struct __raw_tickets tickets = READ_ONCE(lock->tickets);
>
> -	return !busy;
> +	return (tickets.next - tickets.owner) > 1;
>  }
> +#define arch_spin_is_contended	arch_spin_is_contended
>
> -static inline void arch_read_unlock(arch_rwlock_t *lock)
> -{
> -	__asm__ __volatile__(
> -		RISCV_RELEASE_BARRIER
> -		"	amoadd.w x0, %1, %0\n"
> -		: "+A" (lock->lock)
> -		: "r" (-1)
> -		: "memory");
> -}
> -
> -static inline void arch_write_unlock(arch_rwlock_t *lock)
> -{
> -	smp_store_release(&lock->lock, 0);
> -}
> +#include <asm/qrwlock.h>
>
>  #endif /* _ASM_RISCV_SPINLOCK_H */
> diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
> index f398e76..d7b38bf 100644
> --- a/arch/riscv/include/asm/spinlock_types.h
> +++ b/arch/riscv/include/asm/spinlock_types.h
> @@ -10,16 +10,21 @@
>  # error "please don't include this file directly"
>  #endif
>
> +#define TICKET_NEXT	16
> +
>  typedef struct {
> -	volatile unsigned int lock;
> +	union {
> +		u32 lock;
> +		struct __raw_tickets {
> +			/* little endian */
> +			u16 owner;
> +			u16 next;
> +		} tickets;
> +	};
>  } arch_spinlock_t;
>
> -#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
> -
> -typedef struct {
> -	volatile unsigned int lock;
> -} arch_rwlock_t;
> +#define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
>
> -#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
> +#include <asm/qrwlock_types.h>
>
>  #endif /* _ASM_RISCV_SPINLOCK_TYPES_H */
> --
> 2.7.4
>
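As background on the qspinlock alternative raised above: queued spinlocks
descend from the MCS lock, in which each waiter spins on its own queue node,
so an unlock hands off by touching a single remote cache line instead of
having every waiter hammer the shared lock word. A hypothetical stand-alone
sketch in C11 atomics (names invented here; the kernel's qspinlock is
considerably more involved, packing the queue tail into a 32-bit lock word):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical MCS queue lock: 'tail' points at the last waiter. */
struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;	/* true while this waiter must spin */
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;
};

static void mcs_acquire(struct mcs_lock *l, struct mcs_node *me)
{
	atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&me->locked, true, memory_order_relaxed);

	/* Append ourselves to the queue tail. */
	struct mcs_node *prev =
		atomic_exchange_explicit(&l->tail, me, memory_order_acq_rel);
	if (!prev)
		return;		/* queue was empty: lock acquired */

	/* Link behind our predecessor, then spin on OUR node only. */
	atomic_store_explicit(&prev->next, me, memory_order_release);
	while (atomic_load_explicit(&me->locked, memory_order_acquire))
		;
}

static void mcs_release(struct mcs_lock *l, struct mcs_node *me)
{
	struct mcs_node *succ =
		atomic_load_explicit(&me->next, memory_order_acquire);

	if (!succ) {
		struct mcs_node *expected = me;
		/* No visible successor: try to mark the queue empty. */
		if (atomic_compare_exchange_strong_explicit(&l->tail,
				&expected, NULL,
				memory_order_acq_rel, memory_order_relaxed))
			return;
		/* A successor is mid-enqueue; wait for its link. */
		while (!(succ = atomic_load_explicit(&me->next,
						     memory_order_acquire)))
			;
	}
	/* Hand the lock to exactly one waiter. */
	atomic_store_explicit(&succ->locked, false, memory_order_release);
}
```

The per-waiter spin location is what makes the queued approach attractive on
large machines, and is the scalability argument behind the NACK.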