From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755383AbdDMSMY (ORCPT); Thu, 13 Apr 2017 14:12:24 -0400
Received: from bombadil.infradead.org ([65.50.211.133]:58584 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755282AbdDMSMU (ORCPT);
	Thu, 13 Apr 2017 14:12:20 -0400
Date: Thu, 13 Apr 2017 20:12:12 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Yury Norov <ynorov@caviumnetworks.com>
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Ingo Molnar, Arnd Bergmann,
	Catalin Marinas, Will Deacon, Jan Glauber
Subject: Re: [PATCH 3/3] arm64/locking: qspinlocks and qrwlocks support
Message-ID: <20170413181212.y3ezah76qoztxhnn@hirez.programming.kicks-ass.net>
References: <1491860104-4103-1-git-send-email-ynorov@caviumnetworks.com>
 <1491860104-4103-4-git-send-email-ynorov@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1491860104-4103-4-git-send-email-ynorov@caviumnetworks.com>
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 11, 2017 at 01:35:04AM +0400, Yury Norov wrote:
> +++ b/arch/arm64/include/asm/qspinlock.h
> @@ -0,0 +1,20 @@
> +#ifndef _ASM_ARM64_QSPINLOCK_H
> +#define _ASM_ARM64_QSPINLOCK_H
> +
> +#include <asm-generic/qspinlock_types.h>
> +
> +#define queued_spin_unlock queued_spin_unlock
> +/**
> + * queued_spin_unlock - release a queued spinlock
> + * @lock : Pointer to queued spinlock structure
> + *
> + * A smp_store_release() on the least-significant byte.
> + */
> +static inline void queued_spin_unlock(struct qspinlock *lock)
> +{
> +	smp_store_release((u8 *)lock, 0);
> +}

I'm afraid this isn't enough for arm64. I suspect you want your own
variant of queued_spin_unlock_wait() and queued_spin_is_locked() as
well.

Much memory ordering fun to be had there.
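
For reference, a minimal sketch of what such arm64-aware variants might
look like, assuming override hooks analogous to the existing
queued_spin_unlock one and borrowing the full-barrier-before-load pattern
used elsewhere in the generic qspinlock code. This only illustrates the
ordering issue Peter points at; it is not code from the patch under
review or from mainline:

/*
 * Illustrative sketch only. A plain (relaxed) load of the lock word is
 * not ordered against the caller's earlier accesses, so a CPU could
 * observe the lock as free while another CPU still holds it or is
 * queued on it. A full barrier before sampling the word closes that
 * window.
 */
#define queued_spin_unlock_wait queued_spin_unlock_wait
static inline void queued_spin_unlock_wait(struct qspinlock *lock)
{
	/* Order the caller's prior accesses against the loads below. */
	smp_mb();

	/* Wait for the current owner, if any, to drop the lock. */
	while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
		cpu_relax();

	/* Pairs with the smp_store_release() in queued_spin_unlock(). */
	smp_acquire__after_ctrl_dep();
}

#define queued_spin_is_locked queued_spin_is_locked
static inline int queued_spin_is_locked(struct qspinlock *lock)
{
	/* Same ordering concern as in queued_spin_unlock_wait() above. */
	smp_mb();
	return atomic_read(&lock->val);
}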