Date: Fri, 31 Jan 2014 20:45:35 +0100
From: Peter Zijlstra
To: Waiman Long
Cc: Tim Chen, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
	Arnd Bergmann, linux-arch@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Steven Rostedt, Andrew Morton,
	Michel Lespinasse, Andi Kleen, Rik van Riel, "Paul E. McKenney",
	Linus Torvalds, Raghavendra K T, George Spelvin, Daniel J Blueman,
	Alexander Fyodorov, Aswin Chandramouleeswaran, Scott J Norton,
	Thavatchai Makphaibulchoke
Subject: Re: [PATCH v3 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
Message-ID: <20140131194535.GN5002@laptop.programming.kicks-ass.net>
References: <1390933151-1797-1-git-send-email-Waiman.Long@hp.com>
	<1390933151-1797-2-git-send-email-Waiman.Long@hp.com>
	<1391108430.3138.86.camel@schen9-DESK>
	<20140130192835.GK5002@laptop.programming.kicks-ass.net>
	<52EBEAD5.3000502@hp.com>
In-Reply-To: <52EBEAD5.3000502@hp.com>

On Fri, Jan 31, 2014 at 01:26:29PM -0500, Waiman Long wrote:
> >I don't get why we need the used thing at all; something like:
> >
> >struct qna {
> >	int cnt;
> >	struct qnode nodes[4];
> >};
> >
> >DEFINE_PER_CPU(struct qna, qna);
> >
> >struct qnode *get_qnode(void)
> >{
> >	struct qna *qna = this_cpu_ptr(&qna);
> >
> >	return &qna->nodes[qna->cnt++]; /* RMW */
> >}
> >
> >void put_qnode(struct qnode *qnode)
> >{
> >	struct qna *qna = this_cpu_ptr(&qna);
> >	qna->cnt--;
> >}
> >
> >Should do fine, right?
> 
> Yes, we can do something like that. However, I think put_qnode() needs to
> use an atomic dec as well. As a result, we will need 2 additional atomic
> operations per slowpath invocation. The code may look simpler, but I don't
> think it will be faster than what I am currently doing, as the cases where
> the used flag is set will be relatively rare.

No, put_qnode() doesn't need an atomic dec, and get_qnode() doesn't need
an atomic inc either (sketch at the end of this mail).

> >If we interrupt the RMW above, the interrupted context hasn't yet used
> >the queue, and once we return it's free again, so all should be well
> >even on load-store archs.
> >
> >The nodes array might as well be 3, because NMIs should never contend
> >on a spinlock, so all we're left with is task, softirq and hardirq
> >context.
> 
> I am not so sure about NMIs not taking a spinlock. I seem to remember
> seeing code that did that. Actually, I think the NMI code is trying to
> printk() something, which, in turn, needs to acquire a spinlock.

Yeah I know, that's terribly broken; I've been waiting for it to
explode :-)
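
For completeness, here is roughly what I mean by the non-atomic variant.
This is only a sketch built on the structures above: struct qnode is the
MCS node type from the patch, the per-cpu variable and the local are
renamed (qnodes, q) purely to avoid shadowing, and it assumes the slowpath
already runs with preemption disabled, so the count can only be disturbed
by softirq/hardirq on the same cpu:

#include <linux/percpu.h>

struct qna {
	int cnt;
	struct qnode nodes[3];		/* task, softirq, hardirq */
};

static DEFINE_PER_CPU(struct qna, qnodes);

struct qnode *get_qnode(void)
{
	struct qna *q = this_cpu_ptr(&qnodes);

	/*
	 * Plain (non-atomic) per-cpu increment.  If an interrupt lands
	 * between the load and the store, the interrupted context has
	 * not used its node yet, and the interrupt runs its whole
	 * get_qnode()/put_qnode() pair before we resume, so whatever
	 * node it grabbed is free again by the time we use ours.
	 */
	return &q->nodes[q->cnt++];
}

void put_qnode(struct qnode *qnode)
{
	struct qna *q = this_cpu_ptr(&qnodes);

	/* Plain per-cpu decrement; releases are strictly LIFO per cpu. */
	q->cnt--;
}

Since every get_qnode() is paired with a put_qnode() in the same context,
the nesting of task/softirq/hardirq makes the per-cpu count behave like a
small stack, and plain loads and stores suffice.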