From: Waiman Long
Subject: Re: [PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
Date: Thu, 09 Apr 2015 16:36:51 -0400
Message-ID: <5526E2E3.7030503@hp.com>
In-Reply-To: <20150409182314.GU24151@twins.programming.kicks-ass.net>
References: <1428375350-9213-1-git-send-email-Waiman.Long@hp.com>
 <1428375350-9213-10-git-send-email-Waiman.Long@hp.com>
 <20150409181327.GY5029@twins.programming.kicks-ass.net>
 <20150409182314.GU24151@twins.programming.kicks-ass.net>
To: Peter Zijlstra
Cc: linux-arch@vger.kernel.org, Rik van Riel, Raghavendra K T,
 Oleg Nesterov, kvm@vger.kernel.org, Daniel J Blueman, x86@kernel.org,
 Paolo Bonzini, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, Scott J Norton, Ingo Molnar,
 David Vrabel, "H. Peter Anvin", xen-devel@lists.xenproject.org,
 Thomas Gleixner, "Paul E. McKenney", Linus Torvalds, Boris Ostrovsky,
 Douglas Hatch

On 04/09/2015 02:23 PM, Peter Zijlstra wrote:
> On Thu, Apr 09, 2015 at 08:13:27PM +0200, Peter Zijlstra wrote:
>> On Mon, Apr 06, 2015 at 10:55:44PM -0400, Waiman Long wrote:
>>> +#define PV_HB_PER_LINE	(SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
>>> +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
>>> +{
>>> +	unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
>>> +	struct pv_hash_bucket *hb, *end;
>>> +
>>> +	if (!hash)
>>> +		hash = 1;
>>> +
>>> +	init_hash = hash;
>>> +	hb = &pv_lock_hash[hash_align(hash)];
>>> +	for (;;) {
>>> +		for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
>>> +			if (!cmpxchg(&hb->lock, NULL, lock)) {
>>> +				WRITE_ONCE(hb->node, node);
>>> +				/*
>>> +				 * We haven't set the _Q_SLOW_VAL yet. So
>>> +				 * the order of writing doesn't matter.
>>> +				 */
>>> +				smp_wmb(); /* matches rmb from pv_hash_find */
>>> +				goto done;
>>> +			}
>>> +		}
>>> +
>>> +		hash = lfsr(hash, pv_lock_hash_bits, 0);
>>
>> Since pv_lock_hash_bits is a variable, you end up running through that
>> massive if() forest to find the corresponding tap every single time. It
>> cannot compile-time optimize it.
>>
>> Hence:
>>
>> 	hash = lfsr(hash, pv_taps);
>>
>> (I don't get the bits argument to the lfsr.)
>>
>> In any case, like I said before, I think we should try a linear probe
>> sequence first; the lfsr was over-engineering on my side.
>>
>>> +		hb = &pv_lock_hash[hash_align(hash)];
>
> So one thing this does -- and one of the reasons I figured I should
> ditch the LFSR instead of fixing it -- is that you end up scanning each
> bucket HB_PER_LINE times.

I was aware of that while I was trying to add the hash table debug code, but I wanted to get the code out for review first, so I haven't made that change yet.

I have just done some testing by adding debug code to check the hashing efficiency. With the kernel build workload, out of over 1M calls to pv_hash(), every one got an empty entry on the first try. Maybe the minimum hash table size of 256 helps.
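Roughly, the debug code amounts to a pair of counters like these (illustrative only -- the names below are placeholders, not the actual debug patch):

	/*
	 * Placeholder counters for measuring hashing efficiency:
	 * pv_hash_count is bumped once per pv_hash() call and
	 * pv_hash_retry once per rehash, so "first try" success
	 * shows up as pv_hash_retry staying at 0.
	 */
	static atomic_long_t pv_hash_count;
	static atomic_long_t pv_hash_retry;

	/* at the top of pv_hash() */
	atomic_long_inc(&pv_hash_count);

	/* at the bottom of the loop, just before hash = lfsr(...) */
	atomic_long_inc(&pv_hash_retry);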
> The 'fix' would be to LFSR on cachelines instead of HBs, but then you're
> stuck with the 0-th cacheline.

That should not be a big problem. I would just need a check at the end of the for loop: if the hash becomes 0, change it to some fixed non-zero value instead of calling lfsr().

As for ditching the lfsr idea, I am fine with that. So there will be 4 entries (1 cacheline) per hash value, and if all of those entries are full, we proceed to the next cacheline. Right?

Cheers,
Longman
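P.S. For concreteness, the linear probe version I have in mind would look something like the sketch below. It is only a sketch: it reuses the helper names from the patch and assumes hash_align() masks the hash down to a cacheline boundary, so adding PV_HB_PER_LINE steps to the next line. Details may change in the actual respin.

	static struct qspinlock **pv_hash(struct qspinlock *lock,
					  struct pv_node *node)
	{
		unsigned long hash = hash_ptr(lock, pv_lock_hash_bits);
		struct pv_hash_bucket *hb, *end;

		for (;;) {
			/* Scan the PV_HB_PER_LINE buckets in this cacheline. */
			hb = &pv_lock_hash[hash_align(hash)];
			for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
				if (!cmpxchg(&hb->lock, NULL, lock)) {
					WRITE_ONCE(hb->node, node);
					smp_wmb(); /* matches rmb from pv_hash_find */
					return &hb->lock;
				}
			}
			/*
			 * Cacheline full: step linearly to the next line,
			 * wrapping at the end of the table.  Unlike the
			 * LFSR, this visits every line including line 0,
			 * so no special case for hash == 0 is needed.
			 */
			hash = (hash + PV_HB_PER_LINE) &
			       ((1UL << pv_lock_hash_bits) - 1);
		}
	}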