From: Ling Ma <ling.ma.program@gmail.com>
To: Peter Zijlstra
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, Ma Ling, Waiman Long
Subject: Re: [RFC PATCH] qspinlock: Improve performance by reducing load instruction rollback
Date: Tue, 20 Oct 2015 11:24:02 +0800
In-Reply-To: <20151019094659.GL3816@twins.programming.kicks-ass.net>
References: <1445221642-15319-1-git-send-email-ling.ma.program@gmail.com>
 <20151019094659.GL3816@twins.programming.kicks-ass.net>

2015-10-19 17:46 GMT+08:00 Peter Zijlstra:
> On Mon, Oct 19, 2015 at 10:27:22AM +0800, ling.ma.program@gmail.com wrote:
>> From: Ma Ling
>>
>> All load instructions can run speculatively, but they have to follow
>> the memory-ordering rule across cores shown below:
>>
>>     _x = _y = 0
>>
>>     Processor 0                  Processor 1
>>
>>     mov r1, [_y]  // M1          mov [_x], 1  // M3
>>     mov r2, [_x]  // M2          mov [_y], 1  // M4
>>
>>     If r1 = 1, r2 must be 1.
>>
>> To guarantee the above rule, although Processor 0 may execute M1 and
>> M2 out of order, both are kept in the ROB. When the load-buffer entry
>> for _x on Processor 0 receives the update message from Processor 1,
>> Processor 0 has to roll back from instruction M2, which flushes the
>> whole pipeline; that latency exceeds the penalty of a branch
>> prediction miss.
>>
>> In this patch we use the lock cmpxchg instruction to force load
>
> "lock cmpxchg" makes me think you're working on x86.
>
>> instructions to be serialized,
>
> smp_rmb() does that, and that's 'free' on x86, because x86 doesn't do
> read reordering.
>
>> because the destination operand
>> receives a write cycle without regard to the result of
>> the comparison, which can help us reduce the penalty
>> from load instruction rollback.
>
> And that makes me think I'm not understanding what you're getting at.
> If you need to force memory order, a "fence" (or smp_mb()) would still
> be cheaper than endlessly pulling the line into exclusive state for no
> reason, right?

Peter, we tested the lfence instruction but could hardly see any
benefit: lfence only orders load instructions, and the loads can still
be rolled back. The behavior of cmpxchg is more like a write operation,
so we chose it.

Thanks
Ling

>
>> Our experiments indicate that performance can be improved by 10%~15%
>> in the 2- and 3-thread cases, where contention on the lock cache line
>> accounts for most of the time.
>
> That just doesn't parse, what?
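
For reference, the two polling styles under discussion can be sketched
in portable C11 atomics. This is only an illustrative user-space
sketch, not the actual qspinlock patch; the names spin_wait_load() and
spin_wait_cmpxchg() are made up for this example:

    /* Hypothetical sketch (C11 atomics) of the two wait-loop styles
     * discussed in this thread; not the actual qspinlock code.
     */
    #include <stdatomic.h>

    /* Style 1: plain-load polling. Each iteration is a speculative
     * load that spins on a shared (read-only) copy of the cache line;
     * when another core writes *lock, in-flight loads may have to be
     * rolled back and the pipeline flushed -- the penalty the patch
     * description refers to.
     */
    static void spin_wait_load(atomic_int *lock)
    {
            while (atomic_load_explicit(lock, memory_order_acquire) != 0)
                    ;
    }

    /* Style 2: cmpxchg polling. On x86 the compare-exchange compiles
     * to LOCK CMPXCHG, whose destination operand receives a write
     * cycle whether or not the comparison succeeds, so every iteration
     * pulls the line into exclusive state and the load is not rolled
     * back. This is the behavior Ling relies on; the repeated
     * exclusive-state acquisition is exactly Peter's objection.
     */
    static void spin_wait_cmpxchg(atomic_int *lock)
    {
            int expected;

            do {
                    expected = 0;
            } while (!atomic_compare_exchange_strong_explicit(
                            lock, &expected, 0,
                            memory_order_acquire, memory_order_relaxed));
    }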