Date: Mon, 19 Oct 2015 09:58:23 +0200
From: Ingo Molnar
To: ling.ma.program@gmail.com
Cc: peterz@infradead.org, mingo@redhat.com, linux-kernel@vger.kernel.org, Ma Ling
Subject: Re: [RFC PATCH] qspinlock: Improve performance by reducing load instruction rollback
Message-ID: <20151019075823.GB22488@gmail.com>
References: <1445221642-15319-1-git-send-email-ling.ma.program@gmail.com>
In-Reply-To: <1445221642-15319-1-git-send-email-ling.ma.program@gmail.com>

* ling.ma.program@gmail.com wrote:

> From: Ma Ling
>
> All load instructions can run speculatively, but they have to follow the
> memory ordering rule across multiple cores shown below:
>
>     _x = _y = 0
>
>     Processor 0                  Processor 1
>
>     mov r1, [_y]  //M1           mov [_x], 1  //M3
>     mov r2, [_x]  //M2           mov [_y], 1  //M4
>
>     If r1 = 1, then r2 must be 1.
>
> To guarantee this rule, Processor 0 may execute the M1 and M2 instructions
> out of order, but both are kept in the ROB. When the load buffer entry for
> _x in Processor 0 receives the update message from Processor 1, Processor 0
> has to roll back from the M2 instruction, which flushes the whole pipeline;
> that latency exceeds the penalty of a branch prediction miss.
>
> In this patch we use the lock cmpxchg instruction to force the load
> instructions to be serialized: the destination operand receives a write
> cycle regardless of the result of the comparison, which helps us reduce
> the penalty of the load instruction rollback.
>
> Our experiment indicates that performance can be improved by 10%~15% for
> the 2- and 3-thread cases, where contention on the lock cache line accounts
> for most of the time.

So it would be nice to create a new user-space spinlock testing facility, via a
new 'perf bench spinlock' feature or so.

That way others can test and validate your results on different hardware as well.

Thanks,

	Ingo
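
The wait-loop change the patch description refers to can be sketched in
user-space C. This is only an illustration of a plain-load wait versus a
cmpxchg-based wait, not the kernel patch itself: the test-and-set demo_lock
and every name below are assumptions made for this sketch, and the mapping
of atomic_compare_exchange_* onto lock cmpxchg is x86-specific.

/*
 * Minimal user-space sketch of the two wait-loop styles discussed
 * above (illustrative only, not the kernel patch): the conventional
 * loop probes the lock word with plain loads, while the second loop
 * probes it with a compare-and-exchange, which on x86 compiles to a
 * locked read-modify-write that writes the cache line even when the
 * comparison fails.
 */
#include <stdatomic.h>

struct demo_lock {
	atomic_int val;			/* 0 = unlocked, 1 = locked */
};

/* Conventional wait: spin with plain loads until the lock looks free,
 * then try to grab it. */
static void lock_wait_with_loads(struct demo_lock *l)
{
	int expected;

	for (;;) {
		while (atomic_load_explicit(&l->val, memory_order_relaxed))
			;		/* read-only probe of the lock word */
		expected = 0;
		if (atomic_compare_exchange_weak_explicit(&l->val, &expected, 1,
							  memory_order_acquire,
							  memory_order_relaxed))
			return;
	}
}

/* Wait loop in the spirit of the patch description: every probe is a
 * compare-and-exchange, so the lock word is always accessed with a
 * write cycle, whether or not the comparison succeeds. */
static void lock_wait_with_cmpxchg(struct demo_lock *l)
{
	int expected;

	for (;;) {
		expected = 0;
		if (atomic_compare_exchange_weak_explicit(&l->val, &expected, 1,
							  memory_order_acquire,
							  memory_order_relaxed))
			return;
	}
}

static void demo_unlock(struct demo_lock *l)
{
	atomic_store_explicit(&l->val, 0, memory_order_release);
}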
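
A user-space spinlock testing facility along the lines suggested above can
be quite small. Below is a minimal sketch of such a stress test, assuming
pthreads and the demo_lock helpers from the previous sketch; it is not an
existing 'perf bench' subcommand, and NTHREADS/ITERS are arbitrary
illustrative values.

/*
 * Minimal user-space spinlock stress test (a sketch, not an existing
 * 'perf bench' subcommand).  It reuses demo_lock and the wait/unlock
 * helpers from the sketch above.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS	3		/* matches the 2/3-thread cases above */
#define ITERS		1000000L

static struct demo_lock test_lock;
static unsigned long counter;

static void *worker(void *arg)
{
	long i;

	(void)arg;
	for (i = 0; i < ITERS; i++) {
		lock_wait_with_cmpxchg(&test_lock);	/* or lock_wait_with_loads() */
		counter++;				/* trivial critical section */
		demo_unlock(&test_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];
	struct timespec start, end;
	double secs;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%d threads, %lu acquisitions, %.3f seconds\n",
	       NTHREADS, counter, secs);
	return 0;
}

Compiling the two sketches together and switching the worker between the two
wait variants would give a rough per-acquisition cost comparison on a given
machine, which is the kind of result a 'perf bench spinlock' style tool could
let others reproduce on different hardware.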