From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751482AbdFFP7H (ORCPT); Tue, 6 Jun 2017 11:59:07 -0400
Received: from mx0b-001b2d01.pphosted.com ([148.163.158.5]:48965 "EHLO
	mx0a-001b2d01.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org
	with ESMTP id S1751388AbdFFP7F (ORCPT);
	Tue, 6 Jun 2017 11:59:05 -0400
Date: Tue, 6 Jun 2017 08:58:48 -0700
From: "Paul E. McKenney"
To: Christian Borntraeger
Cc: Heiko Carstens, Paolo Bonzini, Peter Zijlstra,
	linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com,
	kvm@vger.kernel.org, Linus Torvalds, Martin Schwidefsky, linux-s390
Subject: Re: [PATCH RFC tip/core/rcu 1/2] srcu: Allow use of Tiny/Tree SRCU
	from both process and interrupt context
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170605220919.GA27820@linux.vnet.ibm.com>
	<1496700591-30177-1-git-send-email-paulmck@linux.vnet.ibm.com>
	<20170606105343.ibhzrk6jwhmoja5t@hirez.programming.kicks-ass.net>
	<20170606152705.GD6681@osiris>
	<01328b70-38fa-384d-d75a-3d615ef3244c@de.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <01328b70-38fa-384d-d75a-3d615ef3244c@de.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Message-Id: <20170606155848.GE3721@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 06, 2017 at 05:37:05PM +0200, Christian Borntraeger wrote:
> On 06/06/2017 05:27 PM, Heiko Carstens wrote:
> > On Tue, Jun 06, 2017 at 04:45:57PM +0200, Christian Borntraeger wrote:
> >> Adding s390 folks and list
> >>
> >>>> Only s390 is TSO, arm64 is very much a weak arch.
> >>>
> >>> Right, and thus arm64 can implement a fast this_cpu_inc using LL/SC.
> >>> s390 cannot because its atomic_inc has implicit memory barriers.
> >>>
> >>> s390's this_cpu_inc is *faster* than the generic one, but still pretty slow.
> >>
> >> FWIW, we improved the performance of local_irq_save/restore some time ago
> >> with commit 204ee2c5643199a2 ("s390/irqflags: optimize irq restore") and
> >> disable/enable seem to be reasonably fast (3-5ns on my system doing both
> >> disable/enable in a loop) on today's systems. So I would assume that the
> >> generic implementation would not be that bad.
> >>
> >> At the same time, the implicit memory barrier of the atomic_inc should be
> >> even cheaper. In contrast to x86, a full smp_mb seems to be almost for
> >> free (looks like <= 1 cycle for a bcr 14,0 and no contention). So I
> >> _think_ that this should be really fast enough.
> >>
> >> As a side note, I am asking myself, though, why we do need the
> >> preempt_disable/enable for the cases where we use the opcodes
> >> like lao (atomic load and or to a memory location) and friends.
> >
> > Because you want the atomic instruction to be executed on the local cpu,
> > for which you have the per cpu pointer. If you get preempted to a
> > different cpu between the ptr__ assignment and the lan instruction, it
> > might not be executed on the local cpu. It's not really a correctness
> > issue.
> >
> > #define arch_this_cpu_to_op(pcp, val, op)				\
> > {									\
> > 	typedef typeof(pcp) pcp_op_T__;					\
> > 	pcp_op_T__ val__ = (val);					\
> > 	pcp_op_T__ old__, *ptr__;					\
> > 	preempt_disable();						\
> > 	ptr__ = raw_cpu_ptr(&(pcp));					\
> > 	asm volatile(							\
> > 		op "	%[old__],%[val__],%[ptr__]\n"			\
> > 		: [old__] "=d" (old__), [ptr__] "+Q" (*ptr__)		\
> > 		: [val__] "d" (val__)					\
> > 		: "cc");						\
> > 	preempt_enable();						\
> > }
> >
> > #define this_cpu_and_4(pcp, val) arch_this_cpu_to_op(pcp, val, "lan")
> >
> > However in reality it doesn't matter at all, since all distributions we
> > care about have preemption disabled.
> >
> > So this_cpu_inc() should just generate three instructions: two to
> > calculate the percpu pointer and an additional asi for the atomic
> > increment, with operand-specific serialization. This is supposed to be
> > a lot faster than disabling/enabling interrupts around a non-atomic
> > operation.
> >
> > But maybe I didn't get the point of this thread :)
> 
> I think on x86 a memory barrier is relatively expensive (e.g. 33 cycles
> for mfence on Haswell according to
> http://www.agner.org/optimize/instruction_tables.pdf). The thread
> started with a change to rcu, which now happens to use these percpu
> things more often, so I think Paolo's fear is that on s390 we now pay
> the price for an extra memory barrier due to that change. For the inc
> case (asi instruction) this should be really really cheap.

So what I am seeing from this is that there aren't any real performance
issues for this patch series. I will update accordingly. ;-)

							Thanx, Paul