From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1161319AbbEEMau (ORCPT ); Tue, 5 May 2015 08:30:50 -0400
Received: from e37.co.us.ibm.com ([32.97.110.158]:34374 "EHLO e37.co.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1031049AbbEEMan (ORCPT ); Tue, 5 May 2015 08:30:43 -0400
Date: Tue, 5 May 2015 05:30:39 -0700
From: "Paul E. McKenney"
To: Peter Zijlstra
Cc: Rik van Riel , Paolo Bonzini , Ingo Molnar , Andy Lutomirski ,
	"linux-kernel@vger.kernel.org" , X86 ML , williams@redhat.com,
	Andrew Lutomirski , fweisbec@redhat.com, Heiko Carstens ,
	Thomas Gleixner , Ingo Molnar , Linus Torvalds
Subject: Re: question about RCU dynticks_nesting
Message-ID: <20150505123039.GM5381@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <5543C05E.9040209@redhat.com> <20150501184025.GA2114@gmail.com>
	<5543CFE5.1030509@redhat.com> <20150502052733.GA9983@gmail.com>
	<55473B47.6080600@redhat.com> <55479749.7070608@redhat.com>
	<5547C1DC.10802@redhat.com>
	<20150505104834.GI21418@twins.programming.kicks-ass.net>
	<20150505105102.GB16478@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150505105102.GB16478@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 15050512-0025-0000-0000-00000A551D45
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 05, 2015 at 12:51:02PM +0200, Peter Zijlstra wrote:
> On Tue, May 05, 2015 at 12:48:34PM +0200, Peter Zijlstra wrote:
> > On Mon, May 04, 2015 at 03:00:44PM -0400, Rik van Riel wrote:
> > > In case of the non-preemptible RCU, we could easily also
> > > increase current->rcu_read_lock_nesting at the same time
> > > we increase the preempt counter, and use that as the
> > > indicator to test whether the cpu is in an extended
> > > rcu quiescent state. That way there would be no extra
> > > overhead at syscall entry or exit at all. The trick
> > > would be getting the preempt count and the rcu read
> > > lock nesting count in the same cache line for each task.
> >
> > Can't do that. Remember, on x86 we have per-cpu preempt count, and your
> > rcu_read_lock_nesting is per task.
>
> Hmm, I suppose you could do the rcu_read_lock_nesting thing in a per-cpu
> counter too and transfer that into the task_struct on context switch.
>
> If you manage to put both sides of that in the same cache things should
> not add significant overhead.
>
> You'd have to move the rcu_read_lock_nesting into the thread_info, which
> would be painful as you'd have to go touch all archs etc..

Last I tried doing that, things got really messy at context-switch time.
Perhaps I simply didn't do the save/restore in the right place?

							Thanx, Paul