Date: Mon, 22 Jun 2015 09:29:49 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: Dave Hansen, Andi Kleen, dave.hansen@linux.intel.com,
	akpm@linux-foundation.org, jack@suse.cz, viro@zeniv.linux.org.uk,
	eparis@redhat.com, john@johnmccutchan.com, rlove@rlove.org,
	tim.c.chen@linux.intel.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH] fs: optimize inotify/fsnotify code for unwatched files
Message-ID: <20150622162949.GA3892@linux.vnet.ibm.com>
In-Reply-To: <20150622152013.GW3644@twins.programming.kicks-ass.net>
References: <20150619215025.4F689817@viggo.jf.intel.com>
	<20150619233306.GT25760@tassilo.jf.intel.com>
	<5584B62F.5080506@sr71.net>
	<20150620022135.GF3913@linux.vnet.ibm.com>
	<5585AAA0.1030305@sr71.net>
	<20150621013058.GH3913@linux.vnet.ibm.com>
	<20150622132821.GB12596@twins.programming.kicks-ass.net>
	<20150622151121.GK3913@linux.vnet.ibm.com>
	<20150622152013.GW3644@twins.programming.kicks-ass.net>

On Mon, Jun 22, 2015 at 05:20:13PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 22, 2015 at 08:11:21AM -0700, Paul E. McKenney wrote:
> > That depends on how slow the resulting slow global state would be.
> > We have some use cases (definitely KVM, perhaps also some of the VFS
> > code) that need the current speed, as opposed to the profound slowness
> > that three trips through synchronize_sched() would provide.
>
> But we have call_srcu() these days, not everything needs to use
> sync_srcu() anymore.  Although I've not checked recently.

I believe that the KVM guys do need synchronize_srcu(), but yes, there
probably are at least some cases where people might do well to move from
synchronize_srcu() to call_srcu().  That said, the added complexity
might or might not be worthwhile in all cases.

> > Plus we would lose the ability to have SRCU readers on idle and
> > offline CPUs.
>
> Are we actually doing that?  Offline CPUs in particular seem iffy; I
> don't think we need to (or should) worry about that.  I know it's been
> an issue with regular RCU due to tracing, but I'm not sure we should
> care about it for SRCU.

I believe that there still are some cases.  But why would offline CPUs
seem so iffy?  CPUs coming up execute code before they are fully
operational, and during that time much of the kernel views them as
offline.  Yet they do have to execute significant code in order to get
themselves set up.

							Thanx, Paul
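
For illustration, a minimal sketch of the synchronize_srcu() to
call_srcu() conversion discussed above.  The struct foo, foo_srcu, and
foo_free_*() names are invented for this sketch and are not from the
thread; DEFINE_SRCU(), synchronize_srcu(), call_srcu(), and
container_of() are the real kernel APIs.

#include <linux/slab.h>
#include <linux/srcu.h>

/* Hypothetical object whose freeing must wait for SRCU readers. */
struct foo {
	struct rcu_head rh;
	int data;
};

DEFINE_SRCU(foo_srcu);

/* Synchronous style: block until all pre-existing SRCU readers of
 * foo_srcu have completed, then free. */
static void foo_free_sync(struct foo *fp)
{
	synchronize_srcu(&foo_srcu);	/* may sleep for a long time */
	kfree(fp);
}

/* Asynchronous style: queue a callback that runs after a grace
 * period elapses; the updater itself never blocks. */
static void foo_free_cb(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct foo, rh));
}

static void foo_free_async(struct foo *fp)
{
	call_srcu(&foo_srcu, &fp->rh, foo_free_cb);
}

The price of the asynchronous form is that the object's lifetime is now
managed by the callback, and anything that must happen after the grace
period has to move into foo_free_cb(), which is exactly the sort of
added complexity that might or might not be worthwhile.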
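
And for context, a sketch of the read side that both variants above are
synchronizing against, building on the hypothetical definitions in the
previous sketch (foo_ptr is likewise invented for illustration).
Because srcu_read_lock() works by adjusting counters associated with
the srcu_struct rather than depending on RCU's idle-CPU bookkeeping,
such readers remain legal on idle CPUs and on CPUs still coming online.

/* Hypothetical global protected by foo_srcu. */
static struct foo __rcu *foo_ptr;

static int foo_read_data(void)
{
	struct foo *fp;
	int idx, val = -1;

	idx = srcu_read_lock(&foo_srcu);	/* enter read-side critical section */
	fp = srcu_dereference(foo_ptr, &foo_srcu);
	if (fp)
		val = READ_ONCE(fp->data);	/* fp cannot be freed here */
	srcu_read_unlock(&foo_srcu, idx);	/* grace period may now complete */
	return val;
}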