Date: Mon, 22 Jun 2015 17:29:59 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: Dave Hansen, Andi Kleen, dave.hansen@linux.intel.com,
	akpm@linux-foundation.org, jack@suse.cz, viro@zeniv.linux.org.uk,
	eparis@redhat.com, john@johnmccutchan.com, rlove@rlove.org,
	tim.c.chen@linux.intel.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH] fs: optimize inotify/fsnotify code for unwatched files
Message-ID: <20150623002959.GE3892@linux.vnet.ibm.com>

On Mon, Jun 22, 2015 at 08:52:29PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 22, 2015 at 08:11:21AM -0700, Paul E. McKenney wrote:
> > That depends on how slow the resulting slow global state would be.
> > We have some use cases (definitely KVM, perhaps also some of the VFS
> > code) that need the current speed, as opposed to the profound slowness
> > that three trips through synchronize_sched() would provide.
>
> So what we have with that percpu-rwsem code that I sent out earlier
> today is a conditional smp_mb(), and I think we can do the same for
> SRCU.
>
> I'm just not sure !GP is common enough for all SRCU cases to be worth
> doing.

Especially given that we don't want the readers to have to acquire a
lock in order to get a consistent view of whether or not a grace period
is in progress.

> Those that rely on sync_srcu() and who do it rarely would definitely
> benefit.  The same with those that rarely do call_srcu().
>
> But those that heavily use call_srcu() would be better off with the
> prolonged GP with 3 sync_sched() calls in.

Those are indeed two likely possibilities.  Other possibilities include
cases where synchronize_srcu() is invoked rarely, but where its latency
is visible to userspace, and those where there really is a need to wait
synchronously for a grace period, so that call_srcu() doesn't buy you
anything.

							Thanx, Paul
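
To make the "conditional smp_mb()" idea above concrete, here is a rough
sketch in kernel style.  Readers pay the full memory barrier only while
an updater has announced a grace period; the rest of the time they take
a barrier-free fast path, and the updater compensates with the heavier
synchronize_sched() machinery mentioned in the thread.  Every name here
(sketch_read_lock(), gp_in_progress, and so on) is invented for
illustration; this is not the actual percpu-rwsem or SRCU code, just
the shape of the fast-path/slow-path split under discussion.

	#include <linux/percpu.h>
	#include <linux/rcupdate.h>	/* synchronize_sched() */

	static bool gp_in_progress;	/* written only by the updater */
	static DEFINE_PER_CPU(unsigned long, sketch_reader_count);

	static inline void sketch_read_lock(void)
	{
		this_cpu_inc(sketch_reader_count);	/* mark reader active */
		if (READ_ONCE(gp_in_progress))
			smp_mb();	/* slow path: order count vs. reads */
		/*
		 * Fast path: no barrier.  The updater's synchronize_sched()
		 * after setting gp_in_progress guarantees that any reader
		 * that did not see the flag already has its this_cpu_inc()
		 * visible to the updater.
		 */
	}

	static inline void sketch_read_unlock(void)
	{
		if (READ_ONCE(gp_in_progress))
			smp_mb();	/* order reads vs. count decrement */
		this_cpu_dec(sketch_reader_count);
	}

	static void sketch_synchronize(void)
	{
		WRITE_ONCE(gp_in_progress, true);
		synchronize_sched();	/* push the flag out to all CPUs */
		/* ... wait for the per-CPU reader counts to drain ... */
		WRITE_ONCE(gp_in_progress, false);
		synchronize_sched();	/* readers may resume the fast path */
	}

Note that the readers never acquire a lock: they only load the flag,
which is exactly the property the message above insists on.  The cost
moves to the updater, which is why a workload that starts grace periods
constantly sees readers on the smp_mb() path most of the time, i.e. the
"!GP is common enough" question Peter raises.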
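For the use-case distinction in the closing paragraph, the two SRCU
grace-period APIs differ only in whether the caller waits.  A minimal
sketch using the real synchronize_srcu()/call_srcu() interfaces; the
foo structure and remove_from_structure() helper are made up for the
example:

	#include <linux/slab.h>
	#include <linux/srcu.h>

	struct foo {
		struct rcu_head rh;
		/* ... payload ... */
	};

	static struct srcu_struct foo_srcu;	/* init_srcu_struct() at setup */

	static void remove_from_structure(struct foo *p);  /* hypothetical */

	/* Synchronous style: caller blocks for a full SRCU grace period. */
	static void foo_del_sync(struct foo *p)
	{
		remove_from_structure(p);	/* readers may still hold p */
		synchronize_srcu(&foo_srcu);	/* wait out pre-existing readers */
		kfree(p);			/* now no reader can see p */
	}

	/* Asynchronous style: caller never blocks; reclaim runs later. */
	static void foo_reclaim(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct foo, rh));
	}

	static void foo_del_async(struct foo *p)
	{
		remove_from_structure(p);
		call_srcu(&foo_srcu, &p->rh, foo_reclaim);
	}

Readers bracket their accesses with idx = srcu_read_lock(&foo_srcu) and
srcu_read_unlock(&foo_srcu, idx) in both cases.  The only difference is
whether the updater waits, which is why call_srcu() buys nothing when
the caller must not return until all readers are done, and why the
synchronous path's latency can be visible to userspace when it sits on
a syscall boundary.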