Date: Mon, 6 Jun 2016 14:46:44 -0700
From: Linus Torvalds
To: Al Viro
Cc: Dave Hansen, "Chen, Tim C", Ingo Molnar, Davidlohr Bueso,
	"Peter Zijlstra (Intel)", Jason Low, Michel Lespinasse,
	"Paul E. McKenney", Waiman Long, LKML
Subject: Re: performance delta after VFS i_mutex=>i_rwsem conversion
In-Reply-To: <20160606211522.GF14480@ZenIV.linux.org.uk>
References: <5755D671.9070908@intel.com> <20160606211522.GF14480@ZenIV.linux.org.uk>

On Mon, Jun 6, 2016 at 2:15 PM, Al Viro wrote:
>
> FWIW, there's another fun issue on ramfs - dcache_readdir() is doing an
> obscene amount of grabbing/releasing ->d_lock

Yeah, I saw that report, and I'm not entirely surprised. There was no
real reason to worry about it before, since readdir() couldn't really
cause contention with the previous mutual exclusion at the upper level.

> I think I have a kinda-sorta solution, but it has a problem.  What I want
> to do is
>	* list_move() only once per dcache_readdir()
>	* ->d_lock taken for that and only for that.
>	* list_move() itself surrounded with write_seqcount_{begin,end} on
> some seqcount
>	* traversal to the next real entry done under rcu_read_lock in a
> seqretry loop.
>
> The only problem is where to put that seqcount (unsigned int, really).
> ->i_dir_seq is an obvious candidate, but that'll need careful profiling
> on getdents/lookup mixes...

Let's look at smaller changes first.

In particular, why the f*ck do we take "next->d_lock" at all, much
less twice? Do we really need that?

Both of them seem bogus:

 - the first one only protects the "simple_positive()" test. That
seems stupid. We hold the parent d_lock _and_ we hold the parent
inode lock for reading, how the hell is simple_positive() going to
change?

   Yeah, yeah, *maybe* we could catch a lookup that just happens to
be adding the inode field in the process, but it's not a race we even
care about. If we see the inode being non-NULL, it will now *stay*
non-NULL, and we already depend on that (the "d_inode(next)" use is
then done without the lock held).

 - the second one only protects the "list_move()" of the cursor. But
since it's the parent's child list that is being modified, the
"next->d_lock" thing ends up being irrelevant. It's the parent dentry
lock we need to hold, nothing else. Not the "next" one.

so I don't see the point of half the d_lock games we play.

And the thing about spinlock contention: having *nested* spinlocks be
contended turns contention into an exponential thing. I really
suspect that if we can just remove the nested spinlock, the
dentry->d_lock contention will go down by a huge amount, because then
you completely remove the "wait on lock while holding another lock"
thing, which is what tends to really suck.

So before you do something fancy, look at the simple parts. I think
getting rid of the nested d_lock might be really simple.
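IOW, the main loop in dcache_readdir() might end up looking something
like the below - whitespace-damaged and totally untested, so take it
as a sketch rather than a patch, with "q" being the cursor's ->d_child
and "dt_type()" the existing static helper in fs/libfs.c:

	spin_lock(&dentry->d_lock);
	if (ctx->pos == 2)
		list_move(q, &dentry->d_subdirs);

	for (p = q->next; p != &dentry->d_subdirs; p = p->next) {
		struct dentry *next = list_entry(p, struct dentry, d_child);

		/* No "next->d_lock": positivity is stable under the
		 * parent d_lock plus the parent i_rwsem held shared */
		if (!simple_positive(next))
			continue;

		spin_unlock(&dentry->d_lock);
		if (!dir_emit(ctx, next->d_name.name, next->d_name.len,
			      d_inode(next)->i_ino, dt_type(d_inode(next))))
			return 0;
		spin_lock(&dentry->d_lock);

		/* Moving the cursor only needs the parent lock */
		list_move(q, p);
		p = q;
		ctx->pos++;
	}
	spin_unlock(&dentry->d_lock);

which is the current code with the two "next->d_lock" acquisitions
simply dropped, and nothing else changed.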
Maybe we might want to do some really careful

	inode = READ_ONCE(dentry->d_inode);

because we're reading the thing without locking, and maybe there's
something else subtle going on. But that really looks fairly
low-hanging, and it might make the problem go away in practice.

               Linus