From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chris Mason
Subject: Re: [PATCH -v7][RFC]: mutex: implement adaptive spinning
Date: Thu, 08 Jan 2009 14:02:30 -0500
Message-ID: <1231441350.14304.48.camel@think.oraclecorp.com>
References: <1231347442.11687.344.camel@twins>
	 <1231365115.11687.361.camel@twins>
	 <1231366716.11687.377.camel@twins>
	 <1231408718.11687.400.camel@twins>
	 <20090108141808.GC11629@elte.hu>
	 <1231426014.11687.456.camel@twins>
	 <1231434515.14304.27.camel@think.oraclecorp.com>
	 <1231439234.14304.33.camel@think.oraclecorp.com>
Mime-Version: 1.0
Content-Type: text/plain
Cc: Steven Rostedt, Peter Zijlstra, Ingo Molnar, paulmck@linux.vnet.ibm.com,
	Gregory Haskins, Matthew Wilcox, Andi Kleen, Andrew Morton,
	Linux Kernel Mailing List, linux-fsdevel, linux-btrfs,
	Thomas Gleixner, Nick Piggin, Peter Morreale, Sven Dietrich
To: Linus Torvalds
Return-path:
In-Reply-To: <1231439234.14304.33.camel@think.oraclecorp.com>
List-ID:

On Thu, 2009-01-08 at 13:27 -0500, Chris Mason wrote:
> On Thu, 2009-01-08 at 10:16 -0800, Linus Torvalds wrote:
> >
> > On Thu, 8 Jan 2009, Steven Rostedt wrote:
> > >
> > > Ouch! I think you are on to something:
> >
> > Yeah, there's something there, but looking at Chris' backtrace, there's
> > nothing there to disable preemption. So if it was this simple case, it
> > should still have preempted him to let the other process run and finish
> > up.
> >
>
> My .config has no lockdep or schedule debugging and voluntary preempt.
> I do have CONFIG_INLINE_OPTIMIZE on; it's a good name for trusting gcc,
> I guess.

The patch below isn't quite what Linus suggested, but it is working
here at least. In every test I've tried so far, this is faster than the
ugly btrfs spin.

dbench v7.1:                            789 MB/s
dbench simple spin:                     566 MB/s

50 proc parallel creates v7.1:          162 files/s, avg sys: 1.6
50 proc parallel creates simple spin:   152 files/s, avg sys: 2

50 proc parallel stat v7.1:             2.3s total
50 proc parallel stat simple spin:      3.8s total

It is less fair though; the 50 proc parallel creates had a much bigger
span between the first and last proc's exit time. This isn't a huge
shock; I think it shows the hot path is closer to a real spin lock.

Here's the incremental I was using. It looks to me like most of the
things that could change inside spin_on_owner mean we still want to
spin. The only exception is the need_resched() flag.

-chris

diff --git a/kernel/mutex.c b/kernel/mutex.c
index bd6342a..8936410 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -161,11 +161,13 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 			return 0;
 		}

-		if (old_val < 0 && !list_empty(&lock->wait_list))
+		if (!list_empty(&lock->wait_list))
 			break;

 		/* See who owns it, and spin on him if anybody */
 		owner = ACCESS_ONCE(lock->owner);
+		if (need_resched())
+			break;
 		if (owner && !spin_on_owner(lock, owner))
 			break;
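
To make the spin-on-owner idea concrete outside of mutex.c, here is a rough
userspace sketch of the same approach. It is only an illustration, not the
kernel code: struct adaptive_lock and its owner_running flag are invented
for the example, and in the kernel the "is the owner still on a CPU" answer
comes from the scheduler (and the spin also bails out on need_resched())
rather than from a flag the lock holder maintains itself.

/*
 * Toy adaptive lock: a waiter spins only while the current holder is
 * believed to be running on a CPU, and falls back to yielding (standing
 * in for a real sleep/wakeup path) once the holder has blocked.
 */
#include <sched.h>        /* sched_yield() */
#include <stdatomic.h>
#include <stdbool.h>

struct adaptive_lock {
	atomic_int  locked;         /* 0 = free, 1 = held */
	atomic_bool owner_running;  /* holder is on a CPU (self-maintained here,
				     * provided by the scheduler in the kernel) */
};

bool adaptive_lock_trylock(struct adaptive_lock *lk)
{
	int expected = 0;

	if (atomic_compare_exchange_strong(&lk->locked, &expected, 1)) {
		atomic_store(&lk->owner_running, true);
		return true;
	}
	return false;
}

void adaptive_lock_acquire(struct adaptive_lock *lk)
{
	for (;;) {
		if (adaptive_lock_trylock(lk))
			return;

		/*
		 * Optimistic path: spin while the holder still looks like it
		 * is making progress.  A real version would also stop here
		 * when need_resched() is set, and would use a cpu_relax()
		 * style pause inside the loop.
		 */
		while (atomic_load(&lk->locked) &&
		       atomic_load(&lk->owner_running))
			;  /* busy-wait, holder should release soon */

		/* Holder went to sleep: stop spinning and give up the CPU. */
		if (atomic_load(&lk->locked))
			sched_yield();
	}
}

void adaptive_lock_release(struct adaptive_lock *lk)
{
	atomic_store(&lk->owner_running, false);
	atomic_store(&lk->locked, 0);
}

int main(void)
{
	struct adaptive_lock lk = { .locked = 0, .owner_running = false };

	adaptive_lock_acquire(&lk);
	/* ... critical section ... */
	adaptive_lock_release(&lk);
	return 0;
}

The point the numbers above are getting at is visible in the two branches of
adaptive_lock_acquire(): the busy-wait keeps the contended hot path close to
a plain spin lock, while the yield/sleep fallback keeps waiters from burning
CPU once the holder has blocked.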