From: Tim Chen <tim.c.chen@linux.intel.com>
To: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Alex Shi <alex.shi@linaro.org>,
	Andi Kleen <andi@firstfloor.org>,
	Michel Lespinasse <walken@google.com>,
	Davidlohr Bueso <davidlohr.bueso@hp.com>,
	Matthew R Wilcox <matthew.r.wilcox@intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Rik van Riel <riel@redhat.com>,
	Peter Hurley <peter@hurleysoftware.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Jason Low <jason.low2@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	linux-kernel@vger.kernel.org,
	linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH v8 0/9] rwsem performance optimizations
Date: Mon, 07 Oct 2013 15:57:54 -0700
Message-ID: <1381186674.11046.105.camel@schen9-DESK>
In-Reply-To: <20131003073212.GC5775@gmail.com>

On Thu, 2013-10-03 at 09:32 +0200, Ingo Molnar wrote:
> * Tim Chen <tim.c.chen@linux.intel.com> wrote:
>
> > For version 8 of the patchset, we included the patch from Waiman to
> > streamline wakeup operations and also optimize the MCS lock used in
> > rwsem and mutex.
>
> I'd be feeling a lot easier about this patch series if you also had
> performance figures that show how mmap_sem is affected.
>
> These:
>
> > Tim got the following improvement for the exim mail server
> > workload on a 40-core system:
> >
> > Alex+Tim's patchset:        +4.8%
> > Alex+Tim+Waiman's patchset: +5.3%
>
> appear to be mostly related to the anon_vma->rwsem. But once that lock is
> changed to an rwlock_t, this measurement falls away.
> Peter Zijlstra suggested the following testcase:
>
> ===============================>
> In fact, try something like this from userspace:
>
> n-threads:
>
>	pthread_mutex_lock(&mutex);
>	foo = mmap();
>	pthread_mutex_unlock(&mutex);
>
>	/* work */
>
>	pthread_mutex_lock(&mutex);
>	munmap(foo);
>	pthread_mutex_unlock(&mutex);
>
> vs
>
> n-threads:
>
>	foo = mmap();
>	/* work */
>	munmap(foo);

Ingo,

I ran the vanilla kernel, the kernel with all rwsem patches, and the
kernel with all patches except the optimistic spin one.  I am listing
two presentations of the data.  Please note that there is about 5%
run-to-run variation.

% change in performance vs vanilla kernel
#threads	all	without optspin
mmap only
1		1.9%	1.6%
5		43.8%	2.6%
10		22.7%	-3.0%
20		-12.0%	-4.5%
40		-26.9%	-2.0%
mmap with mutex acquisition
1		-2.1%	-3.0%
5		-1.9%	1.0%
10		4.2%	12.5%
20		-4.1%	0.6%
40		-2.8%	-1.9%

The optimistic spin case does very well at low to moderate contention,
but worse under very heavy contention for the pure mmap case.  For the
case with the pthread mutex, there is not much change from the vanilla
kernel.

% change in performance of mmap with pthread-mutex vs pure mmap
#threads	vanilla	all	without optspin
1		3.0%	-1.0%	-1.7%
5		7.2%	-26.8%	5.5%
10		5.2%	-10.6%	22.1%
20		6.8%	16.4%	12.5%
40		-0.2%	32.7%	0.0%

In general, the vanilla and no-optspin cases perform better with
pthread-mutex.  For the case with optspin, mmap with pthread-mutex is
worse at low to moderate contention and better at high contention.

Tim

> I've had reports that the former was significantly faster than the
> latter.
> <===============================
>
> this could be put into a standalone testcase, or you could add it as a new
> subcommand of 'perf bench', which already has some pthread code, see for
> example in tools/perf/bench/sched-messaging.c.  Adding:
>
>	perf bench mm threads
>
> or so would be a natural thing to have.
>
> Thanks,
>
>	Ingo