Date: Wed, 14 Mar 2018 09:48:44 +0100
From: Peter Zijlstra
To: Laurent Dufour
Cc: paulmck@linux.vnet.ibm.com, akpm@linux-foundation.org, kirill@shutemov.name,
	ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net, jack@suse.cz,
	Matthew Wilcox, benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org,
	Thomas Gleixner, Ingo Molnar, hpa@zytor.com, Will Deacon,
	Sergey Senozhatsky, Andrea Arcangeli, Alexei Starovoitov, kemi.wang@intel.com,
	sergey.senozhatsky.work@gmail.com, Daniel Jordan, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
	npiggin@gmail.com, bsingharora@gmail.com, Tim Chen,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: Re: [PATCH v9 17/24] mm: Protect mm_rb tree with a rwlock
Message-ID: <20180314084844.GP4043@hirez.programming.kicks-ass.net>
References: <1520963994-28477-1-git-send-email-ldufour@linux.vnet.ibm.com>
 <1520963994-28477-18-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1520963994-28477-18-git-send-email-ldufour@linux.vnet.ibm.com>

On Tue, Mar 13, 2018 at 06:59:47PM +0100, Laurent Dufour wrote:
> This change is inspired by Peter's proposal patch [1] which was
> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
> that particular case, and it is introducing major performance degradation
> due to excessive scheduling operations.

Do you happen to have a little more detail on that?

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 34fde7111e88..28c763ea1036 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -335,6 +335,7 @@ struct vm_area_struct {
>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
>  #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>  	seqcount_t vm_sequence;
> +	atomic_t vm_ref_count;		/* see vma_get(), vma_put() */
>  #endif
>  } __randomize_layout;
>  
> @@ -353,6 +354,9 @@ struct kioctx_table;
>  struct mm_struct {
>  	struct vm_area_struct *mmap;		/* list of VMAs */
>  	struct rb_root mm_rb;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	rwlock_t mm_rb_lock;
> +#endif
>  	u32 vmacache_seqnum;                   /* per-thread vmacache */
>  #ifdef CONFIG_MMU
>  	unsigned long (*get_unmapped_area) (struct file *filp,

When I tried this, it simply traded contention on mmap_sem for
contention on these two cachelines.

This was for the concurrent fault benchmark, where mmap_sem is only
ever acquired for reading (so no blocking ever happens) and the
bottleneck was really pure cacheline access.

Only by using RCU can you avoid that thrashing.

Also note that if your database allocates one giant mapping, it'll be
_one_ VMA and that vm_ref_count gets _very_ hot indeed.
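
To make that concrete: the fast path implied by the hunk above has to do
roughly the following on every speculative fault. The real vma_get() /
vma_put() are not in this hunk, so the sketch below fills in assumed
details (the lookup helper and the vm_area_free() call in particular);
it is not the actual implementation.

#include <linux/mm.h>
#include <linux/spinlock.h>

/* Sketch only -- details assumed, see note above. */
static struct vm_area_struct *vma_get(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma;

	read_lock(&mm->mm_rb_lock);	/* atomic RMW on the mm-wide lock line */
	vma = find_vma(mm, addr);	/* rb-tree walk under the rwlock
					 * (simplified; the real code would use
					 * a variant that doesn't assume mmap_sem) */
	if (vma && !atomic_inc_not_zero(&vma->vm_ref_count))
		vma = NULL;		/* raced with teardown */
	read_unlock(&mm->mm_rb_lock);	/* second RMW on the same lock line */

	return vma;
}

static void vma_put(struct vm_area_struct *vma)
{
	if (atomic_dec_and_test(&vma->vm_ref_count))
		vm_area_free(vma);	/* name assumed; whatever frees the VMA */
}

Both the rwlock and the refcount are atomic RMWs on cachelines shared by
every faulting CPU, which is exactly the ping-pong an RCU-based lookup
side-steps.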