Date: Fri, 15 May 2020 16:02:10 +0800
From: Feng Tang
To: Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Matthew Wilcox, Mel Gorman, Kees Cook,
	Luis Chamberlain, Iurii Zaikin, "Kleen, Andi", "Chen, Tim C",
	"Hansen, Dave", "Huang, Ying", linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] mm: adjust vm_committed_as_batch according to vm overcommit policy
Message-ID: <20200515080210.GC69177@shbuild999.sh.intel.com>
References: <1588922717-63697-1-git-send-email-feng.tang@intel.com>
	<1588922717-63697-4-git-send-email-feng.tang@intel.com>
	<20200515074125.GH29153@dhcp22.suse.cz>
In-Reply-To: <20200515074125.GH29153@dhcp22.suse.cz>
User-Agent: Mutt/1.5.24 (2015-08-30)
Hi Michal,

Thanks for the thorough review of these 3 patches!

On Fri, May 15, 2020 at 03:41:25PM +0800, Michal Hocko wrote:
> On Fri 08-05-20 15:25:17, Feng Tang wrote:
> > When checking a performance change for will-it-scale scalability
> > mmap test [1], we found very high lock contention for spinlock of
> > percpu counter 'vm_committed_as':
> >
> >     94.14%  0.35%  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
> >     48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
> >     45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
> >
> > Actually this heavy lock contention is not always necessary. The
> > 'vm_committed_as' needs to be very precise when the strict
> > OVERCOMMIT_NEVER policy is set, which requires a rather small batch
> > number for the percpu counter.
> >
> > So lift the batch number to 16X for OVERCOMMIT_ALWAYS and
> > OVERCOMMIT_GUESS policies, and add a sysctl handler to adjust it
> > when the policy is reconfigured.
>
> Increasing the batch size for weaker overcommit modes makes sense. But
> your patch is also tuning OVERCOMMIT_NEVER without any explanation why
> that is still "small enough to be precise".

Actually, it keeps the batch algorithm for OVERCOMMIT_NEVER and only
changes the other 2 policies, which I should have made clear in the
commit log.

> > Benchmark with the same testcase in [1] shows 53% improvement on a
> > 8C/16T desktop, and 2097%(20X) on a 4S/72C/144T server. And no change
> > for some platforms, due to the test mmap size of the case is bigger
> > than the batch number computed, though the patch will help mmap/munmap
> > generally.
> >
> > [1] https://lkml.org/lkml/2020/3/5/57
>
> Please do not use lkml.org links in the changelog. Use
> http://lkml.kernel.org/r/$msg instead.

Thanks, will keep that in mind for this and future patches.

> > Signed-off-by: Feng Tang
> >  s32 vm_committed_as_batch = 32;
> >
> > -static void __meminit mm_compute_batch(void)
> > +void mm_compute_batch(void)
> >  {
> >  	u64 memsized_batch;
> >  	s32 nr = num_present_cpus();
> >  	s32 batch = max_t(s32, nr*2, 32);
> > -
> > -	/* batch size set to 0.4% of (total memory/#cpus), or max int32 */
> > -	memsized_batch = min_t(u64, (totalram_pages()/nr)/256, 0x7fffffff);
> > +	unsigned long ram_pages = totalram_pages();
> > +
> > +	/*
> > +	 * For policy of OVERCOMMIT_NEVER, set batch size to 0.4%
> > +	 * of (total memory/#cpus), and lift it to 6.25% for other
> > +	 * policies to easy the possible lock contention for percpu_counter
> > +	 * vm_committed_as, while the max limit is INT_MAX
> > +	 */
> > +	if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> > +		memsized_batch = min_t(u64, ram_pages/nr/256, INT_MAX);
> > +	else
> > +		memsized_batch = min_t(u64, ram_pages/nr/16, INT_MAX);

Also, as you mentioned there are real-world workloads with big mmap
sizes and multi-threading, can we lift it even further?

	memsized_batch = min_t(u64, ram_pages/nr/4, INT_MAX)

Thanks,
Feng
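
For context on why the batch value governs the spinlock traffic seen in
the profile above: percpu_counter_add_batch() keeps a per-CPU delta and
only takes the shared spinlock once that delta reaches the batch
threshold. A simplified sketch of that logic follows (modeled on
lib/percpu_counter.c; the exact mainline code differs slightly between
kernel versions):

/*
 * Simplified sketch of percpu_counter_add_batch(), after lib/percpu_counter.c.
 * Each CPU accumulates a private delta and only folds it into the shared
 * count, under the spinlock, once the delta reaches 'batch'. A larger batch
 * therefore means fewer _raw_spin_lock_irqsave() calls for the same amount
 * of mmap/munmap traffic, at the cost of a less precise global value.
 */
void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;

	preempt_disable();
	count = __this_cpu_read(*fbc->counters) + amount;
	if (abs(count) >= batch) {
		unsigned long flags;

		/* slow path: fold the per-CPU delta into the global count */
		raw_spin_lock_irqsave(&fbc->lock, flags);
		fbc->count += count;
		__this_cpu_sub(*fbc->counters, count - amount);
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
	} else {
		/* fast path: stay CPU-local, no lock taken */
		this_cpu_add(*fbc->counters, amount);
	}
	preempt_enable();
}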
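
The "sysctl handler" mentioned in the changelog could take roughly the
following shape: when vm.overcommit_memory is written, recompute the
batch so that switching to or from OVERCOMMIT_NEVER immediately picks
the matching divisor. This is a hypothetical sketch only; the handler
name and wiring are assumptions, not the actual patch:

/*
 * Hypothetical sketch -- names and wiring are assumed, not taken from the
 * posted patch. Idea: after vm.overcommit_memory is updated via sysctl,
 * recompute vm_committed_as_batch so the 0.4% / 6.25% choice in
 * mm_compute_batch() follows the new policy.
 */
static int overcommit_policy_handler(struct ctl_table *table, int write,
				     void *buffer, size_t *lenp, loff_t *ppos)
{
	int ret;

	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	if (ret == 0 && write)
		mm_compute_batch();	/* re-pick the batch for the new policy */

	return ret;
}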