From: kemi
Date: Thu, 21 Dec 2017 16:23:23 +0800
Subject: Re: [PATCH v2 3/5] mm: enlarge NUMA counters threshold size
To: Michal Hocko
Cc: Greg Kroah-Hartman, Andrew Morton, Vlastimil Babka, Mel Gorman,
 Johannes Weiner, Christopher Lameter, YASUAKI ISHIMATSU, Andrey Ryabinin,
 Nikolay Borisov, Pavel Tatashin, David Rientjes, Sebastian Andrzej Siewior,
 Dave, Andi Kleen, Tim Chen, Jesper Dangaard Brouer, Ying Huang, Aaron Lu,
 Aubrey Li, Linux MM, Linux Kernel
Message-ID: <1fb66dfd-b64c-f705-ea27-a9f2e11729a4@intel.com>
In-Reply-To: <20171221081706.GA4831@dhcp22.suse.cz>
References: <1513665566-4465-1-git-send-email-kemi.wang@intel.com>
 <1513665566-4465-4-git-send-email-kemi.wang@intel.com>
 <20171219124045.GO2787@dhcp22.suse.cz>
 <439918f7-e8a3-c007-496c-99535cbc4582@intel.com>
 <20171220101229.GJ4831@dhcp22.suse.cz>
 <268b1b6e-ff7a-8f1a-f97c-f94e14591975@intel.com>
 <20171221081706.GA4831@dhcp22.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2017-12-21 16:17, Michal Hocko wrote:
> On Thu 21-12-17 16:06:50, kemi wrote:
>>
>> On 2017-12-20 18:12, Michal Hocko wrote:
>>> On Wed 20-12-17 13:52:14, kemi wrote:
>>>>
>>>> On 2017-12-19 20:40, Michal Hocko wrote:
>>>>> On Tue 19-12-17 14:39:24, Kemi Wang wrote:
>>>>>> We have seen significant overhead in cache bouncing caused by NUMA counters
>>>>>> update in multi-threaded page allocation. See 'commit 1d90ca897cb0 ("mm:
>>>>>> update NUMA counter threshold size")' for more details.
>>>>>>
>>>>>> This patch updates NUMA counters to a fixed size of (MAX_S16 - 2) and deals
>>>>>> with global counter update using different threshold size for node page
>>>>>> stats.
>>>>>
>>>>> Again, no numbers.
>>>>
>>>> Compare to vanilla kernel, I don't think it has performance improvement, so
>>>> I didn't post performance data here.
>>>> But, if you would like to see performance gain from enlarging threshold size
>>>> for NUMA stats (compare to the first patch), I will do that later.
>>>
>>> Please do. I would also like to hear _why_ all counters cannot simply
>>> behave same. In other words why we cannot simply increase
>>> stat_threshold? Maybe calculate_normal_threshold needs a better scaling
>>> for larger machines.
>>
>> I will add this performance data to changelog in V3 patch series.
>>
>> Test machine: 2-sockets skylake platform (112 CPUs, 62G RAM)
>> Benchmark: page_bench03
>> Description: 112 threads do single page allocation/deallocation in parallel.
>>
>>                     before          after (enlarge threshold size)
>> CPU cycles          722             379 (-47.5%)
>
> Please describe the numbers some more. Is this an average?

Yes.

> What is the std?
I increased the loop count to 10M, so the std is quite small (repeated 3 times).

> Can you see any difference with a more generic workload?

I didn't see an obvious improvement for will-it-scale.page_fault1.
Two reasons for that:
1) the code path is too long
2) severe zone lock and lru lock contention (the buddy system is accessed
   frequently)

>> Some thinking about that:
>> a) the overhead due to cache bouncing caused by NUMA counter update in fast path
>> severely increase with more and more CPUs cores
>
> What is an effect on a smaller system with fewer CPUs?

Several CPU cycles can be saved even in the single-threaded case.

>> b) AFAIK, the typical usage scenario (similar at least)for which this optimization can
>> benefit is 10/40G NIC used in high-speed data center network of cloud service providers.
>
> I would expect those would disable the numa accounting altogether.

Yes, but it is still worthwhile to do some optimization, isn't it?
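
For reference, below is a minimal user-space sketch of the threshold idea being
discussed (illustration only, not the actual mm/vmstat.c code; NUMA_STATS_THRESHOLD,
worker() and the other names here are made up for this example). Each thread keeps a
private s16 delta and only folds it into the shared global counter once the delta
crosses the threshold, so with a threshold of (MAX_S16 - 2) the shared cache line is
written rarely instead of on every page allocation:

/*
 * Sketch of a per-thread delta counter folded into a shared counter
 * only when it crosses a large threshold (names are illustrative).
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NUMA_STATS_THRESHOLD	(INT16_MAX - 2)	/* cf. MAX_S16 - 2 in the patch */
#define NR_THREADS		4
#define EVENTS_PER_THREAD	1000000L

static _Atomic long global_numa_hit;

static void *worker(void *arg)
{
	int16_t delta = 0;	/* per-thread, analogous to the per-cpu diff */
	long i;

	(void)arg;
	for (i = 0; i < EVENTS_PER_THREAD; i++) {
		/* fast path: purely thread-local, no shared cache line touched */
		if (++delta > NUMA_STATS_THRESHOLD) {
			/* slow path: fold into the shared counter, rarely taken */
			atomic_fetch_add(&global_numa_hit, delta);
			delta = 0;
		}
	}
	/* flush the remainder so the final sum is exact */
	atomic_fetch_add(&global_numa_hit, delta);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	int i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);

	printf("numa_hit = %ld (expected %ld)\n",
	       atomic_load(&global_numa_hit),
	       (long)NR_THREADS * EVENTS_PER_THREAD);
	return 0;
}

It builds with "gcc -O2 -pthread" and ends up with the same final sum a plain shared
counter would give, but each thread only writes the shared counter a few dozen times
instead of a million.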