From mboxrd@z Thu Jan 1 00:00:00 1970
From: Aaron Lu
Subject: Re: Page allocator bottleneck
Date: Mon, 23 Apr 2018 21:10:33 +0800
Message-ID: <20180423131033.GA13792@intel.com>
References: <20170915102320.zqceocmvvkyybekj@techsingularity.net>
 <1c218381-067e-7757-ccc2-4e5befd2bfc3@mellanox.com>
 <20180421081505.GA24916@intel.com>
 <127df719-b978-60b7-5d77-3c8efbf2ecff@mellanox.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Linux Kernel Network Developers, linux-mm, Mel Gorman, David Miller,
 Jesper Dangaard Brouer, Eric Dumazet, Alexei Starovoitov, Saeed Mahameed,
 Eran Ben Elisha, Andrew Morton, Michal Hocko
To: Tariq Toukan
Return-path:
Received: from mga18.intel.com ([134.134.136.126]:58347 "EHLO mga18.intel.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754842AbeDWNKL
 (ORCPT); Mon, 23 Apr 2018 09:10:11 -0400
Content-Disposition: inline
In-Reply-To: <0dea4da6-8756-22d4-c586-267217a5fa63@mellanox.com>
Sender: netdev-owner@vger.kernel.org
List-ID:

On Mon, Apr 23, 2018 at 11:54:57AM +0300, Tariq Toukan wrote:
> Hi,
>
> I ran my tests with your patches.
> Initial BW numbers are significantly higher than what I documented back
> then in this mail thread.
> For example, in driver #2 (see original mail thread), with 6 rings, I now
> get 92Gbps (slightly less than line rate), compared to 64Gbps back then.
>
> However, there have been many kernel changes since then; I need to
> isolate your changes. I am not sure I can finish this today, but I will
> surely get to it next week after I'm back from vacation.
>
> Still, when I increase the scale (more rings, i.e. more cpus), I see that
> queued_spin_lock_slowpath gets to 60%+ CPU. Still high, but lower than it
> used to be.

I wonder if the contention is on the allocation path or the free path?

Also, increasing the PCP size through vm.percpu_pagelist_fraction should
still help even with my patches applied: a higher pcp->batch avoids
touching even more cache lines on the allocation path (though pcp->batch
currently has an upper limit of 96).

>
> This should be solved at the root by the (orthogonal) changes planned in
> the network subsystem, which will change the SKB allocation/free scheme
> so that SKBs are released on the originating cpu.
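
Regarding the question above of allocation path vs. free path: perf
with call graphs should show it. Something like the below (just a
sketch; the 10s sampling window and report options may need adjusting
for your setup):

  # sample all CPUs with call graphs while the test is running
  perf record -a -g -- sleep 10
  # then inspect the call chains leading into the contended lock
  perf report --no-children

Roughly speaking, if the call chains into queued_spin_lock_slowpath
come mostly from get_page_from_freelist, the zone->lock contention is
on the allocation path; if they come mostly from free_pcppages_bulk,
it is on the free path.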
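
And in case it helps the experiments: the PCP size can be enlarged at
runtime through the sysctl, e.g. (8 is just an example value; if I
remember correctly it is also the minimum accepted one, i.e. it gives
the largest possible PCP):

  # let each per-cpu pagelist's high mark grow to 1/8 of the zone's
  # managed pages; pcp->batch is derived from that, subject to the
  # 96-page cap mentioned above
  sysctl -w vm.percpu_pagelist_fraction=8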