From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751926AbeDFJK2 (ORCPT); Fri, 6 Apr 2018 05:10:28 -0400
Received: from pandora.armlinux.org.uk ([78.32.30.218]:47642 "EHLO
	pandora.armlinux.org.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751750AbeDFJK0 (ORCPT);
	Fri, 6 Apr 2018 05:10:26 -0400
Date: Fri, 6 Apr 2018 10:09:20 +0100
From: Russell King - ARM Linux <linux@armlinux.org.uk>
To: Matthew Wilcox
Cc: Jia He, Catalin Marinas, Will Deacon, Mark Rutland, Ard Biesheuvel,
	Andrew Morton, Michal Hocko, Wei Yang, Kees Cook, Laura Abbott,
	Vladimir Murzin, Philip Derrin, AKASHI Takahiro, James Morse,
	Steve Capper, Pavel Tatashin, Gioh Kim, Vlastimil Babka, Mel Gorman,
	Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU,
	Andrey Ryabinin, Nikolay Borisov, Daniel Jordan, Daniel Vacek,
	Eugeniu Rosca, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Jia He
Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary
	binary search in memblock_next_valid_pfn()
Message-ID: <20180406090920.GM16141@n2100.armlinux.org.uk>
References: <1522915478-5044-1-git-send-email-hejianet@gmail.com>
	<1522915478-5044-3-git-send-email-hejianet@gmail.com>
	<20180405113444.GB2647@bombadil.infradead.org>
	<1f809296-e88d-1090-0027-890782b91d6e@gmail.com>
	<20180405125054.GC2647@bombadil.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20180405125054.GC2647@bombadil.infradead.org>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 05, 2018 at 05:50:54AM -0700, Matthew Wilcox wrote:
> On Thu, Apr 05, 2018 at 08:44:12PM +0800, Jia He wrote:
> >
> > On 4/5/2018 7:34 PM, Matthew Wilcox wrote:
> > > On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
> > > > Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid
> > > > pfns where possible") optimized the loop in memmap_init_zone(). But
> > > > there is still some room for improvement. E.g. if pfn and pfn+1 are
> > > > in the same memblock region, we can simply do pfn++ instead of the
> > > > binary search in memblock_next_valid_pfn().
> > > Sure, but I bet if we are >end_pfn, we're almost certainly going to the
> > > start_pfn of the next block, so why not test that as well?
> > >
> > > > +	/* fast path, return pfn+1 if next pfn is in the same region */
> > > > +	if (early_region_idx != -1) {
> > > > +		start_pfn = PFN_DOWN(regions[early_region_idx].base);
> > > > +		end_pfn = PFN_DOWN(regions[early_region_idx].base +
> > > > +				regions[early_region_idx].size);
> > > > +
> > > > +		if (pfn >= start_pfn && pfn < end_pfn)
> > > > +			return pfn;
> > >
> > > 	early_region_idx++;
> > > 	start_pfn = PFN_DOWN(regions[early_region_idx].base);
> > > 	if (pfn >= end_pfn && pfn <= start_pfn)
> > > 		return start_pfn;
> > Thanks, thus the binary search in the next step can be discarded?
>
> I don't know all the circumstances in which this is called.  Maybe a
> linear search with memo is more appropriate than a binary search.

That's been brought up before, and the reasoning appears to be something
along the lines of...

Academic and published wisdom is that, on cached architectures, binary
searches are bad because they do not operate efficiently due to the
overhead of having to load cache lines.  Consequently, there seems to be
a knee-jerk reaction that "all binary searches are bad, we must eliminate
them."

What that reaction fails to grasp, though, is that the number of entries
in this array is typically small, so the entire array takes up one or two
cache lines, maybe a maximum of four lines depending on your cache line
length and number of entries.  This means the expense of the binary search
is reduced, and it is cheaper than a linear search in the majority of
cases.
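To make the discussion concrete, here is a hypothetical, standalone C sketch of the idea being debated: memoise the index of the last matching region, check it first, then check the start of the following region (per the suggestion above), and only fall back to a binary search over region end addresses when both fast paths miss. The region layout, page size, and function name below are illustrative only, not the kernel's actual memblock code:

```c
#include <assert.h>

#define PFN_DOWN(x) ((x) >> 12)	/* assume 4 KiB pages for this sketch */

struct region {
	unsigned long base, size;
};

/* toy memblock: two regions with a hole between them */
static const struct region regions[] = {
	{ 0x000000UL, 0x100000UL },	/* pfns   0..255 */
	{ 0x200000UL, 0x100000UL },	/* pfns 512..767 */
};
static const int nr_regions = 2;

static int early_region_idx = -1;	/* memoised region from the last call */

static unsigned long next_valid_pfn(unsigned long pfn)
{
	unsigned long start_pfn, end_pfn;
	int lo = 0, hi = nr_regions - 1;

	if (early_region_idx != -1) {
		/* fast path: pfn falls inside the memoised region */
		start_pfn = PFN_DOWN(regions[early_region_idx].base);
		end_pfn = PFN_DOWN(regions[early_region_idx].base +
				   regions[early_region_idx].size);
		if (pfn >= start_pfn && pfn < end_pfn)
			return pfn;

		/* just past the end: jump to the start of the next region */
		if (early_region_idx + 1 < nr_regions) {
			start_pfn = PFN_DOWN(regions[early_region_idx + 1].base);
			if (pfn >= end_pfn && pfn <= start_pfn) {
				early_region_idx++;
				return start_pfn;
			}
		}
	}

	/* slow path: binary search for the first region ending above pfn */
	while (lo < hi) {
		int mid = (lo + hi) / 2;

		if (pfn < PFN_DOWN(regions[mid].base + regions[mid].size))
			hi = mid;
		else
			lo = mid + 1;
	}
	early_region_idx = lo;
	start_pfn = PFN_DOWN(regions[lo].base);
	return pfn < start_pfn ? start_pfn : pfn;
}
```

On this toy layout, walking pfns in order hits the binary search once, after which the memoised index makes each subsequent lookup a couple of compares, including the skip from pfn 256 (one past the first region) to pfn 512 (the start of the second).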
What is key here as far as performance is concerned is whether the general
usage of pfn_valid() by the kernel is optimal.  We should not optimise only
for the boot case, which means evaluating the effect of these changes with
_real_ workloads, not just "does my machine boot a few milliseconds faster".

--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up