Subject: Re: [PATCH v5 1/5] mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64
To: Ard Biesheuvel
Cc: Russell King, Catalin Marinas, Will Deacon, Mark Rutland, Andrew Morton, Michal Hocko, Wei Yang, Kees Cook, Laura Abbott, Vladimir Murzin, Philip Derrin, AKASHI Takahiro, James Morse, Steve Capper, Pavel Tatashin, Gioh Kim, Vlastimil Babka, Mel Gorman, Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov, Daniel Jordan, Daniel Vacek, Eugeniu Rosca, linux-arm-kernel, Linux Kernel Mailing List, Linux-MM, Jia He
References: <1522636236-12625-1-git-send-email-hejianet@gmail.com> <1522636236-12625-2-git-send-email-hejianet@gmail.com> <41445229-043c-976f-3961-13770163444f@gmail.com>
From: Jia He
Message-ID: <392a8945-fe83-9864-0968-4a13a5443a02@gmail.com>
Date: Tue, 3 Apr 2018 11:07:18 +0800

On 4/2/2018 3:53 PM, Ard Biesheuvel wrote:
> On 2 April 2018 at 09:49, Jia He wrote:
>>
>> On 4/2/2018 2:55 PM, Ard Biesheuvel wrote:
>>> On 2 April 2018 at 04:30, Jia He wrote:
>>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>>> where possible") optimized the loop in memmap_init_zone(), but it caused
>>>> a possible panic. So Daniel Vacek reverted it later.
>>>>
>>>> But as suggested by Daniel Vacek, it is fine to use memblock to skip
>>>> gaps and find the next valid frame with CONFIG_HAVE_ARCH_PFN_VALID.
>>>>
>>>> On arm and arm64, memblock is used by default. But the generic version
>>>> of pfn_valid() is based on mem sections, and memblock_next_valid_pfn()
>>>> does not always return the next valid one but skips more, resulting in
>>>> some valid frames being skipped (as if they were invalid). And that's
>>>> why the kernel was eventually crashing on some !arm machines.
>>>>
>>>> And as verified by Eugeniu Rosca, arm can benefit from commit
>>>> b92df1de5d28. So keep memblock_next_valid_pfn() on arm{,64} and move
>>>> the related code to the arm64 arch directory.
>>>>
>>>> Suggested-by: Daniel Vacek
>>>> Signed-off-by: Jia He
>>> Hello Jia,
>>>
>>> Apologies for chiming in late.
>> no problem, thanks for your comments ;-)
>>>
>>> If we are going to rearchitect this, I'd rather we change the loop in
>>> memmap_init_zone() so that we skip to the next valid PFN directly
>>> rather than skipping to the last invalid PFN, so that the pfn++ in the
>> hmm... maybe this macro name makes it confusing:
>>
>> pfn = skip_to_last_invalid_pfn(pfn);
>>
>> how about skip_to_next_valid_pfn?
>>
>>> for () results in the next value. Can we replace the pfn++ there with
>>> a function call that defaults to 'return pfn + 1', but does the skip
>>> for architectures that implement it?
>> I am not sure I understand your question here.
>> With this patch, on !arm arches, skip_to_last_invalid_pfn(pfn) is simply
>> equal to pfn, and it is then increased when the for{} loop continues.
>> We only *skip* to the start pfn of the next valid region when both
>> CONFIG_HAVE_MEMBLOCK and CONFIG_HAVE_ARCH_PFN_VALID are set
>> (arm/arm64 support both).
>>
> What I am saying is that the loop in memmap_init_zone()
>
> for (pfn = start_pfn; pfn < end_pfn; pfn++) { ... }
>
> should be replaced by something like
>
> for (pfn = start_pfn; pfn < end_pfn; pfn = next_valid_pfn(pfn))
>
> where next_valid_pfn() is simply defined as
>
> static ulong next_valid_pfn(ulong pfn)
> {
>     return pfn + 1;
> }

Hi Ard,

Do you think a macro instead of a simple function would be better here?

--
Cheers,
Jia
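
A minimal sketch to make the two options being discussed concrete. The
names, the CONFIG_HAVE_ARCH_PFN_VALID guard, and the loop body below are
illustrative assumptions, not the actual patch:

    /* Option A: a static inline default, overridden per architecture. */
    #ifndef CONFIG_HAVE_ARCH_PFN_VALID
    static inline unsigned long next_valid_pfn(unsigned long pfn)
    {
            return pfn + 1;         /* generic case: no holes to skip */
    }
    #else
    /* arm/arm64 could supply this instead, e.g. backed by
     * memblock_next_valid_pfn(), to jump over an invalid range at once. */
    extern unsigned long next_valid_pfn(unsigned long pfn);
    #endif

    /* Option B: the same default as a macro, so an arch header can
     * override it with a plain #define. */
    #ifndef next_valid_pfn
    #define next_valid_pfn(pfn)     ((pfn) + 1)
    #endif

    /* With either option, the loop in memmap_init_zone() would become: */
    for (pfn = start_pfn; pfn < end_pfn; pfn = next_valid_pfn(pfn)) {
            /* ... initialise the struct page backing pfn ... */
    }

Either form keeps the generic code path unchanged on architectures
without an override; the static inline keeps type checking on pfn,
while the macro lets an arch header override it with a one-line #define.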