Date: Thu, 11 Mar 2021 12:43:22 +0200
From: Mike Rapoport
To: Will Deacon
Cc: Anshuman Khandual, linux-mm@kvack.org, Russell King,
	Catalin Marinas, Andrew Morton, David Hildenbrand,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC] mm: Enable generic pfn_valid() to handle early sections with memmap holes
References: <1615174073-10520-1-git-send-email-anshuman.khandual@arm.com>
 <003d8a4b-9687-3e9a-c27b-908db280b44c@arm.com>
 <20210311093302.GA30603@willie-the-truck>
In-Reply-To: <20210311093302.GA30603@willie-the-truck>

On Thu, Mar 11, 2021 at 09:33:02AM +0000, Will Deacon wrote:
> On Thu, Mar 11, 2021 at 01:22:53PM +0530, Anshuman Khandual wrote:
> > On 3/8/21 2:25 PM, Mike Rapoport wrote:
> > > On Mon, Mar 08, 2021 at 08:57:53AM +0530, Anshuman Khandual wrote:
> > >> Platforms like arm and arm64 have redefined pfn_valid() because their
> > >> early memory sections might have contained memmap holes caused by
> > >> memblock areas tagged with MEMBLOCK_NOMAP, which should be skipped
> > >> while validating a pfn for struct page backing. This scenario could be
> > >> captured with a new option CONFIG_HAVE_EARLY_SECTION_MEMMAP_HOLES and
> > >> then generic pfn_valid() can be improved to accommodate such platforms.
> > >> This reduces overall code footprint and also improves maintainability.
> > >
> > > I wonder whether arm64 would still need to free parts of its memmap after
> >
> > free_unused_memmap() is applicable when CONFIG_SPARSEMEM_VMEMMAP is not
> > enabled. I am not sure whether there still might be some platforms or
> > boards which would benefit from this. Hence let's just keep this
> > unchanged for now.
>
> In my opinion, unless there's a compelling reason for us to offer all of
> these different implementations of the memmap on arm64 then we shouldn't
> bother -- it's not like it's fun to maintain! Just use sparsemem vmemmap
> and be done with it. Is there some reason we can't do that?

Regardless of the decision whether to stop supporting other memory models,
I think it is long overdue for arm64 to stop using pfn_valid() for anything
except checking whether there is a valid struct page for a pfn. Something
like the completely untested patch below:

>From 3a753a56c2d87711f937ba09e4e14e4ad4926c38 Mon Sep 17 00:00:00 2001
From: Mike Rapoport
Date: Thu, 11 Mar 2021 12:28:29 +0200
Subject: [PATCH] arm64: decouple check whether pfn is normal memory from
 pfn_valid()

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question and nothing else.

Yet, on arm64 it is used to distinguish memory areas that are mapped in
the linear map vs those that require ioremap() to access them.

Introduce a dedicated pfn_is_memory() to perform such a check and use it
where appropriate.
Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h | 2 +-
 arch/arm64/include/asm/page.h   | 1 +
 arch/arm64/kvm/mmu.c            | 2 +-
 arch/arm64/mm/init.c            | 6 ++++++
 arch/arm64/mm/ioremap.c         | 4 ++--
 arch/arm64/mm/mmu.c             | 2 +-
 6 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c759faf7a1ff..778dbfe95d0e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -346,7 +346,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_memory(virt_to_pfn(__addr));	\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..32b485bcc6ff 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
 typedef struct page *pgtable_t;
 
 extern int pfn_valid(unsigned long);
+extern int pfn_is_memory(unsigned long);
 
 #include <asm/memory.h>
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 77cb2d28f2a4..a60069604361 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_memory(pfn);
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0ace5e68efba..77c08853bafc 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -235,6 +235,12 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);
 
+int pfn_is_memory(unsigned long pfn)
+{
+	return memblock_is_map_memory(PFN_PHYS(pfn));
+}
+EXPORT_SYMBOL(pfn_is_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..82a369b22ef5 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_memory(__phys_to_pfn(phys_addr))))
 		return NULL;
 
 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);
 
 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3802cfbdd20d..ee66f2f21b6f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
-- 
2.28.0

-- 
Sincerely yours,
Mike.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel