From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, David Hildenbrand,
	Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [RFC/RFT PATCH 2/3] arm64: decouple check whether pfn is normal memory from pfn_valid()
Date: Wed, 7 Apr 2021 20:26:06 +0300
Message-Id: <20210407172607.8812-3-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20210407172607.8812-1-rppt@kernel.org>
References: <20210407172607.8812-1-rppt@kernel.org>
From: Mike Rapoport

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question and nothing else.

Yet, on arm64 it is used to distinguish memory areas that are mapped in
the linear map vs those that require ioremap() to access them.

Introduce a dedicated pfn_is_memory() to perform such a check and use it
where appropriate.

Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h | 2 +-
 arch/arm64/include/asm/page.h   | 1 +
 arch/arm64/kvm/mmu.c            | 2 +-
 arch/arm64/mm/init.c            | 6 ++++++
 arch/arm64/mm/ioremap.c         | 4 ++--
 arch/arm64/mm/mmu.c             | 2 +-
 6 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0aabc3be9a75..7e77fdf71b9d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
-	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
+	__is_lm_address(__addr) && pfn_is_memory(virt_to_pfn(__addr));	\
 })
 
 void dump_mem_limit(void);
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..32b485bcc6ff 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
 typedef struct page *pgtable_t;
 
 extern int pfn_valid(unsigned long);
+extern int pfn_is_memory(unsigned long);
 
 #include <asm/memory.h>
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..ad2ea65a3937 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
-	return !pfn_valid(pfn);
+	return !pfn_is_memory(pfn);
 }
 
 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3685e12aba9b..258b1905ed4a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -258,6 +258,12 @@ int pfn_valid(unsigned long pfn)
 }
 EXPORT_SYMBOL(pfn_valid);
 
+int pfn_is_memory(unsigned long pfn)
+{
+	return memblock_is_map_memory(PFN_PHYS(pfn));
+}
+EXPORT_SYMBOL(pfn_is_memory);
+
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b5e83c46b23e..82a369b22ef5 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 	/*
 	 * Don't allow RAM to be mapped.
 	 */
-	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
+	if (WARN_ON(pfn_is_memory(__phys_to_pfn(phys_addr))))
 		return NULL;
 
 	area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
 void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 {
 	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_valid(__phys_to_pfn(phys_addr)))
+	if (pfn_is_memory(__phys_to_pfn(phys_addr)))
 		return (void __iomem *)__phys_to_virt(phys_addr);
 
 	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..038d20fe163f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
-	if (!pfn_valid(pfn))
+	if (!pfn_is_memory(pfn))
 		return pgprot_noncached(vma_prot);
 	else if (file->f_flags & O_SYNC)
 		return pgprot_writecombine(vma_prot);
-- 
2.28.0
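
For readers following the series, a minimal sketch of the split this patch
establishes (illustrative only; map_phys_range() is a hypothetical caller,
not part of the patch): pfn_valid() continues to mean only "a struct page
exists for this pfn", while the new pfn_is_memory() means "this pfn is
normal memory covered by the kernel linear map". A NOMAP region can satisfy
the former but not the latter, which is why ioremap_cache() above switches
predicates:

#include <linux/io.h>
#include <linux/types.h>
#include <asm/memory.h>

/*
 * Hypothetical caller, for illustration only: pick the cheapest way to
 * get a kernel mapping of a physical range.
 */
static void __iomem *map_phys_range(phys_addr_t phys, size_t size)
{
	if (pfn_is_memory(__phys_to_pfn(phys)))
		/* Normal memory: the cacheable linear mapping already exists. */
		return (void __iomem *)__phys_to_virt(phys);

	/* Device or NOMAP memory: a new mapping must be created. */
	return ioremap(phys, size);
}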