From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20210511100550.28178-1-rppt@kernel.org> <20210511100550.28178-4-rppt@kernel.org>
In-Reply-To: <20210511100550.28178-4-rppt@kernel.org>
From: Ard Biesheuvel
Date: Tue, 11 May 2021 12:25:09 +0200
Subject: Re: [PATCH v4 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
To: Mike Rapoport
Cc: Andrew Morton, Anshuman Khandual, Catalin Marinas, David Hildenbrand,
 Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon, kvmarm, Linux ARM,
 Linux Kernel Mailing List, Linux Memory Management List
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 11 May 2021 at 12:06, Mike Rapoport wrote:
>
> From: Mike Rapoport
>
> The intended semantics of pfn_valid() is to verify whether there is a
> struct page for the pfn in question and nothing else.
>
> Yet, on arm64 it is used to distinguish memory areas that are mapped in the
> linear map vs those that require ioremap() to access them.
>
> Introduce a dedicated pfn_is_map_memory() wrapper for
> memblock_is_map_memory() to perform such check and use it where
> appropriate.
>
> Using a wrapper allows to avoid cyclic include dependencies.
>
> While here also update style of pfn_valid() so that both pfn_valid() and
> pfn_is_map_memory() declarations will be consistent.
>
> Signed-off-by: Mike Rapoport
> Acked-by: David Hildenbrand

Acked-by: Ard Biesheuvel

> ---
>  arch/arm64/include/asm/memory.h |  2 +-
>  arch/arm64/include/asm/page.h   |  3 ++-
>  arch/arm64/kvm/mmu.c            |  2 +-
>  arch/arm64/mm/init.c            | 12 ++++++++++++
>  arch/arm64/mm/ioremap.c         |  4 ++--
>  arch/arm64/mm/mmu.c             |  2 +-
>  6 files changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 87b90dc27a43..9027b7e16c4c 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -369,7 +369,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>
>  #define virt_addr_valid(addr)	({					\
>  	__typeof__(addr) __addr = __tag_reset(addr);			\
> -	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> +	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
>  })
>
>  void dump_mem_limit(void);
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 012cffc574e8..75ddfe671393 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -37,7 +37,8 @@ void copy_highpage(struct page *to, struct page *from);
>
>  typedef struct page *pgtable_t;
>
> -extern int pfn_valid(unsigned long);
> +int pfn_valid(unsigned long pfn);
> +int pfn_is_map_memory(unsigned long pfn);
>
>  #include
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c5d1f3c87dbd..470070073085 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>
>  static bool kvm_is_device_pfn(unsigned long pfn)
>  {
> -	return !pfn_valid(pfn);
> +	return !pfn_is_map_memory(pfn);
>  }
>
>  static void *stage2_memcache_zalloc_page(void *arg)
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 16a2b2b1c54d..798f74f501d5 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -255,6 +255,18 @@ int pfn_valid(unsigned long pfn)
>  }
>  EXPORT_SYMBOL(pfn_valid);
>
> +int pfn_is_map_memory(unsigned long pfn)
> +{
> +	phys_addr_t addr = PFN_PHYS(pfn);
> +
> +	/* avoid false positives for bogus PFNs, see comment in pfn_valid() */
> +	if (PHYS_PFN(addr) != pfn)
> +		return 0;
> +
> +	return memblock_is_map_memory(addr);
> +}
> +EXPORT_SYMBOL(pfn_is_map_memory);
> +
>  static phys_addr_t memory_limit = PHYS_ADDR_MAX;
>
>  /*
> diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
> index b5e83c46b23e..b7c81dacabf0 100644
> --- a/arch/arm64/mm/ioremap.c
> +++ b/arch/arm64/mm/ioremap.c
> @@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
>  	/*
>  	 * Don't allow RAM to be mapped.
>  	 */
> -	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
> +	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
>  		return NULL;
>
>  	area = get_vm_area_caller(size, VM_IOREMAP, caller);
> @@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
>  void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
>  {
>  	/* For normal memory we already have a cacheable mapping. */
> -	if (pfn_valid(__phys_to_pfn(phys_addr)))
> +	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
>  		return (void __iomem *)__phys_to_virt(phys_addr);
>
>  	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 6dd9369e3ea0..ab5914cebd3c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -82,7 +82,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
>  pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  			      unsigned long size, pgprot_t vma_prot)
>  {
> -	if (!pfn_valid(pfn))
> +	if (!pfn_is_map_memory(pfn))
>  		return pgprot_noncached(vma_prot);
>  	else if (file->f_flags & O_SYNC)
>  		return pgprot_writecombine(vma_prot);
> --
> 2.28.0
>