Subject: Re: [PATCH v3 3/4] arm64: decouple check whether pfn is in linear map from pfn_valid()
To: Mike Rapoport, linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
 Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20210422061902.21614-1-rppt@kernel.org> <20210422061902.21614-4-rppt@kernel.org>
From: David Hildenbrand
Organization: Red Hat
Message-ID: <041411e9-0fba-f053-d901-10f1d7a8cc5e@redhat.com>
Date: Thu, 22 Apr 2021 10:57:34 +0200
In-Reply-To: <20210422061902.21614-4-rppt@kernel.org>

On 22.04.21 08:19, Mike Rapoport wrote:
> From: Mike Rapoport
>
> The intended semantics of pfn_valid() is to verify whether there is a
> struct page for the pfn in question and nothing else.
>
> Yet, on arm64 it is used to distinguish memory areas that are mapped in the
> linear map vs those that require ioremap() to access them.
>
> Introduce a dedicated pfn_is_map_memory() wrapper for
> memblock_is_map_memory() to perform such check and use it where
> appropriate.
>
> Using a wrapper allows to avoid cyclic include dependencies.
>
> While here also update style of pfn_valid() so that both pfn_valid() and
> pfn_is_map_memory() declarations will be consistent.
>
> Signed-off-by: Mike Rapoport
> ---
>  arch/arm64/include/asm/memory.h |  2 +-
>  arch/arm64/include/asm/page.h   |  3 ++-
>  arch/arm64/kvm/mmu.c            |  2 +-
>  arch/arm64/mm/init.c            | 12 ++++++++++++
>  arch/arm64/mm/ioremap.c         |  4 ++--
>  arch/arm64/mm/mmu.c             |  2 +-
>  6 files changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 0aabc3be9a75..194f9f993d30 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #define virt_addr_valid(addr)	({					\
>  	__typeof__(addr) __addr = __tag_reset(addr);			\
> -	__is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));	\
> +	__is_lm_address(__addr) && pfn_is_map_memory(virt_to_pfn(__addr));	\
>  })
>  
>  void dump_mem_limit(void);
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 012cffc574e8..75ddfe671393 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -37,7 +37,8 @@ void copy_highpage(struct page *to, struct page *from);
>  
>  typedef struct page *pgtable_t;
>  
> -extern int pfn_valid(unsigned long);
> +int pfn_valid(unsigned long pfn);
> +int pfn_is_map_memory(unsigned long pfn);
>  
>  #include <asm/memory.h>
>  
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 8711894db8c2..23dd99e29b23 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>  
>  static bool kvm_is_device_pfn(unsigned long pfn)
>  {
> -	return !pfn_valid(pfn);
> +	return !pfn_is_map_memory(pfn);
>  }
>  
>  /*
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 3685e12aba9b..966a7a18d528 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -258,6 +258,18 @@ int pfn_valid(unsigned long pfn)
>  }
>  EXPORT_SYMBOL(pfn_valid);
>  
> +int pfn_is_map_memory(unsigned long pfn)
> +{
> +	phys_addr_t addr = PFN_PHYS(pfn);
> +
> +	/* avoid false positives for bogus PFNs, see comment in pfn_valid() */
> +	if (PHYS_PFN(addr) != pfn)
> +		return 0;
> +
> +	return memblock_is_map_memory(addr);
> +}
> +EXPORT_SYMBOL(pfn_is_map_memory);
> +
>  static phys_addr_t memory_limit = PHYS_ADDR_MAX;
>  
>  /*
> diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
> index b5e83c46b23e..b7c81dacabf0 100644
> --- a/arch/arm64/mm/ioremap.c
> +++ b/arch/arm64/mm/ioremap.c
> @@ -43,7 +43,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
>  	/*
>  	 * Don't allow RAM to be mapped.
>  	 */
> -	if (WARN_ON(pfn_valid(__phys_to_pfn(phys_addr))))
> +	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
>  		return NULL;
>  
>  	area = get_vm_area_caller(size, VM_IOREMAP, caller);
> @@ -84,7 +84,7 @@ EXPORT_SYMBOL(iounmap);
>  void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
>  {
>  	/* For normal memory we already have a cacheable mapping. */
> -	if (pfn_valid(__phys_to_pfn(phys_addr)))
> +	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
>  		return (void __iomem *)__phys_to_virt(phys_addr);
>  
>  	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5d9550fdb9cf..26045e9adbd7 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -81,7 +81,7 @@ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
>  pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  			      unsigned long size, pgprot_t vma_prot)
>  {
> -	if (!pfn_valid(pfn))
> +	if (!pfn_is_map_memory(pfn))
>  		return pgprot_noncached(vma_prot);
>  	else if (file->f_flags & O_SYNC)
>  		return pgprot_writecombine(vma_prot);
> 

Acked-by: David Hildenbrand

-- 
Thanks,

David / dhildenb
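For readers skimming the thread, a minimal sketch of the distinction the commit
message describes: pfn_valid() only says whether a struct page exists, while
pfn_is_map_memory() says whether the pfn is covered by the kernel linear map.
This is a hypothetical example, not part of the series; map_phys_range() is an
invented helper, and it assumes an arm64 kernel with this patch applied.

/*
 * Illustrative only: pick an access method for an arbitrary physical range.
 * Assumes pfn_is_map_memory() as declared in the quoted patch.
 */
#include <linux/io.h>
#include <linux/mm.h>

static void __iomem *map_phys_range(phys_addr_t phys, size_t size)
{
	unsigned long pfn = __phys_to_pfn(phys);

	/*
	 * Covered by the kernel linear map: a cacheable mapping already
	 * exists, so reuse it instead of creating a second one.
	 */
	if (pfn_is_map_memory(pfn))
		return (void __iomem *)__phys_to_virt(phys);

	/*
	 * Not in the linear map (e.g. a device region): create a device
	 * mapping. pfn_valid() could still be true here when a struct page
	 * exists for the range, which is why it is the wrong check for
	 * "can I dereference the linear-map address".
	 */
	return ioremap(phys, size);
}

The sketch mirrors the ioremap_cache() hunk above: the question being asked is
about the linear map, not about struct page existence.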