From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 00A64C43461 for ; Wed, 21 Apr 2021 05:53:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CE2E761417 for ; Wed, 21 Apr 2021 05:53:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235355AbhDUFxb (ORCPT ); Wed, 21 Apr 2021 01:53:31 -0400 Received: from mail.kernel.org ([198.145.29.99]:36398 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235285AbhDUFxO (ORCPT ); Wed, 21 Apr 2021 01:53:14 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id D42916140C; Wed, 21 Apr 2021 05:52:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1618984357; bh=g72o3X3Q6Y/a2mSNpa0HjIO9b/1mdQbo0EaTxru/zZU=; h=Date:From:To:Cc:Subject:References:In-Reply-To:From; b=lDwRsR/E5mE9ve2Akk/3k+XO8YXnjrfevy9BmPLxs+lWkLg/W8jvzL1dYtVYW+3Rf c0TNDace3LJilMFVznrYeCRDwGrbHzJWGhCicEdzHL6l8Ssmr+GPd2k1tR0J/Oz8gY HNlkCXKomx+V/ot9YHj+QTDszjYr8FZyUKXC3ov3JbPn6ar8+9fn4C2rsNGgfwD1xK jxVIemH8Bz2l6DRPlexUeyQ5bNbq1sXxlzRv+Ip3azOMMDa2+YNNQtjVpoPrF/KHOO C7w6maQQFuV4pFX+kZC5rXtQY8I5cntBPjYPkw4E8WX1mZ5dZcqHIin3LRMZrwh4pk FPk4mfTeux5gA== Date: Wed, 21 Apr 2021 08:52:29 +0300 From: Mike Rapoport To: David Hildenbrand Cc: linux-arm-kernel@lists.infradead.org, Anshuman Khandual , Ard Biesheuvel , Catalin Marinas , Marc Zyngier , Mark Rutland , Mike Rapoport , Will Deacon , kvmarm@lists.cs.columbia.edu, 
linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: Re: [PATCH v1 4/4] arm64: drop pfn_valid_within() and simplify pfn_valid() Message-ID: References: <20210420090925.7457-1-rppt@kernel.org> <20210420090925.7457-5-rppt@kernel.org> <8e7171e7-a85c-6066-4ab6-d2bc98ec103b@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <8e7171e7-a85c-6066-4ab6-d2bc98ec103b@redhat.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Apr 20, 2021 at 06:00:55PM +0200, David Hildenbrand wrote: > On 20.04.21 11:09, Mike Rapoport wrote: > > From: Mike Rapoport > > > > The arm64's version of pfn_valid() differs from the generic because of two > > reasons: > > > > * Parts of the memory map are freed during boot. This makes it necessary to > > verify that there is actual physical memory that corresponds to a pfn > > which is done by querying memblock. > > > > * There are NOMAP memory regions. These regions are not mapped in the > > linear map and until the previous commit the struct pages representing > > these areas had default values. > > > > As the consequence of absence of the special treatment of NOMAP regions in > > the memory map it was necessary to use memblock_is_map_memory() in > > pfn_valid() and to have pfn_valid_within() aliased to pfn_valid() so that > > generic mm functionality would not treat a NOMAP page as a normal page. > > > > Since the NOMAP regions are now marked as PageReserved(), pfn walkers and > > the rest of core mm will treat them as unusable memory and thus > > pfn_valid_within() is no longer required at all and can be disabled by > > removing CONFIG_HOLES_IN_ZONE on arm64. > > > > pfn_valid() can be slightly simplified by replacing > > memblock_is_map_memory() with memblock_is_memory(). 
> > > > Signed-off-by: Mike Rapoport > > --- > > arch/arm64/Kconfig | 3 --- > > arch/arm64/mm/init.c | 4 ++-- > > 2 files changed, 2 insertions(+), 5 deletions(-) > > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig > > index e4e1b6550115..58e439046d05 100644 > > --- a/arch/arm64/Kconfig > > +++ b/arch/arm64/Kconfig > > @@ -1040,9 +1040,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK > > def_bool y > > depends on NUMA > > -config HOLES_IN_ZONE > > - def_bool y > > - > > source "kernel/Kconfig.hz" > > config ARCH_SPARSEMEM_ENABLE > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c > > index c54e329aca15..370f33765b64 100644 > > --- a/arch/arm64/mm/init.c > > +++ b/arch/arm64/mm/init.c > > @@ -243,7 +243,7 @@ int pfn_valid(unsigned long pfn) > > /* > > * ZONE_DEVICE memory does not have the memblock entries. > > - * memblock_is_map_memory() check for ZONE_DEVICE based > > + * memblock_is_memory() check for ZONE_DEVICE based > > * addresses will always fail. Even the normal hotplugged > > * memory will never have MEMBLOCK_NOMAP flag set in their > > * memblock entries. Skip memblock search for all non early > > @@ -254,7 +254,7 @@ int pfn_valid(unsigned long pfn) > > return pfn_section_valid(ms, pfn); > > } > > #endif > > - return memblock_is_map_memory(addr); > > + return memblock_is_memory(addr); > > } > > EXPORT_SYMBOL(pfn_valid); > > > > What are the steps needed to get rid of custom pfn_valid() completely? > > I'd assume we would have to stop freeing parts of the mem map during boot. > How relevant is that for arm64 nowadays, especially with reduced section > sizes? Yes, for arm64 to use the generic pfn_valid() it'd need to stop freeing parts of the memory map. Presuming struct page is 64 bytes, the memory map takes 2M per section in the worst case (128M per section, 4k pages). So for systems that have less than 128M populated in each section freeing unused memory map would mean significant savings. 
But nowadays, when even a clock has at least 1G of RAM, I doubt this is relevant to many systems, if at all. -- Sincerely yours, Mike.