From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Anshuman Khandual,
    David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Mark Rutland, Heiko Carstens, Jason Wang, Jonathan Cameron,
    "Michael S. Tsirkin", Michal Hocko, Oscar Salvador, Pankaj Gupta,
    Pankaj Gupta, teawater, Vasily Gorbik, Wei Yang, Andrew Morton,
    Linus Torvalds, Sasha Levin
Subject: [PATCH 5.11 225/254] arm64/mm: define arch_get_mappable_range()
Date: Mon, 29 Mar 2021 09:59:01 +0200
Message-Id: <20210329075640.480623043@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210329075633.135869143@linuxfoundation.org>
References: <20210329075633.135869143@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Anshuman Khandual

[ Upstream commit 03aaf83fba6e5af08b5dd174c72edee9b7d9ed9b ]

This overrides arch_get_mappable_range() on the arm64 platform, which will
be used by the recently added generic memory hotplug framework. It drops
inside_linear_region() and the subsequent check in arch_add_memory(), which
are no longer required. It also adds a VM_BUG_ON() check to ensure that
mhp_range_allowed() has already been called.

Link: https://lkml.kernel.org/r/1612149902-7867-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual
Reviewed-by: David Hildenbrand
Reviewed-by: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Mark Rutland
Cc: Heiko Carstens
Cc: Jason Wang
Cc: Jonathan Cameron
Cc: "Michael S. Tsirkin"
Cc: Michal Hocko
Cc: Oscar Salvador
Cc: Pankaj Gupta
Cc: Pankaj Gupta
Cc: teawater
Cc: Vasily Gorbik
Cc: Wei Yang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
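Note (added for context, not part of the patch to be applied): the new hook
defined in the diff below only makes sense together with the generic side
added earlier in the same series. Below is a minimal sketch of how
mm/memory_hotplug.c is expected to consume arch_get_mappable_range() via
mhp_get_pluggable_range() and mhp_range_allowed(). It is an approximation
written for illustration, not a verbatim copy of the kernel sources, so
details such as clamping and the warning text may differ.

#include <linux/memory_hotplug.h>
#include <linux/minmax.h>
#include <linux/mmzone.h>
#include <linux/printk.h>
#include <linux/range.h>

struct range mhp_get_pluggable_range(bool need_mapping)
{
	const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1;
	struct range mhp_range;

	if (need_mapping) {
		/* Architectures override this to bound hotplug to their linear map. */
		mhp_range = arch_get_mappable_range();
		mhp_range.end = min_t(u64, mhp_range.end, max_phys);
	} else {
		/* No linear mapping required: only the addressable limit applies. */
		mhp_range.start = 0;
		mhp_range.end = max_phys;
	}
	return mhp_range;
}

bool mhp_range_allowed(u64 start, u64 size, bool need_mapping)
{
	struct range mhp_range = mhp_get_pluggable_range(need_mapping);
	u64 end = start + size;

	/* The hotplugged region must sit entirely inside the pluggable range. */
	if (start < end && start >= mhp_range.start && (end - 1) <= mhp_range.end)
		return true;

	pr_warn("Hotplug memory [%#llx-%#llx] exceeds maximum addressable range [%#llx-%#llx]\n",
		start, end - 1, mhp_range.start, mhp_range.end);
	return false;
}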
Tsirkin" Cc: Michal Hocko Cc: Oscar Salvador Cc: Pankaj Gupta Cc: Pankaj Gupta Cc: teawater Cc: Vasily Gorbik Cc: Wei Yang Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- arch/arm64/mm/mmu.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 6f0648777d34..92b3be127796 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1443,16 +1443,19 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size) free_empty_tables(start, end, PAGE_OFFSET, PAGE_END); } -static bool inside_linear_region(u64 start, u64 size) +struct range arch_get_mappable_range(void) { + struct range mhp_range; + /* * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)] * accommodating both its ends but excluding PAGE_END. Max physical * range which can be mapped inside this linear mapping range, must * also be derived from its end points. */ - return start >= __pa(_PAGE_OFFSET(vabits_actual)) && - (start + size - 1) <= __pa(PAGE_END - 1); + mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual)); + mhp_range.end = __pa(PAGE_END - 1); + return mhp_range; } int arch_add_memory(int nid, u64 start, u64 size, @@ -1460,11 +1463,7 @@ int arch_add_memory(int nid, u64 start, u64 size, { int ret, flags = 0; - if (!inside_linear_region(start, size)) { - pr_err("[%llx %llx] is outside linear mapping region\n", start, start + size); - return -EINVAL; - } - + VM_BUG_ON(!mhp_range_allowed(start, size, true)); if (rodata_full || debug_pagealloc_enabled()) flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS; -- 2.30.1