From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Mike Rapoport, Mike Rapoport, Russell King, Sasha Levin
Subject: [PATCH AUTOSEL 5.3 34/49] ARM: 8903/1: ensure that usable memory in bank 0 starts from a PMD-aligned address
Date: Sun, 29 Sep 2019 13:30:34 -0400
Message-Id: <20190929173053.8400-34-sashal@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190929173053.8400-1-sashal@kernel.org>
References: <20190929173053.8400-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Mike Rapoport

[ Upstream commit 00d2ec1e6bd82c0538e6dd3e4a4040de93ba4fef ]

The calculation of memblock_limit in adjust_lowmem_bounds() assumes that
bank 0 starts from a PMD-aligned address. However, the beginning of the
first bank may be NOMAP memory, in which case the start of usable memory
is not aligned to a PMD boundary. memblock_limit is then set to the end
of the NOMAP region, which prevents any memblock allocations.

Mark the region between the end of the NOMAP area and the next
PMD-aligned address as NOMAP as well, so that usable memory starts at a
PMD-aligned address.
Signed-off-by: Mike Rapoport
Signed-off-by: Russell King
Signed-off-by: Sasha Levin
---
 arch/arm/mm/mmu.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index d9a0038774a6d..d5e0b908f0bad 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1177,6 +1177,22 @@ void __init adjust_lowmem_bounds(void)
 	 */
 	vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
 
+	/*
+	 * The first usable region must be PMD aligned. Mark its start
+	 * as MEMBLOCK_NOMAP if it isn't
+	 */
+	for_each_memblock(memory, reg) {
+		if (!memblock_is_nomap(reg)) {
+			if (!IS_ALIGNED(reg->base, PMD_SIZE)) {
+				phys_addr_t len;
+
+				len = round_up(reg->base, PMD_SIZE) - reg->base;
+				memblock_mark_nomap(reg->base, len);
+			}
+			break;
+		}
+	}
+
 for_each_memblock(memory, reg) {
 		phys_addr_t block_start = reg->base;
 		phys_addr_t block_end = reg->base + reg->size;
-- 
2.20.1
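
For illustration only, and not part of the patch above: a minimal,
self-contained user-space C sketch of the same round_up arithmetic the fix
relies on. It assumes a PMD_SIZE of 2 MiB (typical for ARM) and a
hypothetical, unaligned bank 0 base address; the kernel code does the
equivalent with the memblock helpers shown in the diff.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;

/* Assumed 2 MiB PMD size, as is typical on ARM. */
#define PMD_SIZE	(2ULL * 1024 * 1024)
/* Round x up to the next multiple of the power-of-two a (like round_up()). */
#define ROUND_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* Hypothetical bank 0 base that follows a NOMAP region. */
	phys_addr_t base = 0x80100000ULL;	/* not PMD aligned */
	phys_addr_t aligned = ROUND_UP(base, PMD_SIZE);
	phys_addr_t len = aligned - base;	/* bytes to mark NOMAP */

	printf("base=%#llx next PMD boundary=%#llx nomap len=%#llx\n",
	       (unsigned long long)base,
	       (unsigned long long)aligned,
	       (unsigned long long)len);
	return 0;
}

With these assumed values the sketch prints a nomap len of 0x100000, i.e.
the 1 MiB between the unaligned bank start and the next 2 MiB boundary,
which is the kind of length memblock_mark_nomap() is given in the patch.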