From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 10E6815C82
	for ; Mon, 5 Dec 2022 19:13:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 89015C433C1;
	Mon, 5 Dec 2022 19:13:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1670267638;
	bh=z7rJajtSq9kjBSIIkiiGWxOBOCQckGoTRL/iaeMpEck=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=1EuGnTpaUqXHLZC2HhZ2W42xdnc/hI4AQmXv50PubN9m4vyHECgz4VAYlBEllN5rl
	 T/v2N84VPEBeyzQB2SBBbOkJooow2PY7h8e5jR4AdpRYI8M2DUNFaYLvgTFx36SC8T
	 vp886UTHhjR1rNO96X+5Yf/9jAiy163j2onVsCHY=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Michael Kelley,
	Borislav Petkov, Dave Hansen, stable@kernel.org, Sasha Levin
Subject: [PATCH 4.9 60/62] x86/ioremap: Fix page aligned size calculation in __ioremap_caller()
Date: Mon, 5 Dec 2022 20:09:57 +0100
Message-Id: <20221205190800.348567859@linuxfoundation.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221205190758.073114639@linuxfoundation.org>
References: <20221205190758.073114639@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Michael Kelley

[ Upstream commit 4dbd6a3e90e03130973688fd79e19425f720d999 ]

Current code re-calculates the size after aligning the starting and
ending physical addresses on a page boundary. But the re-calculation
also embeds the masking of high order bits that exceed the size of the
physical address space (via PHYSICAL_PAGE_MASK). If the masking removes
any high order bits, the size calculation results in a huge value that
is likely to immediately fail.

Fix this by re-calculating the page-aligned size first. Then mask any
high order bits using PHYSICAL_PAGE_MASK.

Fixes: ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
Signed-off-by: Michael Kelley
Signed-off-by: Borislav Petkov
Acked-by: Dave Hansen
Cc:
Link: https://lore.kernel.org/r/1668624097-14884-2-git-send-email-mikelley@microsoft.com
Signed-off-by: Sasha Levin
---
 arch/x86/mm/ioremap.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index ecae9ac216fa..696fd6fdc107 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -126,9 +126,15 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	 * Mappings have to be page-aligned
 	 */
 	offset = phys_addr & ~PAGE_MASK;
-	phys_addr &= PHYSICAL_PAGE_MASK;
+	phys_addr &= PAGE_MASK;
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
+	/*
+	 * Mask out any bits not part of the actual physical
+	 * address, like memory encryption bits.
+	 */
+	phys_addr &= PHYSICAL_PAGE_MASK;
+
 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
 						pcm, &new_pcm);
 	if (retval) {
-- 
2.35.1
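
A minimal standalone sketch (not kernel code, not part of the patch) of the
failure mode the commit message describes. The constants are illustrative
assumptions rather than the kernel's real values: 4 KiB pages, a 47-bit
physical address width, and bit 47 standing in for a memory encryption bit
that PHYSICAL_PAGE_MASK would strip.

/*
 * Demonstrates why masking phys_addr with PHYSICAL_PAGE_MASK before
 * recomputing the size makes the size explode: last_addr still carries
 * the high "encryption" bit that phys_addr has already lost.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define PAGE_SHIFT         12
#define PAGE_SIZE          (UINT64_C(1) << PAGE_SHIFT)
#define PAGE_MASK          (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)      (((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* assumption: 47-bit physical address width for this example */
#define PHYSICAL_PAGE_MASK (PAGE_MASK & ((UINT64_C(1) << 47) - 1))
/* assumption: bit 47 plays the role of a memory encryption bit */
#define ENC_BIT            (UINT64_C(1) << 47)

int main(void)
{
	uint64_t phys_addr = ENC_BIT | 0x100000;  /* "encrypted" address */
	uint64_t size      = 0x2000;              /* caller asks for 8 KiB */
	uint64_t last_addr = phys_addr + size - 1;

	/* Old order: strip the high bit first, then recompute the size. */
	uint64_t old_pa   = phys_addr & PHYSICAL_PAGE_MASK;
	uint64_t old_size = PAGE_ALIGN(last_addr + 1) - old_pa;

	/* Fixed order: recompute the size first, strip the high bit last. */
	uint64_t new_pa   = phys_addr & PAGE_MASK;
	uint64_t new_size = PAGE_ALIGN(last_addr + 1) - new_pa;
	new_pa &= PHYSICAL_PAGE_MASK;

	printf("old size = 0x%" PRIx64 " (huge, mapping would fail)\n", old_size);
	printf("new size = 0x%" PRIx64 " (the requested 8 KiB)\n", new_size);
	return 0;
}

With the old ordering the computed size balloons to roughly 2^47 bytes,
because the subtraction mixes a masked phys_addr with an unmasked last_addr;
deferring the PHYSICAL_PAGE_MASK step until after the size calculation keeps
the size at the requested 8 KiB.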